DIPLOMA THESIS
2009
I would like to express my deep gratitude to my supervisor RNDr. Barbara Zitová, PhD.,
and thank her for her professional feedback and help during my work. I would also like to
thank RNDr. Janka Hradilová, who introduced the problem of fresco restoration to me.
I am much obliged to the Institute of Archaeology of the Academy of Sciences of the Czech
Republic and to the Academy of Fine Arts in Prague for providing us with all the materials
necessary for our work.
Last, but not least, I am very grateful to all my linguistic advisors for their help with the
final stylistic polishing of this thesis.
I declare that I wrote this diploma thesis independently, using exclusively the sources cited
herein. I agree with the lending of this thesis.
Jan Blažek
Contents
1 Introduction
  1.1 Motivation
  1.2 Idea of restoration
  1.3 Environmental effects
2 Goals
3 Methodology
  3.1 Process overview
  3.2 Colour Spaces
4 Image enhancement
  4.1 Denoising
  4.2 Subpixel Interpolation
  4.3 Contrast Enhancing
  4.4 Edge Sharpening
    4.4.1 Sobel operator
    4.4.2 Laplacian
5 Registration
  5.1 Perspective transformation
  5.2 Alignment of referential points
    5.2.1 Covariance measure
    5.2.2 Mutual information measure
6 Image fusion
  6.1 Principal Component Analysis
  6.2 Fusion Based on Wavelets
7 Segmentation
  7.1 K-mean Colour Quantization
  7.2 Ohlander Price Reddy Segmentator
  7.3 Region growing methods
  7.4 Morphology
8 Alternative approach
  8.1 Diffusion model
9 Results
10 Conclusion
  10.1 Future work
Bibliography
A Appendix
Title: Digital restoration of fresco paintings
Author: Jan Blažek
Department: Department of Theoretical Computer Science and Mathematical Logic
Supervisor: RNDr. Barbara Zitová, Ph.D.,
Institute of Information Theory and Automation
Supervisor’s e-mail address: zitova@utia.cas.cz
Abstract: In this thesis we present the application of digital image processing algorithms,
such as image registration, image fusion, and image segmentation, to the process
of fresco restoration. We have worked with images of various modalities
(visible and ultraviolet spectra) taken at different times. Moreover, during the
image analysis we have also taken local chemical analyses into account.
The proposed algorithms are required to be highly robust with respect to the
poor state of the fresco. The achieved results give art conservators better
insight into the evolution of the fresco's aging and show how a proper
conservation method can be chosen. All developed methods are illustrated
by generated output images.
Keywords: virtual restoration, wall painting in the fresh plaster, fresco painting, image
analysis, image fusion, image registration
Chapter 1: Introduction
Computer technology is so advanced nowadays that even such complicated tasks as image
processing or the restoration of old paintings can be carried out virtually, on a monitor.
Virtual restoration is a very useful and powerful tool, especially in the field of cultural
heritage: every real intervention in a work of art is risky, and simulations can increase the
quality of the restorers' final product. Besides its great usability, virtual restoration is also
cheaper than other types of external analysis.
Despite the obvious utility of such virtual restoration, there is still no specialised software
suited to this purpose, even though a very long list of image processing algorithms is already
available. That includes algorithms for image recognition, algorithms for processing
multimodal data, and plenty of algorithms for improving the visualization of noisy images,
especially from CT or MR medical devices. At first glance there seems to be no connection
between ancient art and CT images, but on closer inspection we find many shared issues
such as image registration, contrast enhancement, segmentation, or texture synthesis. Of
course there are many differences as well, e.g. dissimilar erosion of the images (lacunas and
scratches on the paintings compared to the noise in CT images). So even though there are
many specialised image processing algorithms, they are not exactly what we need.

Figure 1.1: The church of St. George in Kostolany pod Tríbečom (Slovakia)
1.1 Motivation
Our first motivation came from images of a church in Kostolany pod Tríbečom in
Slovakia (see Figure 1.1). The church of St. George is pre-Romanesque, first mentioned in
1113 A.D. It is decorated with a very large set of wall paintings on fresh plaster. Our
approach to virtual restoration is based on these paintings, and all examples mentioned in
this thesis show results obtained on the Kostolany paintings. The presented methods were
developed in close cooperation with conservation scientists from ALMA1; we therefore
propose methods which facilitate the work of conservators. The presented methods should
also be useful for other fresco paintings.
1
Academic Materials Research Laboratory for Painted Artworks – a joint laboratory of the Academy of
Fine Arts in Prague and the Institute of Inorganic Chemistry of the Academy of Sciences of the Czech
Republic
Figure 1.3: Original input data set. Photos of a wall painting on fresh plaster:
image in the visible spectrum (a), old photo from the 1960s (b), wide-band UV spectrum (UVN) (c),
narrow-band UV spectrum with maximum at 369 nm (d). Note the different photographic angles and
the different quality of the images.
Chapter 2: Goals
The main purpose of this work is to propose a set of methods suitable for the virtual
restoration of old paintings and helpful in the analysis procedure. We want to enable more
effective object identification in the wall painting, to simplify orientation between different
modalities of the same painting, and to show differences between layers in the sense of
different modalities. This goal essentially comprises the testing of known methods and their
improvement, as well as the development of new ones that should perform better on this
type of data.
In view of the great complexity of this goal we propose a systematic digital image
processing pipeline consisting of image enhancement (noise reduction, contrast enhancement,
edge sharpening), image fusion (especially of multimodal data), and segmentation. For every
part of this pipeline we suggest methods that are suitable foremost for our type of input data.
Chapter 3: Methodology
formation by the colour mean or synthesized using an artificial texture to create the final
virtual look of the artwork.
[Figure: three histograms of the image over the 0–255 intensity range]
For our purposes we often employ the HSV, HSL, Yuv, or CIELab colour spaces [1].
Grayscale images are also broadly used. Between these colour spaces we can easily convert
our image using the functions mentioned in Chapter 6 of [1].
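As a small illustration of such a conversion, Python's standard colorsys module implements the RGB–HSV and RGB–HLS transforms for a single pixel; the batch helper rgb_image_to_hsv below is our own illustrative name, not a function from [1]:

```python
import colorsys

def rgb_image_to_hsv(pixels):
    """Convert a list of (r, g, b) tuples, each channel in [0, 1], to HSV."""
    return [colorsys.rgb_to_hsv(r, g, b) for (r, g, b) in pixels]

# A pure red pixel and a mid-gray pixel:
hsv = rgb_image_to_hsv([(1.0, 0.0, 0.0), (0.5, 0.5, 0.5)])
# red -> hue 0 with full saturation; gray -> zero saturation
```

For a real image one would apply the same per-pixel conversion to every pixel of the array.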
Chapter 4: Image enhancement
First we concentrate on methods that can improve our input image before further processing,
as well as emphasise differences between the measured values. All these methods stretch the
measured values to the possible maximum of the defined range, so that we can better
recognise objects, and they also make the input smoother. Such operations need not produce
any "real" image, but they can positively affect all other algorithms.
Ideally we would use data that can be described by a continuous two-dimensional function
whose range is the pixel colour. Unfortunately, reality is different: images are affected by
omnipresent noise and truncation error, and they are completely discrete due to the splitting
into pixels. Our goal is to reduce the noise by smoothing the image (Section 4.1). Then we
can use an interpolation function for subpixel points (see Section 4.2). This process
guarantees that the image function behaves as continuous in spite of being based on discrete
values. Finally, we equalize the histogram of the image to increase the contrast of the input
data (Section 4.3).
4.1 Denoising
Denoising is a standard image processing method which removes high-frequency information,
with or without respect to edges. The most common noise approximations are white
Gaussian noise and salt-and-pepper noise. For our input data we assume the white Gaussian
noise approximation, i.e. a Gaussian term added to the image function f. The image obtained
after the digitization process is then f′:

f′ = f + n, (4.1)

where n = N(0, σ²). Many different filters can be used for noise reduction (see [1]). Modern
denoising methods use the decorrelation properties of the wavelet decomposition [2] or the
plausible behavior of dictionary-based approaches [3]. The result depends mainly on the
quality of the images. In our case we can expect that the acquisition of high-quality images
is possible, because the creation of fair input data is a part of the conservation process.
Under this assumption we can use an arbitrary and very simple filter with variable size and
weights.
The denoising filter is given by the formula:

M[x, y] = 1 / (x² + y² + 1), (4.2)
[Figure 4.1: the weight surface of the denoising filter (a) and of its variant with zero central weight (b)]
where x, y ∈ {−R, ..., −1, 0, 1, ..., +R} and M[x, y] represents the weight of the pixel value
at distance √(x² + y²) from the denoised pixel; the filter is shown graphically in Figure 4.1.
After the convolution of the pixel neighbourhood with the weight matrix M we have to
normalize the obtained value by Σ M[x, y].
In some cases we need a filter independent of the original pixel value, so we use the same
formula as in equation (4.2) but with M[0, 0] = 0. The surface of this function is shown in
Figure 4.1(b).
The described pre-processing should improve the efficiency of algorithms sensitive to noise,
i.e. algorithms which mostly operate on a local range (region growing methods in
segmentation). We choose the radius of the smoothing filter with respect to the actual noise
in the input data.
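A minimal sketch of this filter, assuming a 2-D grayscale numpy array (the helper names are ours):

```python
import numpy as np

def make_kernel(R, zero_center=False):
    """Weight matrix M[x, y] = 1 / (x^2 + y^2 + 1) for x, y in {-R, ..., +R}."""
    y, x = np.mgrid[-R:R + 1, -R:R + 1]
    M = 1.0 / (x**2 + y**2 + 1)
    if zero_center:          # the variant with M[0, 0] = 0
        M[R, R] = 0.0
    return M / M.sum()       # normalization by the sum of M[x, y]

def denoise(img, R=2):
    """Smooth a 2-D grayscale image by convolution with the normalized kernel."""
    M = make_kernel(R)
    padded = np.pad(img, R, mode='edge')
    out = np.zeros_like(img, dtype=float)
    H, W = img.shape
    for i in range(H):
        for j in range(W):
            out[i, j] = np.sum(padded[i:i + 2*R + 1, j:j + 2*R + 1] * M)
    return out
```

Because the kernel is normalized, flat regions are preserved exactly while high-frequency detail is attenuated.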
The method presented in [5] combines histogram equalization with a saturation and
desaturation process, which should improve the colouring of the image. The process affects
only the u and v components in the uvY colour space, where u and v specify chromaticity
and Y stands for brightness. Histogram equalization is then applied to the Y component.
The principle of saturation and desaturation is to enhance the chromaticity of pixels which
have some colour value (i.e. those that are not near the white point). These colours are
saturated (two of the RGB components are maximized proportionally to the ratio in the
original pixel) and after saturation they are desaturated toward the centre of gravity,
following the law of colour mixing. Details can be found in [6].
The influence of these enhancement algorithms is shown in Figure 4.3.
Figure 4.3: Original and enhanced images. The three enhanced images were made using the
adaptive histogram equalization algorithm; the used region sizes were 2 (b), 150 (c), and
800 (d), respectively.
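A global (non-adaptive) variant of histogram equalization can be sketched as follows; the adaptive algorithm of [4] applies the same idea per region:

```python
import numpy as np

def equalize_histogram(img):
    """Global histogram equalization of an 8-bit grayscale image."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = np.cumsum(hist).astype(float)
    cdf_min = cdf[np.nonzero(hist)[0][0]]            # count of darkest occupied level
    lut = (cdf - cdf_min) / max(cdf[-1] - cdf_min, 1) * 255.0
    lut = np.clip(np.round(lut), 0, 255).astype(np.uint8)
    return lut[img]                                   # apply the look-up table
```

A narrow intensity range (e.g. values 50–99) is stretched over the whole 0–255 range, which is exactly the contrast improvement described above.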
gradient for each band of the used colour space and improve each band separately, or to
examine a "global" gradient measure. These algorithms can lead to different gradient
measures, so we have to choose the best one experimentally.
For both matrices we compute the convolution with the original image:

G = Ω ∗ S. (4.5)

Finally, from Gx and Gy we obtain the value G = √(Gx² + Gy²). This value is interpreted
either as an image containing all the found edges or as a matrix for image improvement.
The operator G is a discrete representation of the first derivative split into two matrices,
each matrix representing one partial derivative.
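The two Sobel matrices and the combined gradient magnitude can be sketched as follows (the helper names are ours):

```python
import numpy as np

SX = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)  # horizontal derivative
SY = SX.T                                                          # vertical derivative

def conv2(img, k):
    """'Same'-size 2-D convolution of a grayscale image with a 3x3 kernel."""
    p = np.pad(img.astype(float), 1, mode='edge')
    kf = k[::-1, ::-1]                 # flip the kernel: convolution, not correlation
    H, W = img.shape
    return np.array([[np.sum(p[i:i + 3, j:j + 3] * kf) for j in range(W)]
                     for i in range(H)])

def sobel_magnitude(img):
    gx, gy = conv2(img, SX), conv2(img, SY)
    return np.sqrt(gx**2 + gy**2)      # G = sqrt(Gx^2 + Gy^2)
```

On a vertical step edge the magnitude is zero in the flat parts and large along the step, i.e. the result is exactly the "image containing all the found edges".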
4.4.2 Laplacian
An edge detection method using the Laplacian was first described in [7] and is often known
as the Marr–Hildreth algorithm. The second derivative of a function is represented by the
Laplacian
(or the Laplace operator in the discrete digital image space). The Laplacian of a function f
is defined as:

Δf = ∇²f = ∇ · ∇f = Σ_{i=1}^{n} ∂²f(x) / ∂x_i², (4.6)
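In the discrete image space the Laplacian is commonly approximated by a small convolution kernel; a sketch follows (the 4-neighbourhood kernel shown is one standard choice, not necessarily the one used in [7]):

```python
import numpy as np

# A common discrete Laplacian kernel: second differences over the 4-neighbourhood.
LAPLACE = np.array([[0,  1, 0],
                    [1, -4, 1],
                    [0,  1, 0]], dtype=float)

def laplacian(img):
    """Apply the discrete Laplacian; zero in flat regions, strong at edges and blobs."""
    p = np.pad(img.astype(float), 1, mode='edge')
    H, W = img.shape
    return np.array([[np.sum(p[i:i + 3, j:j + 3] * LAPLACE) for j in range(W)]
                     for i in range(H)])
```

The kernel is symmetric, so no flipping is needed; a single bright pixel produces the characteristic negative centre with positive neighbours.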
Chapter 5: Registration
As mentioned earlier, we use different types of data (chemical samples of the colour layers,
photos in different light spectra) for the virtual restoration. The data are acquired in
misaligned two-dimensional coordinate meshes (except for special cases when a tripod or
another position-fixing method is used). These meshes have different scales and can be
deformed with respect to each other. If we want to use a universal coordinate system for
images obtained from different camera positions, we have to register them.
There are two approaches to such registration [8]: local and global. Let us assume, for
the sake of simplicity, that the paintings are completely flat. This makes global registration
better suited to our purpose, because we can transform each point using the same
transformation parameters. Local registration could still be needed for the corners of
buildings, but such input can be split into two flat images.
For this reason we need four referential points [x1, y1], [x2, y2], [x3, y3], [x4, y4] in the
referential image and four target points [u1, v1], [u2, v2], [u3, v3], [u4, v4] in the aligned
image, from which we can compute the values a, b, c, d, e, f, g, and h of the formula (5.1).
The equation system obtained from (5.1) can also be expressed in matrix notation:

(a, b, c, d, e, f, g, h)^T = M^{-1} × (u1, v1, u2, v2, u3, v3, u4, v4)^T, (5.2)

where M is the 8 × 8 matrix whose rows are, for each point pair k = 1, ..., 4:

(xk, yk, 1, 0, 0, 0, −xk·uk, −yk·uk)
(0, 0, 0, xk, yk, 1, −xk·vk, −yk·vk)
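A sketch of solving this system with numpy, assuming the usual perspective model u = (ax + by + c)/(gx + hy + 1), v = (dx + ey + f)/(gx + hy + 1) for formula (5.1), which is not reproduced in this excerpt:

```python
import numpy as np

def homography_params(src, dst):
    """Solve the 8x8 system (5.2) for a, b, ..., h from four point pairs.
    src: [(x1, y1), ...] referential points, dst: [(u1, v1), ...] target points."""
    A, rhs = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -x * u, -y * u]); rhs.append(u)
        A.append([0, 0, 0, x, y, 1, -x * v, -y * v]); rhs.append(v)
    return np.linalg.solve(np.array(A, float), np.array(rhs, float))

def warp_point(p, x, y):
    """Apply the perspective transformation with parameters p to the point [x, y]."""
    a, b, c, d, e, f, g, h = p
    w = g * x + h * y + 1.0
    return ((a * x + b * y + c) / w, (d * x + e * y + f) / w)
```

With identical source and target quadrilaterals the solver returns the identity parameters (a = e = 1, all others 0).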
where X represents the pixel values in the vicinity {[−R, −R], ..., [+R, +R]} around the new
target pixel A, and Y represents the pixel values in the same range around the referential
pixel B. Both X and Y are random variables. E represents the expected value and can be
calculated as follows:

E(X) = 1/(2R + 1)² · Σ_{i=−R}^{R} Σ_{j=−R}^{R} colour[A_x + i, A_y + j], (5.4)

E(Y) = 1/(2R + 1)² · Σ_{i=−R}^{R} Σ_{j=−R}^{R} colour[B_x + i, B_y + j]. (5.5)
These values are calculated separately for each colour band and then added together (with
equal weights). A similar, often used measure is correlation, where the covariance is
normalized by the standard deviations:

cor(A, B) = cov(A, B) / (σ_X σ_Y). (5.6)
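The covariance and correlation measures of equations (5.4)–(5.6) can be sketched for two equally sized grayscale patches as follows (the helper name is ours):

```python
import numpy as np

def patch_similarity(X, Y):
    """Covariance and correlation of two equally sized grayscale patches,
    with the means taken over the (2R+1)^2 vicinity as in (5.4)-(5.5)."""
    x, y = X.ravel().astype(float), Y.ravel().astype(float)
    ex, ey = x.mean(), y.mean()            # E(X), E(Y)
    cov = np.mean((x - ex) * (y - ey))     # cov(A, B)
    cor = cov / (x.std() * y.std())        # normalized by standard deviations
    return cov, cor
```

Identical patches give correlation 1 and inverted patches give −1, so the registration search simply looks for the position maximizing this value.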
H(Y|X) = − Σ_{y_i ∈ Ω_Y} p(Y = y_i | X) · log[p(Y = y_i | X)], (5.8)

H(Y, X) = − Σ_{y_i ∈ Ω_Y} p(Y = y_i, X) · log[p(Y = y_i, X)]. (5.9)
According to this formula we want to maximize I(X, Y); that means maximizing the entropy
of the image Y and minimizing the joint entropy of Y and X, H(X) being constant for our
purpose. Now we consider a small vicinity of the referential point, and for each pixel in this
vicinity we calculate the mutual information I_{i,j}(Y, X) to the target point, where X and Y
do not represent the whole image but only a small vicinity of the actual pixel [i, j]. Then we
choose the pixel with maximal I_{i,j}(X, Y).
A discrete version of Ii,j (X, Y ) would be:
p̂(Y = I) = p(Y = I | Y = colour[x, y], [x, y] ∈ A), (5.11)

H(A) = − Σ_{I=0}^{#colours} p(Y = I) · log[p(Y = I)], (5.12)

H(A, B) = − Σ_{I=0}^{#colours} Σ_{J=0}^{#colours} p(Y = I, X = J) · log[p(Y = I, X = J)], (5.13)
In order to maximize I(X, Y) we focus only on the term H(Y) − H(X, Y). The presumption
for using mutual information (and also covariance) as a measure is that the difference after
the transformation on the vicinity of the referential pixel is insignificant. If this assumption
is not valid, we must split the transformation into small steps for which the assumption
holds.
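A sketch of the mutual information measure computed from a joint histogram of two patches (the binning into 16 levels is our illustrative choice):

```python
import numpy as np

def mutual_information(X, Y, bins=16):
    """I(X, Y) = H(X) + H(Y) - H(X, Y), estimated from a joint histogram."""
    joint, _, _ = np.histogram2d(X.ravel(), Y.ravel(), bins=bins)
    pxy = joint / joint.sum()               # joint probabilities
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)   # marginals

    def H(p):
        p = p[p > 0]                        # 0 * log 0 = 0 by convention
        return -np.sum(p * np.log(p))

    return H(px) + H(py) - H(pxy.ravel())
```

Identical patches give a strictly positive value (the full entropy), while a constant patch carries no information about the other one and gives zero.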
This finishes the pre-processing phase of image enhancement. In the next few chapters we
propose methods for image fusion and for uncovering connections between different layers
of the wall painting. For the connections we use segment masks, which can be compared,
and fused images, which contain the most important information from the single images.
Chapter 6: Image fusion
As mentioned above, we obtain different images as input. We want to extract from these
images all the useful information that is important for the conservators' analysis: from
n input images we want to obtain a single image with all the valuable information. The most
important question is what the valuable information is. Let us have a closer look at the
following two methods: principal component analysis (PCA) and wavelet-based fusion.
The values of each pixel are normalized according to the following formula:

x̂_i[b] = (x_i[b] − X̄[b]) / √(var(X)[b]). (6.3)
After the normalization we can use Oja's iterative algorithm, which computes in each step
the value of the principal component w. In the first iteration we initialize this component
with a normalized random value, and in every following step we update it as follows:

w_{k+1} = w_k + γ_k φ(x̂_i − φ w_k), (6.4)

where φ = x̂ · w_k and γ is a learning factor. This step guarantees the normalization of the
vector w, and it converges to a local optimum if:

Σ_{k=0}^{∞} γ_k = ∞ and Σ_{k=0}^{∞} γ_k² = C.

To reach the global optimum it is necessary to use an extra learning method such as
simulated annealing. The details can be found in [11].
Assuming that Oja's algorithm converges, it is necessary to project the n-dimensional
space generated by the input images onto the (n − 1)-dimensional space orthogonal to our
vector w. Then we can compute the next component for the projected x̂_i.
The recursive use of this method generates up to n orthogonal vectors of descending
importance. We can simply use the first three components, which we interpret as RGB or
HSV colours. These colours are synthetic, as is the interpretation of the first component in
grayscale; the output image gives us only the information about the position of interesting
objects. One example of the transformation can be seen in Figure 6.1, and each of the
principal components in grayscale in Figure 6.2.
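Oja's update rule (6.4) for the first principal component can be sketched as follows (a fixed small learning factor γ is used instead of a decreasing schedule, which is a simplification):

```python
import numpy as np

def oja_first_component(data, gamma=0.02, epochs=100, seed=0):
    """Estimate the first principal component of zero-mean data (rows = samples)
    with Oja's rule w <- w + gamma * phi * (x - phi * w), where phi = x . w."""
    rng = np.random.default_rng(seed)
    w = rng.normal(size=data.shape[1])
    w /= np.linalg.norm(w)                  # normalized random initialization
    for _ in range(epochs):
        for x in data:
            phi = x @ w
            w = w + gamma * phi * (x - phi * w)
    return w / np.linalg.norm(w)
```

On strongly correlated two-dimensional data the estimate aligns (up to sign) with the dominant direction of the data.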
Chapter 7: Segmentation
The last part of our work is segmentation, which constitutes the splitting of the real image Ω
into regions R_i containing pixels of similar characteristics. These regions satisfy two
conditions:

⋃_{i=0}^{n} R_i = Ω and R_i ∩ R_j = ∅ for i ≠ j,
however, the splitting criteria can vary greatly. For our purposes it is advisable to obtain
segments which originally had the same colour or texture. We also want to mark regions
where information has been lost (scratches, lacunas). The comparison of segment masks in
different layers of the image is also desirable to a great extent; specifically, we want to
compare the segment masks of the images in the visible spectrum and in the UV spectrum.
There are three main ways to segment images. A global view is represented by the K-means
algorithm, which compares the information at every point of the whole image. Then there
are the top-down and the bottom-up approaches, where the segments are split according to
their inconsistency, or where segments with local similarity among neighbours are joined
together, respectively.
where x_j is a pixel value and x_j ∈ C_i means that the nearest colour mean to x_j is C̄_i.
This minimization is even more complicated when we use three colour bands, e.g. RGB.
The first possibility is to take C̄_i as a three-dimensional colour and minimize the term (7.1)
over all three colour bands at once. In this case we obtain C̄_i without chromaticity, close
to some level of gray, especially for images with more than one colour. The other way,
splitting the quantisation per band, is more sensitive to chromaticity, but we take the risk
of splitting the image into too many segments; more precisely, into up to K × K × K
different segments (we obtain K independent segments for each band). For the minimisation
of the term (7.1) we use an iterative algorithm. At first we randomly set the value
C̄_i = colour[x, y], where x and y are chosen randomly. In each loop of this iterative
algorithm we proceed according to the following scheme:
As said above, we can use different means for each colour band or universal ones for all
colour bands together. The effect of both possibilities is shown in Figure 7.1.
Another variant of the K-means algorithm adds the pixel coordinates to the colour means;
with this information the algorithm produces more continuous segments.
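The iterative K-means colour quantization described above can be sketched as follows (the three-dimensional-mean variant, with function names of our own):

```python
import numpy as np

def kmeans_quantize(pixels, K, iters=20, seed=0):
    """K-means colour quantization. pixels is an (N, 3) array of RGB values;
    the means are three-dimensional colours (the first variant in the text)."""
    rng = np.random.default_rng(seed)
    # random initialization: K distinct pixels of the image
    means = pixels[rng.choice(len(pixels), K, replace=False)].astype(float)
    for _ in range(iters):
        # assign every pixel to its nearest colour mean
        d = np.linalg.norm(pixels[:, None, :] - means[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # recompute each mean from its assigned pixels
        for k in range(K):
            if np.any(labels == k):
                means[k] = pixels[labels == k].mean(axis=0)
    return means, labels
```

On an image with two well-separated colours the means converge to the two cluster colours exactly.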
we split into segments using any peak in its histogram. The splitting algorithm works in a
loop:
4. Compute the left and right thresholds of each peak and choose the best peak in
the histogram.
As shown in Figure 3.3, the colour space in which we build the histogram is very important
here. Every operation, such as smoothing (Section 4.1) or histogram equalization
(Section 4.3), changes the histogram. In general, contrast equalization adds a lot of peaks
to the histogram, so we obtain an oversegmented image. On the contrary, smoothing
reduces the number of peaks, so small peaks vanish in spite of their crucial importance for
deeper hierarchical splitting. Therefore the use of these enhancement methods before the
OPR segmentator is not easy and needs some experience.
The peak selection is based on multispectral searching. We prepare several histograms, in
the red, green, and blue bands and also in the hue, saturation, and value bands. We can
also add another colour histogram, such as uvY, and thus improve the effect of the peak
selection. At first we smooth the histograms in order to reduce the effect of noise. A peak
candidate is defined as a point in the histogram whose previous and next values are smaller
than the candidate value and for which:
1. The difference of the histogram values between the peak candidate and its
nearest local minima is sufficient.
2. The ratio of the peak candidate value and its nearest local minima value
is sufficient.
Figure 7.2: Histogram split by the peak value and its left and right borders
one as a background. The entropy is computed as in equation (5.12); a split histogram can
be seen in Figure 7.2.
Finally, we have to show how the borders of the best peak are selected. The first condition
is a local minimum; the other conditions are similar to the peak selection:
1. The difference between the histogram value of the local minimum and the next
local maximum is sufficient.
2. The ratio between the local minimum value and the nearest local maximum
value is sufficient.
3. The distance between the nearest peak candidate and our candidate is
sufficient.
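The peak-candidate conditions can be sketched as follows (the min_diff and min_ratio thresholds are illustrative values of our own, not the ones used in the thesis):

```python
import numpy as np

def find_peak_candidates(hist, min_diff=50, min_ratio=1.5):
    """Peak candidates: local maxima whose height difference and height ratio
    to the nearest local minima are sufficient."""
    peaks = []
    for i in range(1, len(hist) - 1):
        if hist[i] > hist[i - 1] and hist[i] > hist[i + 1]:
            # walk left and right to the nearest local minima
            l = i
            while l > 0 and hist[l - 1] <= hist[l]:
                l -= 1
            r = i
            while r < len(hist) - 1 and hist[r + 1] <= hist[r]:
                r += 1
            valley = max(hist[l], hist[r])
            if hist[i] - valley >= min_diff and hist[i] >= min_ratio * max(valley, 1):
                peaks.append(i)
    return peaks
```

On a smooth bimodal histogram the function returns exactly the two mode positions.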
Figure 7.3: The original image and the image segmented using the Ohlander–Price–Reddy
algorithm. In the segmented image a problem of the segmentation can be seen in the parts
with different brightness. The segmented image consists of 107,000 segments.
The proposed measure of similarity is based on the colour mean difference, the variance of
the colours in the segments, the size of the segments, and the size ratio of the joined
segments.
The mean colour of each segment is computed as an average in each band by equation (6.1).
The mean difference is computed simply as:

A = |X̄ − Ȳ|. (7.3)

The variance is computed by equation (6.2). The aim is to minimize the variance in the
segments after joining. The exact computation of the variance for each pair of candidates is
very time consuming; therefore we avoid this expensive enumeration and, instead of the
variance var(X), we compare the standard deviation σ_X and the colour mean distance A.
The standard deviation is given by:

σ_X = √var(X). (7.4)

R_X = ⟨↓R_X, ↑R_X⟩ = ⟨X̄ − σ_X, X̄ + σ_X⟩. (7.5)
All these measures A, ..., D are joined into one, where the results of A, ..., D represent
votes for the best candidate. We compare all neighbouring segments and take the best result
according to the combined measure.
The effect of the segment reduction is shown in Figure 7.4.
7.4 Morphology
After a pure segmentation the segments are often jagged and aliased, and they need not be
in harmony with the real borders in the image. For the reduction of small segments on the
borders and for border smoothing we can use morphology algorithms. The two basic
algorithms are segment opening and segment closing. Both of these methods are composed
of segment dilation and segment erosion, applied in different orders.
The erosion of the segment S by the structuring element E is defined by:
For our purpose we take a circle as the structuring element and cut from the segment a
border of width equal to the radius of the circle.
The dilation of S by the structuring element E is defined by:
As in the erosion, we take a circle as the structuring element and add a border of width
equal to the radius of the circle.
By a combination of these two operators we obtain the segment closing:

A • B = (A ⊕ B) ⊖ B, (7.9)
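A sketch of these morphological operators on a boolean segment mask with a circular structuring element (erosion is implemented as the complement of the dilation of the complement, which ignores out-of-image pixels at the border; the function names are ours):

```python
import numpy as np

def dilate(seg, r):
    """Dilation of a binary segment mask by a circular structuring element."""
    H, W = seg.shape
    out = np.zeros_like(seg)
    yy, xx = np.mgrid[-r:r + 1, -r:r + 1]
    disk = (yy**2 + xx**2 <= r * r)        # circular structuring element
    for y, x in zip(*np.nonzero(seg)):
        y0, x0 = max(y - r, 0), max(x - r, 0)
        y1, x1 = min(y + r + 1, H), min(x + r + 1, W)
        out[y0:y1, x0:x1] |= disk[y0 - y + r:y1 - y + r, x0 - x + r:x1 - x + r]
    return out

def erode(seg, r):
    """Erosion: complement of the dilation of the complement."""
    return ~dilate(~seg, r)

def close_segment(seg, r=1):
    """Segment closing A . B = (A (+) B) (-) B: fills small holes in the segment."""
    return erode(dilate(seg, r), r)
```

Closing a mask with a single-pixel hole fills the hole while leaving the rest of the segment unchanged, which is the border-smoothing behaviour described above.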
Chapter 8: Alternative approach
An alternative approach which appeared during our work is the study of the degradation
processes. The wall painting is for a long time under the effect of the atmosphere, of acids
created in the plaster, or of mechanical damage. Based on contemporary images and a good
degradation model we could reconstruct the probable look of the artwork many years ago.
A complete model of the degradation process is not easy to create, due to unpredictable
effects like changes in humidity, possible previous restorations, or information loss (a hole
in the plaster etc.). We have to simplify our model, but on the other hand the model has to
reflect most of the important influences. Here we present a model of the water diffusion
effect.
I(x, y, τ, p) = Σ_{i=0}^{M×N} p_{i,p}(x, y, τ) · I(x_i, y_i, 0, p), (8.1)

where p_{i,p}(x, y, τ) represents a distribution function and Σ represents the additive joining
of colours. We can try to compose this distribution function from a Gaussian diffusion and
a shifting vector which represents gravitation or another preferred direction of humidity
propagation.
At first we suppose that the granularity of the plaster is constant at every point of the image,
and therefore the diffusion is a homogeneous Gaussian. For the Gaussian distribution we
obtain the equation:

p_{i,p}(x, y, τ) = 1/√(2πσ²) · exp(−((x_i − x)² + (y_i − y)²) / (2σ²)), (8.2)

where [x_i, y_i] is the position of our pixel in the image and the variance σ² depends on τ
and on the type of the pigment p, i.e. σ = f(τ, p). This equation is valid for both the
continuous image and the discrete digital image, differing only in the domain of i.
Figure 8.1: Weight distribution between the shift function and the diffusion function
The shifting vector can be implemented as follows. We suppose that the distance of a
pigment from its original position is given by the type of the pigment p and by the
degradation time τ. The length of the vector [x_i − x, y_i − y] used in p_{i,p}(x, y, τ) is then
divided between the shifting vector and the original direction in the ratio of the water
viscosity in the plaster and the gravitation. This ratio is represented by a coefficient
α ∈ ⟨0, 1⟩. Finally, we obtain:

p̂_{i,p}(x, y, τ) = 1/√(2πσ²) · exp(−((x̂_i − x)² + (ŷ_i − y)²) / (2σ²)), (8.3)

where [x̂_i, ŷ_i] = (1 − α)[x, y − F[y]] + α[x_i, y_i]. The effect of the vector splitting can be
seen in Figure 8.1; an example is shown in Figure 8.2.
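The shifted Gaussian weights of equation (8.3) can be sketched on a discrete (2R+1)×(2R+1) grid as follows (the preferred-direction shift is passed directly as a vector, and the normalization to unit sum is our addition so that the colour mass is preserved):

```python
import numpy as np

def diffusion_weights(R, sigma, alpha, shift):
    """Weights p_i(x, y) on a grid around the target pixel: a Gaussian whose
    centre is displaced by (1 - alpha) * shift, the preferred direction F.
    shift = (dx, dy); alpha = 1 gives pure, centred diffusion."""
    yy, xx = np.mgrid[-R:R + 1, -R:R + 1].astype(float)
    cx, cy = (1.0 - alpha) * shift[0], (1.0 - alpha) * shift[1]
    w = np.exp(-((xx - cx)**2 + (yy - cy)**2) / (2.0 * sigma**2))
    return w / w.sum()      # normalize so the colour mass is preserved
```

With α = 1 the mass stays centred on the pixel; with α = 0 the whole distribution is displaced by the preferred-direction vector, modelling humidity transported by gravitation.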
In spite of the first impression, our model is not as robust as we would like. The parameters
which we include are the preferred direction F, the period of degradation τ (with an
unspecified transformation function to the variance σ²), and the effect α of the preferred
direction. On the other hand, we suppose that there is no colour change and that the process
of diffusion is homogeneous. Neither of these presumptions is realistic, but they are
necessary for the sake of simplification. The colour change is a very common phenomenon
that we cannot handle without special knowledge about the dyes. The diffusion can be
homogeneous only when special conditions, like constant humidity of the atmosphere, hold.
If these conditions are not guaranteed, all of our parameters can vary in time and strongly
affect the diffusion function.
Chapter 9: Results
In this chapter we present the results of our approach to virtual restoration. Each image
contains a description of the used algorithms, and the order of the images follows the
chronology of the processing. All presented results were made on the original images
(see Figure 9.1).
Figure 9.2: Contrast enhancement of input data with region size of 200 × 200 pixels.
Figure 9.6: The fusion of all input images; (b) shows the enhanced version.
We can see the lacunas and the collar of the person from the UV spectrum and the shape from the
visible spectrum. Some details are visible in the lacunas and also in the improved look of the wrist.
Chapter 10: Conclusion
Our work presents an approach to virtual restoration. Our exploration shows useful image
processing algorithms which can improve the quality of the artwork's reproduction and
uncover its possibly hidden parts. During our work we struggled with unspecified parts of
the task, which were uncovered subsequently and often changed the direction of our work;
similarly, the conservators gradually discovered the possibilities of virtual processing. The
possibility of a simulation of the degradation process appeared only a short time before the
finalization of the thesis, and the input data were also continuously extended. For these
reasons there remains a great motivation for future work.
As our contribution we consider
[1] Rafael C. Gonzalez and Richard Eugene Woods. Digital image processing. Prentice Hall,
3rd edition, 2007. 954 pages.
[2] D. L. Donoho and I. M. Johnstone. Ideal spatial adaptation by wavelet shrinkage.
Biometrika, 81:425–455, 1994.
[3] Michael Elad and Michal Aharon. Image denoising via sparse and redundant represen-
tations over learned dictionaries. IEEE Transactions on Image Processing, 15(12):3736–
3745, December 2006.
[4] Stephen M. Pizer, E. Philip Amburn, John D. Austin, Robert Cromartie, Ari Geselowitz,
Trey Greer, Bart Ter Haar Romeny, and John B. Zimmerman. Adaptive histogram
equalization and its variations. Comput. Vision Graph. Image Process., pages 355–368,
1987.
[5] Soo-Chang Pei, Yi-Chong Zeng, and Ching-Hua Chang. Virtual restoration of ancient
Chinese paintings using color contrast enhancement and lacuna texture synthesis.
IEEE Transactions on Image Processing, 13(3):416–429, March 2004.
[6] L. Lucchese, S. K. Mitra, and J. Mukherjee. A new algorithm based on saturation and
desaturation in the xy chromaticity diagram for enhancement and re–rendition of color
images. In Proc. Int. Conf. Image Processing (ICIP 2001), volume 2, pages 1077–1080,
September 2001.
[7] D. Marr and E. Hildreth. Theory of edge detection. In Proceedings of the Royal Society
of London. Series B, pages 187–217, February 1980.
[8] Barbara Zitová and Jan Flusser. Image registration methods: a survey. Image and
Vision Computing, 21:977 – 1000, October 2003.
[10] Barbara Zitová, Filip Šroubek, and Jan Flusser. An application of image processing in
the medieval mosaic conservation. Pattern Analysis and Applications, 7:18–25, February
2004.
[12] Ingrid Daubechies. Ten lectures on wavelets. SIAM, 3rd edition, 1992. 357 pages.
[13] Hui Li, B. S. Manjunath, and Sanjit K. Mitra. Multisensor image fusion using the wavelet
transform. CVGIP: Graphical Model and Image Processing, 57(3):235–245, 1995.
[14] Ron Ohlander, Keith Price, and Raj Reddy. Picture segmentation using a recursive
region splitting method. In Computer Graphics and Image Processing, pages 313 – 333,
1977.
Chapter A: Appendix
• Figures used in this thesis in encapsulated PostScript format (in the directory figures)