software (Appendix 1) and require very little commissioning. A common place for an IDC, as part of a QA program consisting of various diversified checks, is during the first week of treatment, preferably before the first treatment, and whenever a treatment plan has been modified. The resulting data should be stored in databases for further statistical analysis.
3. DOSIMETRIC TOLERANCE LIMITS AND ACTION LIMITS
For a given treatment unit with a specific treatment beam setting, including collimator settings, monitor units, etc., and a specific irradiated object, there exists a true dose value for each point within the object. The true dose cannot be determined exactly but can be estimated through measurements or calculations. The algorithms in modern treatment planning systems should be able to reach an accuracy of 2-4% (1 SD) in the monitor unit calculations (Venselaar and Welleweerd, 2001). A more recent review of the total uncertainties in IMRT (Mijnheer and Georg, 2008) indicates somewhat narrower uncertainty distributions. However, this range of uncertainties is probably adequate for most radiotherapy applications today. There is, however, always a risk of an algorithm failure caused by a bug or a user mistake. To avoid mistreatments the true dose should be estimated a second time through an independent calculation. The goal of the comparison between the primary and the verification calculations is to judge the reliability of the primary calculation. If the deviation between the two calculations is too large it is necessary to perform a third estimation of the true dose, for example through a measurement, before it can be considered safe to start the treatment. The intention of this chapter is to propose a procedure and to quantify what is meant by too large a deviation.
The actual purpose of this type of theoretical analysis of dose deviations, and of the establishment of action limits, should always be considered carefully when designing the models. In the current model we have decided to omit the predictive effect of the TPS on the resulting uncertainty distribution. Under conditions where everything behaves correctly this approach will somewhat overestimate the total uncertainty, but in the normal case, when everything performs correctly, a high-accuracy IDC will indicate results well within the action limits anyway, and the small systematic errors that may pass are better analysed in a more sensitive database model (chapter 4). The focus of the action limit concept is instead to raise an alarm when the TPS, or some other error source, fails in an unpredictable way. In these cases the error sources, including the TPS, should be regarded as random and no predictive effect of the TPS should be applied to the uncertainty distribution in the models.
In a somewhat idealised scenario the verification procedure will be as follows: together with the dose prescription of the oncologist to the individual patient, both an upper and a lower tolerance limit are prescribed. A treatment plan is prepared according to this prescription and the dose is verified by an independent method. This independent dose verification is assumed to be performed with an uncertainty that is known or possible to estimate. Thus, a probability distribution for the true dose is defined by the IDC and the uncertainty in the IDC.
When the probability distribution for the true dose is known it is possible to express the action limit in terms of a confidence level for the tolerance limits. If the uncertainty is assumed to follow a normal distribution it is not reasonable to set the confidence level to 100%. To keep the right balance between risk and workload, the procedures must include an accepted risk level for doses delivered outside the specified tolerance limits.
Figure 3.1 Illustration of parameters related to an IDC procedure. It is important to realise that an IDC is associated with an uncertainty distribution and a confidence interval C. The Gaussian curve represents the assumed probability density function of the true dose to the target.
The prescribed dose D_P is identical to the dose specification in the TPS and is the prescribed dose to be delivered to the patient.

The true dose D_T is the true value of the delivered dose.

The IDC dose D_IDC is the dose value obtained by the independent dose calculation. The beam parameters and monitor unit settings as calculated by the treatment planning system are used as input parameters in the independent dose calculation.

The true dose deviation ΔD_T is defined as the difference D_P − D_T. The normalised true dose deviation δ_T is defined as ΔD_T normalised to a reference dose, e.g. D_P for verification points in the tumour volume.

The observed dose deviation ΔD is defined as the difference between the prescribed dose D_P and the dose obtained by the independent dose calculation system D_IDC. The normalised observed dose deviation δ is the normalised difference

    δ = (D_P − D_IDC) / D_IDC    (3.1)

The dose calculation uncertainty σ is here defined as the estimated one standard deviation of the D_IDC estimation of the true dose D_T.
The dosimetric confidence interval C is defined through the confidence level CL = (1 − α), where α is the probability of finding a value outside the confidence interval in a normally distributed dataset. The one-sided deviation (α/2) is defined for applications where only one tail of the statistical distribution is of interest. Typical values are CL = 95%, giving α = 5% and α/2 = 2.5%.
The dosimetric tolerance limits TL⁻ and TL⁺ are defined as the lower and upper maximum true dose deviations from the prescribed dose which could be accepted based on the treatment objectives, treatment design and other patient-specific parameters. When the dosimetric tolerance limits are applied as offsets from the prescribed dose and TL⁺ = −TL⁻, the symbol TL is used.

The dosimetric action limits AL⁻ and AL⁺ are defined as the corresponding limits for the observed dose deviation, derived from the dosimetric tolerance limits and the confidence interval C for the true dose. When the action limits are applied as offsets from the prescribed dose and AL⁺ = −AL⁻, the symbol AL is used.
These dose-related parameters can be given either in absolute terms (Gy) or in relative terms. In general presentations of deviations, tolerance limits and action limits the normalised relative dose concept is often preferred. Patient-specific data as applied in the clinic may be of either type or a combination. However, special care is required when transferring parameters from relative to absolute, or when transforming parameters from one relative reference system to another.
3.1 DETERMINATION OF DOSIMETRIC TOLERANCE LIMITS
The dosage criteria used in radiotherapy should ideally be based on population data describing the probability of cure and the complication rate in a patient cohort, and the biological parameters describing these effects should be determined by statistical methods. The prescribed dose D_P and the dosimetric tolerance limits (TL⁻, TL⁺) will thus be based on these distributions of clinical data, see figure 3.2.

In the statistical analysis the tolerance limits are defined as limits within which we expect to find a stated proportion of the population. In this special case the upper tolerance limit represents the risk of unacceptable complications and the lower tolerance limit represents the risk of too low a tumour effect. These tolerance limits are based on probabilistic measures and should thus be treated as stochastic variables. This will in principle have an impact on the determination and interpretation of the action limits for the observed dose deviation. This is however outside the scope of this work and will not be discussed further.
Figure 3.2 Illustration of the procedure to obtain dosimetric tolerance limits from TCP and NTCP data.

3.2 DETERMINATION OF ACTION LIMITS

When the dosimetric tolerance limits and the confidence interval for the IDC are known, the action limits for the observed dose deviation can be derived as

    AL± = TL± ∓ C/2    (3.2)
where C describes the uncertainty of the IDC. Figure 3.3 illustrates the relation between the IDC uncertainty and the proper action limits. The dosimetric tolerance limits are in all cases set to TL = ±8% and the confidence level is set to 95% (α/2 = 2.5%). Figure 3.3a illustrates a case where the standard deviation of the IDC is 1% (σ = 1%). The 95% confidence interval for the true dose around the IDC calculation will in this case be ±2% (C/2 = 2%). According to equation 3.2 the resulting action limits will in this case be ±6% (AL = ±6%), as illustrated in the figure. Figure 3.3b illustrates a more realistic case with an IDC uncertainty σ = 2%, resulting in action limits AL = ±4%. A rather unrealistic case with no assumed uncertainty (σ = 0 and C = 0) is illustrated in figure 3.3c, which puts the action limits equal to the dosimetric tolerance limits.

Figure 3.3 Probability density of the true dose for IDC uncertainties σ = 1% (a), σ = 2% (b) and σ = 0% (c), with the dosimetric tolerance limits TL± and the resulting action limits AL± indicated.
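To make the arithmetic concrete, the following minimal sketch evaluates equation 3.2 for the symmetric cases of figure 3.3. It assumes a normally distributed IDC uncertainty and uses the standard normal quantile to convert the confidence level into the half-interval C/2; the function name and defaults are illustrative only.

```python
from scipy.stats import norm

def action_limit(tl, sigma, cl=0.95):
    """Equation 3.2: AL = TL - C/2, with C/2 = z(1 - alpha/2) * sigma."""
    half_c = norm.ppf(1.0 - (1.0 - cl) / 2.0) * sigma  # 1.96*sigma for CL = 95%
    return tl - half_c

# The three cases of figure 3.3 (TL = 8%, CL = 95%):
for sigma in (1.0, 2.0, 0.0):
    print(f"sigma = {sigma}%  ->  AL = {action_limit(8.0, sigma):.1f}%")
# sigma = 1% -> AL = 6.0%, sigma = 2% -> AL = 4.1%, sigma = 0% -> AL = 8.0%
```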
Assuming a normally distributed uncertainty in the IDC, the risk of exceeding the dosimetric tolerance limits at different observed dose deviations can be calculated. Figure 3.4 illustrates a case with the dosimetric tolerance limit TL set to ±6%. Figures 3.4a and 3.4b illustrate the probability distribution at different observed dose deviations, with an IDC uncertainty of σ = 1% in panel a) and σ = 3% in panel b). Figure 3.4c shows the risk of a true dose outside the dosimetric tolerance limits as a function of the observed dose deviation, for IDC uncertainties of σ = 1, 2 and 3%. In this example it is obvious that the IDC uncertainty is of crucial importance and that the achievable action limit is critically dependent on the accuracy of the IDC. If the standard deviation of the dose verification is larger than 3% a dosimetric tolerance limit of ±6% cannot be achieved with a 95% confidence level (α = 5%), even when no dose deviation is detected by the IDC!
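The risk curves of figure 3.4c follow directly from this normal model: treating the true dose deviation as N(δ, σ²), the risk is the probability mass outside ±TL. A minimal sketch, with function name and test values chosen for illustration only:

```python
from scipy.stats import norm

def risk_outside_tl(delta, sigma, tl=6.0):
    """P(true deviation outside [-TL, +TL]) when the true deviation ~ N(delta, sigma^2)."""
    return norm.cdf(-tl, loc=delta, scale=sigma) + norm.sf(tl, loc=delta, scale=sigma)

print(risk_outside_tl(delta=0.0, sigma=1.0))  # negligible risk for an accurate IDC
print(risk_outside_tl(delta=0.0, sigma=3.5))  # ~0.09 > 0.05: the 95% level is out of reach
print(risk_outside_tl(delta=4.0, sigma=1.0))  # risk grows as delta approaches the action limit
```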
As seen in figure 3.4c the risk of a true dose outside the dosimetric tolerance limit will increase with increasing dose deviations (δ) and will reach the clinically accepted limit when δ approaches the action limits. It is important to realize that setting α/2 = 2.5% does not mean that 2.5% of the patient cohort will receive doses outside the dosimetric tolerance limits. The correct interpretation with this approach is that 2.5% of the patients with observed deviations equal to the action limit will actually receive a dose outside the dosimetric tolerance limits. This interpretation can also be formulated as: the probability of identifying cases where the true dose is inside the dosimetric tolerance limits is always larger than or equal to 1 − α.
The task group will not suggest any specific method to define the confidence level for clinical procedures, but we strongly recommend that clinics analyse the procedure in use. Such analyses will reveal weak points in the QA system or unrealistic assumptions about the dosimetric tolerance limits used in the clinic.
Figure 3.4 Illustration of the probability of patients exceeding the assumed tolerance limits of ±6% as a function of the observed dose deviation, δ. Panels a) and b) illustrate the IDC uncertainty when the observed dose deviation is 0 and ±6%. The IDC uncertainty is set to σ = 1% in panel a) and to σ = 3% in panel b). Panel c) illustrates the probability of exceeding the tolerance limits as a function of the observed dose deviation, for IDC uncertainties σ = 1, 2 and 3%.
The prescribed dose to the tumour and the tolerance limits are in general applied in the high dose region, but in a more detailed analysis of the treatment plan there is a need for methods that can be applied also in the low dose regions as well as in gradient regions. For the surrounding normal tissue separate dose criteria may be used, often represented by only the upper tolerance limit applied to some equivalent uniform dose quantity dependent on the characteristics of the tissue.
Historically the low dose regions have been regarded as less significant and have in general not been simulated to the same level of accuracy in the treatment modelling. This may be of limited clinical relevance for the target dose, but in the modelling of side effects correct dose estimations in the low dose regions may be crucial. In the current report we suggest that action limits related to target tissue should be applied to the absolute dose deviation, or to the relative dose deviation normalized to the prescribed dose. By this method the importance of deviations in the low dose regions will automatically decrease. It is further suggested that the action limits related to normal tissues should be specified as absolute dose limits. However, care must be taken to apply correct dose deviation uncertainties for the IDC in the application of these action limits.
For IMRT methods the combined uncertainty is more complex to analyse. The total uncertainty of the IDC will be a result of the combined uncertainties of the individual dosimetry points in the contributing beams. In IMRT and other more complex applications these beam combinations will include the increased uncertainties at off-axis positions, the uncertainty in high dose gradient regions, and even the dose uncertainty outside the beam. This combined uncertainty effect may be described by a purely statistical approach, where effects in gradient regions due to a number of clinical error sources may be included (Jin et al., 2005), or by a more detailed analysis of the underlying physics (Olofsson et al., 2006a; Olofsson et al., 2006b). The latter method will inevitably give more details regarding the actual IDC calculation. However, in a full prediction of the overall uncertainty other error sources, such as set-up uncertainties, will also affect the gradient regions and should thus be included. For this purpose a combination of methods may produce more realistic overall uncertainty estimations when performing dose verification in dose gradient regions.
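As a minimal sketch of such a statistical combination, assuming the individual beam contributions are approximately independent, the total relative uncertainty of a point dose can be propagated in quadrature; the numbers below are illustrative placeholders:

```python
import math

def combined_relative_uncertainty(doses, rel_sigmas):
    """sigma_tot/D_tot for a sum of independent beam doses D_i with relative
    uncertainties sigma_i: sigma_tot^2 = sum((D_i * sigma_i)^2)."""
    total = sum(doses)
    variance = sum((d * s) ** 2 for d, s in zip(doses, rel_sigmas))
    return math.sqrt(variance) / total

# Three IMRT segments; off-axis and gradient-region segments assumed less accurate.
print(combined_relative_uncertainty([1.2, 0.6, 0.2], [0.015, 0.03, 0.08]))  # ~0.015
```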
3.3 APPLICATION OF THE ACTION LIMIT CONCEPT IN THE CLINIC
When utilizing the action limit concept in clinical practice, as illustrated in figures 3.2 and 3.3, the relationship between the action limit AL, the corresponding α-value, and the estimated one standard deviation uncertainty, σ, of the independently calculated dose by the IDC is not trivial. For illustration, the relations between these parameters have been plotted in figure 3.5 for different assumed α-values and observed dose deviations. When the parameters TL and α have been selected, and the σ of the independent dose verification procedure is known, the action limits AL can be determined from the figure.

Figure 3.5 The action limit AL normalised to the estimated standard deviation σ_tot for the independent dose calculation, IDC. The horizontal axis represents α. The six curves represent varying relations between the dosimetric tolerance limit TL and σ_tot.
The use of figure 3.5 may be illustrated by some numerical examples. If TL = 3.6%, α = 5%, and the standard deviation σ_tot for the IDC calculation is estimated to 1.2% (i.e. TL/σ_tot = 3), then AL/σ_tot ≈ 1.4, which gives the proper action limit AL ≈ 1.7%. If the IDC uncertainty is instead estimated to σ_tot = 2.4% (i.e. TL/σ_tot = 1.5), no positive action limit can be obtained (provided that TL and α remain unchanged). Consequently, the estimated risk for the true dose being outside the prescribed tolerance interval TL will always be larger than the established probability, α = 5%, even when no deviation is found by the independent dose calculation, i.e. δ = 0. The conclusion of this exercise is that the accuracy of the independent dose calculation is of crucial importance when applying a strict action limit concept.
When the data for TL, α, and σ_tot are known the action limits, AL, can be directly calculated by equation 3.2 or interpolated from figure 3.5. Selecting the commonly used 95% confidence level, α/2 = 0.025, implies a confidence interval of ±1.96 σ_tot. For the evaluation of treatment planning systems Venselaar et al (Venselaar and Welleweerd, 2001) suggested the use of a confidence interval of ±1.5 σ_tot, which corresponds to a one-sided α/2 ≈ 0.067. This more relaxed choice of confidence interval may be practical for many clinical applications. For a fixed value of α/2 ≈ 6.7% the action limits can be directly determined from equation 3.3:

    AL = TL − (2 × 1.5 σ_tot)/2 = TL − 1.5 σ_tot    (3.3)
If the IDC calculation utilizes a simple phantom geometry that is very different from the anatomy of the patient, the total uncertainty of the IDC will increase. However, there may be a significant systematic component in these deviations that should be recognized as such. Georg et al (Georg et al., 2007a) illustrate deviations both when radiological depths were applied and when they were not. Application of radiological depth corrections significantly reduced the observed dose deviations. The resulting IDC uncertainty must then include the uncertainty in the radiological depth correction, but the resulting total uncertainty is approximately of a random nature and, if known, could be used to determine proper action limits.
In general, complicated treatments should not have larger action limits than conventional treatments. A strict application of the action limit concept is a reasonable way of handling deviations from a patient perspective. However, from a more practical clinical perspective this may in some special cases not be realistic. The final decision to clinically apply a treatment plan, in spite of the fact that an action limit has been exceeded, must be thoroughly discussed and documented for every treatment. Under special circumstances, when the source of a deviation cannot be found with available resources, the overall patient need must be weighed against the risk of a true dose deviation larger than the tolerance limits. In these cases the action limit can no longer be considered as restrictive. Since the clinical relevance of a parameter can differ considerably from one treatment to another, it is impossible to implement action limits as a mandatory requirement. When these treatments are correctly documented, the size and frequency of dose deviations larger than the action limit should be stored in a database and used as a quality parameter in the clinic, to be considered in the planning of QA resources at the clinic.
The concept of uncertainty applied in this booklet is based on the ISO Guide to the Expression of Uncertainty in Measurement, GUM 1995, revised version (ISO/GUM, 2008). For further reading and application of GUM, see e.g. www.bipm.org or www.nist.gov.
4. STATISTICAL ANALYSIS
It is strongly recommended to combine an IDC tool with a database for retrospective analysis of deviations and, when found, their causes. Systematic differences in dose calculations can originate from inherent properties of the algorithms, or from errors/uncertainties in the commissioning data for the calculation systems (figure 2.1). Small systematic errors may not affect the treatment of individual patients to a significant extent; nevertheless they are of great importance for the overall quality of radiotherapy, for the evaluation of clinical studies, and for any comparison between departments. Systematic errors of larger magnitude could affect the treatment also for individual patients; an example is the use of pencil kernel based algorithms in lung (Knöös et al., 2001).
An arrangement with one global database for storage of data from several clinics, and one local database at each clinic (figure 4.1), provides a basis for the analysis of data for the individual clinics without compromising the high demands on integrity and protection of patient data. The confidentiality aspects of a database solution are further discussed in section 4.5.
Figure 4.1 Illustration of the overall architecture for the local/global database design. Information is gathered and stored in local databases which are synchronized with a global database. The idea is to make it possible for the individual clinics to compare their own data with the rest of the community without making it possible for anyone outside the clinic to connect specific data to a specific clinic or patient.
The basic concept behind the database solution proposed here is that all relevant information related to the commissioning of the system and generated during the verification process should be transferred to both a local statistical database and a global database. The local database will only contain data generated locally in a department, while the global database contains data from the users of all applications interfacing with the global database server. Through such a solution the users can compare results obtained locally with the results of the community.
The clinical value of a global database is strongly related to the amount of stored information and the quality of the information (see section 4.3). A global database should be equipped with a standardized interface towards the outside to enable different vendors and applications to take advantage of such a solution. This is a natural step in the globalization and standardization of healthcare data storage without compromising the integrity of patients or hospitals. During the current ESTRO task project a global database for IDC data has been made available, see appendix 1 for further details.
4.1 DATABASE APPLICATION FOR COMMISSIONING DATA
A fundamental part of any TPS or IDC system is a beam model describing the dose distribution in the patient (cf. chapter 5). This beam model is often optimized against a set of commissioning measurements (cf. chapter 6). Calculations based on the beam model can never be more accurate than these commissioning measurements. It is therefore of great importance that the commissioning of the TPS and IDC tools is performed with great care, and that the commissioning data are checked for irregularities before they are applied in the beam model.

Provision of generic commissioning data is one method of helping the users of a system to avoid errors. This is however somewhat problematic, as each treatment unit has individual characteristics, and the use of generic commissioning data for TPSs is in principle not allowed in the clinic according to IEC 62083 (IEC, 2009). Another method of user guidance is to provide expected intervals for the different commissioning quantities. This kind of guidance is required according to IEC 62083, in the sense of maximum and minimum values for the physical quantities. This is however a rough method, as the extremes of the allowed intervals are by definition highly unlikely, and commissioning data close to the limits are most probably a result of errors. The vendors of TPS and IDC tools today have problems providing the users with adequate guidance on the expected values for the commissioning.
A global database of commissioning data provides more adequate guidance for the user in terms of expected intervals for the different commissioning quantities. The representation of commissioning data in statistical charts can be made in numerous ways. In the two examples (figures 4.2 and 4.3) the local beam qualities are compared to the distribution of beam qualities from machines of the same type and energy collected from the global database.
Figure 4.2 Profiles measured for Siemens 6 MV beams are collected from a global database and compared to a locally measured profile. The 10th and 90th percentiles are provided to indicate a normal interval for the measured profiles.

Figure 4.3 Histogram showing the distribution of TPR20/10 for Elekta 6 MV beams. The local observation is indicated with the thin bar.
Adequate quality assurance of the integrity of the global commissioning database will be very difficult to maintain. For that reason it is not recommended to rely completely on the database for the commissioning. The database should merely be used to check the likelihood of individual measurements. The workflow utilizing the database concept is illustrated schematically in figure 4.4.
Figure 4.4 A typical workflow when using the global database as a verification tool for treatment unit characterization measurements. Prior to the optimization of the beam model a manual check of the commissioning data for the beam is performed against the data stored in the global database. If unexpected discrepancies are found, re-measurements should be considered.

[Flowchart: commissioning measurements → enter treatment unit characteristics into the application → check commissioning data against the local and centralized databases → optimize beam model.]
4.2 DATABASE APPLICATION FOR TREATMENTS
There are different options for the representation and presentation of the deviations stored in the global database. In this booklet we have chosen to exemplify the concepts with the representation illustrated in figure 4.5. The global database contains the distributions of deviations from all the clinics connected to the global system. Each of these distributions is characterized by its mean value and associated standard deviation. The set of mean deviations from the individual clinics forms a distribution which is presented together with the local mean deviation. The standard deviation is handled equivalently. This method provides a simple and intuitive representation of the data, but other representations are possible, such as visualization of the complete distribution of deviations.
Figure 4.5 Statistical presentation of observed deviations. The global database (the big circle) contains the deviation distributions from all clinics using the system (each circle in the global database represents one clinic). The deviation distributions can be represented through a mean deviation and a standard deviation, and the distributions of these quantities over all clinics are presented as histograms. The local database contains all the observed deviations from the local clinic.
The principal strategies of analysis are very simple yet powerful. Data filtered (discriminated) with respect to parameters such as treatment technique, treatment region and TPS can be compared between the local and the global databases. A large difference between the data for the individual clinic and the global community should be taken as a trigger to perform further investigations.
Figure 4.6 The mean deviations for the clinics in the global database are presented as a histogram and the mean deviation for the local clinic is represented through the green bar. The conclusion from this specific plot is that the clinic has a higher mean deviation than the average clinic, without being extreme.
Figure 4.7 The standard deviations of the observed discrepancies for all individual clinics can be collected from the global database and presented in a histogram. The standard deviation of the deviations at the individual clinic is given by the green bar. The conclusion in this case is that the intra-patient variation at the clinic is on the lower side of the global distribution.
The data selection/discrimination is an essential part of the analysis. The overall distribution includes samples from sub-distributions, as illustrated in figure 4.8. A discrepancy between the results at an individual clinic and the rest of the community does not automatically mean that something is wrong at the individual clinic. It could be caused by differences in verification routines or differences in treatment technique and equipment. For instance, a clinic with a focus on stereotactic treatments of metastases in the brain should expect to get different deviation patterns than clinics that mainly perform whole brain treatments. The analysis of the deviation pattern therefore needs to be performed by qualified personnel able to interpret the results and refine the comparison until relevant conclusions can be made. An example of such a procedure is provided in figure 4.9.
Figure 4.8 The overall distribution of deviations in the top row of the figure represents the pooled data of the sub-distributions presented at the bottom. The information that is visible in the sub-distributions at the bottom is concealed in the overall distribution at the top. It is important to use evaluation filters and visualize sub-sets of the total database information in order to use the database concept adequately.
Figure 4.9 This example of a possible analysis pathway starts with a comparison on the treatment level for treatments towards the pelvis region (A). A tendency towards lower doses than average is noticed. In order to identify the reason for this tendency a comparison for individual beams towards the pelvis region is performed (B, C). In this comparison an additional classification with respect to the gantry angle is added. (B) shows the deviations for beams entering from the anterior or posterior of the patient. The tendency towards lower doses than average cannot be seen here. (C) shows the deviations for the lateral beams. Based on this investigation, a possible reason for the initially observed low doses for the pelvis treatments could be that the clinic does not adjust for the radiological depth in the verification calculation. In the pelvic region this could typically cause an error of a few percent in the lateral beams.
Information regarding the local stability, for instance investigations of the effect of TPS upgrades and changed routines, can be obtained using a local database alone. The information in a local database can be seen as a time series of deviations, and should be visualized and analyzed as such. Time series analysis has been used in the process industries since the early 1930s for production surveillance. These methods are often referred to as statistical process control or statistical quality control.
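As a minimal sketch of such statistical process control, assuming the local deviations form a chronological series, a Shewhart-type individuals chart with the conventional ±3σ limits could look as follows; all numbers are fabricated for illustration:

```python
import statistics

def control_limits(baseline, k=3.0):
    """Center line and k-sigma control limits estimated from a baseline period."""
    mean = statistics.fmean(baseline)
    sd = statistics.stdev(baseline)
    return mean, mean - k * sd, mean + k * sd

def out_of_control(series, limits):
    """Indices of observations falling outside the control limits."""
    _, lower, upper = limits
    return [i for i, d in enumerate(series) if d < lower or d > upper]

baseline = [0.2, -0.5, 0.8, 0.1, -0.3, 0.4, -0.1, 0.6, -0.4, 0.0]  # deviations (%) before a TPS upgrade
limits = control_limits(baseline)
print(out_of_control([0.3, 2.9, -0.2], limits))  # flags index 1, the 2.9% deviation
```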
4.3 QUALITY OF THE DATABASE
The purpose of a database solution for IDC is to enable the detection of systematic dose calculation errors in individual departments. As it is assumed that systematic errors exist, they will also be represented in the database of stored deviations between calculations performed with the TPS and the IDC tool. The basic assumption is that the majority (mean) of the radiotherapy community follows the current state-of-the-art, and that the comparison between the individual department and the global database therefore is a comparison against the state-of-the-art.
The usefulness of a database solution for the detection of systematic errors in dose calculations is highly dependent on the quality of the submitted data. There are at least three identified cases of corrupt or irrelevant data against which the application needs to be protected: (1) users gaining experience with the application using non-clinical data that are accidentally pushed to the database, (2) outdated and therefore irrelevant data, and (3) selected data including an ad hoc bias. Full control and maintenance of the database would be very costly and is unnecessary if the system is prepared for the cases mentioned above.
For example, the risk of non-clinical data being pushed to the local and global databases could be reduced by setting up both a clinical and a non-clinical mode of the software, and by forcing the user to sign the data prior to the database submission. Alternative or combined methods can reduce the risk of outdated data being used inadvertently. One possibility is to use time-discrimination, where only data collected within a specified time interval are presented. Another option is to include only data for treatment units that are currently in use at the departments. This could be achieved through regular updating and synchronization of the global database against the local databases and the configurations at the individual clinics. The most difficult quality aspect to control is the risk of selected (or biased) data coming from the different users, i.e. departments excluding specific patient groups or treatment techniques for particular reasons. No general solution is suggested to avoid this. It is basically a matter of policy at the clinics, and a challenge for the developers of the systems that use database solutions to make the evaluation tools more selective.
4.4 NORMALIZATION OF DOSE DEVIATIONS
The quality of the collection of deviations between calculations performed with the TPS and the IDC in a common global database is highly dependent on the properties of the IDC tool. In order to describe the actual information contained in such a database, an in-depth analysis of the factors behind the deviations is required. In the following, the case where the patient anatomy is taken into account is used as a starting point. The discussion is then transferred to a more traditional case where the patient anatomy is not considered in the independent dose calculation.
The dose at a point calculated by the TPS can be written as a product of factors taking different effects into account:

    D_TPS = F_TPS^B.M.(A) · F_TPS^Algo(A;P)    (4.1)

where D_TPS is the dose calculated by the treatment planning system, F_TPS^B.M.(A) is a factor describing the specific beam model used, including the beam commissioning, applied with the treatment settings A (collimator settings; gantry, collimator and table angles; wedges etc.), and F_TPS^Algo(A;P) describes the algorithms in use, which depend both on the treatment settings A and on the representation of the patient stored in P.
The dose calculated by the IDC tool (D_IDC) is expressed in the same format as used in equation 4.1:

    D_IDC = F_IDC^B.M.(A) · F_IDC^Algo(A;P)    (4.2)
The relative deviation between the TPS and the IDC is defined according to

    δ = (D_TPS − D_IDC) / D_IDC = D_TPS / D_IDC − 1    (4.3)
which can be rewritten through the factors as

    δ = [F_TPS^B.M.(A) / F_IDC^B.M.(A)] · [F_TPS^Algo(A;P) / F_IDC^Algo(A;P)] − 1    (4.4)
The first factor of equation (4.4) can be considered as a normalization of the TPS dose calculation using the IDC calculation, and corresponds to the removal of the individual characteristics of the TPS dose calculation in terms of the treatment design (A) and the patient (P). This enables comparison of dose calculation results between individual patients, clinics and treatment planning systems. After such normalization, the results of equation 4.4 reflect the difference in the way the IDC and the TPS use the treatment settings and patient information to calculate the absorbed dose. If the IDC and the TPS are not completely independent the factors may cancel out, leading to a risk of undetectable errors. One example is the use of common commissioning data, which in principle leads to a cancellation of F_TPS^B.M.(A) and F_IDC^B.M.(A), and thus disables detection of errors in the beam commissioning. This illustrates the importance of complete independence of the IDC from the TPS.
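A toy calculation can make the cancellation visible; all factor values below are hypothetical. If the TPS and the IDC share the same erroneous beam model factor, the deviation of equation 4.4 collapses to zero and the commissioning error becomes invisible:

```python
def deviation(f_bm_tps, f_algo_tps, f_bm_idc, f_algo_idc):
    """Equation 4.4: delta = (F_BM_TPS/F_BM_IDC) * (F_Algo_TPS/F_Algo_IDC) - 1."""
    return (f_bm_tps / f_bm_idc) * (f_algo_tps / f_algo_idc) - 1.0

f_bm_correct, f_bm_erroneous = 1.00, 1.05  # a hypothetical 5% beam commissioning error
f_algo = 0.97                              # identical algorithm factors, for simplicity

print(deviation(f_bm_erroneous, f_algo, f_bm_correct, f_algo))    # ~0.05: error detected
print(deviation(f_bm_erroneous, f_algo, f_bm_erroneous, f_algo))  # 0.0: error cancels out
```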
The value of the scored deviation is highly dependent on the accuracy of the IDC. A poor normalization makes any comparison between different treatments difficult. If the IDC has known limitations in specific situations, these can be dealt with using selective data comparisons (data discrimination). An example, represented in equation 4.5, is if the IDC does not take the patient geometry into consideration and instead employs calculations in a water phantom:

    δ = [F_TPS^B.M.(A) / F_IDC^B.M.(A)] · [F_TPS^Algo(A;P) / F_IDC^Algo(A;W)] − 1    (4.5)

where W indicates water. In these situations it is obvious that the deviation will be highly dependent on the treatment area of the patient. Treatments in the thorax region and the pelvis region should not be compared. Treatments in the same treatment region, however, can in many cases be assumed to give similar deviations. This is one situation where a tool for data discrimination is of importance, e.g. to enable comparisons that include only pelvis treatments. Another reason for including data discrimination is its usefulness as a tool for investigations of observed discrepancies, as discussed in previous chapters.
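A sketch of such data discrimination, assuming each stored record carries treatment metadata; the field names and values below are illustrative and not taken from any actual database schema:

```python
from statistics import fmean, stdev

records = [  # fabricated local-database entries: normalised deviations in percent
    {"region": "pelvis", "technique": "3DCRT", "delta": 2.1},
    {"region": "pelvis", "technique": "IMRT",  "delta": 1.8},
    {"region": "pelvis", "technique": "3DCRT", "delta": 2.6},
    {"region": "thorax", "technique": "3DCRT", "delta": -3.5},
]

def discriminate(rows, **criteria):
    """Keep only the records matching all given metadata criteria."""
    return [r for r in rows if all(r.get(k) == v for k, v in criteria.items())]

pelvis = [r["delta"] for r in discriminate(records, region="pelvis")]
print(fmean(pelvis), stdev(pelvis))  # to be compared with the global pelvis distribution
```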
4.5 CONFIDENTIALITY AND INTEGRITY OF THE DATABASE
Confidentiality is an important issue in terms of the possibility to identify individual patients, clinics, staff members and equipment. Regulations and traditions differ among countries, and the overall purpose of using an independent dose verification tool may also differ considerably.

Patient confidentiality within the European Union is regulated in EU Directive 95/46/EC. Storage of patient data in a global database in the context of dose calculation QA is in principle prohibited by the 95/46/EC directive through Article 7, unless legislation in the individual country enforces the storage. The only obvious reason for storage of personal information is the scenario of third party supervision of clinics on an individual patient basis. The analysis scenarios do not depend on access to personal data. The general design of both the local and the global database can provide complete patient confidentiality, i.e. no personal data need to be stored (or sent over the internet).
The identity of the treatment planning system is important information in the global database, as it makes it possible for the individual clinic to compare selectively with users of their own treatment planning system. This is a natural first step in the investigation of suspect deviations. The suggestion is that the treatment planning system, as well as the software version used for the primary calculation of a treatment plan, should be mandatory information in both the local and the global database.
Identification of individual clinics within a global statistical database is another type of issue that needs to be handled in an adequate manner. One would wish to keep the identities of individual clinics within the database, as nobody within this application should have anything to hide. However, such an open policy may prove to be counterproductive from a QA point of view, as clinics may hesitate to use a tool where individual clinics could be identified. The recommendation is therefore to follow an intermediate path which allows the individual clinic to configure the system to reveal or hide the clinic's name in the global database. Even if the clinic chooses anonymity it is possible to compare their own data from the local database with the global database.
Related to the possibility to identify individual clinics is the possibility to identify individual countries or regions. As there are areas where the number of clinics is small, country or region identification would in principle be equivalent to clinic identification. It is therefore also suggested that geographical information be treated as optional data in the global database. The type of treatment unit, vendor and version is suggested to be mandatory information in both the global and the local statistical database, for the same reason as for the treatment planning system.
In general, the global database should be designed to support all collaborating clinics with reference data which are specific with respect to treatment method and equipment. All data related to individuals should be optional and protected by coding procedures. Any access to such data should be allowed only by special authorisation and would further require access to the de-coding keys.
5. BEAM MODELLING AND DOSE CALCULATIONS
Dose calculations can be performed through various methods utilizing fairly different approaches. A tool for independent dose calculations, or any other kind of dose calculation device, is a compromise between the benefits and drawbacks associated with different calculation methods in relation to the demands on accuracy, speed and ease of use. The complexity of modern external beam therapy techniques, paired with clinical demands on efficiency, requires dose calculation methods that offer a high degree of generality but still are robust and simple to use. This implies that the employed IDC calculation methods must develop into more explicit models of the physical processes that significantly influence the delivered dose. As a result, the major workload for the clinical implementation of an IDC tool is shifted from beam commissioning, performed by individual users, to an earlier stage of research and software development.
Traditionally the most common way of calculating the dose is through a series of multiplicative correction factors that describe, one by one, the change in dose associated with a change of an individual treatment parameter, such as field size and depth, starting from the dose under reference conditions. This approach is commonly referred to as factor-based calculation and has been the subject of detailed descriptions (Venselaar et al., 1999; Dutreix et al., 1997). The individual factors are normally structured in tables derived from measurements, or described through parameterizations. Some factors can be calculated through simple modelling, for example the inverse square law accounting for varying treatment distances. From an implementation point of view a factor-based method may be an attractive approach due to its computational simplicity, once all the required data are available. The obvious problem associated with this approach is the required amount of commissioned beam data, as this type of method cannot calculate doses when the beam setup is not covered by the commissioned set of data. For treatment techniques that can make use of many degrees of freedom, such as the shape of an irregular field, it becomes practically impossible to tabulate or parameterize all factors needed to cover all possible cases. Hence, the factor-based approach is best suited for point dose calculations along the central beam axis in beams of simple (rectangular) shapes.
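As an illustration of the factor-based scheme, a central-axis point dose might be assembled from tabulated factors roughly as below; the factor tables, the calibration value and the function itself are illustrative placeholders, not commissioned data:

```python
# Illustrative commissioning tables (square field side in cm):
OUTPUT_FACTOR = {5: 0.95, 10: 1.00, 15: 1.03}            # OF(field)
TPR = {(5, 10): 0.92, (10, 10): 0.79, (20, 10): 0.58}    # TPR(depth, field)

def point_dose(mu, field, depth, cal=0.01, isl=1.0):
    """Central-axis dose (Gy): D = MU * calibration * OF(field) * TPR(depth, field) * ISL,
    where cal is the dose per MU under reference conditions and isl is the inverse
    square correction for non-reference distances."""
    return mu * cal * OUTPUT_FACTOR[field] * TPR[(depth, field)] * isl

print(point_dose(mu=200, field=10, depth=10))  # 200 MU -> ~1.6 Gy in this toy table
```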
The most general dose calculation method currently available is Monte Carlo simulation, which explicitly simulates the particle transport and energy deposition through probability distributions, combining detailed geometric descriptions and fundamental radiation interaction cross-section data. The drawbacks are related to the advanced implementation and the need for non-trivial beam commissioning, as there is a requirement for fundamental properties such as energy spectra and details of the treatment head design. The extensive and time-consuming calculations also limit the use of Monte Carlo methods in clinical routine, although the access to more powerful computers is causing this aspect to gradually lose relevance.
An effective method for model-based dose calculations is offered by combining multi-source modelling of the energy fluence exiting the treatment head with the use of energy deposition kernels describing the energy deposition in the patient, through convolution/superposition with the energy fluence incident on the irradiated object. This approach utilizes the natural divider between the radiation sources inside the treatment head, consisting of air and high-density materials, and the patient or the phantom, consisting of water-like media (cf. figure 5.1). Consequently, the dose calculations can be separated into two factors, both determined through modelling:

    D(x,d;A) / M(A) = [Ψ(x;A) / M(A)] · [D(x,d;A) / Ψ(x;A)]    (5.1)

where D is the dose, x is an arbitrary calculation point, d is the treatment depth, A represents the treatment head setting, M is the monitor signal, and Ψ is the energy fluence. This type of model also has the advantage that it can be sufficiently characterized by a limited amount of commissioned beam data. In the following sections a more detailed description of the components involved in this dose calculation approach is given.
Figure 5.1. Schematic illustration of the separation (red dotted line) in equation (5.1) between the energy fluence modelling, associated with the treatment head, and the subsequent formation of dose inside the irradiated patient/phantom.
5.1 ENERGY FLUENCE MODELLING
In many cases the radiation source of a linear accelerator is regarded as a single point source located at the nominal focal point, normally 100 cm upstream from the accelerator's isocenter. In reality, however, the focal source contributes 90-97% of the energy fluence reaching the isocenter point, depending on the design and the actual settings of the treatment head. In order to accurately model the energy fluence exiting the treatment machine the remaining significant sources must be identified and properly accounted for. Figure 5.2 shows an overview of the principal treatment head components that form a clinical megavoltage photon beam.
Following the elements of treatment head design, a general expression for the resulting photon energy fluence per monitor signal can be formulated as (Ahnesjö et al., 1992a; Olofsson et al., 2006b)

    Ψ(x;A) / M(A) = [Ψ_d(x;A) + Ψ_e(x;A) + Ψ_pw(x;A) + Ψ_c(x;A)] / [M_d + M_e + M_pw + M_c(A)]    (5.2)

where the indices d, e, pw, and c denote direct (focal), extra-focal, physical wedge, and collimator contributions, respectively. These four sources also generate the dose monitor signal M, but it is only the component associated with the collimators downstream from the monitor chamber (M_c) that varies with the treatment head setting A. In sections 5.1.1 to 5.1.7 the different components of equation (5.2) are discussed in some detail. It should, however, be noted that equation (5.2) does not include the charged particle contamination of high-energy photon beams, which is further discussed in section 5.2.2.
Figure 5.2. Examples of treatment head configurations for megavoltage photon beams with an internal physical wedge (a) or an external physical wedge (b). For irregular beam shapes a custom-made block is mounted downstream from the collimator jaws in (a), whereas a multileaf collimator (MLC) has replaced the lower collimator jaw in (b).
5.1.1 Direct (focal) source
The X-ray target (cf. figure 5.2) is designed to stop the impinging electrons and thereby convert them into a beam of bremsstrahlung photons. Consequently, it constitutes the source of direct (focal) photons. The electron interaction cross section for bremsstrahlung processes increases with the atomic number (Z) of the medium, which is the reason for using heavy elements such as tungsten (Z=74) or gold (Z=79) for X-ray targets. The high-Z material can also be combined with a subsequent layer of lower Z, such as copper or aluminium, in order to further harden the X-ray spectrum (Karzmark et al., 1993).
As a result of electron beam optics and multiple scattering inside the X-ray target, the direct source is in reality not a point source but is associated with a finite size. The source projection that faces the opening of the treatment head is of particular importance, as this projection will affect the fluence penumbras that emerge from beam collimation. A thorough experimental investigation of the lateral focal source distribution is given by Jaffray et al (1993), who used a rotating single slit camera and then derived the source distributions through CT reconstruction techniques. In total 12 different megavoltage photon beams were studied over a period of up to two years. A number of conclusions were drawn from this study:
The shape of the distribution is approximately Gaussian, albeit in some cases rather elliptical (ratios up to 3.1 were observed). The Full Width at Half Maximum (FWHM) varied between 0.5 and 3.4 mm, while the corresponding span for the Full Width at Tenth Maximum (FWTM) went from 1.2 up to 7.1 mm. More typical values for FWHM and FWTM were, however, 1.4 and 2.8 mm, respectively. (The Gaussian width is sometimes described through σ instead, which is very close to FWHM/2.35.)

The variations over time, including adjustments of the beam transport, for a given accelerator and photon beam quality were fairly small. More significant differences were found when comparing accelerators of different design.

A source distribution that has been determined on the central axis is representative also for off-axis positions, despite the three-dimensional nature of the X-ray target.
The so-called geometric penumbra, associated with energy fluence and not dose, corresponds to a zone centred along the edge of a field where the direct source is partially visible and partially obscured (cf. figure 5.3). By combining realistic values for spot sizes and collimator distances with Gaussian source integration one can conclude that the geometric penumbra (10-90%) typically has a width of 3-5 mm at isocenter level, but can in more extreme cases extend up to about 10 mm. In 3D conformal radiotherapy only a small portion of the irradiated volume will be located inside the geometric penumbra. In multi-segment IMRT, however, the situation is different and doses delivered to a large part of the volume may be affected by the direct source distribution.
Figure 5.3. Close to the field edge the direct (focal) source is partially visible and partially obscured, resulting in a lateral energy fluence gradient (green curve) known as the geometric penumbra.
The direct source distribution has consequently been approximated as Gaussian in published attempts to model the geometric penumbra explicitly through ray tracing through the treatment head (Fippel et al., 2003; Sikora et al., 2007). An alternative way of accounting for the finite direct source distribution is to laterally blur the complete energy fluence distribution in the exit plane by convolution with a Gaussian kernel that corresponds to the projected source distribution in the same plane (red curve in figure 5.3). A similar approach was proposed by Ahnesjö et al (1992b), where the Gaussian blurring instead was included in the photon pencil kernel used to model the primary dose deposition. However, in order to fully describe the variations in the geometric penumbra that are associated with different collimator edges, the process must also handle lateral variations in the size of the blurring kernel.
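A minimal sketch of the fluence-blurring idea, assuming a 1D fluence profile on a regular grid and a Gaussian source projection of known width; all values are illustrative, and the geometric magnification between planes is not modelled:

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

dx = 0.1                                         # lateral grid spacing (mm)
x = np.arange(-30.0, 30.0 + dx, dx)              # positions in the exit plane (mm)
fluence = np.where(np.abs(x) <= 20.0, 1.0, 0.0)  # ideal collimator-defined profile

sigma_mm = 1.4 / 2.35                            # projected source: FWHM 1.4 mm -> sigma
blurred = gaussian_filter1d(fluence, sigma=sigma_mm / dx)

# The 10-90% width of one edge approximates the geometric fluence penumbra (at this plane).
edge, xs = blurred[x > 0], x[x > 0]
x90 = xs[np.argmin(np.abs(edge - 0.9))]
x10 = xs[np.argmin(np.abs(edge - 0.1))]
print(f"10-90% penumbra width: {x10 - x90:.2f} mm")
```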
Also well inside the beam, where the direct source is entirely visible, the lateral distribution of Ψ_d varies somewhat. The raw X-ray lobe produced in the bremsstrahlung target is forward-peaked, which means that for broad beam applications it needs to be modulated by a cone-shaped flattening filter (see figure 5.2). Normally, the goal is to create a beam with a more or less uniform lateral dose distribution (Larsen et al., 1978; Olofsson et al., 2007).
[Figure 5.8: geometry defining the field size F and the depths d_a and d_b to the points a and b, located at lateral positions ±F/4 on an isodose curve at the standard measurement depth.]
where d_a, d_b, and F are defined in figure 5.8. Assuming that the doses in points a and b are related to each other only through the attenuation of primary photons (i.e. neglecting any difference in scatter contribution), and that the attenuation coefficient (μ) and the un-modulated energy fluence are identical in a and b, the required ratio of beam modulation (m) between points a and b follows from

    m_b / m_a = exp(−μ(d_a − d_b)) = exp(−μ(F/2) tan θ)    (5.4)

This simple relation has also been utilized to create the modulation curves for a 6 MV beam in figure 5.4(a), albeit generalizing F/4 from figure 5.8 to arbitrary lateral positions.
Even though this simplification may be sufficient in many applications, the situation in wedged photon beams is in reality more complex than what is reflected in equation (5.4). The dose component originating from scattered radiation varies noticeably in the lateral direction due to the asymmetric beam profile. Furthermore, a physical wedge acts as a beam filter itself, yielding changes in beam quality that are linked to the laterally varying wedge thickness (Tailor et al., 1998a). The modified beam quality results in altered depth doses in comparison with the corresponding open beams (Heukelom et al., 1994b, a). Another consequence of the wedge filtration is that dose profiles along the non-gradient direction are affected by the presence of a physical wedge.
5.2 DOSE MODELLING
The four main physical phenomena driving the formation of depth dose distributions in water are i) the inverse square law, ii) the attenuation of the primary beam, iii) the build-up of photon-generated electron fluence (within a few cm depth) and the build-up of phantom scatter dose (within 9 to 18 cm depth), and iv) electron contamination from sources in the treatment head and in the air between the treatment head and the patient. In figure 5.9 a depth dose curve is shown with the dose separated into these components. The part of the dose that is due to photons scattered in the treatment head, i.e. the indirect part of the total beam fluence, is shown separately as head scatter dose. This part can also be subdivided into a primary part and a phantom scatter part, depending on how the dose calculation model treats these parts.
Figure 5.9 Depth dose distributions for a 10×10 cm² field of 10 MV photons, showing separately the direct beam primary dose (blue), the direct beam phantom scatter dose (red), the electron contamination (green), the total head scatter dose (pink) and the total sum of all components (black). Normalization is versus the total dose at the calibration position and field, which is the preferred normalization for comparing calculated and measured dose data.
The inverse square law is a pure effect of treatment distance, independent of field shape and size, and therefore simple to factorize. This motivated the definition of the TPR (Tissue-Phantom Ratio) which, despite its strange name, describes the relative depth dose distribution for a non-divergent (infinite SSD) field.
The primary dose to an object is defined as the dose deposited by electrons released from the first interaction of each photon entering the object. The depth distribution of the primary dose follows the primary fluence attenuation distribution closely for depths larger than the build-up depth, under the condition that the field size is large enough to establish lateral electron equilibrium. The minimum field size required for lateral electronic equilibrium depends on the primary beam spectrum, the projected source size, and the composition of the irradiated object. Hence, lung dose calculations require extra attention, since lateral disequilibrium occurs for field sizes four to five times larger than in water.
The scatter dose component depends on both the primary beam spectrum and the size and shape of the field. The scatter depth dose distribution reaches its maximum in water at the order of 9 to 18 cm depth (Ahnesjö et al., 1992b; Nyholm et al., 2006c) and is therefore very differently shaped from the primary dose distribution.
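A schematic composition of these depth dose components can be sketched as below; the attenuation coefficient, build-up constants and the scatter shape are rough placeholders, not fitted to any real beam:

```python
import numpy as np

MU_ATT = 0.044  # illustrative effective attenuation coefficient (1/cm) for ~10 MV

def primary(d):
    """Primary dose: exponential attenuation times a simple electron build-up factor."""
    return np.exp(-MU_ATT * d) * (1.0 - np.exp(-0.6 * d))

def phantom_scatter(d):
    """Crude scatter component: builds up over roughly 10 cm, then attenuates."""
    return 0.35 * np.exp(-MU_ATT * d) * (1.0 - np.exp(-d / 4.0))

def inverse_square(d, ssd=90.0, d_norm=10.0):
    """Inverse square law relative to the normalisation depth."""
    return ((ssd + d_norm) / (ssd + d)) ** 2

depths = np.linspace(0.0, 20.0, 81)
total = (primary(depths) + phantom_scatter(depths)) * inverse_square(depths)
total /= total[depths == 10.0]  # normalise at 10 cm depth, as in figure 5.9
```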
5.2.1 Photon dose modelling
Effective dose modelling can be achieved by convolving the calculated energy fuence distri-
bution with an energy deposition kernel describing the spatial distribution of the expectation
Depth dose 10 MV SSD 90 cm; 10x10 cm
0 5 10 15 20
0.0
0.5
1.0
1.5
Depth/cm
D
o
s
e
a
t
1
0
c
m
d
e
p
t
h
to
ta
l
phantom scatter
p
rim
a
ry
head scatter
electron contamination
D
o
s
e
Depth dose 10 MV SSD 90 cm; 10x10 cm
0 5 10 15 20
0.0
0.5
1.0
1.5
0 5 10 15 20
0.0
0.5
1.0
1.5
Depth/cm
to
ta
l
phantom scatter
p
rim
a
ry
head scatter
electron contamination
D
o
s
e
56
value for the energy deposition caused by an elemental beam in a given medium (normally
water). The kernels can be separated into different types and components depending on in-
teraction geometry and history in order to distinguish between different phenomena or to
facilitate more adequate parameterizations. The most commonly applied energy deposition
kernels in calculations for photon beam therapy are pencil and point kernels (cf. figure 5.10);
both of which are usually separated into primary and scatter dose components.
Figure 5.10. Illustration of different types of energy deposition kernels; (a) point kernel where the
initial interaction of the impinging photon is forced to a given location, (b) pencil kernel describing
the energy deposition pattern around a point mono-directional photon beam, and (c) planar
kernel depicting the mean forward and backward energy transport. The coloured lines represent
isodose curves generated by the incident photons (arrows). (Adapted from Ahnesjö and Aspra-
dakis (1999).)
5.2.1.1 Pencil kernel methods
A popular method for model-based dose calculations, particularly in treatment plan optimi-
zation where the dose calculations are iterated many times, is built on pencil kernels. This
means that the deposited energy originates from photons interacting along a common line of
incidence (cf. fgure 5.10(b)). The pencil kernel concept can combine 2D intensity modulati-
on with fast 3D dose calculation, providing a good compromise between generality, accuracy
and calculation speed. This is the reason why pencil kernel algorithms have become the first
choice in many radiotherapy applications.
There are a number of different options when acquiring the pencil kernel properties for a
photon beam. It can be done by means of Monte Carlo simulations (Mohan and Chui, 1987;
Ahnesjö et al., 1987; Mackie et al., 1988), experimentally by radial differentiation of mea-
sured scatter contributions (Ceberg et al., 1996; Storchi and Woudstra, 1996), or through
deconvolution of measured dose profiles (Bergman et al., 2004; Chui and Mohan, 1988).
Nyholm et al (2006c) propose a condensed characterization scheme where the beam quality
index TPR20,10 is used as a single fingerprint to yield the complete photon pencil kernel.
The pencil kernel anatomy must somehow be quantified in order to facilitate general dose
calculations. Several proposals on how to resolve this issue can be found in the literature. Ah-
nesjö et al (1992b) proposed a radial parameterization consisting of a double exponential that
separates the primary and the secondary scatter contributions. Nyholm et al (2006c) utilized
the same radial parameterization, although introducing a parameterization over the depth that
replaced the original tabulated depth description. Alternatively, a photon pencil kernel can be
described as a sum of three Gaussians (Dong et al., 1998) or by analytically differentiating
parameterized scatter-primary ratios (SPRs) (Ceberg et al., 1996). Yet another option is to
utilize a discrete numerical description (Bergman et al., 2004), which means that the kernel
has a finite spatial resolution and extension.
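The double-exponential form lends itself to a compact implementation. Below is a minimal Python sketch of the radial kernel shape proposed by Ahnesjö et al (1992b) and its analytical integration over a circular field; the parameter values (A, a, B, b) are illustrative placeholders for a single depth, not fitted beam data.

```python
import numpy as np

def pencil_kernel_value(r_cm, A, a, B, b):
    """Radial pencil kernel value at radius r (cm) for one depth, using the
    double-exponential form p(r) = A*exp(-a*r)/r + B*exp(-b*r)/r.
    The first term mainly represents primary dose, the second scatter."""
    r = np.asarray(r_cm, dtype=float)
    return A * np.exp(-a * r) / r + B * np.exp(-b * r) / r

def cax_dose_circular(R, A, a, B, b):
    """Central-axis value for a circular field of radius R (cm), from the
    closed-form integral 2*pi * Int_0^R r*p(r) dr of the form above."""
    return 2.0 * np.pi * (A / a * (1.0 - np.exp(-a * R))
                          + B / b * (1.0 - np.exp(-b * R)))

# Illustrative (non-fitted) parameters for one depth:
A, a = 0.9, 6.0   # "primary" term: large amplitude, short range
B, b = 0.1, 0.4   # "scatter" term: small amplitude, long range

r = np.linspace(0.05, 20.0, 400)   # radii in cm (avoid r = 0)
kernel = pencil_kernel_value(r, A, a, B, b)
print(cax_dose_circular(5.61, A, a, B, b))  # ~equivalent radius of a 10x10 cm2 field
```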
Another issue of concern is the choice of the numerical method for lateral superposition of
the pencil kernels, which is a process that must be linked to the specific pencil kernel descrip-
tion. The double exponential parameterization of Ahnesjö et al (1992b) enables analytical
integration over circular beams. Alternatively, arbitrary beam shapes can be decomposed
into triangular beam elements that are possible to handle by so-called Sievert integrals. Both
these solutions require, however, that the energy fluence be constant over each integrated
area. Non-uniform fluence distributions can be managed by fluence sampling and
subsequent weighting of each surface integral before adding them together. For 2D and 3D
dose calculations different fast transform convolution techniques can be utilized in order
to simultaneously yield results for an entire calculation plane or volume, thereby offering
considerable speedups. A commonly employed algorithm in this category is the fast Fourier
transform (FFT) that enables discrete convolution of the energy fluence distribution and the
pencil kernel (Mohan and Chui, 1987; Murray et al., 1989).
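As a rough illustration of the fast-transform route, the sketch below convolves a synthetic 10×10 cm² fluence map with a laterally invariant pencil kernel using numpy's FFT. The grid, kernel parameters, and the neglect of periodic wrap-around effects are all simplifying assumptions.

```python
import numpy as np

# Minimal sketch: 2D dose plane at one depth as the FFT convolution of an
# energy fluence map with a laterally invariant pencil kernel. Fluence and
# kernel here are synthetic placeholders; wrap-around effects are ignored.
n, pix = 128, 0.25                      # grid size, pixel spacing in cm
x = (np.arange(n) - n // 2) * pix
X, Y = np.meshgrid(x, x)
R = np.hypot(X, Y)

fluence = ((np.abs(X) < 5) & (np.abs(Y) < 5)).astype(float)  # 10x10 cm2 field

# Double-exponential kernel sampled on the grid (singularity at r = 0 capped).
A, a, B, b = 0.9, 6.0, 0.1, 0.4
kernel = (A * np.exp(-a * R) + B * np.exp(-b * R)) / np.maximum(R, pix / 2)
kernel *= pix**2                        # area element of the superposition sum

# Discrete convolution via FFT; ifftshift moves the kernel centre to [0, 0].
dose = np.real(np.fft.ifft2(np.fft.fft2(fluence) *
                            np.fft.fft2(np.fft.ifftshift(kernel))))
print(dose[n // 2, n // 2])             # central-axis value of the plane
```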
Pencil kernel dose calculation is a compromise that in some geometries applies approxima-
tions that favour simplicity and calculation speed over accuracy. One such approximation
is the assumption of lateral invariance of the pencil kernel, which neglects the lateral shift
in photon beam quality from off-axis softening (Tailor et al., 1998b). If this approximation
is not compensated for, it may introduce dose calculation errors up to about 5% at large
off-axis distances (Olofsson et al., 2006a; Piermattei et al., 1990). Furthermore, integration
with laterally invariant pencil kernel parameters also implies that the depth must have a con-
stant value which, in practice, corresponds to a slab phantom geometry. In a situation where
the depth varies considerably over the lateral plane (cf. figure 5.11) the calculated scatter
contributions may, consequently, be over- or underestimated depending on the surrounding
geometry (Hurkmans et al., 1995).
Figure 5.11. During pencil kernel integration a laterally constant depth, i.e. a slab phantom ge-
ometry, is generally assumed (a). Laterally varying depths, illustrated in (b), (c), and (d), may
therefore yield over- or underestimated scatter contributions, depending on the exact geometry.
Various methods to handle and correct for density variations (heterogeneities) in pencil
kernel algorithms have been presented in the literature (Ahnesjö and Aspradakis, 1999). Most
often these heterogeneity corrections rely on one-dimensional depth scaling along ray lines
from the direct source, employing equivalent/effective/radiological depths that replace the
geometrical depths in the dose calculations (cf. figure 5.12). In general, the basic concept of
the pencil kernel approach, i.e. to divide the energy deposition process into separate depth
and lateral components, means that the full 3D nature of the process cannot be properly
modelled. The result is that all deviations from the ideal slab phantom geometry, either by
external shape or internal composition of the irradiated object, will cause different errors in
the calculated doses.
Figure 5.12. If the density variations (heterogeneities) fit the slab phantom geometry (a), pencil
kernel models can yield fairly correct dose calculations through the use of an equivalent depth,
here denoted d_eq. However, scatter effects associated with heterogeneities that are smaller than
the lateral beam dimensions, illustrated by a low-density volume ρ1 in (b), (c), and (d), cannot be
adequately modelled. In addition, the primary dose deposition is generally not scaled laterally,
which means that it will be incorrectly modelled in cases of lateral charged particle disequilibrium
(d).
The analytical anisotropic algorithm (AAA) can be seen as a hybrid between a pen-
cil kernel and a point kernel algorithm. The crucial difference from a point kernel algorithm
is that in the AAA all energy originating from a photon interaction point is deposited either in
the forward beam direction or along one of 16 lateral transport lines, all located in the plane
perpendicular to the incident beam direction (Van Esch et al., 2006). Due to the applied den-
sity scaling along these transport lines this implementation will present more accurate cal-
culation results close to density heterogeneities, as compared to a conventional pencil kernel
algorithm that lacks lateral scaling. However, when evaluated against the more realistic 3D
modelling of a collapsed cone algorithm the shortcomings of the faster AAA algorithm are
obvious (Hasenbalg et al., 2007).
5.2.1.2 Point kernel methods
Point kernel models, sometimes referred to as convolution/superposition models, have the
advantage that they enable a more complete 3D modelling of the energy deposition processes
as compared to pencil kernel models. In a first calculation step, before actually employing
the point kernels, the total energy released per unit mass, or terma (T), must be determined
throughout the dose calculation object (patient/phantom). This is done through ray tracing
and attenuation of the incident photon energy fluence through the 3D calculation object. In
a second step, the point kernels are applied and weighted according to the determined terma
distribution to yield the resulting dose distribution (cf. figure 5.13).
Figure 5.13. In a point kernel (convolution/superposition) model the resulting dose distribution is
calculated by convolving the terma (total energy released per unit mass) with one or a few point
kernels, here illustrated along the depth dimension.
The spectral properties of a photon beam are essential to the energy deposition processes. An
energy spectrum can be represented by a number of discrete energy intervals (bins) in the
terma calculation, which can then be combined with a corresponding set of monoenergetic
point kernels (Boyer et al., 1989). This approach will intrinsically include spectral changes
that originate inside the dose calculation object, such as beam hardening, provided that the
number of bins is adequate. The drawback is that the terma calculation and the point kernel
superposition must be repeated for each energy bin employed, resulting in long calculation
times. The use of a single poly-energetic point kernel will speed up the superposition consi-
derably, although the requirement to model the spectral variations over the dose calculation
volume remains. One solution to this problem is to combine two different polyenergetic point
kernels; one associated with the primary dose deposition and one with the scatter dose depo-
sitions (Hoban, 1995; Ahnesjö, 1991). The terma should at the same time be divided into two
corresponding components; the collision kerma (K_c) and the scatter kerma, or scerma, (S):

K_c(s) = \int T_E(s) \, \frac{\mu_{en}}{\mu}(E) \, dE \qquad (5.5)

S(s) = \int T_E(s) \left[ 1 - \frac{\mu_{en}}{\mu}(E) \right] dE \qquad (5.6)
Hence, K_c and S are determined by weighting the ratios of \mu_{en} and \mu in agreement with the
energy spectrum at the photon interaction site s, including effects that originate both inside
and outside the calculation object (such as the off-axis softening). Through K_c and S a two-
fold point kernel superposition procedure is enabled that provides accurate dose modelling
throughout the calculation volume.
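A minimal numerical version of Eqs (5.5) and (5.6) might look as follows; the spectrum bins and μ_en/μ ratios below are rough placeholders rather than tabulated data.

```python
import numpy as np

# Split the terma at an interaction site s into collision kerma K_c and
# scerma S by weighting each spectral bin with mu_en/mu, as in Eqs (5.5)
# and (5.6). All numbers are illustrative placeholders, not beam data.
E_bins = np.array([1.0, 2.0, 3.0, 5.0, 8.0])       # bin energies (MeV)
T_E    = np.array([0.35, 0.30, 0.20, 0.10, 0.05])  # terma per bin at site s
mu_en_over_mu = np.array([0.62, 0.66, 0.69, 0.72, 0.75])  # assumed ratios

K_c = np.sum(T_E * mu_en_over_mu)          # Eq (5.5), discretized
S   = np.sum(T_E * (1.0 - mu_en_over_mu))  # Eq (5.6), discretized
print(K_c, S, K_c + S)   # K_c + S recovers the total terma at s
```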
Energy deposition modelled by means of point kernels generally includes scaling along dis-
crete and straight lines joining the primary photon interaction site and the energy deposition
points. Consequently, the applied density scaling is only affected by the media encountered
along these straight lines. While exact for the first scatter component, the scaling of the mul-
tiple scatter is approximate (Ahnesjö and Aspradakis, 1999). Inside a large homogeneous
phantom, similar to where the point kernel originally was created, this is not a problem as
long as the resolution of the kernel superposition is properly set. However, in a heterogene-
ous calculation object the multiple scattered particles may encounter other media, possibly
not present at all along the straight transport line. The situation is similar close to outer
boundaries where the use of a kernel derived inside an infinite phantom will result in overes-
timated doses due to missing multiple scatter. In fact, for a given point in an irradiated object
there is one unique kernel anatomy that perfectly describes the energy deposition stem-
ming from that point (Woo and Cunningham, 1990). Various methods have been proposed
to reduce the effects of the linear energy deposition approximation (Keall and Hoban, 1996;
Yu et al., 1995), all associated with increasing calculation times. However, the total dose at a
point is the sum of contributions from all surrounding interaction points, implying that errors
related to inadequate modelling of multiple scatter from a few directions will be less critical
when added together with all the other contributions. To maximize the calculation accuracy,
tilting the point kernels due to the geometric beam divergence should also be included in the
algorithm (Liu et al., 1997b). In essence, despite the restriction to only transport energy in
straight lines between the photon interaction and dose deposition sites, point kernel based
dose calculations have been proven to provide results with a high degree of accuracy (Aspra-
dakis et al., 2003; Dobler et al., 2006).
The most straightforward way of implementing a point dose calculation is through a direct
summation of the energy deposited in each individual volume element (voxel) by each of the
other voxels, resulting in numerical operations proportional to N^7 for N^3 voxels. This will be a
very time-consuming procedure and it may not be necessary in order to ensure high accuracy.
Another option is to employ the collapsed cone approximation (Ahnesjö, 1989) where the
set of discrete transport lines that represents the point kernel instead is identical throughout
the volume, resulting in numerical operations proportional to MN^3 where M is the number
of discrete directions used in the calculations. Hence, the algorithm is based on a number of
predefined transport directions, typically on the order of 100, where the associated lines will
intersect each voxel in the dose calculation volume at least once (cf. figure 5.14). The dose
distribution then gradually builds up by following all transport lines and simultaneously pic-
king up and depositing energy in each intersected voxel. The term collapsed cone originates
from geometrical considerations as each utilized transport direction represents a conical sec-
tor of the point kernel where the associated energy in this approximation is entirely deposited
along the axis of the sector.
Figure 5.14. The collapsed cone approximation employs discretized point kernels where the full
solid angle is divided into a number of conical sectors, each collapsed onto a single transport
direction along the cone axis (a). The dose distribution is determined by following the fixed trans-
port lines while collecting and depositing energy in each intersected voxel (b).
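The pick-up-and-deposit recursion along a single transport line can be sketched as below. The attenuation coefficient, the terma profile, and the restriction to one line are placeholders; a real implementation would loop over on the order of 100 directions per voxel grid, with density scaling and per-sector kernel parameters.

```python
import numpy as np

def transport_line(terma, kappa, step):
    """Sketch of the collapsed cone recursion along ONE transport line:
    the energy released in each voxel (this sector's share of the terma)
    is carried forward with exponential attenuation and partially
    deposited in each voxel the line passes through.
    terma: energy released per voxel along the line
    kappa: effective attenuation of the kernel sector (1/cm)
    step:  voxel spacing along the line (cm)"""
    dose = np.zeros_like(terma)
    carried = 0.0                       # energy travelling along the line
    att = np.exp(-kappa * step)
    for i, t in enumerate(terma):
        carried = carried * att + t     # pick up new energy in this voxel
        dose[i] += carried * (1 - att)  # deposit a fraction while passing
    return dose

terma = np.exp(-0.05 * np.arange(100) * 0.25)  # toy attenuated terma profile
print(transport_line(terma, kappa=0.3, step=0.25)[:5])
```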
Fast transform convolution techniques, like the fast Fourier Transform (FFT), can also of-
fer considerably reduced calculation times (Boyer et al., 1989; Miften et al., 2000). These
algorithms are, however, associated with a requirement for spatially invariant kernels, which
is a significant drawback when modelling the effects of heterogeneous densities. Attempts
have been made to compensate for this limitation by scaling the calculated dose distribution,
or at least the scattered component, by the density at the scattering and/or the deposition site
(Boyer and Mok, 1986; Wong et al., 1996; Wong and Purdy, 1990). The resolution of the dose
calculation grid is yet another parameter that can be explored in order to reduce calculation
times. Miften et al (2000) implemented a collapsed cone algorithm where the gradients of
energy fluence and density in the beam and in the dose calculation object, respectively, were
used to vary the resolution of the calculation grid over the volume. During the collapsed cone
calculation every other point was omitted in low gradient areas and the missing doses were
then determined later on through interpolation. On average this reduced the calculation time
by a factor of 3.5 without leading to any noticeable reductions in calculation accuracy. Ano-
ther approach that offers considerable speedups is to perform a point kernel calculation on a
very coarse calculation grid that is only to be used as a 3D correction factor for a simpler and
faster dose calculation algorithm performed on a fine grid (Aspradakis and Redpath, 1997).
5.2.2 Charged particle contamination modelling
High-energy photon beams delivered by medical linear accelerators should not be regarded
as pure photon beams as they are in fact contaminated by charged particles, essentially elec-
trons and to some extent positrons, that contribute significantly to the dose in the build-up
region (fig. 5.9). The origin of these electrons can be found inside the treatment head, most
often in the flattening filter or the dose monitor chamber (see figure 5.2), and in the irradiated
air column (Petti et al., 1983). At higher beam energies the treatment head is typically the
dominating source of contaminant electrons, while the electrons generated in air gain impor-
tance with decreasing beam energy (Biggs and Russell, 1983). Monte Carlo simulations have
shown that the energy spectrum of contaminant electrons in a megavoltage photon beam has
a similar shape as the spectrum of primary photons (Sheikh-Bagheri and Rogers, 2002a).
The continuous distribution of electron energies yields depth dose characteristics that can be
adequately described by an exponential curve (Beauvais et al., 1993; Sjögren and Karlsson,
1996), which is noticeably different from depth dose curves associated with clinical elec-
tron beams. The lateral distribution of dose from contaminant electrons has been reported
as being rounded, i.e. more intense in the central parts of the beam (Nilsson, 1985; Yang et
al., 2004). Also the collimator opening and the treatment distance (SSD) have been shown
to be important parameters to consider when trying to quantify the dosimetric significance of
charged particle contamination in different treatment situations (Sjögren and Karlsson, 1996;
Zhu and Palta, 1998).
To model the charged particle contamination in dose calculations Ahnesjö et al (1992b) se-
parated the dependence into a lateral Gaussian pencil kernel and an exponentially decrea-
sing depth dose. The Gaussian kernel is characterized by its width parameters, which can be
derived through comparison of calculated depth dose curves associated with a (theoretical)
pure photon beam and corresponding measurements in the (real) contaminated beam. Fip-
pel et al (2003) and Yang et al (2004) have both presented multi-source models intended for
Monte Carlo purposes that include a separate circular electron contamination source located
in the upper parts of the treatment head. The pencil kernel correction derived empirically by
Nyholm et al (2006a) will largely compensate at shallow depths for charged particle con-
tamination, which was not included in the original kernel parameterization (Nyholm et al.,
2006b, c).
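A minimal sketch of the separable contamination model described above (an exponential in depth times a Gaussian off-axis) is given below; the depth constant and Gaussian width are placeholder values, which in practice would be fitted to build-up region measurements.

```python
import numpy as np

def contamination_dose(x_cm, depth_cm, D0=0.08, k=2.0, sigma=3.0):
    """Sketch of a separable contamination dose model: an exponentially
    decreasing depth dose times a lateral Gaussian kernel. D0, k and
    sigma are illustrative placeholders, not fitted parameters."""
    depth_term = D0 * np.exp(-depth_cm / k)            # exponential in depth
    lateral    = np.exp(-x_cm**2 / (2.0 * sigma**2))   # Gaussian off-axis
    return depth_term * lateral

for d in (0.5, 1.5, 5.0):   # contribution fades quickly beyond the build-up
    print(d, contamination_dose(x_cm=0.0, depth_cm=d))
```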
5.3 PATIENT REPRESENTATION
The actual geometry of the dose modelling object, i.e. the patient, is often not included when
independently verifying the dose calculations for a treatment plan. One explanation for this
may be that no appropriate modelling of the full patient geometry is facilitated by the factor
based methods traditionally employed for IDC. Furthermore, applying some sort of standar-
dized dose modelling object simplifies the verification process as it eliminates the require-
ment to import a CT-study, define patient contours, etc. for each individual treatment plan.
The minimum information to perform calculations for arbitrary beams on a standardized dose
modelling object can be restricted to its position and rotation (for a finite 3D phantom) or just
the SSD (for an infinite slab phantom with its surface perpendicular to the beam central axis).
The fact that the conditions applied in the IDC are usually not identical to those of the
dose calculations in the treatment plan can be somewhat problematic to handle. One option
is to repeat the dose calculations in the TPS after replacing the patient by the standardized
dose modelling object from the IDC. This method should, consequently, yield identical cal-
culation results and enable a detailed comparison. This approach is frequently employed for
experimental verification where a detector, e.g. an ionization chamber, is positioned inside
an irradiated QA phantom and the measured dose is then compared with a corresponding
calculation from the TPS. Even if the actual patient geometry in this case is absent in the IDC
tool, all characteristics of the energy fluence that exits the treatment head are still included.
A drawback is, however, that the extra dose calculation that must be carried out in the TPS
imposes an additional workload.
Another alternative, perhaps more frequently applied, is to accept that the dose modelling
object in the IDC is different from the TPS calculation. This also means that one must be pre-
pared to find and accept deviations that are caused by these differences. Obviously this adds
significant uncertainty to the QA procedure as it requires the ability to distinguish between
deviations associated with the dose modelling object and deviations that are related to actual
flaws/bugs in the calculations. Typical sources of deviations due to the non-identical dose
modelling objects are heterogeneities and irregular outer boundaries of the patient (Mijn-
heer and Georg 2008). The irregular shape can affect both the calculation depths and the
lateral size of the scattering volume inside the beam (so-called missing tissue). In order to
compensate for the major sources of uncertainty in the IDC an equivalent/effective/radiolo-
gical depth is regularly applied, yielding dose deviations that in most cases are within a few
percent. These modified depths are generally provided by the TPS, which is a practice that
should be questioned as it also compromises the independence of the two dose calculations.
But from a practical point of view it may be difficult to derive them in some other way by
simple means.
5.4 CALCULATION UNCERTAINTIES
Over the years a number of publications have addressed the clinical requirements for dosime-
tric accuracy in radiotherapy (Brahme et al., 1988; ICRU, 1976; Mijnheer et al., 1989, 1987)
and the general conclusion seems to point to the interval between 3 and 5% expressed as 1 SD
in the dose specification point. Ahnesjö and Aspradakis (1999) tried to identify the accuracy
goals that should be associated with dose calculations in radiotherapy by combining estima-
ted uncertainties in absolute dose calibration (Andreo, 1990) with a corresponding estimate
for the clinical dose delivery (Brahme et al., 1988). Using these input data the conclusion
was that if the dose calculation uncertainty, corresponding to one standard deviation, is larger
than 2-3% it will seriously affect the overall dosimetric accuracy. Since then the estimated
uncertainty for absolute dose calibration in high-energy photon beams has improved from
2.0 to 1.5% (IAEA, 2000a), implying that the margin available for dose calculation
uncertainties to pass unnoticed has also decreased somewhat. Furthermore, by reducing the
other uncertainties to account for future developments, it was concluded that as an ultimate
design goal the uncertainties that are associated with dose calculation methods in radiotherapy
should be limited to 1% (1 std. dev.). Note that in order to avoid ambiguity all uncertainties
should be clearly specified.
Uncertainties that are associated with dose calculation models employed in radiotherapy are
in general not systematically accounted for in clinical procedures, as discussed in chapter 3.
The main reason for this is simply that they are not clearly presented and are, therefore, not
readily available to clinical users. Most scientific publications dealing with dose calculation
algorithms contain some kind of basic analysis of discovered deviations and uncertainties.
But the implications of these findings are rarely taken further to discuss how the infor-
mation can be transferred and made useful in the clinical situation. To better incorporate esti-
mated uncertainties in dose calculations into clinical use the uncertainties should be presen-
ted together with the calculation results, preferably individualized with respect to the specific
setup. Such an approach requires that a separate model be created that can adequately predict
the calculation uncertainty for any given treatment situation.
There are published attempts to find models capable of estimating the dose calculation
uncertainty in individual treatment setups. Olofsson et al (2003; 2006b) analyzed deviations
between calculations and measurements of output factors in air, which resulted in a simple
empirical model based on square summation of discrete uncertainty components. Thus, the
validity of this model relies on the assumption that these components, associated with tre-
atment parameters such as beam shape, treatment distance etc., are independent from each
other. By utilizing a database consisting of measured beam data from 593 clinical mega-
voltage photon beams Nyholm et al (2006a) managed to extract both an empirical pencil
kernel correction and a model for estimation of the residual errors linked to the pencil kernel
properties. By combining the models of Olofsson et al and Nyholm et al the calculation
uncertainties associated with the total dose output, i.e. dose per MU, have also been estimated
(Olofsson et al., 2006a; Olofsson et al., 2006b). Even though the results indicate that this is
a feasible concept, it is rather difficult to judge the validity of such uncertainty predictions.
The statistical nature of the problem requires the use of extensive data sets during evaluation
in order to achieve acceptable significance. Another issue that must be considered is the
accuracy of the reference values themselves, i.e. the dose measurements, if the uncertainty
estimations are evaluated by empirical means. All measurements are associated with some
kind of experimental uncertainty that will become part of the evaluation procedure as well, if
not somehow explicitly accounted for. Another option for evaluating such models of uncer-
tainty estimation would be to replace the measured references by equivalent Monte Carlo
simulations that are able to present a similar degree of accuracy. The possibilities offered by
a Monte Carlo simulation package in terms of beam parameter configuration etc. would also
enable a more detailed investigation of the reasons behind the encountered dose calculation
uncertainties.
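The square-summation idea of Olofsson et al can be illustrated in a few lines; the component names and magnitudes below are invented for illustration and do not reproduce the published model.

```python
import numpy as np

# Independent uncertainty components from individual treatment parameters
# are combined in quadrature. Values are illustrative placeholders (1 SD, %).
components = {
    "field shape":        0.6,
    "treatment distance": 0.4,
    "wedge":              0.8,
    "depth/beam quality": 0.5,
}
total = np.sqrt(sum(v**2 for v in components.values()))
print(f"combined calculation uncertainty: {total:.2f}% (1 SD)")
```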
As a general remark, it is important to evaluate and understand the uncertainties introduced in
the individual steps of the treatment process and how they combine to the total uncertainty in
the actual treatment. For instance, how will simplifications in the patient geometry or use of
a more approximate dose model affect the total uncertainty in different types of treatments?
There is no general answer to this question but if the analysis is performed and the more ad-
vanced methods are applied only where needed, a lot of extra work may be saved while still
keeping tight dosimetric tolerance limits. In this analysis one must also keep in mind that use
of more advanced methods may also increase the uncertainty due to user errors, which under
some circumstances may contribute significantly to the total uncertainty.
6. MEASURED DATA FOR VERIFICATION AND DOSE CALCULATIONS
All dose calculation methods use experimental data to characterize the radiation source(s)
in sufficient detail. These data must be expressed in quantities suitable to the algorithm, or
processed into such quantities.
Early dose calculation models were designed to use measured dose distribution data directly
through simple factorisation. These data could then be applied to different clinical beam geo-
metries for simpler treatment techniques. Several detailed formalisms have been worked out
to consistently define factors to cover possible clinical cases as completely as possible (Das et
al., 2008a; Dutreix et al., 1997). Beam data used in these older dose calculation models was
primarily directed towards determination of the absorbed dose distributions in a phantom,
rather than to find methods for determination of emission characteristics such as the source
distribution, source intensity and energy spectra that cause the observed dose distribution.
Even though these older models are sufficient for treatments using simple fields, this approach
becomes very impractical for the more general radiation conditions encountered in advanced
technologies such as IMRT that more fully exploit the degrees of freedom offered by modern
radiotherapy hardware.
The most fundamental approach to model a treatment field would be to describe the electron
source of a treatment unit in differential terms of position, energy and angle, and use trans-
port calculations based on a complete geometrical description of the treatment head. This
approach has been extensively utilized for researching beam properties by means of Monte
Carlo transport calculations (Rogers et al., 1995), but for most practical cases in clinical dose
calculations one must use more efficient methods, e.g. parameterized multi-source models. In
the latter approach the beam is modelled by combining source emission distributions with
a geometrical description of the source. Further, shielding and scatter of the emitted fluence
from the source is simulated.
Modelling of several radiation sources is needed to describe the effects from both the primary
beam source and major scattering elements like the flattening filter. The sources need to be
parameterized as extended sources rather than point sources to model effects like geometrical
penumbra and head scatter variations. Hence, use of a multi-source approach as implemented
in a TPS or an IDC code requires determination of the parameters needed to model the sour-
ces. Given a very detailed description of the treatment head design, a fundamental approach
with full transport calculations through the treatment head can be used to yield phase space
sets of particles exiting the machine. These sets can then be back-projected to the source
planes, and tallied for parameterization with respect to at least energy and position (Fix et al.,
2001). In applying this approach measured data are needed mainly for verification purposes.
Even though the concept in principle is simple, applying this approach in practice requires fa-
miliarity with Monte Carlo or similar calculation techniques that might not be available in the
clinic. Also, detailed information about the geometrical details of the treatment head might
be cumbersome to access and verify. However, in future it is likely that new machines will
be delivered with complete source parameterization derived through this kind of procedure.
A practical approach is to derive the source parameters through minimization of deviations
between measured and calculated dose data while varying the source parameters (Ahnesjö
et al., 2005; Bedford et al., 2003; Fippel et al., 2003; Olofsson et al., 2003). Commonly the
TPS and IDC systems employing this kind of models provide support or software tailored
for this purpose. The optimization procedure also has the benefit of biasing the parameters
towards the intended end result, and will also give an estimate of the expected performance
of the calculations. The input data to this type of software consists of measured dose data
and geometrical descriptions of the machine in some detail, depending on how explicitly the
dose calculation system will model collimating effects such as the tongue and groove effect,
leakage through rounded leaf tips, etc. The required level of detail is generally less than for
a full transport calculation.
Independent of the applied method, any error in the measured input data will result in para-
meter errors that will degrade the overall calculation accuracy. The intention of this chapter
is to briefly discuss possible measurement issues in relation to the final accuracy of the dose
calculations. For more in-depth details of measurement and beam commissioning procedures
the reader is referred to the report from the AAPM task group 106 (Das et al., 2008a).
6.1 INDEPENDENCE OF MEASURED DATA FOR DOSE VERIFICATION
With similar types of dose calculation algorithms in a clinic's TPS and IDC, both systems
could in theory be commissioned with identical data sets. However, due to the lack of
standards for specifying and formatting dosimetric data for such systems the TPS and the
IDC will probably require different, system-specific data sets. Although reformatting and
interpolation methods might enable transformation of data sets from one system to another,
this is not recommended for IDC verification as an error in the TPS data thus may cause cor-
related errors in the transformed IDC data set. The risk for correlated errors can be further
reduced if the IDC and the TPS use completely different types of beam data.
The absolute dose calibration is performed in the reference geometry, and to establish the
dose per monitor unit any IDC-TPS combination will use the same calibration geometry,
which thus will be an identical data entity in both systems. However, the validity of this
absolute dose calibration should be checked through other QA procedures like independent
reviewing and periodic verification measurements.
6.2 TREATMENT HEAD GEOMETRY ISSUES AND FIELD SPECIFICATIONS
In addition to measurable dosimetry data, multi-source models need a geometrical descrip-
tion of the treatment head with enough details to correctly describe the variation of source
effects. This type of data consists of both field-independent data like the locations of filters
and collimators, as well as field-dependent data like collimator settings. The latter data are
normally accessible in DICOM format through treatment plan databases. It is of immense
importance that the DICOM format field specifications from the TPS are correctly execu-
ted on the treatment machine. If not, inconsistencies in auxiliary beam set-up may lead to
unexpected deviations between the calculated and delivered dose. This must be verified by
an independent QA procedure, at least when hardware or software modifications have been
made to the accelerator or the TPS.
Use of rounded leaves is another issue that may cause inconsistencies in field size specifica-
tions since the positions of the leaf tip projection and the radiation field edge differ syste-
matically, see figure 5.7 (Boyer and Li, 1997; Boyer et al., 2001). Especially for small field
dosimetry of narrow slit IMRT the leaves must be positioned with high accuracy, since small
errors in leaf positioning amplify into large errors in delivered dose (Kung and Chen, 2000;
Cadman et al., 2002; Das et al., 2008b).
6.3 DEPTH DOSE, TPR AND BEAM QUALITY INDICES
In most dose calculation systems the dose component related to contaminating electrons at
the surface is less accurately modelled than the other dose components. This electron con-
tamination may also introduce errors in output measurements. It has a complex dependence
on radiation field geometry and direct beam energy since any irradiated accelerator part will
release electrons and positrons that may travel far in air and can be scattered considerably.
The maximum penetration depth of these particles is determined by the most energetic elec-
trons and may thus well exceed the d_max depth. This must be considered when selecting build-
up caps for head scatter measurements.
By using a well specified field size it is possible to obtain attenuation measurements that cor-
relate well with various spectrum-dependent quantities. This is the basis for the construction
of beam quality indices like TPR20,10 and %dd(10) (Andreo, 2000; Rogers and Yang, 1999).
These indices were originally designed as surrogates for the full spectrum to facilitate tabu-
lation of pre-calculated values of water-to-air stopping power ratios for absolute dosimetry,
but have also been applied for parameterization of energy deposition kernels (Nyholm et al.,
2006c) and scatter factors (Bjärngard and Petti, 1988; Bjärngard and Vadash, 1998).
Energy deposition kernels such as pencil kernels or point kernels are used by many dose
calculation systems. The kernels are based on spectrally dependent data for which the qua-
lity of measured depth dose data can be of crucial importance. The most direct approach for
derivation of pencil kernel data is to differentiate TPR tables with respect to field radii for
circular fields (Ceberg et al., 1996). In such applications, it is practical to use the relation
R = 0.5611·S to calculate the radius R of the circular field that gives the same dose on the cen-
tral axis as a square field of side S (Tatcher and Bjärngard, 1993). A more explicit procedure
is to determine the spectrum from depth dose data for one or several field sizes by automated
optimization (Ahnesjö et al., 2005; Ahnesjö and Andreo, 1989), or manual trial (Starkschall
et al., 2000), and use the spectrum for superposition of pre-calculated Monte Carlo mono-
energetic kernels. This approach needs constraints to achieve a physically realistic shape of
the spectrum since separation of depth data into spectral components is highly uncertain due
to the weak energy dependence of the beam attenuation (Ahnesjö and Andreo, 1989; Sauer
and Neumann, 1990).
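A sketch of the differentiation route is given below, using the equivalent-radius relation and a numerical derivative of a TPR-versus-radius table; the TPR values are smooth toy placeholders that measured data would replace.

```python
import numpy as np

def equivalent_radius(side_cm):
    """R = 0.5611*S (Tatcher and Bjarngard, 1993): radius of the circular
    field giving the same central-axis dose as a square field of side S."""
    return 0.5611 * side_cm

# Toy TPR values at one depth for circular fields derived from square sides.
radii = equivalent_radius(np.array([3.0, 5.0, 7.0, 10.0, 15.0, 20.0]))
tpr   = 0.60 + 0.20 * (1 - np.exp(-0.15 * radii))   # placeholder, not data

# The radial kernel follows from the derivative per unit field area:
# dD/dA = (dD/dr) / (2*pi*r).
dD_dr  = np.gradient(tpr, radii)
kernel = dD_dr / (2.0 * np.pi * radii)
print(kernel)
```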
A robust method to obtain pencil kernel data has been demonstrated by a number of investi-
gators (Nyholm et al., 2006a; Nyholm et al., 2006b, c) who used a database of parameterized
kernels for a large set of treatment machines and correlated those to beam quality index,
resulting in a direct mapping of beam index to pencil kernels for the accelerator designs in
common use (60Co beams and treatment machines without flattening filters were not included
in the study). Since the only data needed were the beam quality index, no depth dose data or
output factors needed to be fitted, making the method very effective for routine use. Once the
quality index is known for a particular machine, the method can also be used for consistency
checks of measured depth doses by comparing with calculated depth doses.
Whatever depth dose data are needed by the dose calculation algorithm, the outcome depends
on the data acquisition integrity. The measurement technique for depth dose is often simpler
and more reliable than for TPR if a suitable scanning water phantom is available. If TPR data
are required they can be recalculated from the depth dose data and vice versa (Purdy, 1977;
Das et al., 2008a).
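A commonly used approximate conversion, which neglects the small difference in phantom scatter between the fixed-SSD and fixed-detector-distance geometries (see the references above for the full treatment), is sketched below with placeholder PDD values.

```python
import numpy as np

# Approximate recalculation of TPR from measured percentage depth dose:
# TPR(d) ~ [PDD(d)/PDD(d_ref)] * ((SSD + d)/(SSD + d_ref))**2.
# PDD values below are toy numbers for a 10x10 cm2 field, not measured data.
SSD, d_ref = 90.0, 10.0                      # cm
depth = np.array([5.0, 10.0, 15.0, 20.0])    # cm
pdd   = np.array([86.0, 67.0, 52.0, 40.0])   # percent

pdd_ref = np.interp(d_ref, depth, pdd)
tpr = (pdd / pdd_ref) * ((SSD + depth) / (SSD + d_ref))**2
print(tpr)   # TPR normalized at d_ref = 10 cm
```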
Erroneous measured data may seriously corrupt the dose calculations. The nature of these er-
rors may range from direct replication of measured dose errors to offsets of model parameters
with serious effects that may appear uncorrelated to their cause. As an example, if depth dose
is used to unfold the beam spectrum, a depth offset can be interpreted as increased beam pe-
netration, yielding a spectrum with higher mean energy, which then reduces both attenuation
and scatter. A simple check may, however, be performed: depth dose curves normalized per
monitor unit and plotted together for different field geometries can never cross each other,
since the scatter dose increases with field area at all depths.
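This non-crossing criterion is easy to automate; the sketch below checks a set of placeholder depth dose curves ordered by field size.

```python
import numpy as np

def curves_cross(curves):
    """curves: dose-per-MU arrays on common depths, ordered by increasing
    field size. Returns True if a larger field ever gives less dose than a
    smaller one, i.e. if the curves cross (a sign of corrupted data)."""
    for small, large in zip(curves, curves[1:]):
        if np.any(np.asarray(large) <= np.asarray(small)):
            return True
    return False

# Placeholder curves at depths beyond the build-up region:
d5x5   = [0.95, 0.80, 0.63, 0.49]
d10x10 = [0.99, 0.85, 0.69, 0.55]
d20x20 = [1.02, 0.91, 0.76, 0.62]
print(curves_cross([d5x5, d10x10, d20x20]))   # False -> data are consistent
```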
In the build-up region, care should be taken since the normally used effective point of measu-
rement for cylindrical ionization chambers is derived for conditions valid only at depths
larger than the depth of dose maximum. The build-up is a high gradient region where the size
of the detector is critical and small detectors such as diodes or pinpoint chambers are prefer-
red. A further concern is the high presence of electron contamination in the build-up region,
which will vary significantly with field settings. This contribution is included in the dose
calculation by various models (chap. 5.2), which should be considered in the measurement
situation as these models may require different datasets.
The spectral properties of primary and scattered photons are different, which can cause pro-
blems with detectors such as diodes which typically over-respond to low-energy photons.
This is a particular problem in large photon fields where a large fraction of the photons are
scattered, thus yielding a high abundance of low-energy photons.
Yet another set of problems arises for small fields where the penumbras from opposing field
edges overlap in the centre of the field. Scanning of depth dose and transversal profiles in
such fields requires particular concern when aligning the scan path with the beam centre. The
size of the detector is also critical and must be small enough to adequately resolve the dose
variations in the beam.
6.4 RELATIVE OUTPUT MEASUREMENTS
The absolute dose calibration is normally performed to give a standard dose per monitor unit
in a well defined standard geometry, in the normal case 1 Gy per 100 MU. The relative output
(S_cp) is then defined as the ratio of the dose per monitor unit at a reference point in a field of
interest to the dose per monitor unit for the same point in the reference field.
If the relative output data are to be used in a fluence model for the treatment head connected
to a dose model for the resulting dose distribution in an object, the output measurements must
be acquired following specific procedures. These procedures must differentiate between scat-
ter originating from the treatment head (S_c), which influences the photon fluence per monitor
unit, and scatter originating in the irradiated object (S_p), which influences the resulting dose per
incident energy fluence.
6.4.1 head scatter factor measurements
Through the use of a phantom small enough to be entirely covered by all intended fields,
measured output factors map the field size dependence of the energy fluence output and
its energy absorption in the phantom medium. The factor relating this to the
reference conditions has been given different names such as output in-air, head scatter factor,
or collimator scatter factor. These respective names reflect the process of measuring with
small phantoms in air, that the source of variation is mainly the scattering processes in the
treatment head, and that changing collimator settings is the clinical action causing the varia-
tions. Measurements of this factor for field A are done versus a reference setup A_ref, normally
a 10×10 cm² field, with the detector at the isocenter:
S_c(A) = \left. \frac{\text{Signal}_{\text{small phantom}}(A)/M}{\text{Signal}_{\text{small phantom}}(A_{\text{ref}})/M} \right|_{\text{isocenter}} \qquad (6.1)
When using diodes for small field measurements the use of a smaller reference field (5×5
cm²) is recommended to reduce influence from low energy scatter (Haryanto et al., 2002).
However, for presentation the data should be renormalized to the customary 10×10 cm² field,
perhaps by using ionization chamber data, to avoid confusion. The most critical issue is to
ensure that the contaminant electrons and positrons do not give rise to any perturbing signal.
The penetration distance indicates how thick a build-up cap must be to stop enough of these
particles from reaching the detector volume while measuring head scatter factors. The proto-
col recommended by ESTRO in 1997 (Dutreix et al., 1997) applied a plastic mini-phantom.
The dimensions of that phantom, however, were too large to permit field sizes smaller than
5×5 cm². Li et al (1995) determined the radial thickness required to establish lateral electron
equilibrium to r = 5.973·TPR20,10 − 2.688. For a typical 6 MV beam with TPR20,10 = 0.68, this
translates into a water equivalent thickness of 14 mm. They also claimed that brass caps can
be used without serious energy response alterations. Weber et al (1997) recommended use of
brass caps with a rule-of-thumb thickness MV/3 expressed as g cm⁻². For a 6 MV beam this
implies 20 mm water equivalent thickness, which with brass of density 8.4 g cm⁻³ translates
to 2.5 mm bulk thickness, thus enabling measurements in small fields. For large fields a small
energy dependence with brass caps was also noted due to the lower energy of the head scat-
tered photons. This effect was shown to increase with higher beam energies. Furthermore, the
filtration of metal wedges affects the energy spectrum of the beam. The wedge factors should
therefore not be measured with brass caps as the energy change due to filtration may alter the
response of the dosimetry system. For practical use, brass caps of thicknesses following the
rules of thumb given above could be used for small fields. For high-accuracy measurements
in larger fields the brass caps should be replaced by build-up caps of lower atomic number,
where the lower density and hence larger size is not a problem.
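Both rules of thumb are simple enough to encode directly, as in the sketch below; the numbers reproduce the 6 MV examples quoted above (within rounding).

```python
def li_cap_thickness(tpr20_10):
    """Li et al (1995): water-equivalent radial thickness (g/cm2) needed
    for lateral electron equilibrium."""
    return 5.973 * tpr20_10 - 2.688

def weber_brass_cap(mv, brass_density=8.4):
    """Weber et al (1997) rule of thumb: MV/3 g/cm2, also converted to mm
    of bulk brass using the density quoted in the text."""
    areal = mv / 3.0                           # g/cm2
    return areal, areal / brass_density * 10   # (g/cm2, mm of brass)

print(li_cap_thickness(0.68))   # ~1.4 g/cm2, i.e. ~14 mm of water
print(weber_brass_cap(6))       # (2.0 g/cm2, ~2.4 mm of brass)
```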
6.4.2 total output factors
Contrary to the situation for head scatter factors, the use of a phantom larger than all inten-
ded fields causes the measured output to quantify the combined effects of energy fluence per
monitor unit variations and field-size specific scatter build-up in the phantom. This quantity is
defined as the total output factor, S_cp. For standard fields, large enough to provide lateral char-
ged particle equilibrium over an area large enough to cover the detector, the measurements
can be done by common techniques according to the definition:
S_{cp}(A) = \left. \frac{D_{\text{water phantom}}(A)/M}{D_{\text{water phantom}}(A_{\text{ref}})/M} \right|_{\text{isocenter}} \qquad (6.2)
Extending the scope of measurements to the small field domain and IMRT verification in-
volving high gradients requires the use of small detectors carefully selected with respect to
their characteristics (Zhu et al 2009). Since small field output factors may be requested by the
dose calculation model to establish the effective source size for calculations in high gradient
regions, all such data should be checked with respect to calculation consistency.
6.4.3 phantom scatter factors
Phantom scatter factors, S_p, can be obtained by several different methods. By cutting phant-
oms to conform to the beam shapes and irradiating these with fully open beams, with all
collimators withdrawn, one creates radiation conditions for which the variation of scatter
dose with field size can be determined. Since this procedure is experimentally impractical it
is instead customary to estimate the phantom scatter through
S_p = \left. \frac{S_{cp}}{S_c} \right|_{\text{isocenter}} \qquad (6.3)
It is important to keep in mind that Eq. (6.3) is an approximation and not the definition of
the phantom scatter factor since the distribution of head scatter photons is not limited by the no-
minal field size but has a more Gaussian shape within the field. The fraction of the phantom
scatter generated by photons scattered in the treatment head will thus be slightly different
from the contribution from primary photons, which are properly collimated.
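The chain from Eqs (6.1)-(6.3) reduces to a simple ratio, as sketched below with placeholder output factor values for a few square fields.

```python
# Estimating phantom scatter factors per Eq (6.3) from measured total
# output (Eq 6.2) and head scatter (Eq 6.1) factors at the isocenter.
# All values below are placeholders, not measured data.
fields = ["5x5", "10x10", "20x20"]
S_cp   = [0.930, 1.000, 1.060]    # total output factors
S_c    = [0.975, 1.000, 1.020]    # head scatter factors

for f, scp, sc in zip(fields, S_cp, S_c):
    print(f, round(scp / sc, 4))  # S_p(A) = S_cp(A) / S_c(A)
```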
Instead of being measured, the phantom scatter variation with field size can be calculated
from parameterizations of standard data based on the beam quality index as outlined by se-
veral groups (Storchi and van Gasteren, 1996; Venselaar et al., 1997; Bjärngard and Vadash,
1998; Nyholm et al., 2006c). This provides a route for consistency checks of measured fac-
tors by comparing calculated values with experimental results according to Eq. (6.3).
6.4.4 wedge factors for physical wedges
Metal wedges are used to modulate the lateral dose profile but the varying filtration in the
wedges also introduces spectral changes in the photon spectrum. Since spectral changes may
cause response variations in some dosimetric systems, it should be considered whether wed-
ged beams need to be treated as separate beam qualities when determining the dose per mo-
nitor unit under reference conditions. The total modulation of the wedge versus the open field
is usually measured by taking the dose ratio to the corresponding open field at equal positions
along the depth axis. Spectral and scattering variations will cause a variation in profile ratios
with depth, making it important to specify measurement conditions fully. It is also important
to avoid influence from charged particle contamination by selecting a large enough depth. A
factor similar to the wedge factor can be established by in-air measurements, but such a factor
should be used only with great care since it can be biased by spectral changes, see e.g. Zhu
et al (2009).
Wedged fields generated by dynamically varying the collimator position are basically a com-
bination of non-wedged beams, and wedge factors will in most cases be calculated by the dose
modelling system from open beam data.
6.5 COLLIMATOR TRANSMISSION
The leakage through collimators becomes more important the more modulated the treat-
ment is, simply because more monitor units expose the patient to more collimator leakage.
Leakage can result from radiation passing between the leaves, inter-leaf leakage, or being
transmitted through the leaves, intra-leaf leakage.
Measurements aiming at determining the overall results of intra- and inter-leaf leakages are
best done with a large ionization chamber or radiochromic film. The measurement geometry
must avoid influence from phantom scatter by minimizing the open parts of the beam. During
measurements, accessory backup collimators must be retracted to avoid blocking leakage
that otherwise may appear for irregularly shaped beam segments. This is a general recom-
mendation but alternative geometries may be recommended depending on how these data are
implemented in the dose calculation model.
The radiation quality outside the direct beam differs from that in the actual beam.
It is therefore important to use detectors with small energy dependence,
such as ionization chambers. It is also important to check current leakage offsets, since small
perturbations have a larger relative influence in the low dose region outside the beam than in
the actual beam.
6.6 LATERAL DOSE PROFILES
Lateral dose profiles, i.e. data measured along lines across the beam, have several appli-
cations for beam commissioning and characterization. The high dose part inside the beam
reflects the flattening of the beam and to some extent its lateral energy variation. Lateral
profiles are more sensitive to energy variations than are depth doses (Sheikh-Bagheri and
Rogers, 2002b).
The collimating system of a modern linear accelerator normally has a stationary, circular
primary collimator close to the target followed by adjustable multi-leaf collimators and pro-
visional backup jaws. To characterize and validate beam flattening and off-axis performance,
lateral profiles taken with a fully open beam are customary. When taken diagonally (without
any collimator rotation), such profiles reflect the influence from the primary collimator and
the full beam profile. Hence, such profiles are useful to model the off-axis, non-collimated
beam fluence, provided rotational symmetry exists elsewhere. The only experimental dif-
ficulty in acquiring such profiles is that some water phantoms are not large enough to ac-
commodate the entire profile. In these cases the phantom may be positioned to measure
half-profiles. In these situations it is critical that full scatter equilibrium is obtained at the
central axis. This may require adding scattering material outside the phantom, and as
full scatter contribution is a critical requirement it should always be verified when any part
of the beam is positioned near the edge or outside the phantom.
Another possible solution is to measure the profile in air and thus directly acquire the fluence
distribution. As with all in-air measurements, great care must be taken to exclude influence
from electron contamination, see e.g. Zhu et al (2009). For off-axis measurements in air,
low atomic number build-up caps should be used unless compensation for off-axis spectral
response can be made (Tonkopi et al., 2005). To check modelling of off-axis softening in
calculations, profiles could be measured at several depths and compared to calculations.
6.6.1 dose profiles in wedge fields
Wedge shaped modulations have several applications in radiotherapy. The wedge profile can
be shaped by essentially two methods, a metal wedge or by computer controlled movement
of the collimators to create a lateral wedge gradient. Measurements are simpler with metal
wedges since the entire profile is delivered with each monitor unit. With moving collimators
the final profile is not finished until the last monitor unit is delivered. When such wedge pro-
files are measured with a scanning detector one full delivery cycle is needed for each detector
position. Therefore, profiles in the latter case are commonly measured with film or multiple
detector systems in order to reduce the acquisition time. Multi-detector arrays require careful
cross-calibration for uniform response corrections.
The energy and angular distribution of the photons in wedged beams vary over the beam pro-
file due to the variation of the scatter component. With metal wedges the added filtering will
vary over the beam. Detectors with large energy dependence, e.g. silver-based film, should
therefore in general be avoided in measurements of wedge beam profiles.
As with measurements in all gradient regions, careful positioning and orientation of detectors
is crucial. While using cylindrical chambers with physical wedges, the detector should be
oriented to expose its narrowest sensitivity profile across the highest gradient. The main flu-
ence direction of secondary electrons is still along the depth direction, not across any wedge
gradient, making concepts for effective point of measurement originally derived for depth
dose measurements irrelevant.
6.7 PENUMBRA AND SOURCE SIZE DETERMINATION
The accelerator beam target is in reality not a point source but has a finite size that comprises
an effective source size and that yields fluence penumbra effects in beam collimation. In
multi-segmented IMRT a substantial part of the dose stems from penumbra regions, making
the source size characterization an important part of beam commissioning. The beam penum-
bra is used as input data in most currently available beam calculation models.
Two main groups of methods exist to determine the source characteristics. One group aims
at determining the source distribution in some detail utilizing dedicated camera techniques.
Such a method can be based on a single slit camera that can be rotated (Jaffray et al., 1993;
Munro et al., 1988) or laterally shifted (Loewenthal et al., 1992). Other examples of sugge-
sted techniques include a multi-slit camera (Lutz et al., 1988) or a tungsten roll bar (Schach
von Wittenau et al., 2002). Treuer et al (2003) used an MLC equipped with 1 mm wide
non-divergent leaves for grid field measurements, which then were used to derive a source
distribution. Although these authors provide a lot of details, this group of methods is rather
experimental and needs refinement to be practical in a clinical setting.
Another group of methods is based on standard dosimetric measurements, frequently by pa-
rameterizing the source by one or several Gaussian distributions whose parameters are found
by fitting calculated dose profiles to profiles measured either in water (Ahnesjö et al., 1992b)
or in air (Fippel et al., 2003). Sikora (2007) used measured output factors in water for small
beams (down to 0.8×0.8 cm²) to fit the direct source size. Measured penumbras and output
factors are, however, dependent on many parameters that are not directly linked to the direct
source distribution, such as collimator design, lateral transport of secondary particles and
volume averaging effects in the detector. Direct beam sources that have been characterized
through these types of methods are therefore associated with considerable uncertainty, which
on the other hand may be quite acceptable if the main goal is to tune the dose calculation
model to reproduce experimental penumbras and/or output factors. The most critical aspect
from the user's point of view is to choose detectors that are small enough not to introduce pe-
numbra widening through averaging. Proper orientation of the detector such that its smallest
width is exposed to the gradient is important to minimize the effects. If in doubt, the profile
measurements should be reproduced with a smaller detector.
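A common variant of this fitting approach models the measured field edge as an ideal step convolved with a Gaussian, giving an erf-shaped profile whose width is fitted. The sketch below (using scipy, with a synthetic "measured" profile) illustrates the idea; the fitted sigma lumps together source size, secondary particle transport and detector averaging.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import erf

def edge_profile(x, sigma, edge=0.0):
    """Ideal step field edge convolved with a Gaussian of width sigma:
    an erf-shaped lateral profile across the edge at position 'edge'."""
    return 0.5 * (1.0 + erf((edge - x) / (np.sqrt(2.0) * sigma)))

x = np.linspace(-1.5, 1.5, 31)   # cm, scan positions across the field edge
# Synthetic "measured" penumbra: true sigma 0.25 cm plus a little noise.
measured = edge_profile(x, sigma=0.25) + np.random.normal(0, 0.005, x.size)

popt, _ = curve_fit(edge_profile, x, measured, p0=[0.3, 0.0])
print(f"fitted penumbra sigma: {popt[0]:.3f} cm")
```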
APPENDIX 1, ALGORITHM IMPLEMENTATION AND
THE GLOBAL DATABASE
Within the process of this project published algorithms were carefully analysed and evalua-
ted. In this evaluation process a set of algorithms was selected for implementation in research
software used for clinical testing in a number of test sites in Europe. Nucletron has imple-
mented these algorithms in a CE/FDA certified product, EQUAL-Dose™.