The development of treatment planning methods in radiation therapy requires dose calculation methods that are both accurate and general enough to provide a dose per unit monitor setting for a broad variety of fields and beam modifiers. The purpose of this work was to develop ...
To explore the use of the frequency of energy deposition (ED) clusters of different sizes (cluster order, CO) as a surrogate classification (instead of, e.g., LET) of the physical characteristics of ionizing radiation at a nanometer scale, and to construct a framework for the calculation of relative biological effectiveness (RBE) with cell survival as endpoint. The frequency of cluster order fCO is calculated by sorting the ED sites generated with the Monte Carlo track structure code LIonTrack into clusters based on a single parameter, the cluster distance dC, which is the maximum allowed distance between two neighboring EDs belonging to a cluster. Published cell survival data parameterized with the linear-quadratic (LQ) model for V79 cells exposed to 15 different radiation qualities (including brachytherapy sources, protons, and carbon ions) were used as input to a fitting procedure designed to determine a weighting function wCO that describes the capacity of a cluster of a certain CO to damage the cell's sensitive volume. The proposed framework uses both fCO and wCO to construct surrogate-based functions for the LQ parameters α and β from which RBE values can be derived. The results demonstrate that radiation quality independent weights wCO exist for both the α and β parameters. This enables the calculation of α values that correlate with their experimental counterparts within experimental uncertainties (relative residual of 15% for dC = 2.5 nm). The combination of the α and β surrogate-based functions, despite the higher relative residuals for β values, yielded an RBE function that correlated with experimentally derived RBE values (relative residual of 16.5% for dC = 2.5 nm) for all radiation qualities included in this work.
The fCO cluster characterization of ionizing radiation at a nanometer scale can effectively be used to calculate particle and energy dependent α and β values to predict RBE values with potential applications to, e.g., treatment planning systems in radiotherapy.
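The cluster sorting described above, where each ED belongs to a cluster if it lies within the cluster distance dC of at least one other member, amounts to single-linkage grouping. A minimal sketch of that idea (not the LIonTrack implementation; the site coordinates and the union-find helper are illustrative):

```python
import math

def cluster_orders(sites, d_c):
    """Group energy-deposition (ED) sites into clusters in which every ED
    lies within d_c (nm) of at least one other member, then return the
    size (cluster order, CO) of each cluster, largest first."""
    n = len(sites)
    parent = list(range(n))  # union-find forest

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    # Link every pair of sites closer than the cluster distance
    for i in range(n):
        for j in range(i + 1, n):
            if math.dist(sites[i], sites[j]) <= d_c:
                parent[find(i)] = find(j)

    counts = {}
    for i in range(n):
        root = find(i)
        counts[root] = counts.get(root, 0) + 1
    return sorted(counts.values(), reverse=True)

# Two tight triplets far apart plus one isolated ED (coordinates in nm)
sites = [(0, 0, 0), (1, 0, 0), (0, 1, 0),
         (50, 0, 0), (51, 0, 0), (50, 1, 0),
         (200, 0, 0)]
print(cluster_orders(sites, 2.5))  # → [3, 3, 1]
```

The frequency fCO would then follow from histogramming these cluster orders over many simulated tracks.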
Basic dosimetric properties of fast-neutron beams with energies ≤80 MeV were explored using Monte Carlo techniques. Elementary pencil-beam dose distributions taking into account transport of all relevant types of released charged particles (protons, deuterons, tritons, ³He and α particles) were calculated and used to derive several absorbed dose distributions. Broad-beam depth doses in phantoms of different materials were compared and scaling factors calculated to convert absorbed dose in one material to absorbed dose in another. The scaling factors were in good agreement with available published data and show that water is a good substitute for soft tissue even at neutron energies as high as 80 MeV. The inherent penumbra and fraction of absorbed dose due to photons were also studied, and found to be consistent with published values. Treatment planning in fast-neutron therapy is commonly performed using dose calculation algorithms designed for photon beam therapy. These algorithms have limitations in the physical models when applied to neutron beams. Monte Carlo derived neutron pencil-beam kernels were parameterized and implemented into the photon dose calculation algorithms of the TMS (MDS Nordion) treatment planning system. It was shown that these algorithms yield good results in homogeneous water media. However, the heterogeneity correction method of the photon dose calculation algorithm failed to produce correct results in heterogeneous media for neutron beams. Fundamental cross-section data are needed when calculating absorbed doses, but neutron cross-sections are still not sufficiently well known to achieve results with adequate accuracy.
At The Svedberg Laboratory in Uppsala, Sweden, an experimental facility has been designed to measure neutron-induced charged-particle production cross-sections for (n,xp), (n,xd), (n,xt), (n,x³He) and (n,xα) reactions at neutron energies up to 100 MeV. To derive the energy distributions of the charged particles generated inside the production target, the measured data have to be corrected for the energy lost by the particles in the target. In this work a code (CRAWL) was developed that reconstructs the true spectrum using a stripping method. Apart from a reduced energy resolution, results obtained with CRAWL compare well with those of other methods.
Tumor Response Monitoring and Treatment Planning, 1992
We present here an implementation of a photon dose pencil beam algorithm (Ahnesjo et al. 1991a) in the TMS-Radix (Helax) treatment planning system. It is intended for interactive dose calculation with a minimum of overhead to specified points, including two-dimensional grids, in an interactive environment in which field shapes, gantry angles, wedges, and treatment units may be changed in the search for an optimal treatment plan. Although the dose can also be calculated in three-dimensional grids, the algorithm is not primarily intended for such bulk dose calculation, as better performing algorithms exist for that purpose (Ahnesjo 1989). The algorithm is based on first physical principles as far as possible. In this way the number of characterizing measurements is kept to a minimum, and yet the algorithm is flexible enough to handle nonstandard cases such as arbitrary field shapes, inhomogeneities, and laterally varying energy fluence from the radiation head. The characterizing measurements required are summarized elsewhere (Ahnesjo and Saxner, this volume).
The variation in specific energy absorbed in different cell compartments caused by variations in size and chemical composition is poorly investigated in radiotherapy. The aim of this study was to develop an algorithm to derive cell and cell nuclei size distributions from 2D histology samples, and to build 3D cellular geometries that provide Monte Carlo (MC)-based dose calculation engines with a morphologically relevant input geometry. Stained and unstained regions of the histology samples are segmented using a Gaussian mixture model, and individual cell nuclei are identified via thresholding. Delaunay triangulation is applied to determine the distribution of distances between the centroids of nearest neighbour cells. A pouring simulation is used to build a 3D virtual tissue sample, with cell radii randomised according to the cell size distribution determined from the histology samples. A slice with the same thickness as the histology sample is cut through the 3D data and characterised in the same way as the measured histology. The comparison between this virtual slice and the measured histology is used to adjust the initial cell size distribution fed into the pouring simulation. This iterative approach, a pouring simulation with adjustments guided by comparison, is continued until an input cell size distribution is found that yields a distribution in the sliced geometry that agrees with the measured histology samples. The morphologically realistic 3D cellular geometry thus obtained can be used as input to MC-based dose calculation programs for studies of dose response due to variations in morphology and size of tumour/healthy tissue cells/nuclei, and extracellular material.
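The centroid-distance step can be illustrated with a brute-force nearest-neighbour search standing in for the Delaunay-based neighbour lookup used in the study (the centroid coordinates below are made up):

```python
import math

def nn_distances(centroids):
    """Distance from each nucleus centroid to its nearest neighbour.
    A brute-force O(n^2) search is used here; the study derives the
    neighbour relation from a Delaunay triangulation instead."""
    return [min(math.dist(p, q) for j, q in enumerate(centroids) if j != i)
            for i, p in enumerate(centroids)]

# Hypothetical centroid positions (micrometres) from a segmented 2D slice
centroids = [(0.0, 0.0), (3.0, 0.0), (3.0, 4.0), (10.0, 10.0)]
print(nn_distances(centroids))  # nearest-neighbour distance per centroid
```

Histogramming these distances over a whole slide gives the distribution that the pouring simulation is tuned to reproduce.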
Dose calculation methods for 3D treatment planning must be general enough to consider the degrees of freedom available for treatment delivery, without losing accuracy in the final result. This is the motivation for the development of energy deposition kernel models, which are reviewed here.
Various dose calculation methods have been proposed to serve the needs in treatment planning of radiotherapy. Common to these is that they need a patient model to describe the interaction properties of the irradiated tissues, and a sufficiently accurate description of the incident radiation. This chapter starts with a brief review of the contexts in which patient dose calculations may serve, followed by a description of common methods for patient modelling and beam characterization. The focus is on external photon beams, but particle beams such as electrons and protons are also partly covered. The last section describes common approaches of varying complexity for dose calculation, ranging from simple factor-based models to more elaborate pencil and point kernel models, and finally summarizes some aspects of Monte Carlo and grid-based methods.
Journal of applied clinical medical physics / American College of Medical Physics, Jan 8, 2014
Metal objects in the body such as hip prostheses cause artifacts in CT images. When CT images degraded by artifacts are used for treatment planning of radiotherapy, the artifacts can yield inaccurate dose calculations and, for particle beams, erroneous penetration depths. A metal artifact reduction software (O-MAR) installed on a Philips Brilliance Big Bore CT has been tested for applications in treatment planning of proton radiotherapy. Hip prostheses mounted in a water phantom were used as test objects. Images without metal objects were acquired and used as reference data for the analysis of artifact-affected regions outside of the metal objects in both the O-MAR corrected and the uncorrected images. Water equivalent thicknesses (WET) based on proton stopping power data were calculated to quantify differences in the calculated proton beam penetration for the different image sets. The WET to a selected point of interest between the hip prostheses was calculated for several beam dir...
New treatment techniques in radiotherapy employ increasing dose calculation complexity in treatment planning. For an adequate check of the results coming from a modern treatment planning system, clinical tools with almost the same degree of generality and accuracy as the planning system itself are needed. To fulfil this need we propose a photon pencil kernel parameterization based on a minimum of input data that can be used for phantom scatter calculations. Through scatter integration the pencil kernel model can calculate common parameters, such as TPR or phantom scatter factors, used in various dosimetric QA (quality assurance) procedures. The proposed model originates from an already published radially parameterized pencil kernel. A depth parameterization of the pencil kernel parameters has been introduced, based on a large database containing commissioned beam data for a commercial treatment planning system. The entire pencil kernel model demands only one photon beam quality index, TPR20,10, as input. By comparing the dose calculation results to the extensive experimental data set in the database, it has been possible to make a thorough analysis of the resulting accuracy. The errors in calculated doses, normalized to the reference geometry, are in most cases smaller than 2%. The investigation shows that a pencil kernel model based only on TPR20,10 can be used for dosimetric verification purposes in megavoltage photon beams at depths below the range of contaminating electrons.
The implementation of two algorithms for calculating dose distributions for radiation therapy treatment planning of intermediate energy proton beams is described. A pencil kernel algorithm and a depth penetration algorithm have been incorporated into a commercial three-dimensional treatment planning system (Helax-TMS, Helax AB, Sweden) to allow conformal planning techniques using irregularly shaped fields, proton range modulation, range modification and dose calculation for non-coplanar beams. The pencil kernel algorithm is developed from the Fermi-Eyges formalism and Molière multiple-scattering theory with range straggling corrections applied. The depth penetration algorithm is based on the energy loss in the continuous slowing down approximation with simple correction factors applied to the beam penumbra region and has been implemented for fast, interactive treatment planning. Modelling of the effects of air gaps and of range modifying device thickness and position is implicit to both algorithms. Measured and calculated dose values are compared for a therapeutic proton beam in both homogeneous and heterogeneous phantoms of varying complexity. Both algorithms model the beam penumbra as a function of depth in a homogeneous phantom with acceptable accuracy. Results show that the pencil kernel algorithm is required for modelling the dose perturbation effects from scattering in heterogeneous media.
Two different commercial electronic portal imaging devices (EPIDs), one based on a liquid ion chamber matrix and the other based on a fluoroscopic CCD camera, were used to acquire readings that, through a calibration procedure, provided images proportional to the absolute dose to a virtual water slab located at the EPID plane. The transformation of the matrix ion chamber image into a portal dose image (PDI) was based on a published relationship between dose rate and ionization current. For the fluoroscopic CCD-camera-based system, the transformation was based on a deconvolution with a radial light scatter kernel. Local response variations were corrected in the images from both systems using open field fluence maps. The acquired PDIs were compared with PDIs calculated with the collapsed cone superposition method for a three-dimensional detector model in water equivalent buildup material. The calculation model was based on the beam modelling and geometrical description of the treatment unit and energy used for treatment planning in a kernel-based system. The validity of the calculation method was evaluated for several field shapes and thicknesses of patient phantoms for the matrix ion chamber at 6 MV x-rays and for the camera-based EPID at 6 and 15 MV x-rays. The agreement between predicted and measured PDIs was evaluated with dose comparisons at points of interest and gamma index calculations. The average area failing the passing criteria in dose and position deviation was analysed to validate the performance of the method. For the matrix ion chamber, on average less than 1% of the area fails the passing criteria of 3 mm and 3%. For the camera-based EPID, the average failing area is 7% and 1% for 6 and 15 MV, respectively. The overall agreement centrally in the fields was 0.1 ± 1.6% (1 sd) for the camera-based EPID and -0.1 ± 1.6% (1 sd) for the matrix ion chamber.
Thus, an absolute dose calibrated EPID could validate the delivered dose to the patient by comparing a calculated and a measured PDI.
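The gamma index used to compare predicted and measured PDIs combines a dose-difference tolerance with a distance-to-agreement tolerance. A minimal 1D sketch with global 3%/3 mm criteria (profile values and grid spacing are illustrative; clinical implementations work on 2D images with sub-grid interpolation):

```python
import math

def gamma_1d(ref, meas, dx, dose_tol=0.03, dist_tol=3.0):
    """Global 1D gamma index: for each reference point, the minimum over
    measured points of sqrt((dose diff / dose tol)^2 + (distance / DTA)^2),
    with dose differences normalised to the reference maximum.
    A point passes the comparison when its gamma value is <= 1."""
    d_max = max(ref)
    gammas = []
    for i, d_ref in enumerate(ref):
        best = math.inf
        for j, d_meas in enumerate(meas):
            dose_term = (d_meas - d_ref) / (dose_tol * d_max)
            dist_term = (j - i) * dx / dist_tol
            best = min(best, math.hypot(dose_term, dist_term))
        gammas.append(best)
    return gammas

ref  = [1.00, 0.98, 0.95, 0.90]   # predicted PDI profile (relative dose)
meas = [1.00, 0.99, 0.95, 0.91]   # measured PDI profile
g = gamma_1d(ref, meas, dx=1.0)   # 1 mm grid, 3%/3 mm criteria
print(all(v <= 1.0 for v in g))   # → True
```

The "average area failing" figures quoted above correspond to the fraction of points with gamma values above 1.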
Silicon diodes have good spatial resolution, which makes them advantageous over ionization chambers for dosimetry in fields with high dose gradients. However, silicon diodes overrespond to low-energy photons, which are more abundant in scattered radiation and increase with field size and depth. We present a cavity-theory-based model for a general response function for silicon detectors at arbitrary positions within photon fields. The model uses photon and electron spectra calculated from fluence pencil kernels. The incident photons are treated according to their energy through a bipartition of the primary beam photon spectrum into low- and high-energy components. Primary electrons from the high-energy component are treated according to Spencer-Attix cavity theory. Low-energy primary photons together with all scattered photons are treated according to large cavity theory supplemented with an energy-dependent factor K(E) to compensate for energy variations in the electron equilibrium. The depth variation of the response for an unshielded silicon detector has been calculated for 5 × 5 cm², 10 × 10 cm² and 20 × 20 cm² fields in 6 and 15 MV beams and compared with measurements, showing that our model calculates response factors with deviations less than 0.6%. An alternative method is also proposed, where we show that a correlation with the scatter factor can be used to determine the detector response of silicon diodes with an error of less than 3% in 6 MV and 15 MV photon beams.
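The large cavity theory component weights the photon energy fluence with mass energy-absorption coefficients of the detector material and water. A schematic sketch of that ratio, assuming a toy three-bin spectrum and placeholder coefficient values (not NIST data, and omitting the paper's K(E) factor and the Spencer-Attix treatment of the high-energy component):

```python
def large_cavity_response(spectrum, muen_det, muen_water):
    """Large-cavity detector response relative to water: ratio of
    energy-fluence-weighted mass energy-absorption coefficients.
    spectrum maps photon energy (MeV) to relative energy fluence."""
    num = sum(phi * muen_det[e] for e, phi in spectrum.items())
    den = sum(phi * muen_water[e] for e, phi in spectrum.items())
    return num / den

# Three-bin toy spectrum and placeholder coefficients (cm^2/g), NOT NIST data;
# the low-energy bin is given an exaggerated silicon coefficient to mimic
# the photoelectric-driven overresponse.
spectrum   = {0.1: 0.2, 1.0: 0.5, 5.0: 0.3}
muen_si    = {0.1: 0.070, 1.0: 0.030, 5.0: 0.024}   # silicon (hypothetical)
muen_water = {0.1: 0.025, 1.0: 0.031, 5.0: 0.023}   # water (hypothetical)

r = large_cavity_response(spectrum, muen_si, muen_water)
print(round(r, 3))  # > 1: the low-energy bin drives the silicon overresponse
```

As the scattered (low-energy) share of the spectrum grows with field size and depth, this ratio grows with it, which is the overresponse the full model corrects for.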
Papers by Anders Ahnesjo