Lack of good user interfaces has been a major impediment to the acceptance and routine use of health-care professional workstations. Health-care providers, and the environment in which they practice, place strenuous demands on the interface. User interfaces must be designed with greater consideration of the requirements, cognitive capabilities, and limitations of the end user. The challenge of gaining better acceptance and achieving widespread use of clinical information systems will be accentuated as the variety and complexity of multimedia presentation increase. A better understanding of the cognitive processes involved in human-computer interaction is needed in order to design interfaces that are more intuitive and more acceptable to health-care professionals. Critical areas that deserve immediate attention include: improvement of pen-based technology, development of knowledge-based techniques that support contextual presentation, and development of new strategies and metrics to evaluate user interfaces. Only with deliberate attention to the user interface can we improve the ways in which information technology contributes to the efficiency and effectiveness of health-care providers.
Meal lipids (LIP) and proteins (PRO) may influence the effect of insulin doses based on carbohydrate (CHO) counting in patients with type 1 diabetes (T1D). We developed a smartphone application for CHO, LIP, and PRO counting in daily food and assessed its usability in real-life conditions and its potential usefulness. Ten T1D patients used the Android application for 1 week to record their food intake. Data included meal composition, premeal and 2-hour postmeal blood glucose, corrections for hypo- or hyperglycemia after meals, and the time needed to enter meals in the application. Meal insulin doses were based on the patients' CHO counting (application in blinded mode). Linear mixed models were used to assess statistical differences. In all, 187 meals were analyzed. The average computed CHO amount was 74.37 ± 31.78 grams, the LIP amount 20.26 ± 14.28 grams, and the PRO amount 25.68 ± 16.68 grams. Average CHO, LIP, and PRO contents differed significantly between breakfast and lunch/dinner. The...
Validation is one of the software engineering disciplines that help build quality into software. The major objective of the software validation process is to determine that the software performs its intended functions correctly and to provide information about its quality and reliability. This paper identifies general measures for the specific goals, and their specific practices, of the Validation Process Area (PA) in the Capability Maturity Model Integration (CMMI). CMMI, developed by the Software Engineering Institute (SEI), is a framework for the improvement and assessment of a software development process, and it needs a measurement program that is practical. To define the measures, we applied the Goal Question Metric (GQM) paradigm to the specific goals and specific practices of the Validation Process Area in CMMI.
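The GQM derivation described above can be illustrated with a small sketch. Note that the goal, questions, and metrics below are hypothetical examples invented for illustration; they are not the actual measures the paper defines for the Validation PA.

```python
# Hypothetical Goal-Question-Metric (GQM) tree for the CMMI Validation PA.
# All entries are illustrative placeholders, not the paper's measures.
gqm = {
    "goal": "Ensure the product fulfils its intended use in the target environment",
    "questions": [
        {
            "question": "How thoroughly are validation activities performed?",
            "metrics": [
                "Percentage of requirements covered by validation tests",
                "Number of validation sessions per release",
            ],
        },
        {
            "question": "How effective is validation at finding problems?",
            "metrics": [
                "Defects found during validation per KLOC",
                "Percentage of validation issues resolved before release",
            ],
        },
    ],
}

def list_metrics(tree):
    """Flatten a GQM tree into the list of metric names it defines."""
    return [m for q in tree["questions"] for m in q["metrics"]]

for metric in list_metrics(gqm):
    print(metric)
```

The point of the structure is traceability: each metric is collected only because it answers a question, and each question exists only because it operationalizes the goal.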
Haptic technology (sense of touch), combined with 3D virtual reality (VR) graphics to create lifelike training simulations, was used to develop a dental training simulator system (PerioSim). This preliminary study was designed to evaluate whether faculty considered PerioSim realistic and useful for training and evaluating the basic procedural skills of students. The haptic device employed was a PHANToM, and the simulator a Dell Xeon 530 workstation with 3D VR oral models and instruments viewed on a stereoscopic monitor. An onscreen VR periodontal probe or explorer was manipulated by operating the PHANToM, producing lifelike contact and interactions with the teeth and gingiva. Thirty experienced clinical dental and dental hygiene faculty judged the realism of the system. A PowerPoint presentation on one screen provided instructions for using the simulator, with the 3D VR simulation on a second stereoscopic monitor viewed through 3D goggles. Faculty/practitioners found the images very realistic fo...
Quality assurance is an important issue for software companies because delivering high-quality software is essential for customer satisfaction. Software quality is a relatively complex concept, but many companies have standards for quality assurance. In agile methods, developers are responsible for quality assurance, and the methods include many practices to support it. In the waterfall model, by contrast, quality assurance is carried out at each stage using different validation practices. The parameters used to control software quality are also discussed. Traditional quality assurance is performed periodically at various stages, whereas agile quality assurance is carried out continuously by the team on a daily basis. In this paper, we describe ways to improve quality assurance in agile development.
We report the results of a transcript finishing initiative, undertaken for the purpose of identifying and characterizing novel human transcripts, in which RT-PCR was used to bridge gaps between paired EST clusters, mapped against the genomic sequence. Each pair of EST clusters selected for experimental validation was designated a transcript finishing unit (TFU). A total of 489 TFUs were selected for validation, and an overall efficiency of 43.1% was achieved. We generated a total of 59,975 bp of transcribed sequences organized into 432 exons, contributing to the definition of the structure of 211 human transcripts. The structure of several transcripts reported here was confirmed during the course of this project, through the generation of their corresponding full-length cDNA sequences. Nevertheless, for 21% of the validated TFUs, a full-length cDNA sequence is not yet available in public databases, and the structure of 69.2% of these TFUs was not correctly predicted by computer prog...
A Monte Carlo (MC) based QA process to validate the dynamic beam delivery accuracy for Varian RapidArc (Varian Medical Systems, Palo Alto, CA) using Linac delivery log files (DynaLog) is presented. Using DynaLog file analysis and MC simulations, the goals of this article are to (a) confirm that adequate sampling is used in the RapidArc optimization algorithm (177 static gantry angles) and (b) assess the physical machine performance [gantry angle and monitor unit (MU) delivery accuracy]. Ten clinically acceptable RapidArc treatment plans were generated for various tumor sites and delivered to a water-equivalent cylindrical phantom on the treatment unit. Three Monte Carlo simulations were performed to calculate dose to the CT phantom image set: (a) one using a series of static gantry angles defined by 177 control points with treatment planning system (TPS) MLC control files (planning files), (b) one using continuous gantry rotation with TPS-generated MLC control files, and (c) one using continuous gantry rotation with actual Linac delivery log files. Monte Carlo simulated dose distributions were compared to both ionization chamber point measurements and RapidArc TPS calculated doses. The 3D dose distributions were compared using a 3D gamma-factor analysis employing a 3%/3 mm distance-to-agreement criterion. The dose difference between MC simulations, TPS, and ionization chamber point measurements was less than 2.1%. For all plans, the MC calculated 3D dose distributions agreed well with the TPS calculated doses (gamma-factor values were less than 1 for more than 95% of the points considered). Machine performance QA was supplemented with an extensive DynaLog file analysis, which showed that leaf position errors were less than 1 mm 94% of the time and that there were no leaf errors greater than 2.5 mm. The mean standard deviations in MU and gantry angle were 0.052 MU and 0.355 degrees, respectively, for the ten cases analyzed.
The accuracy and flexibility of the Monte Carlo based RapidArc QA system were demonstrated. Good machine performance and accurate dose distribution delivery of RapidArc plans were observed. The sampling used in the TPS optimization algorithm was found to be adequate.
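The gamma-factor criterion used above combines a dose-difference tolerance with a distance-to-agreement tolerance into a single pass/fail score per point. The following is a minimal 1-D sketch of a global gamma index under the 3%/3 mm criterion; the analysis in the study is 3-D, and clinical implementations interpolate the evaluated distribution much more finely, so this is an illustration of the idea, not the actual QA code.

```python
import math

def gamma_1d(ref, eval_, spacing_mm, dose_tol=0.03, dist_mm=3.0):
    """1-D global gamma index (sketch of the 3%/3 mm criterion).

    ref, eval_: reference and evaluated dose values on a common grid
    with `spacing_mm` point spacing. Dose differences are normalized
    to the reference maximum (global normalization).
    """
    dmax = max(ref)
    gammas = []
    for i, dr in enumerate(ref):
        best = math.inf
        for j, de in enumerate(eval_):
            dd = (de - dr) / (dose_tol * dmax)    # dose-difference term
            dta = (j - i) * spacing_mm / dist_mm  # distance-to-agreement term
            best = min(best, math.hypot(dd, dta))
        gammas.append(best)
    return gammas

def pass_rate(gammas):
    """Fraction of points with gamma <= 1, the usual acceptance criterion."""
    return sum(g <= 1.0 for g in gammas) / len(gammas)

profile = [1.0, 2.0, 3.0, 2.0, 1.0]
print(pass_rate(gamma_1d(profile, profile, spacing_mm=1.0)))  # identical profiles pass
```

A point can fail on dose difference alone yet still pass if a nearby point in the evaluated distribution matches its dose within the distance tolerance, which is exactly what makes gamma more forgiving than a pointwise dose comparison in high-gradient regions.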
Numerous publications and commercial systems are available that deal with automatic detection of pulmonary nodules in thoracic computed tomography scans, but a comparative study where many systems are applied to the same data set has not yet been performed. This paper introduces ANODE09 ( http://anode09.isi.uu.nl), a database of 55 scans from a lung cancer screening program and a web-based framework for objective evaluation of nodule detection algorithms. Any team can upload results to facilitate benchmarking. The performance of six algorithms for which results are available is compared: five from academic groups and one commercially available system. A method to combine the output of multiple systems is proposed. Results show a substantial performance difference between algorithms, and demonstrate that combining the output of algorithms leads to marked performance improvements.
A pivotal component in automated external defibrillators (AEDs) is the detection of ventricular fibrillation (VF) by means of appropriate detection algorithms. The scientific literature contains a wide variety of methods and ideas for handling this task. These algorithms should have a high detection quality, be easily implementable, and work in real time in an AED. Testing of these algorithms should
A Monte Carlo (MC) validation of the vendor-supplied Varian TrueBeam 6 MV flattened (6X) phase-space file and the first implementation of the Siebers-Keall MC MLC model as applied to the HD120 MLC (for 6X flat and 6X flattening filter-free (6X FFF) beams) are described. The MC model is validated in the context of VMAT patient-specific quality assurance. The Monte Carlo commissioning process involves: 1) validating the calculated open-field percentage depth doses (PDDs), profiles, and output factors (OF), 2) adapting the Siebers-Keall MLC model to match the new HD120-MLC geometry and material composition, 3) determining the absolute dose conversion factor for the MC calculation, and 4) validating this entire linac/MLC in the context of dose calculation verification for clinical VMAT plans. MC PDDs for the 6X beams agree with the measured data to within 2.0% for field sizes ranging from 2 × 2 to 40 × 40 cm2. Measured and MC profiles show agreement in the 50% field width and the 80%-20...
Background: An advantage of the Intensity Modulated Radiotherapy (IMRT) technique is the feasibility of delivering different therapeutic dose levels to PTVs in a single treatment session using the Simultaneous Integrated Boost (SIB) technique. This paper describes an automated tool to calculate the dose to be delivered with the SIB-IMRT technique to different anatomical regions so that they receive the same Biological Equivalent Dose (BED), i.e. IsoBED, as the standard fractionation. Methods: Based on the Linear Quadratic Model (LQM), we developed software that allows treatment schedules biologically equivalent to standard fractionations to be calculated. The main radiobiological parameters from the literature are included in a database inside the software, which can be updated according to the clinical experience of each institute. In particular, the BED for each target volume is computed from the alpha/beta ratio, the total dose, and the dose per fraction (generally 2 Gy for a standard fractionation). Then, after selecting the reference target, i.e. the PTV that controls the fractionation, a new total dose and dose per fraction providing the same BED are calculated for each target volume. Results: The IsoBED software allows: 1) the calculation of new IsoBED treatment schedules derived from standard prescriptions and based on the LQM; 2) the conversion of the dose-volume histograms (DVHs) for each target and OAR to a nominal standard dose at 2 Gy per fraction, so that they can be displayed together with the DV constraints from the literature; and 3) the calculation of Tumor Control Probability (TCP) and Normal Tissue Complication Probability (NTCP) curves versus the prescribed dose to the reference target.
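The core LQM relation behind the tool can be sketched in a few lines: the BED of a schedule of n fractions of d Gy is BED = n·d·(1 + d/(α/β)), and an iso-BED dose per fraction for a different fraction number follows from solving that quadratic for d. This is only a sketch of the underlying formulas under textbook assumptions; the IsoBED software itself additionally handles DVH conversion and TCP/NTCP modeling, which are not reproduced here.

```python
import math

def bed(n, d, alpha_beta):
    """Biologically effective dose from the linear-quadratic model:
    BED = n * d * (1 + d / (alpha/beta))."""
    return n * d * (1 + d / alpha_beta)

def isobed_dose_per_fraction(target_bed, n, alpha_beta):
    """Dose per fraction d delivering `target_bed` in n fractions.

    Solves n*d*(1 + d/ab) = BED, i.e. d**2/ab + d - BED/n = 0,
    taking the positive root of the quadratic.
    """
    ab = alpha_beta
    return ab * (math.sqrt(1 + 4 * target_bed / (n * ab)) - 1) / 2

# Example: a standard 60 Gy in 30 x 2 Gy schedule with alpha/beta = 10 Gy
ref_bed = bed(30, 2.0, 10.0)                       # 60 * (1 + 2/10) = 72 Gy
d25 = isobed_dose_per_fraction(ref_bed, 25, 10.0)  # iso-BED dose/fraction in 25 fractions
print(ref_bed, round(d25, 3))
```

With these two functions, a hypofractionated schedule can be matched to a reference prescription by holding BED constant for the reference target and recomputing d for every other target volume, which is the calculation the abstract describes.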
This paper describes an evaluation framework that allows a standardized and quantitative comparison of IVUS lumen and media segmentation algorithms. The framework was introduced at the MICCAI 2011 Computing and Visualization for (Intra)Vascular Imaging (CVII) workshop, comparing the results of the eight teams that participated. We describe the available database, comprising multi-center, multi-vendor, and multi-frequency IVUS datasets, their acquisition, the creation of the reference standard, and the evaluation measures. The approaches address segmentation of the lumen, the media, or both borders; semi- or fully-automatic operation; and 2-D vs. 3-D methodology. Three performance measures for quantitative analysis have been proposed. The results of the evaluation indicate that segmentation of the vessel lumen and media is possible with an accuracy comparable to manual annotation when semi-automatic methods are used, and encouraging results can also be obtained in c...
Entrance and exit doses are commonly measured in in vivo dosimetry for comparison with expected values, usually generated by the treatment planning system (TPS), to verify the accuracy of treatment delivery. This report aims to evaluate the accuracy of six TPS algorithms in computing entrance and exit doses for a 6 MV beam. The algorithms tested were: pencil beam convolution (Eclipse PBC), analytical anisotropic algorithm (Eclipse AAA), AcurosXB (Eclipse AXB), FFT convolution (XiO Convolution), multigrid superposition (XiO Superposition), and Monte Carlo photon (Monaco MC). Measurements with an ionization chamber (IC) and a diode detector in water phantoms were used as a reference. Comparisons were made in terms of central-axis point dose, 1D relative profiles, and 2D absolute gamma analysis. Entrance doses computed by all TPS algorithms agreed to within 2% of the measured values. Exit doses computed by XiO Convolution, XiO Superposition, Eclipse AXB, and Monaco MC agreed with the IC measure...
The quality of electronic health record systems (EHR-S) is one of the key points in the discussion about the safe use of this kind of system. It has stimulated the creation of technical standards and certifications that establish the minimum requirements expected of these systems. [1] On the other hand, EHR-S suppliers need to invest in the evaluation of their products in order to provide systems that meet these requirements. This work proposes using the ISO 25040 standard, which focuses on the evaluation of software products, to define an evaluation model for EHR-S with respect to the Brazilian Certification for Electronic Health Record Systems (SBIS-CFM Certification). The proposal instantiates the process described in the ISO 25040 standard using the set of requirements within the scope of the Brazilian certification. As first results, this research has produced an evaluation model and a scale to classify an EHR-S according to its level of compliance with the certification. This work in progress is part o...