Given the capacity of Optical Coherence Tomography (OCT) imaging to display symptoms of a wide variety of eye diseases and neurological disorders, the need for OCT image segmentation and the corresponding data interpretation is now felt more than ever before. In this paper, we address this need by designing a semi-automatic software program for reliable segmentation of eight different macular layers as well as for outlining retinal pathologies such as diabetic macular edema. The software accommodates a novel graph-based semi-automatic method, called "Livelayer", designed for straightforward segmentation of retinal layers and fluids. This method is chiefly based on Dijkstra's Shortest Path First (SPF) algorithm and the Live-wire function, together with some preprocessing operations on the images to be segmented. The software is indeed suitable for obtaining detailed segmentation of layers, exact localization of clear or unclear fluid objects, and the ground truth, deman…
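As a rough illustration of the Dijkstra step at the core of such graph-based layer segmentation, the sketch below traces a minimum-cost left-to-right path through a gradient-derived cost image with SciPy. The cost function, the 3-neighbour rightward connectivity, and the virtual source/sink nodes are common conventions assumed here, not details taken from the Livelayer paper.

```python
import numpy as np
from scipy.sparse import lil_matrix
from scipy.sparse.csgraph import dijkstra

def trace_boundary(grad):
    """grad: (H, W) vertical-gradient magnitude of a B-scan; returns the row
    index of the minimum-cost left-to-right path in each column."""
    H, W = grad.shape
    cost = 1.0 - (grad - grad.min()) / (np.ptp(grad) + 1e-8)  # strong edge -> cheap
    node = lambda r, c: r * W + c
    src, snk = H * W, H * W + 1                  # virtual endpoints
    A = lil_matrix((H * W + 2, H * W + 2))
    for r in range(H):
        A[src, node(r, 0)] = 1e-6                # free entry anywhere in column 0
        A[node(r, W - 1), snk] = 1e-6            # free exit from the last column
        for c in range(W - 1):
            for dr in (-1, 0, 1):                # 3-connected rightward moves
                if 0 <= r + dr < H:
                    A[node(r, c), node(r + dr, c + 1)] = cost[r + dr, c + 1] + 1e-6
    _, pred = dijkstra(A.tocsr(), indices=src, return_predecessors=True)
    rows, v = {}, pred[snk]
    while v != src:                              # backtrack sink -> source
        r, c = divmod(int(v), W)
        rows[c] = r
        v = pred[v]
    return np.array([rows[c] for c in range(W)])

# Toy use: a bright horizontal band whose edge the path should follow.
img = np.zeros((60, 80)); img[30:40, :] = 1.0
print(trace_boundary(np.abs(np.gradient(img, axis=0))))
```

In Livelayer this kind of path search is combined with live-wire interaction and preprocessing; here `grad` is simply the vertical derivative of a toy image.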
To assist ophthalmologists in diagnosing retinal abnormalities, Computer Aided Diagnosis has played a significant role. In this paper, a particular Convolutional Neural Network based on the Wavelet Scattering Transform (WST) is used to detect one to four retinal abnormalities from Optical Coherence Tomography (OCT) images. The predefined wavelet filters in this network decrease the computational complexity and processing time compared to deep learning methods. We use two layers of the WST network to obtain a direct and efficient model. The WST generates a sparse representation of the images that is translation-invariant and stable with respect to local deformations. Next, a Principal Component Analysis classifies the extracted features. We evaluate the model using four publicly available datasets to have a comprehensive comparison with the literature. The accuracies of classifying the OCT images of the OCTID dataset into two and five classes were 100% and 82.5%, respectively. …
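A minimal sketch of this kind of pipeline, assuming 32×32 inputs, kymatio's fixed-filter scattering transform, and a nearest-centroid classifier after PCA (the paper's exact preprocessing and classifier details are not reproduced here):

```python
import numpy as np
from kymatio.numpy import Scattering2D
from sklearn.decomposition import PCA
from sklearn.neighbors import NearestCentroid
from sklearn.pipeline import make_pipeline

scattering = Scattering2D(J=2, shape=(32, 32))   # two scattering layers, fixed filters

def features(images):
    """images: (N, 32, 32) float array -> flattened scattering coefficients."""
    s = scattering(images.astype(np.float32))    # (N, C, 8, 8) coefficients
    return s.reshape(len(images), -1)

# Hypothetical arrays standing in for an OCT dataset such as OCTID.
X_train = np.random.rand(20, 32, 32)
y_train = np.repeat([0, 1], 10)
clf = make_pipeline(PCA(n_components=10), NearestCentroid())
clf.fit(features(X_train), y_train)
print(clf.predict(features(X_train[:2])))
```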
ABSTRACT The choroid is a densely vascularized layer under the retinal pigment epithelium (RPE). Its deeper boundary is formed by the sclera, the outer fibrous shell of the eye. However, the inhomogeneity within the layers of choroidal Optical Coherence Tomography (OCT) tomograms presents a significant challenge to existing segmentation algorithms. In this paper, we performed a statistical study of retinal OCT data to extract the choroid. The model fits a Gaussian mixture model (GMM) to image intensities with the Expectation Maximization (EM) algorithm. The goodness of fit of the proposed GMM, computed with a Chi-square measure, is below 0.04 for our dataset. After fitting the GMM to the OCT data, a Bayesian classification method is employed to segment the upper and lower boundaries of the retinal choroid. Our simulations show signed and unsigned errors of -1.44 ± 0.5 and 1.6 ± 0.53 for the upper boundary, and -5.7 ± 13.76 and 6.3 ± 13.4 for the lower boundary, respectively.
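A minimal sketch of the intensity-modelling step, using scikit-learn's EM-based Gaussian mixture on a stand-in B-scan; the component count and the toy data are illustrative assumptions:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

bscan = np.random.rand(128, 256)                 # stand-in for an OCT B-scan
x = bscan.reshape(-1, 1)                         # pixel intensities as samples

gmm = GaussianMixture(n_components=3).fit(x)     # EM fit of the mixture
labels = gmm.predict(x).reshape(bscan.shape)     # Bayes (MAP) class per pixel
posteriors = gmm.predict_proba(x)                # per-component posteriors

# Boundary extraction would then look for the row-wise transitions of the
# component labelled as choroidal tissue.
print(gmm.weights_.round(2), gmm.means_.ravel().round(2))
```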
Background: Reconstruction of high-quality two-dimensional images from fan-beam computed tomography (CT) with a limited number of projections is already feasible through Fourier-based iterative reconstruction methods. However, this article focuses on the more complicated reconstruction of three-dimensional (3D) images in sparse-view cone-beam computed tomography (CBCT) by utilizing Compressive Sensing (CS) based on the 3D pseudo-polar Fourier transform (PPFT). Method: In comparison with the prevalent Cartesian grid, PPFT regridding removes rebinning and interpolation errors. Furthermore, using the PPFT-based Radon transform as the measurement matrix reduces the computational complexity. Results: To show the computational efficiency of the proposed method, we compare it with an algebraic reconstruction technique and a CS-type algorithm. Our algorithm converges in fewer than 20 iterations, while the others need at least 50 iterations to reconstruct a phantom image of comparable quality. Furthermore, using a fast composite splitting algorithm solver in each iteration makes it a fast CBCT reconstruction algorithm. The algorithm minimizes a linear combination of three terms: a least-squares data fit, a Hessian (HS) penalty, and l1-norm wavelet regularization. We named it PP-based compressed sensing-HS-W. When reconstructing from 120 projections over a 360° rotation, the image quality is visually similar to images reconstructed by the Feldkamp-Davis-Kress algorithm using 720 projections, which represents a substantial dose reduction. Conclusion: The main achievement of this work is reducing the radiation dose without degrading image quality. Its ability to remove the staircase effect, preserve edges and regions with smooth intensity transitions, and produce high-resolution, low-noise reconstructions at low dose levels is also shown.
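To make the objective concrete, here is a minimal ISTA sketch of the data-fit plus l1-regularization part of such a CS problem. A generic random matrix stands in for the PPFT-based measurement operator, and the Hessian penalty and the FCSA solver of the paper are omitted:

```python
import numpy as np

def soft(z, t):
    """Soft-thresholding, the proximal operator of the l1 norm."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def ista(A, y, lam=0.1, n_iter=100):
    """Minimize 0.5*||A x - y||^2 + lam*||x||_1 (x plays the wavelet coefficients)."""
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = soft(x - A.T @ (A @ x - y) / L, lam / L)
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100))           # underdetermined: sparse-view analogue
x_true = np.zeros(100)
x_true[rng.choice(100, 5, replace=False)] = 1.0
x_hat = ista(A, A @ x_true, lam=0.05, n_iter=500)
print(np.linalg.norm(x_hat - x_true))        # small residual: sparse recovery works
```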
In this experiment, a gene selection technique was proposed to select a robust gene signature from microarray data for prediction of breast cancer recurrence. In this regard, a hybrid scoring criterion was designed as a linear combination of scores determined in the mutual information (MI) domain and in the protein-protein interaction (PPI) network. The MI-based score represents the complementary information between the selected genes for outcome prediction, while the number of connections between the selected genes in the PPI network builds the PPI-based score. All genes were scored using the proposed function in a hybrid forward-backward gene-set selection process to select the optimum biomarker set from the gene expression microarray data. The accuracy and stability of the finally selected biomarkers were evaluated using five-fold cross-validation (CV) to classify the available data on breast cancer patients into two cohorts of poor and good prognosis. The results showed an appealing improvement in cross-dataset accuracy in comparison with similar studies whenever a primary signature selected from one dataset was applied to predict survival in other independent datasets. Moreover, the proposed method demonstrated 58-92 percent overlap between 50-gene signatures selected individually from seven independent datasets.
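A minimal sketch of the hybrid score and a greedy forward pass (the paper also interleaves backward steps). The mixing weight alpha, the toy expression data, and the random PPI adjacency are illustrative assumptions:

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif

rng = np.random.default_rng(1)
X = rng.standard_normal((60, 200))           # 60 patients x 200 genes (toy)
y = rng.integers(0, 2, 60)                   # recurrence outcome (toy)
ppi = rng.random((200, 200)) < 0.02          # toy PPI adjacency matrix
ppi = np.triu(ppi, 1); ppi = ppi | ppi.T     # symmetric, no self-links

mi = mutual_info_classif(X, y)               # MI relevance of each gene

def hybrid_score(gene, selected, alpha=0.5):
    """Linear combination of the MI term and PPI connectivity to the set."""
    links = ppi[gene, selected].sum() if selected else 0
    return alpha * mi[gene] + (1 - alpha) * links

selected = []
for _ in range(10):                          # forward selection of a 10-gene set
    rest = [g for g in range(200) if g not in selected]
    selected.append(max(rest, key=lambda g: hybrid_score(g, selected)))
print(selected)
```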
Asymmetry analysis is a challenging step in the computerized early diagnosis of diabetic retinopathy (DR), which provides an opportunity for early treatment. In this study, to compare the vascular patterns of the right and left eyes, a combination of fractal analysis and the Radon transform is investigated to provide both the statistical distribution of vessel thickness and its geometrical distribution. For this purpose, vessel segmentation and skeletonization are performed and the vessel thickness map (VTM) is obtained. Then, the fractal dimension (FD) is computed for several representations, namely the segmented vessels, the skeletonized vessels, the VTM, and the Radon transform (RT) of the VTM, in the right and left eyes for asymmetry analysis. According to the obtained mean/SD values of the differences of the FDs in the right and left eyes and the p-values, we conclude that the RT of the VTM better discriminates the two eyes from each other; accordingly, it can be used as a powerful feature for comparing symmetry/asymmetry in fundus images. Our evaluation results show that a difference of 0.33 ± 0.11 between the FDs of the VTM's RT in the left and right eyes is expected for normal subjects.
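A minimal sketch of the two ingredients, box-counting fractal dimension and the Radon transform of a (here synthetic) vessel thickness map; real use would start from a segmented fundus vessel map:

```python
import numpy as np
from skimage.transform import radon

def box_counting_fd(binary):
    """Fractal dimension of a binary image by box counting."""
    sizes = 2 ** np.arange(1, int(np.log2(min(binary.shape))))
    counts = []
    for s in sizes:
        h, w = (binary.shape[0] // s) * s, (binary.shape[1] // s) * s
        blocks = binary[:h, :w].reshape(h // s, s, w // s, s)
        counts.append(blocks.any(axis=(1, 3)).sum())   # occupied boxes at scale s
    slope, _ = np.polyfit(np.log(1.0 / sizes), np.log(counts), 1)
    return slope                                       # FD is the log-log slope

vtm = np.zeros((128, 128)); vtm[60:64, :] = 2.0        # toy "vessel thickness map"
sinogram = radon(vtm, theta=np.linspace(0.0, 180.0, 60), circle=False)
print(box_counting_fd(vtm > 0), box_counting_fd(sinogram > sinogram.mean()))
```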
… Using the maximum a posteriori (MAP) and minimum mean squared error (MMSE) estimators, we describe two methods for video denoising that rely on bivariate Cauchy random variables with high local correlation. Because …
Diabetic retinopathy (DR) is a chronic eye disease characterized by degenerative changes to the retina's blood vessels. In this paper, we present a dictionary learning (DL)-based method for automatic detection of DR in digital fundus images. The detection method is based on the best atomic representation of fundus images over dictionaries learned with the K-SVD algorithm. However, the dictionaries learned by K-SVD should be able to discriminate the normal and diabetic classes, i.e., discriminative atoms should be designed. For this purpose, the best discriminative atoms are obtained for the atomic representation of images in each class. The classification rule is based on the best sparse representation, i.e., the test image is assigned to the class with the minimum number of best specific atoms. Our discriminative DL-based method was tested on 30 color fundus images, for which accuracies of 70% and 90% were obtained for normal and diabetic images, respectively.
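A minimal sketch of class-wise dictionary learning for this kind of detector. scikit-learn's online dictionary learner stands in for K-SVD, and the standard minimum-reconstruction-residual rule stands in for the paper's atom-count rule; the patch size and toy data are assumptions:

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.linear_model import orthogonal_mp

rng = np.random.default_rng(2)
normal = rng.standard_normal((200, 64))          # toy 8x8 patches, "normal" class
diabetic = rng.standard_normal((200, 64)) + 0.5  # toy patches, "diabetic" class

dicts = {}
for name, patches in (("normal", normal), ("diabetic", diabetic)):
    dl = MiniBatchDictionaryLearning(n_components=32, transform_algorithm="omp")
    dl.fit(patches)
    dicts[name] = dl.components_                 # one learned dictionary per class

def classify(patch, n_nonzero=5):
    """Assign the patch to the class whose dictionary reconstructs it best."""
    best, label = np.inf, None
    for name, D in dicts.items():
        code = orthogonal_mp(D.T, patch, n_nonzero_coefs=n_nonzero)
        resid = np.linalg.norm(patch - D.T @ code)
        if resid < best:
            best, label = resid, name
    return label

print(classify(diabetic[0]))
```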
In this paper, an automatic computer-aided diagnosis (CAD) method is utilized for lung segmentation in computed tomography (CT) images. We segmented lung regions containing nodules attached to the chest wall from the CT data by using level set modeling. The method consists of three steps: in the first step, an adaptive fuzzy thresholding operation is used to binarize the CT images; in the second step, the lung with non-isolated nodules is segmented by applying both level set modeling and a convex hull algorithm; and in the third step, the lung is segmented using the shape features of the lung lobes. The experimental results show an accuracy of 98% for our method, outperforming other existing methods.
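A minimal sketch of the pipeline on a synthetic slice: a global Otsu threshold stands in for the adaptive fuzzy thresholding, a morphological Chan-Vese level set evolves the lung region, and the convex hull re-attaches the juxta-pleural nodule. The toy image and iteration count are assumptions:

```python
import numpy as np
from skimage.filters import threshold_otsu
from skimage.segmentation import morphological_chan_vese
from skimage.morphology import convex_hull_image

ct = np.full((128, 128), 0.8)                    # bright body tissue
ct[30:100, 20:60] = 0.2                          # dark "lung" region
ct[60:70, 55:62] = 0.8                           # nodule attached to the chest wall
ct += 0.02 * np.random.default_rng(3).standard_normal(ct.shape)

binary = ct < threshold_otsu(ct)                 # step 1: binarize (fuzzy -> Otsu here)
levelset = morphological_chan_vese(ct, 60, init_level_set=binary)  # step 2a: level set
lung = convex_hull_image(levelset.astype(bool))  # step 2b: recover the attached nodule
print(binary.sum(), lung.sum())                  # hull area exceeds the notched region
```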
Optical coherence tomography (OCT) represents a non-invasive, high-resolution, cross-sectional imaging modality. Macular edema is the swelling of the macular region. Segmentation of fluid or cyst regions in OCT images is essential to provide useful information for clinicians and to prevent visual impairment. However, manual segmentation of fluid regions is a time-consuming and subjective procedure. Traditional and off-the-shelf deep learning methods fail to extract the exact location of the boundaries under complicated conditions, such as high noise levels and blurred edges. Therefore, developing a tailored automatic image segmentation method that exhibits good numerical and visual performance is essential for clinical application. The dual-tree complex wavelet transform (DTCWT) can extract rich information from different orientations of image boundaries and extract details that improve OCT fluid semantic segmentation results in difficult conditions. This paper presents a comparat…
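A minimal sketch of extracting orientation-rich DTCWT features that could feed a fluid-segmentation model; the particular per-pixel feature stack shown here (magnitudes of the six oriented subbands per level, upsampled back to the image grid) is an assumption, not the paper's architecture:

```python
import numpy as np
import dtcwt

bscan = np.random.rand(128, 128)                 # stand-in for an OCT B-scan
pyramid = dtcwt.Transform2d().forward(bscan, nlevels=3)

feats = []
for level, hp in enumerate(pyramid.highpasses):  # hp: (h, w, 6) complex subbands
    mag = np.abs(hp)                             # orientation-wise edge energy
    factor = 2 ** (level + 1)
    up = np.kron(mag, np.ones((factor, factor, 1)))   # nearest-neighbour upsampling
    feats.append(up[:128, :128])
features = np.concatenate(feats, axis=-1)        # (128, 128, 18) per-pixel features
print(features.shape)
```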
Recently, deep convolutional neural networks have been successfully applied in different fields of computer vision and pattern recognition. The offline handwritten signature is one of the most important biometrics applied in banking systems and in administrative and financial applications, and verifying it remains a challenging task. The aim of this study is to review the presented signature verification/recognition methods based on convolutional neural networks and to evaluate the performance of some prominent available deep convolutional neural networks as feature extractors for offline handwritten signature verification/recognition using transfer learning. This is done using four pretrained models that are among the most used general models in computer vision tasks, namely VGG16, VGG19, ResNet50, and InceptionV3, and two pretrained models presented especially for signature processing tasks, namely SigNet and SigNet-F. Experiments have been conducted using two benchmark signature datasets, the GPDS Synthetic signature dataset and MCYT-75 as Latin signature datasets, and two Persian datasets, UTSig and FUM-PHSD. The obtained experimental results, in comparison with the literature, verify the effectiveness of VGG16 and SigNet for signature verification and the superiority of VGG16 in the signature recognition task.
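A minimal sketch of the transfer-learning setup with one of the reviewed backbones: a frozen, ImageNet-pretrained VGG16 embeds each signature image and a classifier is trained on top. The image size, preprocessing, and the SVM head are common choices assumed here, not necessarily the study's exact setup:

```python
import numpy as np
from tensorflow.keras.applications import VGG16
from tensorflow.keras.applications.vgg16 import preprocess_input
from sklearn.svm import SVC

base = VGG16(weights="imagenet", include_top=False, pooling="avg",
             input_shape=(224, 224, 3))          # frozen ImageNet feature extractor

def embed(images):
    """images: (N, 224, 224, 3) uint8 signature scans -> (N, 512) features."""
    return base.predict(preprocess_input(images.astype("float32")), verbose=0)

X = (np.random.rand(8, 224, 224, 3) * 255).astype("uint8")   # toy "signatures"
y = np.array([0, 0, 0, 0, 1, 1, 1, 1])                       # genuine vs. forged
clf = SVC(kernel="rbf").fit(embed(X), y)
print(clf.predict(embed(X[:2])))
```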
Handwritten signatures are widely used to register ownership in banking systems and in administrative and financial applications all over the world. With the increasing advancement of technology, the growing volume of financial transactions, and the possibility of signature fraud, it is necessary to develop more accurate, convenient, and cost-effective signature-based authentication systems. In this paper, a signature verification method based on the circlet transform and the statistical properties of the circlet coefficients is presented. Experiments have been conducted using three benchmark datasets: GPDS synthetic and MCYT-75 as two Latin signature datasets, and UTSig as a Persian signature dataset. The obtained experimental results, in comparison with the literature, confirm the effectiveness of the presented method.
Background: With the increasing advancement of technology, it is necessary to develop more accurate, convenient, and cost-effective security systems. The handwritten signature, as one of the most popular and applicable biometrics, is widely used to register ownership in banking systems, including checks, as well as in administrative and financial applications in everyday life, all over the world. Automatic signature verification and recognition systems, especially in the case of online signatures, are potentially the most powerful and publicly accepted means of personal authentication. Methods: In this article, a novel procedure for online signature verification and recognition is presented based on the Dual-Tree Complex Wavelet Packet Transform (DT-CWPT). Results: In the presented method, a three-level DT-CWPT decomposition is computed for three time signals of dynamic information: the horizontal and vertical positions in addition to the pressure signal. Then, in order t…
Despite the rapid growth of technology, the handwritten signature remains users' first choice among biometrics. In this paper, a new methodology for offline handwritten signature verification and recognition based on the shearlet transform and transfer learning is proposed. Since a large percentage of handwritten signatures are composed of curves, and the performance of a signature verification/recognition system is directly related to edge structures, the subbands of the shearlet transform of signature images are good candidates as input information to the system. Furthermore, by using transfer learning with pretrained models, appropriate features can be extracted. In this study, four pretrained models have been used: SigNet and SigNet-F (trained on offline signature datasets), and VGG16 and VGG19 (trained on the ImageNet dataset). Experiments have been conducted using three datasets: UTSig, FUM-PHSD, and MCYT-75. The obtained experimental results, in comparison with the …
Brain-computer interfaces based on code-modulated visual evoked potentials (c-VEP) provide high information transfer rates, which makes them promising alternative communication tools. Circular shifts of a binary sequence are used as the flickering patterns of several visual stimuli, where a minimum correlation between the shifts is critical for recognizing the target by analyzing the EEG signal. Implemented sequences have been borrowed from communication theory without considering visual system physiology and the related ergonomics. Here, an approach is proposed to design optimum stimulus sequences considering physiological factors, and its superior performance was demonstrated for a 6-target c-VEP BCI system. This was achieved by defining a time-factor index on the frequency response of the sequence, while an autocorrelation index ensured low correlation between circular shifts. A modified version of the non-dominated sorting genetic algorithm II (NSGA-II) multi-objective optimization technique was implemented to find, for the first time, 63-bit sequences with simultaneously optimized autocorrelation and time-factor indexes. The selected optimum sequences for general (TFO) and 6-target (6TO) BCI systems were then compared with the m-sequence in experiments on 16 participants. Friedman tests showed a significant difference in perceived eye irritation between TFO and the m-sequence (p = 0.024). A generalized estimating equations (GEE) statistical test showed significantly higher accuracy for 6TO compared to the m-sequence (p = 0.006). Evaluation of the EEG responses showed enhanced SNR for the new sequences compared to the m-sequence, confirming the proposed approach for optimizing the stimulus sequence. Incorporating physiological factors into the selection of sequences used for c-VEP BCI systems improves their performance and applicability.
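A minimal sketch of bi-objective 63-bit sequence design with NSGA-II via pymoo. Objective 1 penalizes circular-autocorrelation sidelobes, as in the paper; objective 2 is only a stand-in "time-factor" that favours low-frequency flicker energy, since the paper's exact index is not reproduced here:

```python
import numpy as np
from pymoo.core.problem import Problem
from pymoo.algorithms.moo.nsga2 import NSGA2
from pymoo.operators.sampling.rnd import BinaryRandomSampling
from pymoo.operators.crossover.pntx import TwoPointCrossover
from pymoo.operators.mutation.bitflip import BitflipMutation
from pymoo.optimize import minimize

class SequenceDesign(Problem):
    def __init__(self, n=63):
        super().__init__(n_var=n, n_obj=2, xl=0, xu=1, vtype=bool)

    def _evaluate(self, X, out, *args, **kwargs):
        s = 2.0 * X - 1.0                              # bits -> +/-1 chips
        F = np.fft.fft(s, axis=1)
        ac = np.fft.ifft(F * np.conj(F), axis=1).real  # circular autocorrelation
        side = np.abs(ac[:, 1:]).max(axis=1)           # obj 1: worst sidelobe
        power = np.abs(F) ** 2
        hi = power[:, 8:32].sum(axis=1) / power[:, 1:32].sum(axis=1)
        out["F"] = np.column_stack([side, hi])         # obj 2: high-frequency share

algo = NSGA2(pop_size=60, sampling=BinaryRandomSampling(),
             crossover=TwoPointCrossover(), mutation=BitflipMutation())
res = minimize(SequenceDesign(), algo, ("n_gen", 50), seed=1, verbose=False)
print(res.F[:5])                                       # Pareto front of the two indexes
```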
Optical Coherence Tomography (OCT) is one of the most informative methodologies in ophthalmology and provides cross-sectional images of the anterior and posterior segments of the eye. Corneal diseases can be diagnosed from these images, and corneal thickness maps can also assist in treatment and diagnosis. The need for automatic segmentation of the cross-sectional images is inevitable, since manual segmentation is time-consuming and imprecise. In this paper, segmentation methods such as the Gaussian Mixture Model (GMM), Graph Cut, and Level Set are used for automatic segmentation of three clinically important corneal layer boundaries in OCT images. Using the segmentation of the boundaries in three-dimensional corneal data, we obtained thickness maps of the layers delimited by these boundaries. The mean and standard deviation of the thickness values for normal subjects in the epithelial, stromal, and whole cornea are calculated in the central, superior, inferior, nasal, and temporal zones (centered o…
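A minimal sketch of turning two segmented corneal boundaries into a thickness map with zonal averages; the toy parabolic surfaces, the axial pixel spacing, and the zone radii are illustrative assumptions:

```python
import numpy as np

H = W = 101
yy, xx = np.mgrid[0:H, 0:W]
r = np.hypot(yy - H // 2, xx - W // 2)           # radial distance from the apex

top = 40 + 0.002 * r ** 2                        # toy anterior boundary (axial px)
bottom = 50 + 0.0025 * r ** 2                    # toy posterior boundary (axial px)
thickness = (bottom - top) * 3.9                 # axial spacing in um/px (assumed)

central = thickness[r < 15].mean()               # central zone average
peripheral = thickness[(r >= 15) & (r < 45)].mean()
print(f"central {central:.1f} um, peripheral {peripheral:.1f} um")
```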
Watermarking is an effective digital copyright protection method and data security technology. Image watermarking algorithms can be grouped into two categories depending on the watermarking domain, i.e., the spatial domain and the transform domain. Usually, the transform-based watermarking approaches can efficiently hide a robust watermark by exploiting the characteristics of the human visual system (HVS). In this …
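A minimal sketch of transform-domain embedding of the kind described: a bit string is added to mid-frequency DCT coefficients, where HVS sensitivity is low. The coefficient band, the strength alpha, and the non-blind extraction are illustrative choices, not the abstract's specific scheme:

```python
import numpy as np
from scipy.fft import dctn, idctn

def embed(img, bits, alpha=8.0):
    C = dctn(img.astype(float), norm="ortho")
    band = [(i, 40 - i) for i in range(8, 8 + len(bits))]   # a mid-frequency diagonal
    for (i, j), b in zip(band, bits):
        C[i, j] += alpha if b else -alpha        # +/- additive embedding
    return idctn(C, norm="ortho"), band

def extract(marked, original, band):
    """Non-blind extraction: compare marked and original DCT coefficients."""
    d = dctn(marked, norm="ortho") - dctn(original.astype(float), norm="ortho")
    return [int(d[i, j] > 0) for (i, j) in band]

host = np.random.rand(64, 64) * 255              # stand-in for a host image
bits = [1, 0, 1, 1, 0, 0, 1, 0]
marked, band = embed(host, bits)
print(extract(marked, host, band) == bits)       # True: watermark recovered
```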
In this paper, a multivariate statistical model suitable for describing Optical Coherence Tomography (OCT) images is introduced. The proposed model comprises a multivariate Gaussianization function in the sparse domain. Such an approach has two advantages: 1) finding a function that can effectively transform the input, which is often not Gaussian, into normally distributed samples enables the reliable application of methods that assume Gaussianity; 2) although multivariate Gaussianization in the spatial domain is a complicated task that rarely results in a closed-form analytical model, transferring the data to a sparse domain facilitates multivariate statistical modeling of OCT images. To this end, a proper multivariate probability density function (pdf) that considers all three properties of OCT images in sparse domains (i.e., the compression, clustering, and persistence properties) is designed, and the proposed sparse-domain Gaussianization framework is established. Using this multivariate model, we show that OCT images often follow a 2-component multivariate Laplace mixture model in the sparse domain. To evaluate the performance of the proposed model, it is employed for OCT image denoising in a Bayesian framework. Visual and numerical comparison with previous prominent methods reveals that our method improves the overall contrast of the image, preserves edges, and suppresses background noise to a desirable amount, but is less capable of maintaining tissue texture. As a result, this method is suitable for applications where edge preservation is crucial and a clean, noiseless image is desired.
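A minimal sketch of fitting a zero-mean, 2-component Laplace mixture to sparse-domain coefficients with EM. This is a univariate simplification of the paper's multivariate model; pywt's wavelet transform stands in for the sparse transform, and the toy image and initial parameters are assumptions:

```python
import numpy as np
import pywt

img = np.random.rand(128, 128)                   # stand-in for an OCT B-scan
coeffs = pywt.wavedec2(img, "db4", level=2)
x = np.concatenate([sub.ravel() for lvl in coeffs[1:] for sub in lvl])

def laplace_pdf(x, b):
    return np.exp(-np.abs(x) / b) / (2.0 * b)

w, b = np.array([0.5, 0.5]), np.array([0.01, 0.1])    # initial weights and scales
for _ in range(100):                                   # EM iterations
    lik = np.stack([wk * laplace_pdf(x, bk) for wk, bk in zip(w, b)]) + 1e-300
    r = lik / lik.sum(axis=0, keepdims=True)           # E-step: responsibilities
    w = r.mean(axis=1)                                 # M-step: mixture weights
    b = (r * np.abs(x)).sum(axis=1) / r.sum(axis=1)    # M-step: Laplace scale MLEs
print(w.round(3), b.round(4))                          # narrow + broad components
```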
In this paper, we introduce a simple shrinkage function employing a local Laplace distribution for medical volume noise reduction in the contourlet transform domain. Since we implement our denoising algorithm in the contourlet domain, we are able to preserve the important details of the noise-free images, which for medical volumes may contain important diagnostic information. Using a maximum a posteriori (MAP) estimator for the denoising problem requires a prior distribution for the noise-free data. In this paper, we propose a Laplace probability density function (pdf) to model the statistical properties of the contourlet coefficients. This distribution is able to simultaneously model the heavy-tailed nature and the spatial clustering property of the coefficients. We use the thresholding function produced by the MAP estimator to denoise a sequence of CT images corrupted with additive Gaussian noise at various noise levels. The simulation results show that our method performs better, visually and in terms of peak signal-to-noise ratio (PSNR), than several other denoising methods.
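For intuition, the MAP estimate under a Laplace prior with scale b and Gaussian noise of variance sigma_n^2 is soft-thresholding with threshold sigma_n^2 / b. The sketch below applies this rule with a locally estimated scale; pywt's wavelet transform stands in for the contourlet transform, and the known noise level, window size, and toy image are assumptions:

```python
import numpy as np
import pywt
from scipy.ndimage import uniform_filter

def map_shrink(d, sigma_n, win=7):
    b = np.maximum(uniform_filter(np.abs(d), win), 1e-6)  # local Laplace scale (mean |d|)
    t = sigma_n ** 2 / b                                  # MAP threshold per coefficient
    return np.sign(d) * np.maximum(np.abs(d) - t, 0.0)    # soft-thresholding

clean = np.zeros((128, 128)); clean[40:90, 40:90] = 1.0   # toy CT-like slice
noisy = clean + 0.1 * np.random.default_rng(4).standard_normal(clean.shape)

coeffs = pywt.wavedec2(noisy, "db4", level=3)
den = [coeffs[0]] + [tuple(map_shrink(d, 0.1) for d in lvl) for lvl in coeffs[1:]]
rec = pywt.waverec2(den, "db4")[:128, :128]
print(np.sqrt(((rec - clean) ** 2).mean()))               # RMSE after shrinkage
```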

And 66 more