Nasser Mehrshad

    The classification of heart sounds into different valve-physiological heart disease categories is a complex pattern recognition task. This paper proposes heart sound recognition for diagnosing heart disease with four types of Artificial Neural Network (ANN). We develop a simple model for the recognition of heart sounds and demonstrate its utility in identifying features useful in diagnosis. We then...
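    As a rough illustration of this kind of pipeline, the sketch below trains a single feed-forward ANN on pre-extracted heart-sound features with scikit-learn; the feature dimensions, labels, and network size are placeholders, not the paper's actual four-network setup.

```python
# Hypothetical sketch: training one candidate ANN classifier on pre-extracted
# heart-sound features. Features and labels are placeholders, not the paper's
# actual pipeline or data.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 32))    # placeholder: 32 features per recording
y = rng.integers(0, 4, size=200)  # placeholder: 4 valve-disease categories

clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000, random_state=0)
print("mean CV accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```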
    In the mineral production industry, separation of valuable from gangue minerals is generally carried out using the flotation process. Mineral recovery into the concentrate and the grade of the concentrate are the main metallurgical parameters of the flotation process. In the present investigation, a fuzzy model was developed to simulate the relationship between the process conditions (i.e., gas velocity, slurry solids %, frother dosage, and frother type) and the metallurgical performance of an industrial flotation column in a copper concentrator in Iran. Afterwards, an intelligent model-based control system was designed to control the process performance at the desired level using fuzzy logic rules. Simulation results show that the developed controller is capable of maintaining the process performance at its target level within a reasonable time.
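    To make the control idea concrete, here is a minimal sketch of a fuzzy rule set that maps the deviation of concentrate grade from its target to a frother-dosage correction. The variables, membership functions, ranges, and rule consequents are invented for illustration and are not the paper's model.

```python
# Illustrative fuzzy controller fragment: one input (grade error), one output
# (frother-dosage correction). Zero-order Sugeno-style defuzzification.

def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def dosage_correction(grade_error):
    # Fuzzify the error (percent grade below/above target); ranges are invented.
    low  = tri(grade_error, -4, -2, 0)   # grade below target
    ok   = tri(grade_error, -1,  0, 1)   # grade on target
    high = tri(grade_error,  0,  2, 4)   # grade above target
    # Rule consequents (mL/min change), combined by weighted average.
    strengths = [low, ok, high]
    actions   = [+5.0, 0.0, -5.0]
    total = sum(strengths)
    return sum(s * a for s, a in zip(strengths, actions)) / total if total else 0.0

print(dosage_correction(-1.5))  # grade below target -> increase frother dosage
```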
    Background Phosphorylation is among the most important and most studied post-translational modifications (PTMs), playing a crucial role in protein function studies and experimental design. Many significant studies have been performed to predict phosphorylation sites using various machine-learning methods. Recently, several studies have claimed that deep learning-based methods are the best way to predict phosphorylation sites, because deep learning, as an advanced machine learning method, can automatically detect complex representations of phosphorylation patterns from raw sequences and thus offers a powerful tool to improve phosphorylation site prediction. Results In this study, we report DF-Phos, a new phosphosite predictor based on the deep forest. In DF-Phos, the feature vector taken from the CkSAApair method serves as input to a deep forest framework for predicting phosphorylation sites. The results of 10-fold cross-validation show that the deep forest meth...
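    For readers unfamiliar with the encoding, the sketch below computes a CKSAAP-style feature vector (composition of k-spaced amino acid pairs), the kind of input the abstract describes feeding into the deep forest; the exact windowing of the paper's CkSAApair method may differ.

```python
# CKSAAP-style encoding sketch: for each gap k, count ordered amino-acid pairs
# separated by k residues and normalize by the number of windows.
from itertools import product

AA = "ACDEFGHIKLMNPQRSTVWY"
PAIRS = ["".join(p) for p in product(AA, repeat=2)]  # 400 ordered pairs

def cksaap(seq, k_max=3):
    """Return a 400*(k_max+1)-dimensional vector of k-spaced pair frequencies."""
    features = []
    for k in range(k_max + 1):
        counts = dict.fromkeys(PAIRS, 0)
        n_windows = len(seq) - k - 1
        for i in range(n_windows):
            pair = seq[i] + seq[i + k + 1]
            if pair in counts:
                counts[pair] += 1
        features.extend(counts[p] / max(n_windows, 1) for p in PAIRS)
    return features

vec = cksaap("MKSGSTTRSPSKVVH")
print(len(vec))  # 1600 features for k_max = 3
```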
    The human visual system (HVS) recognizes objects in crowded scenes with high speed and accuracy. So far, many object recognition models based on the HVS, like HMAX, have been developed. In this paper, a new effective method based on HMAX, called Probabilistic Selection HMAX (PSHMAX), is proposed. HMAX's main problem is its random patch extraction, which extracts two kinds of useless patches: first, patches carrying little information, which add computational complexity with no useful result; second, patches carrying wrong information from the background, which produce wrong outputs. In the proposed method, the optimum patches carrying maximum useful information are extracted in a probabilistic way with two steps: first, producing a pool of patches carrying maximum information; second, extracting the patches with useful information from that pool. To evaluate the proposed method, we apply it to object categorization and conduct experiments on the Caltech5 and Caltech101 databases. Results demonstrate that the prop...
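    Below is a minimal sketch of the two-step selection described above, assuming patch variance as the information measure (the paper's exact measure may differ): build a pool of the most informative candidate patches, then sample from the pool with probability proportional to information.

```python
# Two-step probabilistic patch selection sketch: pool of informative
# (high-variance) candidates, then information-weighted sampling.
import numpy as np

def select_patches(image, patch=8, n_candidates=500, pool_size=100, n_out=20, seed=0):
    rng = np.random.default_rng(seed)
    H, W = image.shape
    ys = rng.integers(0, H - patch, n_candidates)
    xs = rng.integers(0, W - patch, n_candidates)
    cands = np.stack([image[y:y+patch, x:x+patch] for y, x in zip(ys, xs)])
    var = cands.reshape(n_candidates, -1).var(axis=1)
    pool = cands[np.argsort(var)[-pool_size:]]   # step 1: informative pool
    pvar = pool.reshape(pool_size, -1).var(axis=1)
    probs = pvar / pvar.sum()                    # step 2: sample by information
    idx = rng.choice(pool_size, size=n_out, replace=False, p=probs)
    return pool[idx]

patches = select_patches(np.random.default_rng(1).normal(size=(128, 128)))
print(patches.shape)  # (20, 8, 8)
```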
    Nowadays, air transportation has seen significant growth due to its advantages in transporting goods and passengers. The rapid growth of this activity and some limitations in different parts of aviation operations often cause traffic congestion, the mismanagement or improper planning of which can lead to many flight delays and the problems that accompany them. In order to appropriately manage air traffic congestion, various studies have been conducted during the recent two decades, the major part of which deal with the planning of aircraft take-offs and landings. Thus, in the current study, and for the first time, the two algorithms Biogeography-Based Optimization (BBO) and Particle Swarm Optimization with Constriction Coefficient (CPSO) are applied to feasible planning of aircraft take-offs and landings, taking modern conditions and limitations into account. Simulations prove that adding rich and effective knowledge to the optimization process can, to a large extent, undue and redunda...
    In order to enhance the accuracy of motion vector (MV) estimation and also reduce the error propagation issue during the estimation, in this paper a new adaptive error concealment (EC) approach is proposed based on the information extracted from the video scene. In this regard, the motion information of the video scene around the degraded MB is first analyzed to estimate the motion type of the degraded MB. If the neighboring MBs possess uniform motion, the degraded MB imitates the behavior of its neighboring MBs by choosing the MV of the collocated MB. Otherwise, the lost MV is estimated through the second proposed EC technique (i.e., IOBMA). In the IOBMA, unlike the conventional boundary matching criterion-based EC techniques, not only is each boundary distortion evaluated with respect to both the luminance and the chrominance components of the boundary pixels, but the total boundary distortion corresponding to each candidate MV is also calculated as the weighted average of the availabl...
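    The sketch below shows the core of boundary-matching MV selection in simplified form, using luminance only and synthetic frames; the actual IOBMA additionally weights chrominance components and the reliability of each available boundary.

```python
# Boundary-matching sketch: each candidate MV fills the lost macroblock from
# the reference frame; the MV whose filled block best matches the available
# boundary pixels of the current frame wins.
import numpy as np

def conceal_mb(ref, cur, top, left, mb=16, candidates=((0, 0), (1, 0), (0, 1), (-2, 3))):
    """ref/cur: previous and current luma frames; (top, left): lost MB corner."""
    best_mv, best_cost = None, np.inf
    for dy, dx in candidates:
        y, x = top + dy, left + dx
        if not (0 <= y and y + mb <= ref.shape[0] and 0 <= x and x + mb <= ref.shape[1]):
            continue
        block = ref[y:y+mb, x:x+mb]
        cost = 0.0
        if top >= 1:   # distortion along the top boundary
            cost += np.abs(block[0, :] - cur[top-1, left:left+mb]).sum()
        if left >= 1:  # distortion along the left boundary
            cost += np.abs(block[:, 0] - cur[top:top+mb, left-1]).sum()
        if cost < best_cost:
            best_mv, best_cost = (dy, dx), cost
    return best_mv

rng = np.random.default_rng(0)
ref, cur = rng.random((64, 64)), rng.random((64, 64))
print(conceal_mb(ref, cur, top=16, left=16))
```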
    Since the search process of the particle swarm optimization (PSO) technique is non-linear and very complicated, it is hard, if not impossible, to mathematically model the search process and dynamically adjust the PSO parameters. Thus, some fuzzy systems have already been proposed to control the important structural parameters of basic PSO. However, in those studies no effort was reported toward optimizing the structural parameters of the designed fuzzy controller. In this paper, a new algorithm called Fuzzy Optimum PSO (FOPSO) is introduced. FOPSO utilizes two optimized fuzzy systems for optimally controlling the main parameters of basic PSO. Extensive experimental results on many benchmark functions with different dimensions show the power and effectiveness of the proposed FOPSO, which outperforms other versions of PSO.
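    The sketch below illustrates the underlying idea on a toy benchmark: a controller adjusts the inertia weight of PSO online from search feedback. A crude two-rule heuristic stands in for FOPSO's optimized fuzzy systems, and all constants are illustrative.

```python
# PSO with an adaptively controlled inertia weight, on the sphere function.
import numpy as np

def sphere(x):
    return (x ** 2).sum(axis=1)

rng = np.random.default_rng(0)
n, dim, iters = 30, 10, 200
x = rng.uniform(-5, 5, (n, dim)); v = np.zeros((n, dim))
pbest, pcost = x.copy(), sphere(x)
gbest = pbest[pcost.argmin()]
w, c1, c2 = 0.9, 2.0, 2.0
prev_best = pcost.min()

for _ in range(iters):
    r1, r2 = rng.random((n, dim)), rng.random((n, dim))
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    x = x + v
    cost = sphere(x)
    improved = cost < pcost
    pbest[improved], pcost[improved] = x[improved], cost[improved]
    gbest = pbest[pcost.argmin()]
    # Stand-in for the fuzzy controller: shrink inertia while improving
    # (exploitation), raise it when progress stalls (exploration).
    w = max(0.4, w * 0.98) if pcost.min() < prev_best else min(0.9, w * 1.05)
    prev_best = pcost.min()

print("best cost:", pcost.min())
```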
    This paper presents a semisupervised dimensionality reduction (DR) method based on the combination of semisupervised learning (SSL) and metric learning (ML), termed CSSLML-DR, in order to overcome some existing limitations in HSI analysis. Specifically, CSSLML addresses the difficulties of the high dimensionality of hyperspectral image (HSI) data, the insufficient number of labelled samples, and inappropriate distance metrics. CSSLML aims to learn a local metric under which similar samples are pulled as close together as possible while dissimilar samples are pushed as far apart as possible. CSSLML constructs two locally reweighted dynamic graphs in an iterative two-step approach consisting of an L-step and a V-step. In the L-step, the local between-class and within-class graphs are updated. In the V-step, the transformation matrix and the reduced space are updated. The algorithm is repeated until a stopping criterion is satisfied. Experimental results on two well-known hyperspectral image data sets demonst...
    This paper considers ensemble classification for the text-independent speaker verification problem. Using one classifier for speaker verification may not result in a dependable decision, because it may not exploit the different characteristics of the speech signal. Therefore, state-of-the-art speaker verification systems use an ensemble of classifiers for the verification. Most ensemble speaker verification systems use a weighted summation of the scores of the individual expert classifiers to calculate the final verification score. The weights of this score fusion are obtained using a method such as logistic regression in the training phase. These works do not efficiently take into account issues such as the correlation of the classifiers and the instance-specific behavior of the base classifiers. In this paper, a new solution is proposed for these two issues by basing the ensemble design process and the combination rule on the training data. The obtained results on NIST 2004...
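    As a concrete picture of the weighted score fusion baseline mentioned above, this sketch learns fusion weights with logistic regression on synthetic expert scores; the score model and data are placeholders, not the paper's system.

```python
# Score-level fusion sketch: logistic regression learns per-expert weights,
# and the final accept/reject score is a weighted sum of the experts' scores.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_trials, n_experts = 500, 3
y = rng.integers(0, 2, n_trials)              # 1 = genuine, 0 = impostor
# Each expert emits a noisy score correlated with the true label.
scores = y[:, None] + rng.normal(0, [0.8, 1.0, 1.5], (n_trials, n_experts))

fusion = LogisticRegression().fit(scores, y)
w, b = fusion.coef_[0], fusion.intercept_[0]
print("learned fusion weights:", w.round(3))
fused = scores @ w + b                        # final verification score
print("accept rate at threshold 0:", (fused > 0).mean().round(3))
```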
    This paper presents a down-conversion active mixer with improved performance for direct conversion receivers in wireless local area networks. The effect of negative admittance on the Gilbert cell mixer performance is investigated in terms of flicker noise, conversion gain, and linearity. The proposed negative admittance is implemented using a modified negative capacitance connected in parallel to a negative resistance, which exhibits a high degree of freedom to achieve high negative capacitance along with low negative resistance. The proposed mixer is designed and simulated using TSMC 0.18 µm CMOS technology in Cadence Spectre-RF at an input frequency of 2.4 GHz and an intermediate frequency of 10 MHz. Post-layout simulation results show that using the negative admittance can improve the flicker noise by more than 16.7 dB at a frequency of 1 kHz. The proposed mixer exhibits a conversion gain of 19.3 dB and a flicker noise corner frequency of 5 kHz. The double-sideband noise figure is 7.57 dB at an output frequency of 10 MHz, and the third-order intermodulation intercept point (IIP3) is −7.5 dBm. The power dissipation is 13.6 mW from a 1.8 V power supply. Moreover, the linearity of the proposed mixer can be enhanced by choosing proper values for the transconductance of the switching quad and the negative resistance, so that it achieves an IIP3 of +7.9 dBm with a conversion gain of 7.6 dB.
    Recently, deep learning (DL)-based methods have attracted increasing attention for hyperspectral image (HSI) classification. However, the complex structure and limited number of labelled training...
    The interference of artefacts with evoked scalp electroencephalogram (EEG) responses is a problem in event-related brain-computer interface (BCI) systems that reduces signal quality and the interpretability of the user's intentions. Many strategies have been proposed to reduce the effects of non-neural artefacts, while the activity of neural sources that do not reflect the considered stimulation has been neglected. However, discerning such activities from those to be retained is important but subtle and difficult, as most of their features are the same. We propose an automated method based on a combination of a genetic algorithm (GA) and a support vector machine (SVM) to select only the sources of interest. Temporal, spectral, wavelet, autoregressive, and spatial properties of independent components (ICs) of EEG are inspected. The method selects the most distinguishing subset of features among this comprehensive fused set of information and identifies the components to be preserved. EEG da...
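    A compact sketch of the GA-plus-SVM wrapper idea follows, assuming binary chromosomes over candidate features and cross-validated SVM accuracy as the fitness; the bare-bones GA operators and synthetic data are stand-ins for the paper's setup.

```python
# GA-wrapped SVM feature selection sketch: each chromosome is a binary mask
# over features; fitness is cross-validated SVM accuracy on the masked data.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 20)); y = rng.integers(0, 2, 120)
X[:, 3] += y; X[:, 7] += y            # plant two genuinely informative features

def fitness(mask):
    if not mask.any():
        return 0.0
    return cross_val_score(SVC(), X[:, mask], y, cv=3).mean()

pop = rng.random((12, 20)) < 0.5      # random initial population of masks
for _ in range(15):
    fit = np.array([fitness(m) for m in pop])
    parents = pop[np.argsort(fit)[-6:]]                 # keep the best half
    kids = parents[rng.integers(0, 6, 6)].copy()
    kids ^= rng.random(kids.shape) < 0.05               # bit-flip mutation
    pop = np.vstack([parents, kids])

best = pop[np.array([fitness(m) for m in pop]).argmax()]
print("selected features:", np.flatnonzero(best))
```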
    Humans recognize objects in complex natural images very quickly, within a fraction of a second. Many computational object recognition models have been inspired by this powerful human ability. The human visual system (HVS) recognizes objects in several processing layers, which is why it is known as a hierarchical model. Due to the amazing complexity of the HVS and the connections in the visual pathway, computational modelling of the HVS directly from its physiology is not possible, so it is treated as a set of blocks, each modelled separately. One model inspired by the HVS is HMAX, whose main problem is selecting patches in a random way. As HMAX is a hierarchical model, it can be enhanced by enhancing each layer separately. In this paper, instead of random patch extraction, Desirable Patches for HMAX (DPHMAX) are extracted. To extract patches, the HVS first selects those with more information; to simulate this block, patches with higher variance are selected. Then the HVS chooses patches with more similarity...
    In this paper, a hand-crafted spectral-spatial feature extraction (SEA-FE) method for classification of hyperspectral images (HSIs) is proposed to improve the classification performance, especially...
    Feature extraction (FE) methods play a central role in the classification of hyperspectral images (HSIs). However, all traditional FE methods work in the original feature space (OFS), which may suffer from noise, outliers, and poorly discriminative features. This paper presents a feature space enriching technique to address the problems of noise, outliers, and poorly discriminative features that may exist in the OFS. The proposed method is based on low-rank representation (LRR) with the capability of pairwise constraint preserving (PCP), termed LRR-PCP. LRR-PCP does not change the dimension of the OFS and can be used as an appropriate preprocessing procedure for any classification algorithm or DR method. The proposed LRR-PCP aims to enrich the OFS and obtain an extracted feature space (EFS) whose features are richer than the OFS. The problems of noise and outliers can be reduced using LRR, but LRR cannot preserve the intrinsic local structure of the original data and captures only its global structure. Therefore, two additional penalty terms are added to the objective function of LRR to keep the local discriminative ability and also preserve the data diversity. The LRR-PCP method can be used not only in supervised learning but also in unsupervised and semi-supervised learning frameworks. The effectiveness of LRR-PCP is investigated on three HSI data sets using some existing DR methods and as a denoising procedure before the classification task. All experimental results and quantitative analyses demonstrate that applying LRR-PCP to the OFS improves the performance of the classification and DR methods in supervised, unsupervised, and semi-supervised conditions.
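    For reference, the standard LRR program that LRR-PCP builds on decomposes the data matrix X into a low-rank self-representation plus a column-sparse error term; the two additional PCP penalty terms the abstract mentions are modifications whose exact form is not given here.

```latex
\min_{Z,\,E} \; \|Z\|_{*} + \lambda \|E\|_{2,1}
\quad \text{s.t.} \quad X = XZ + E
```

    Here the nuclear norm on the coefficient matrix Z promotes a low-rank representation capturing the global structure of the data, while the l2,1 norm on E absorbs noise and outliers column-wise.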
    Feature extraction (FE) methods based on low-rank representation (LRR) have become important topics in hyperspectral image (HSI) data analysis. In this paper, a supervised FE method for HSI data based on LRR with the ability to preserve local pairwise constraint information (LRLPC) is proposed. LRLPC does not change the data dimensionality; it only employs a technique to enrich the original feature space (OFS) and to obtain an enriched feature space whose features are richer than the OFS. To overcome LRR's lack of the local structure information (LSI) of the data, a local discriminative regularization term is imposed on the fitness function of LRR to keep the LSI of the data. For data with nonlinear structure, LRLPC is extended to kernel LRLPC (KLRLPC) using the kernel trick. Utilizing the information in the pairwise constraints is useful in situations with limited labelled samples, a common problem in HSI data analysis. The experimental results obtained using two well-known HSI data sets confirm the effectiveness of LRLPC and KLRLPC for dimension reduction and classification of HSIs.
    Containing hundreds of spectral bands (features), hyperspectral images (HSIs) have a high ability to discriminate land cover classes. Traditional HSI data processing methods assign the same importance to all bands in the original feature space (OFS), while different spectral bands play different roles in the identification of samples of different classes. In order to explore the relative importance of each feature, we learn a weighting matrix and obtain the relative weighted feature space (RWFS) as an enriched feature space for HSI data analysis in this paper. To overcome the difficulty of limited labelled samples, which is a common case in HSI data analysis, we extend our method to a semisupervised framework. To transfer the available knowledge to unlabelled samples, we employ graph-based clustering, where low-rank representation (LRR) is used to define the similarity function for the graph. After constructing the RWFS, any arbitrary dimension reduction method and classification algorithm can...
    The highly random manner in which veins spread along a finger, their immunity to counterfeiting, active liveness, and user friendliness make finger veins the best choice for a biometric identification system (BIS). In this paper, the veins of six fingers of a person's two hands are used to develop a secure, reliable, and robust multimodal BIS (MBIS). The main structure of the proposed MBIS is based on the effective combination of rank- and decision-level fusion. In the training step, the power (weight) of each single modality is estimated by extracting the information that lies in the cumulative match characteristic (CMC) curve. The testing step consists of two main parts. In the first part, the finger vein region is extracted using a simple method, and then the binarized statistical image features (BSIF) algorithm is used to extract feature vectors. In the second part, the final decision for the test input probe is made by generating a 'top rank-decision matrix', which fuses the information of each biometric identifier at the hybrid rank-decision level. The obtained results show that the proposed method is more reliable and accurate than other fusion techniques at the post-classification fusion level. © 2017 Institute of Electrical Engineers of Japan. Published by John Wiley & Sons, Inc.
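    To illustrate rank-level fusion with modality weights, the sketch below runs a weighted Borda count over per-finger identity rankings; the fixed weights stand in for the CMC-derived weights, and the paper's 'top rank-decision matrix' construction itself is not reproduced.

```python
# Weighted Borda-count rank fusion sketch: each matcher ranks the gallery
# identities, per-modality weights scale the rank scores, and the identity
# with the highest fused score wins.
import numpy as np

def fuse_ranks(rank_lists, weights, n_ids):
    """rank_lists[m][r] = identity placed at rank r by matcher m."""
    score = np.zeros(n_ids)
    for ranks, w in zip(rank_lists, weights):
        for r, ident in enumerate(ranks):
            score[ident] += w * (n_ids - r)   # weighted Borda count
    return int(score.argmax())

# Three matchers (fingers) ranking five enrolled identities.
rank_lists = [[2, 0, 1, 4, 3],
              [0, 2, 3, 1, 4],
              [2, 1, 0, 3, 4]]
weights = [0.5, 0.2, 0.3]   # stand-ins for CMC-derived modality weights
print("fused identity:", fuse_ranks(rank_lists, weights, n_ids=5))
```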
    The D flip-flop, as a digital circuit, can be used as a timing element in many sophisticated circuits. Therefore, optimum performance with the lowest power consumption and an acceptable delay time is a critical issue in electronic circuits. The newly proposed dual-edge-triggered static D flip-flop circuit layout is defined as a multi-objective optimization problem. For this, an optimum fuzzy inference system with fuzzy rules is proposed to enhance the performance and convergence of the non-dominated sorting Genetic Algorithm II by adaptive control of the exploration and exploitation parameters. By using the proposed Fuzzy NSGA-II algorithm, more optimal values for the MOSFET channel widths and power supply are discovered in the search space than with ordinary NSGA variants. What is more, the design parameters, comprising the NMOS and PMOS channel widths and the power supply voltage, are linked to the performance parameters, including average power consumption and propagation delay time. To do this, the required mat...
    Aircraft landing planning (ALP) is one of the most important and challenging problems in the domain of air traffic control (ATC). Solving this NP-hard problem is a valuable aid in organizing air traffic in the terminal control area (TCA), which in turn decreases aircraft fuel consumption, airline costs, and the workload undertaken by air traffic controllers. In the present paper, the ALP problem is dealt with by applying effective rich knowledge to the optimization process (to remove obviously non-optimal solutions), and by the first use of the Gravitational Search Algorithm (GSA) in resolving such a case. In this regard, while the specific regulations for safe separation have been observed, the optimal landing time, the optimal runway, and the order of consecutive landings have been determined so that the main goal (minimizing total flight delays) would be best met. Results of simulations show that this approach, compared to previous ones based on Genetic and Bionomic algorithms, GLS, and the Scatter Search method, considerably decreases total flight delays. Attaining zero total flight delay in three scenarios with real data shows that the suggested intelligent approach is more decisive than the others in finding an optimal solution.
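    For readers unfamiliar with GSA's mechanics, this generic continuous-optimization sketch shows the core update rules (fitness-derived masses, a decaying gravitational constant, and velocity updates from pairwise attraction); the paper adapts GSA to the discrete ALP scheduling problem, which is not shown here.

```python
# Generic Gravitational Search Algorithm sketch on a continuous test function.
import numpy as np

def gsa(f, dim=5, n=20, iters=100, g0=100.0, alpha=20.0, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (n, dim)); v = np.zeros((n, dim))
    for t in range(iters):
        fit = f(x)
        best, worst = fit.min(), fit.max()
        span = best - worst
        m = (fit - worst) / span if span else np.ones(n)  # better fitness -> bigger mass
        M = m / m.sum()
        G = g0 * np.exp(-alpha * t / iters)               # decaying gravitational constant
        acc = np.zeros((n, dim))
        for i in range(n):
            diff = x - x[i]
            dist = np.linalg.norm(diff, axis=1) + 1e-12
            # All agents attract here; standard GSA often restricts to the Kbest.
            acc[i] = G * (rng.random(n) * M / dist) @ diff
        v = rng.random((n, dim)) * v + acc
        x = x + v
    fit = f(x)
    return x[fit.argmin()], fit.min()

best_x, best_f = gsa(lambda x: (x ** 2).sum(axis=1))
print("best objective:", best_f)
```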
    This article applies heuristic algorithms to find the optimum solution for a VLSI circuit. The idea is to find the optimum layout for a 2-to-1 multiplexer with minimal average power. The objective function is the average power of a 2:1 MUX with four MOSFETs of different channel widths. These widths form a four-dimensional space that is searched by the algorithms' search agents. Motivated by the convergence of Invasive Weed Optimization (IWO) and the Genetic Algorithm (GA), and by linking MATLAB with the HSPICE software, the optimized layout of the 2:1 MUX is obtained. Among the IWO, Fuzzy-IWO, GA, and Fuzzy-GA algorithms, the best resulting MUX layout in static NMOS logic in 0.18 µm technology with a 5 V supply voltage has an average power consumption of 3.6 nW, achieved with Fuzzy-IWO.
    Knowing one's body fat is extremely important since it affects everyone's health. Although there are several ways to measure the body fat percentage (BFP), the accurate methods are often associated with hassle and/or high costs. Therefore, certain measurements or explanatory variables are used to predict the BFP. This study proposes an intelligent feature subset selection approach with an unspecified number of features, based on the Binary GA and Fuzzy Binary GA algorithms, to discover the most important variables or features and to drive an artificial neural network (ANN) model applied to body fat prediction. The proposed forecasting model is able to predict the BFP effectively, with an error of ±3.64031%, and identifies forearm circumference as the most effective of the twelve features, using Fuzzy Binary GA.