Detecting and recognizing human faces automatically in digital images strongly enhances content-based video indexing systems. In this paper, a novel scheme for human face detection in color images under non-constrained scene conditions, such as the presence of a complex background and uncontrolled illumination, is presented. Color clustering and filtering using approximations of the YCbCr and HSV skin color subspaces are applied to the original image, providing quantized skin color regions. A merging stage is then iteratively performed on the set of homogeneous skin color regions in the color-quantized image in order to provide a set of potential face areas. Constraints related to the shape and size of faces are applied, and face intensity texture is analyzed by performing a wavelet packet decomposition on each face area candidate in order to detect human faces. The wavelet coefficients of the band-filtered images characterize the face texture, and a set of simple statistical deviations is extracted in order to form compact and meaningful feature vectors. Then, an efficient and reliable probabilistic metric derived from the Bhattacharyya distance is used to classify the extracted feature vectors into face or nonface areas, using prototype face area vectors acquired in a previous training stage.
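The skin-color filtering stage described above can be sketched as a per-pixel test in YCbCr space. This is a minimal illustration only: the RGB-to-YCbCr conversion follows ITU-R BT.601, but the Cb/Cr bounds below are commonly used placeholder values, not the subspace approximations derived in the paper (which also combines an HSV test).

```python
import numpy as np

def skin_mask_ycbcr(rgb):
    """Return a boolean mask of candidate skin pixels.

    The Cb/Cr bounds are illustrative placeholders, not the paper's
    subspace approximations.
    """
    rgb = rgb.astype(np.float64)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    # RGB -> chrominance channels (ITU-R BT.601)
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return (77 <= cb) & (cb <= 127) & (133 <= cr) & (cr <= 173)
```

The mask would then be cleaned and clustered into the homogeneous skin color regions that the merging stage operates on.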
Fractal structures are found in biomedical time series from a wide range of physiological phenomena. The multifractal spectrum identifies the deviations in fractal structure within time periods with large and small fluctuations. The present tutorial is an introduction to multifractal detrended fluctuation analysis (MFDFA), which estimates the multifractal spectrum of biomedical time series. The tutorial presents MFDFA step by step in an interactive Matlab session. All Matlab tools needed are available in the Introduction to MFDFA folder at www.ntnu.edu/inm/geri/software. MFDFA is introduced in Matlab code boxes, where the reader can apply parts of, or the entire, MFDFA procedure to example time series. After introducing MFDFA, the tutorial discusses best practice for MFDFA in biomedical signal processing. The main aim of the tutorial is to give the reader a simple, self-contained guide to the implementation of MFDFA and the interpretation of the resulting multifractal spectra.
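The detrended fluctuation analysis at the heart of MFDFA can be sketched as follows. This shows only the monofractal special case (q = 2); the full MFDFA repeats the fluctuation computation over a range of q values to obtain the multifractal spectrum. Python is used here rather than the tutorial's Matlab.

```python
import numpy as np

def dfa(x, scales, order=1):
    """Detrended fluctuation analysis: one fluctuation value per scale.

    Monofractal (q = 2) sketch of the first MFDFA steps: integrate,
    segment, detrend each segment polynomially, average the residuals.
    """
    y = np.cumsum(x - np.mean(x))          # integrated profile
    fluct = []
    for s in scales:
        n_seg = len(y) // s
        rms = []
        for i in range(n_seg):
            seg = y[i * s:(i + 1) * s]
            t = np.arange(s)
            trend = np.polyval(np.polyfit(t, seg, order), t)
            rms.append(np.mean((seg - trend) ** 2))
        fluct.append(np.sqrt(np.mean(rms)))
    return np.asarray(fluct)

rng = np.random.default_rng(0)
noise = rng.standard_normal(4096)
scales = np.array([16, 32, 64, 128, 256])
F = dfa(noise, scales)
H = np.polyfit(np.log(scales), np.log(F), 1)[0]  # slope ~ Hurst exponent
```

For uncorrelated noise the log-log slope is close to 0.5; MFDFA generalizes the q = 2 average above to arbitrary q and reads the multifractal spectrum off the q-dependence of the slopes.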
The presence of scaling behaviour in telecommunications traffic is striking not only in its ubiquity, appearing in almost every kind of packet data, but also in the wide range of scales over which the scaling holds (see e.g. [43], [18], [78]). It is rare indeed that a physical phenomenon obeys a consistent law over so many orders of magnitude. This may well extend further, as increases in network bandwidth over time progressively 'reveal' higher scales.
Droughts are destructive climatic extreme events, which may cause significant damage both to natural environments and human lives. Drought forecasting plays an important role in the control and management of water resources systems. In this study, a conjunction model is presented to forecast droughts. The proposed conjunction model is based on dyadic wavelet transforms and neural networks. Neural networks have shown great ability in modeling and forecasting nonlinear and nonstationary time series in water resources engineering, and wavelet transforms provide useful decompositions of an original time series. The wavelet-transformed data aid in improving the model performance by capturing helpful information at various resolution levels. Neural networks were used to forecast the decomposed sub-signals at various resolution levels and to reconstruct the forecasted sub-signals. The performance of the conjunction model was measured using various forecast skill criteria. The model was applied to forecast droughts in the Conchos River Basin in Mexico, the most important tributary of the Lower Rio Grande/Bravo. The results indicate that the conjunction model significantly
Accurate and timely forecasting of traffic flow is of paramount importance for effective management of traffic congestion in intelligent transportation systems. In this paper, a novel nonparametric dynamic time-delay recurrent wavelet neural network model is presented for forecasting traffic flow. The model incorporates the self-similar, singular, and fractal properties discovered in traffic flow. The concept of the wavelet frame is introduced and exploited in the model to provide flexibility in the design of wavelets and to add extra features, such as adaptable translation parameters, desirable in traffic flow forecasting. The statistical autocorrelation function is used for selection of the optimum input dimension of the traffic flow time series. The model incorporates both the time of the day and the day of the week of the prediction time. As such, it can be used for long-term traffic flow forecasting in addition to short-term forecasting. The model has been validated using actual freeway traffic flow data. The model can assist traffic engineers and highway agencies in creating effective traffic management plans for alleviating freeway congestion.
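The autocorrelation-based selection of the input dimension can be sketched as below. The 0.5 cut-off is an illustrative choice, not the criterion from the paper; the idea is simply to keep as many lagged inputs as remain strongly correlated with the present value.

```python
import numpy as np

def acf(x, max_lag):
    """Sample autocorrelation function up to max_lag."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    var = np.dot(x, x)
    return np.array([np.dot(x[:len(x) - k], x[k:]) / var
                     for k in range(max_lag + 1)])

def input_dimension(x, max_lag=48, threshold=0.5):
    """First lag at which the ACF falls below `threshold`.

    Illustrative selection rule for the network's input dimension.
    """
    r = acf(x, max_lag)
    below = np.nonzero(r < threshold)[0]
    return int(below[0]) if below.size else max_lag
```

A slowly decaying ACF (as in daily-periodic traffic counts) yields a larger input window than an uncorrelated series.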
We present a method for automated segmentation of the vasculature in retinal images. The method produces segmentations by classifying each image pixel as vessel or nonvessel, based on the pixel's feature vector. Feature vectors are composed of the pixel's intensity and continuous two-dimensional Morlet wavelet transform responses taken at multiple scales. The Morlet wavelet is capable of tuning to specific frequencies, thus allowing noise filtering and vessel enhancement in a single step. We use a Bayesian classifier with class-conditional probability density functions (likelihoods) described as Gaussian mixtures, yielding fast classification while being able to model complex decision surfaces, and compare its performance with that of the linear minimum squared error classifier. The probability distributions are estimated from a training set of labeled pixels obtained from manual segmentations. The method's performance is evaluated on the publicly available DRIVE [1] and STARE [2] databases of manually labeled non-mydriatic images. On the DRIVE database, it achieves an area under the receiver operating characteristic (ROC) curve of 0.9598, slightly superior to that presented by the method of Staal et al. [1].
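The multiscale Morlet responses used as features can be illustrated with a small sampled kernel. Below is a sketch of the real part of a 2-D Morlet-like wavelet (Gaussian envelope times a plane-wave carrier); the scale and wave vector values are illustrative, and the paper additionally uses an elongated envelope and responses over multiple orientations.

```python
import numpy as np

def morlet_2d(size=21, scale=3.0, k0=(0.0, 3.0)):
    """Real part of a 2-D Morlet-like wavelet on a square grid.

    `scale` and the wave vector `k0` are illustrative values only.
    """
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1] / scale
    envelope = np.exp(-0.5 * (x ** 2 + y ** 2))   # Gaussian window
    carrier = np.cos(k0[0] * x + k0[1] * y)       # oscillation
    return envelope * carrier
```

Convolving the image with such kernels at several scales, and taking the maximum modulus over orientations, gives the per-pixel wavelet features fed to the classifier.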
A number of researchers have suggested that in order to understand the response properties of cells in the visual pathway, we must consider the statistical structure of the natural environment. In this paper, we focus on one aspect of...
In recent years there has been a considerable development in the use of wavelet methods in statistics. As a result, we are now at the stage where it is reasonable to consider such methods to be another standard tool of the applied statistician rather than a research novelty. With that in mind, this paper gives a relatively accessible introduction to standard wavelet analysis and provides a review of some common uses of wavelet methods in statistical applications. It is primarily orientated towards the general statistical audience who may be involved in analysing data where the use of wavelets might be effective, rather than to researchers who are already familiar with the field. Given that objective, we do not emphasize mathematical generality or rigour in our exposition of wavelets and we restrict our discussion to the more frequently employed wavelet methods in statistics. We provide extensive references where the ideas and concepts discussed can be followed up in greater detail and generality if required. The paper first establishes some necessary basic mathematical background and terminology relating to wavelets. It then reviews the more well-established applications of wavelets in statistics including their use in nonparametric regression, density estimation, inverse problems, changepoint problems and in some specialized aspects of time series analysis. Possible extensions to the uses of wavelets in statistics are then considered. The paper concludes with a brief reference to readily available software packages for wavelet analysis.
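The nonparametric regression application mentioned above is typically wavelet shrinkage: transform, threshold the detail coefficients, invert. A minimal Haar-based sketch follows (the signal length must be a power of two, and the threshold is hand-picked here, whereas practice would use e.g. the universal threshold sigma*sqrt(2 log n)).

```python
import numpy as np

def haar_step(a):
    """One level of the orthonormal Haar transform."""
    s = (a[0::2] + a[1::2]) / np.sqrt(2)
    d = (a[0::2] - a[1::2]) / np.sqrt(2)
    return s, d

def haar_inverse_step(s, d):
    a = np.empty(2 * len(s))
    a[0::2] = (s + d) / np.sqrt(2)
    a[1::2] = (s - d) / np.sqrt(2)
    return a

def denoise(y, threshold):
    """Wavelet shrinkage sketch: decompose, soft-threshold details,
    reconstruct."""
    s, details = y.astype(float), []
    while len(s) > 1:
        s, d = haar_step(s)
        details.append(np.sign(d) * np.maximum(np.abs(d) - threshold, 0.0))
    for d in reversed(details):
        s = haar_inverse_step(s, d)
    return s
```

With a zero threshold the orthonormal transform reconstructs the data exactly; raising the threshold suppresses small (noise-dominated) detail coefficients while retaining large ones.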
This paper describes an attack on the recently proposed 'Watermarking Method Based on Significant Difference of Wavelet Coefficient Quantization' [1]. While the method is shown to be robust against many signal processing operations, security of the watermarking scheme under intentional attack exploiting knowledge of the implementation has been neglected. We demonstrate a straightforward attack which retains the fidelity of the image. The method is therefore not suitable for copyright protection applications. Further, we propose a countermeasure which mitigates the shortcoming.
In this study, we investigate whether the wavelet transform method is better suited for spectral analysis of brain signals. For this purpose, the wavelet transform, as a spectral analysis tool, is compared with the fast Fourier transform (FFT) applied to electroencephalograms (EEGs), which have been used in previous studies. In addition, the time-domain characteristics of the wavelet transform are also examined. The comparison results show that the wavelet transform method is better at detecting brain diseases.
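The FFT side of such a comparison reduces to estimating band power from the periodogram. A minimal sketch, with illustrative band edges (e.g. the alpha band, 8-13 Hz):

```python
import numpy as np

def band_power(signal, fs, band):
    """Mean periodogram power of `signal` within a frequency band (Hz)."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    sel = (freqs >= band[0]) & (freqs < band[1])
    return psd[sel].mean()
```

Unlike this frequency-only summary, the wavelet transform retains the time localization of spectral changes, which is the property the study examines.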
Image and video coding is an optimization problem. A successful image and video coding algorithm delivers a good tradeoff between visual quality and other coding performance measures, such as compression, complexity, scalability, robustness, and security. In this paper, we follow two recent trends in image and video coding research. One is to incorporate human visual system (HVS) models to improve the current state-of-the-art of image and video coding algorithms by better exploiting the properties of the intended receiver. The other is to design rate scalable image and video codecs, which allow the extraction of coded visual information at continuously varying bit rates from a single compressed bitstream.
In this paper, the authors propose a spread-spectrum image watermarking algorithm using the discrete multiwavelet transform. Performance improvement with respect to existing algorithms is obtained through genetic algorithm optimization. In the proposed optimization process, the authors search for parameters consisting of threshold values and the embedding strength so as to improve the visual quality of watermarked images and the robustness of the watermark. These parameters are varied to find the values most suitable for images with different characteristics. The experimental results show that the proposed algorithm yields a watermark that is invisible to human eyes and robust to various image manipulations. The authors also compare their experimental results with the results of previous work using various test images.
This paper deals with typical problems that arise when using wavelets in numerical analysis applications. The first part involves the construction of quadrature formulae for the calculation of inner products of smooth functions and scaling functions. Several types of quadratures are discussed and compared for different classes of wavelets. Since their construction using monomials is ill-conditioned, a modified, well-conditioned construction using Chebyshev polynomials is also presented. The second part of the paper deals with pointwise asymptotic error expansions of wavelet approximations of smooth functions. They are used to derive asymptotic interpolating properties of the wavelet approximation and to construct a convergence acceleration algorithm. This is illustrated with numerical examples.
The availability of high resolution topography from LIDAR offers new opportunities for objectively extracting the channels directly from a DEM using local topographic information, instead of inferring them indirectly based on global criteria, such as area or area-slope threshold relationships. Here we introduce the use of wavelet filtering to delineate threshold curvatures for defining valleys and threshold slope-direction-change for defining probable channeled portions of the valleys. This approach exploits the topographic signatures uniquely found in high resolution topography, and reveals the fuzzy topographic transition in which local weakly convergent areas lie at the transition between hillslopes and valleys.
Using six San Diego solar resource stations, clear-sky indices at 1-sec resolution were computed for one site and for the average of six sites separated by less than 3 km, to estimate the smoothing of aggregated power output due to geographic dispersion in a distribution feeder. Ramp rate (RR) analysis was conducted on the 1-sec time series, including moving averages to simulate a large PV plant with energy storage. Annual maximum RRs of up to 60% per second were observed, and the largest 1-sec ramp rates were enhanced by over 40% by cloud reflection. However, 5% per second ramps never occurred for a simulated 10 MW power plant. Applying a wavelet transform to both the clear-sky index at one site and the average of six sites showed a strong reduction in variability at timescales shorter than 5 min, with a lesser decrease at longer timescales. Comparing these variability reductions to the Hoff and Perez (2010) model, good agreement was observed at high dispersion factors (short timescales), but our analysis shows larger reductions in variability than the model at smaller dispersion factors (long timescales).
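Ramp-rate analysis of a clear-sky index series, including the moving average used to simulate the smoothing of a large plant or storage, can be sketched as follows; the window length is an illustrative parameter.

```python
import numpy as np

def ramp_rates(kt, window=1):
    """Per-step ramp rates of a clear-sky index series at 1-s resolution.

    `window` > 1 first applies a moving average, mimicking plant smoothing.
    """
    kt = np.asarray(kt, dtype=float)
    if window > 1:
        kernel = np.ones(window) / window
        kt = np.convolve(kt, kernel, mode="valid")
    return np.diff(kt)  # change per second

def max_ramp(kt, window=1):
    """Largest absolute ramp rate of the (optionally smoothed) series."""
    return float(np.max(np.abs(ramp_rates(kt, window))))
```

A sharp step in the index produces a large instantaneous ramp, which the moving average spreads over the window, reducing the maximum ramp proportionally.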
Follicular lymphoma (FL) is a cancer of the lymphatic system and the second most common lymphoid malignancy in the western world. Currently, the risk stratification of FL relies on a histological grading method, in which pathologists evaluate hematoxylin and eosin (H&E) stained tissue sections under a microscope, as recommended by the World Health Organization. This manual method is inherently labor-intensive. Due to sampling bias, it also suffers from inter- and intra-reader variability and poor reproducibility. We are developing a computer-assisted system to provide quantitative assessment of FL images for more consistent evaluation of FL. In this study, we propose a statistical framework to classify FL images based on their histological grades. We introduce a model-based intermediate representation (MBIR) of cytological components that enables higher-level semantic description of tissue characteristics. Moreover, we introduce a novel color-texture analysis approach that combines the MBIR with low-level texture features, which capture tissue characteristics at the pixel level. Experimental results on real follicular lymphoma images demonstrate that the combined feature space improved the accuracy of the system significantly. The implemented system can identify the most aggressive FL (grade III) with 98.9% sensitivity and 98.7% specificity, and the overall classification accuracy of the system is 85.5%.
This paper proposes a novel model for short-term load forecasting in the competitive electricity market. The prior electricity demand data are treated as time series. The forecast model is based on wavelet multi-resolution decomposition by the autocorrelation shell representation and neural network (multilayer perceptron, or MLP) modeling of the wavelet coefficients. To minimize the influence of noisy low-level coefficients, we applied the practical Bayesian Automatic Relevance Determination (ARD) method to choose the size of the MLPs, which are then trained to provide forecasts. The individual wavelet-domain forecasts are recombined to form an accurate overall forecast. The proposed method is tested using Queensland electricity demand data from the Australian National Electricity Market.
In this paper, a technique is presented for the fusion of multispectral (MS) and hyperspectral (HS) images to enhance the spatial resolution of the latter. The technique works in the wavelet domain and is based on a Bayesian estimation of the HS image, assuming a joint normal model for the images and an additive noise imaging model for the HS image. In the complete model, an operator is defined, describing the spatial degradation of the HS image. Since this operator is, in general, not exactly known and in order to alleviate the burden of solving the inverse operation (a deconvolution problem), an interpolation is performed a priori. Furthermore, the knowledge of the spatial degradation is restricted to an approximation based on the resolution difference between the images. The technique is compared to its counterpart in the image domain and validated for noisy conditions. Furthermore, its performance is compared to several state-of-the-art pansharpening techniques, in the case where the MS image becomes a panchromatic image, and to MS and HS image fusion techniques from the literature.
The affective state of a speaker can be identified from the prosody of his or her speech. Voice quality is the most important prosodic cue for emotion recognition from short verbal utterances and nonverbal exclamations, the latter conveying pure emotion, void of all semantic meaning. We adopted two context violation paradigms, oddball and priming, to study the event-related brain potentials (ERP) reflecting this recognition process. We found a negative wave, the N300, in the ERPs to contextually incongruous exclamations, and interpreted this component as analogous to the well-known N400 response to semantically inappropriate words. The N300 appears to be a real-time psychophysiological measure of spontaneous emotion recognition from vocal cues, which could prove a useful tool for the examination of affective-prosody comprehension. In addition, we developed a new ERP component detection and estimation method that is based on the continuous wavelet transform (CWT), does not rely on visual inspection of the waveforms, and yields larger statistical difference effects than classical methods.
The most critical attribute of a power control strategy for a fuel cell (FC) and ultracapacitor (UC) based hybrid vehicular power system is the sharing of load demand between the available power sources. Transients and rapid changes in the power demand may cause drying of the FC membrane, resulting in degradation of FC lifetime. The load demand profile of a hybrid electric vehicle (HEV) is a nonstationary fluctuating signal that contains transients. Wavelet transforms are suitable for analyzing and evaluating such signals in dynamic systems. This paper focuses on the integration of a novel wavelet-based load-sharing algorithm with a dynamic model of the FC/UC hybrid vehicular power system in order to ensure an efficient power flow control strategy. Index Terms: Fuel cell (FC), hybrid system, power management, ultracapacitor (UC), vehicular application, wavelet.
This paper proposes a new method for image compression. The method is based on the approximation of an image, regarded as a function, by a linear spline over an adapted triangulation, D(Y), which is the Delaunay triangulation of a small set Y of significant pixels. The linear spline minimizes the distance to the image, measured by the mean square error, among all linear splines over D(Y). The significant pixels in Y are selected by an adaptive thinning algorithm, which recursively removes less significant pixels in a greedy way, using a sophisticated criterion for measuring the significance of a pixel. The proposed compression method combines the approximation scheme with a customized scattered data coding scheme. We compare our compression method with JPEG2000 on two geometric images and on three popular test cases of real images.
The need to increase machine reliability and decrease production losses due to faulty products in highly automated lines requires an accurate and reliable fault classification technique. Wavelet transforms and statistical methods are used to extract salient features from raw noise and vibration signals. The wavelet transform decomposes the raw time-waveform signals into components in the time and frequency domains. With the wavelet transform, prominent features can be obtained more easily than from time-waveform analysis. This paper focuses on the development of an advanced signal classifier for small reciprocating refrigerator compressors using noise and vibration signals. Three classifiers, self-organising feature map, learning vector quantisation and support vector machine (SVM), are applied in training and testing for feature extraction, and the classification accuracies of the techniques are compared to determine the optimum fault classifier. The classification technique selected for detecting faulty reciprocating refrigerator compressors involves artificial neural networks and SVMs. The results confirm that the classification technique can differentiate faulty compressors from healthy ones with high flexibility and reliability.
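Extracting simple statistics from wavelet sub-bands as classifier features can be sketched as below. A Haar decomposition and RMS/crest-factor statistics stand in for the paper's actual wavelet and feature choices, which are not specified here.

```python
import numpy as np

def haar_bands(x, levels=3):
    """Split a signal into Haar detail bands plus the final approximation."""
    bands, s = [], np.asarray(x, dtype=float)
    for _ in range(levels):
        d = (s[0::2] - s[1::2]) / np.sqrt(2)
        s = (s[0::2] + s[1::2]) / np.sqrt(2)
        bands.append(d)
    bands.append(s)
    return bands

def feature_vector(x, levels=3):
    """RMS and crest factor per band; illustrative statistics only."""
    feats = []
    for b in haar_bands(x, levels):
        rms = np.sqrt(np.mean(b ** 2))
        crest = np.max(np.abs(b)) / rms if rms > 0 else 0.0
        feats.extend([rms, crest])
    return np.array(feats)
```

Vectors of this form, computed from noise and vibration recordings, would then be fed to the SVM or neural network classifiers compared in the paper.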
Artificial neural networks as a major soft-computing technology have been extensively studied and applied during the last three decades. Research on backpropagation training algorithms for multilayer perceptron networks has spurred development of other neural network training algorithms for other networks such as radial basis function, recurrent network, feedback network, and unsupervised Kohonen self-organizing network. These networks, especially the multilayer perceptron network with a backpropagation training algorithm, have gained recognition in research and applications in various scientific and engineering areas. In order to accelerate the training process and overcome data over-fitting, research has been conducted to improve the backpropagation algorithm. Further, artificial neural networks have been integrated with other advanced methods such as fuzzy logic and wavelet analysis, to enhance the ability of data interpretation and modeling and to avoid subjectivity in the operation of the training algorithm. In recent years, support vector machines have emerged as a set of high-performance supervised generalized linear classifiers in parallel with artificial neural networks. A review of the development history of artificial neural networks is presented, and the standard architectures and algorithms of artificial neural networks are described. Furthermore, advanced artificial neural networks will be introduced with support vector machines, and limitations of ANNs will be identified. The future of artificial neural network development in tandem with support vector machines will be discussed in conjunction with further applications to food science and engineering, soil and water relationships for crop management, and decision support for precision agriculture. Along with the network structures and training algorithms, the applications of artificial neural networks will be reviewed as well, especially in the fields of agricultural and biological engineering.
Various time-frequency methods have been used to study the time-varying properties of non-stationary neurophysiological signals. In the present study, a time-frequency coherence using the continuous wavelet transform (CWT), together with its confidence intervals, is proposed to evaluate the correlation between two non-stationary processes. A systematic comparison between approaches using the CWT and the short-time Fourier transform (STFT) is carried out. Simulated data are generated to test the performance of these methods when estimating time-frequency based coherence. Surprisingly, and in contrast to common belief, coherence estimation based upon the CWT does not always outperform the STFT. We suggest that a combination of STFT and CWT would be most suitable for analysing non-stationary neural data. In both the frequency and time domains, methods are given to test whether two coherent signals are present in recorded data. Our approach is then applied to the electroencephalogram (EEG) and surface electromyogram (EMG) during wrist movements in healthy subjects, and to the local field potential (LFP) and surface EMG during resting tremor in patients with Parkinson's disease. A software package including all results presented in the current paper is available at
To improve a recently developed mel-cepstrum audio steganalysis method, we present in this paper a method based on Fourier spectrum statistics and mel-cepstrum coefficients derived from the second-order derivative of the audio signal. Specifically, the statistics of the high-frequency spectrum and the mel-cepstrum coefficients of the second-order derivative are extracted for use in detecting audio steganography. We also design a wavelet-based spectrum and mel-cepstrum audio steganalysis method. By applying support vector machines to these features, unadulterated carrier signals (without hidden data) and steganograms (carrying covert data) are successfully discriminated. Experimental results show that the proposed derivative-based and wavelet-based approaches remarkably improve the detection accuracy. Between the two new methods, the derivative-based approach generally delivers a better performance.
We study the statistical performance of multiresolution-based estimation procedures for the scaling exponents of multifractal processes. These estimators rely on the computation of multiresolution quantities such as wavelet, increment or aggregation coefficients. Estimates are obtained from linear fits in plots of the logarithm of the structure functions of order q versus the logarithm of scale. Using various recent types of multiplicative cascades and a large variety of multifractal processes, we study and benchmark, by means of numerical simulations, the statistical performance of these estimation procedures. We show that they all undergo a systematic linearisation effect: for a range of orders q, the estimates account correctly for the scaling exponents; outside that range, the estimates depart significantly from the correct values and systematically behave as linear functions of q. The definition and characterisation of this effect are thoroughly studied. In contradiction with interpretations ...
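The estimation procedure itself, structure functions of order q fit in log-log coordinates, can be sketched in a few lines. The increment-based variant is shown below on ordinary Brownian motion, for which the true exponents are zeta(q) = q/2; the scales and orders are illustrative choices.

```python
import numpy as np

def scaling_exponents(x, qs, scales):
    """Estimate zeta(q) from linear fits of log S(q, s) versus log s, where
    S(q, s) is the structure function mean(|x[t+s] - x[t]|**q)."""
    zetas = []
    for q in qs:
        logS = [np.log(np.mean(np.abs(x[s:] - x[:-s]) ** q)) for s in scales]
        slope = np.polyfit(np.log(scales), logS, 1)[0]
        zetas.append(slope)
    return np.array(zetas)

# Brownian motion is self-similar with H = 1/2, so zeta(q) = q / 2 exactly.
rng = np.random.default_rng(2)
bm = np.cumsum(rng.standard_normal(2 ** 16))
qs = np.array([1.0, 2.0, 3.0])
scales = np.array([2, 4, 8, 16, 32, 64])
zeta = scaling_exponents(bm, qs, scales)
```

For a multifractal process the same fit yields a nonlinear zeta(q); the linearisation effect discussed above shows up as zeta(q) becoming exactly linear in q beyond a critical order.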
This paper shows that the saturation space of the minimax rate associated with an L2 loss and linear estimators is the Besov space B^s_{2,infinity}. More precisely, it is shown that if a function space included in L2 is such that its minimax rate is the usual one, s/(1+2s), and if this rate is attained by a sequence of linear estimators, then this space is included in a ball of the space B^s_{2,infinity}. This implies, for example, that the minimax rates that have been established for Sobolev balls are in fact only a consequence of their inclusion in such Besov balls.
In the last decade, event-related potential and functional magnetic resonance imaging studies have been very useful for the temporal and spatial localization of brain processes involved in the recognition of emotional facial expressions. However, the frequency characteristics of the underlying processes have been studied less. Moreover, most studies did not take into account personality-related individual differences. In this study, the effects of explicit and implicit anxiety on the oscillatory dynamics of cortical responses elicited by the presentation of angry, neutral, and happy faces were investigated using time-frequency decomposition by means of the wavelet transform. Both explicit and implicit anxiety were associated with higher alpha band desynchronization, which was most pronounced during the presentation of angry faces. Within the theta and delta bands, the effects appeared to be opposite for explicit and implicit anxiety measures. In implicitly anxious subjects, frontal delta and theta synchronization upon the presentation of angry and happy (but not neutral) faces was found to be higher than in low-anxiety subjects, whereas explicit anxiety was associated with lower theta band synchronization.
Coccolithophore assemblages recovered from Integrated Ocean Drilling Program (IODP) Site U1313 are investigated to reconstruct the palaeoceanographic evolution of a sector of the North Atlantic during Marine Isotope Stage (MIS) 19 (~790–760 ka) at orbital, suborbital and millennial time-scales. The end of the glacial MIS 20 is marked by the arrival of cold waters, deriving from ice melting at higher latitudes, as recorded by Coccolithus pelagicus pelagicus. MIS 19c is characterised by an extension of warmer North Atlantic Transitional Waters (NATW), reflecting the intensification of subtropical gyre influence on Site U1313, identified by an increase in the warm species Umbilicosphaera sibogae. The influence on the site of cooler NATW is inferred from higher percentages of Gephyrocapsa margereli, which starts increasing from ~779 ka. Finally, the transition from MIS 19a to MIS 18 is characterised by a southward shift of the palaeoceanographic system due to a southward extension of the subpolar front, as indicated by an increase of the cold subspecies C. pelagicus pelagicus. The evolution of the palaeoproductivity proxy records (number of coccoliths/g of sediment and Nannofossil Accumulation Rate — NAR) and the abundances of small Gephyrocapsa, G. margereli, U. sibogae and C. pelagicus pelagicus show that fluctuations in sea-surface productivity and alternations between cooling and warming phases occurred on the precessional timescale. Spectral analyses of U. sibogae, number of coccoliths/g of sediment and NAR indicate a significant concentration of variance close to half precession (~10 kyr); this reflects a nonlinear response of the mid-latitude North Atlantic surface ocean to low-latitude insolation forcing. Furthermore, millennial to multi-centennial instability in sea-surface dynamics is highlighted by spectral and wavelet analyses of U. sibogae, G. margereli and C. pelagicus pelagicus percentages, number of coccoliths/g of sediment and NAR, which reveal the occurrence of rapid events related to stadial and interstadial-type conditions.
Elastic full waveform inversion of seismic reflection data represents a data-driven form of analysis leading to quantification of sub-surface parameters in depth. In previous studies attention has been given to P-wave data recorded in the marine environment, using either acoustic or elastic inversion schemes. In this paper we exploit both P-waves and mode-converted S-waves in the marine environment in the inversion for both P- and S-wave velocities by using wide-angle, multi-component, ocean-bottom cable seismic data. An elastic waveform inversion scheme operating in the time domain was used, allowing accurate modelling of the full wavefield, including the elastic amplitude variation with offset response of reflected arrivals and mode-converted events. A series of one- and two-dimensional synthetic examples are presented, demonstrating the ability to invert for and thereby to quantify both P- and S-wave velocities for different velocity models. In particular, for more realistic low velocity models, including a typically soft seabed, an effective strategy for inversion is proposed to exploit both P- and mode-converted PS-waves. Whilst P-wave events are exploited for inversion for P-wave velocity, examples show the contribution of both P- and PS-waves to the successful recovery of S-wave velocity.
The purpose of this paper is to extract features from vibration signals measured from motors subjected to accelerated bearing fluting aging and to detect the effects of bearing fluting at each aging cycle of induction motors. This trending of bearing degradation is achieved by monitoring the changes in the vibration signals at various sub-band levels. This is accomplished by the application of wavelet transforms and multi-resolution analysis in order to extract information from selected frequency bands with minimum signal distortion.
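The sub-band trending step can be sketched with a hand-rolled Haar multiresolution decomposition. The band split, level count, and synthetic vibration signals below are illustrative assumptions, not the paper's data.

```python
import numpy as np

def haar_band_energies(x, levels=4):
    """Relative energy per detail sub-band from a Haar multiresolution
    decomposition (a toy stand-in for a full wavelet analysis)."""
    n = 2 ** int(np.log2(len(x)))             # truncate to a power of two
    approx = np.asarray(x[:n], dtype=float)
    energies = []
    for _ in range(levels):
        detail = (approx[0::2] - approx[1::2]) / np.sqrt(2)
        approx = (approx[0::2] + approx[1::2]) / np.sqrt(2)
        energies.append(np.sum(detail ** 2))
    total = sum(energies) + np.sum(approx ** 2)
    return np.array(energies) / total         # index 0 = highest-frequency band

t = np.arange(1024) / 1024.0
healthy = np.sin(2 * np.pi * 5 * t)                   # smooth baseline vibration
flawed = healthy + 0.5 * np.sin(2 * np.pi * 400 * t)  # high-frequency fault energy
e_healthy = haar_band_energies(healthy)
e_flawed = haar_band_energies(flawed)
```

Tracking such band energies across aging cycles is the trending idea: fault-related energy concentrates in particular sub-bands while the overall signal changes little.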
We introduce a harmonic analysis for iterated function systems (IFS) (X, mu) which is based on a Markov process on certain paths. The probabilities are determined by a weight function W on X. From W we define a transition operator R_W acting on functions on X, and a corresponding class of R_W-harmonic functions. The properties of these functions determine the spectral theory of L^2(mu). For affine IFSs we establish orthogonal bases in L^2(mu). These bases are generated by paths with infinite repetition of finite words. We use this in the last section to analyze tiles in R^d.
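For orientation, a transition operator of this kind is typically written as a weighted sum over the inverse branches of the IFS coding map. A plausible reconstruction, where the coding map sigma is an assumption rather than notation taken from the paper, is:

```latex
% sigma is the assumed coding map of the IFS: each x has the branch
% points y with sigma(y) = x as its preimages.
(R_W f)(x) \;=\; \sum_{y \,:\, \sigma(y) = x} W(y)\, f(y),
\qquad h \text{ is } R_W\text{-harmonic} \iff R_W h = h .
```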
Baseline wandering can mask important features of the electrocardiogram (ECG) signal; hence, it is desirable to remove this noise for proper analysis and display of the ECG signal. This paper presents the implementation and evaluation of different methods to remove this noise. Parameters such as power spectral density (PSD), average power and signal-to-noise ratio (SNR) are calculated for the signals to compare the performance of the different filtering methods. IIR zero-phase filtering proved to be an efficient method for the removal of baseline wander from the ECG signal. The results were obtained using Matlab software and the MIT-BIH arrhythmia database.
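A minimal sketch of the zero-phase IIR approach on synthetic data follows; the cutoff frequency, drift frequency, and toy "ECG" component are illustrative assumptions. A Butterworth high-pass applied forward and backward with filtfilt removes the slow drift without the phase distortion a single causal pass would introduce.

```python
import numpy as np
from scipy.signal import butter, filtfilt

fs = 360.0                                     # MIT-BIH sampling rate
t = np.arange(0, 10, 1 / fs)
ecg_like = 0.1 * np.sin(2 * np.pi * 1.2 * t)   # toy stand-in for ECG content
wander = 0.5 * np.sin(2 * np.pi * 0.1 * t)     # slow baseline drift
noisy = ecg_like + wander

# Zero-phase IIR high-pass: filtfilt runs the filter forward and backward,
# cancelling the phase shift that would otherwise move ECG fiducial points.
b, a = butter(2, 0.5 / (fs / 2), btype="highpass")
clean = filtfilt(b, a, noisy)
```

Running the filter in both directions doubles the effective attenuation and yields zero phase, which matters for preserving ECG waveform timing.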
The aim of this study was to investigate the application of wavelet denoising to noise reduction in multichannel high-resolution ECG signals. In particular, the influence of the selection of the wavelet function and the choice of decomposition level on the efficiency of the denoising process was considered, and the whole noise-reduction procedure was implemented in the Matlab environment. The Fast Wavelet Transform was used. The advantage of the denoising method used is that it decreases the noise level in ECG signals for which noise reduction by averaging has limited application, e.g. in the case of arrhythmia or in the presence of extrasystoles.
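The standard recipe, decompose, soft-threshold the detail coefficients, reconstruct, can be sketched with a hand-rolled Haar transform. The wavelet choice, decomposition level and universal threshold are illustrative here; weighing exactly these choices is what the study is about. The sketch assumes a power-of-two signal length.

```python
import numpy as np

def haar_denoise(x, level=3):
    """Wavelet denoising sketch: Haar decomposition, soft-threshold the
    details with the universal threshold, then reconstruct.
    Assumes len(x) is a power of two."""
    a = np.asarray(x, dtype=float)
    details = []
    for _ in range(level):
        d = (a[0::2] - a[1::2]) / np.sqrt(2)
        a = (a[0::2] + a[1::2]) / np.sqrt(2)
        details.append(d)
    sigma = np.median(np.abs(details[0])) / 0.6745   # MAD noise estimate
    thr = sigma * np.sqrt(2 * np.log(len(x)))        # universal threshold
    details = [np.sign(d) * np.maximum(np.abs(d) - thr, 0) for d in details]
    for d in reversed(details):                      # inverse transform
        out = np.empty(2 * a.size)
        out[0::2] = (a + d) / np.sqrt(2)
        out[1::2] = (a - d) / np.sqrt(2)
        a = out
    return a

rng = np.random.default_rng(3)
t = np.arange(1024) / 1024.0
signal = np.sin(2 * np.pi * 4 * t)
noisy = signal + 0.3 * rng.standard_normal(t.size)
denoised = haar_denoise(noisy)
```

Because the smooth signal lives mostly in the coarse approximation while the noise spreads evenly over all coefficients, thresholding the details removes noise with little signal distortion.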
Removing cloud-contaminated portions of a remotely sensed image and then filling in the missing data is an important and cumbersome photo-editing task. In this paper, an efficient inpainting technique for the reconstruction of areas obscured by clouds or cloud shadows in remotely sensed images is presented. This technique is based on the Bandelet transform and multiscale geometrical grouping, and it consists of two steps. In the first step, the curves of the geometric flow in different zones of the image are determined by using the Bandelet transform with multiscale grouping. This step allows an efficient representation of the multiscale geometry of the image's structures. Once this geometry is well represented, the information inside the cloud-contaminated zone is synthesized by propagating the geometric flow curves inside that zone. This step is accomplished by minimizing a functional whose role is to reconstruct the missing or cloud-contaminated zone independently of the size and topology of the inpainting domain. The proposed technique is illustrated with examples of processing aerial images, and the results obtained are compared with those of other cloud-removal techniques.
P300-based GKT (guilty knowledge test) has been suggested as an alternative approach to conventional polygraphy. The purpose of this study is to evaluate three classification methods for this approach and compare their performance in a lab analogue. Several subjects went through the designed GKT paradigm and their brain signals were recorded. For the analysis of the signals, the BAD (bootstrapped amplitude difference) and BCD (bootstrapped correlation difference) methods, as two predefined methods, were implemented alongside a new approach consisting of wavelet features and a statistical classifier. The rates of correct detection in guilty and innocent subjects were 74-80%. The results indicate the potential of P300-based GKT for detecting concealed information, although further research is required to increase its accuracy and precision and to evaluate its vulnerability to countermeasures.
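The bootstrapped amplitude difference idea can be sketched as follows. Here each epoch is reduced to a single P300 peak amplitude, and the simulated amplitude distributions are invented for illustration; the real method works on full ERP epochs.

```python
import numpy as np

def bad_index(probe_amps, irrelevant_amps, n_boot=2000, seed=0):
    """Bootstrapped amplitude difference (BAD) sketch: resample the per-trial
    P300 peak amplitudes with replacement and count how often the probe mean
    exceeds the irrelevant mean."""
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(n_boot):
        p = rng.choice(probe_amps, size=len(probe_amps), replace=True)
        i = rng.choice(irrelevant_amps, size=len(irrelevant_amps), replace=True)
        hits += p.mean() > i.mean()
    return hits / n_boot

rng = np.random.default_rng(4)
probe = rng.normal(5.0, 1.0, size=30)         # simulated "guilty" amplitudes
irrelevant = rng.normal(3.0, 1.0, size=90)
index = bad_index(probe, irrelevant)
```

An index near 1 means the probe amplitudes exceed the irrelevant ones in almost every resample, the pattern expected for a knowledgeable subject; a decision threshold on this index gives the classifier.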
Recent trends in clinical and telemedicine applications highly demand automation in electrocardiogram (ECG) signal processing and heart-beat classification. A patient-adaptive cardiac profiling scheme using a repetition-detection concept is proposed in this paper. We first employ an efficient wavelet-based beat-detection mechanism to extract precise fiducial ECG points. Then, we implement a novel local ECG beat classifier to profile each patient's normal cardiac behavior. ECG morphologies vary from person to person, and even for one person they can vary over time depending on the person's physical condition and/or environment. Having such a profile is essential for various diagnostic purposes (e.g., arrhythmia detection). One application of such a profiling scheme is to automatically raise an early warning flag for abnormal cardiac behavior in any individual. Our extensive experimental results on the MIT-BIH arrhythmia database show that our technique can detect beats with 99.59% accuracy and can identify abnormalities with a high classification accuracy of 97.42%.
A new hybrid model, the wavelet-bootstrap-ANN (WBANN), for daily discharge forecasting is proposed in this study. The study explores the potential of wavelet and bootstrapping techniques to develop an accurate and reliable ANN model. The performance of the WBANN model is also compared with three other models: a traditional ANN, a wavelet-based ANN (WANN) and a bootstrap-based ANN (BANN). Input vectors are decomposed into discrete wavelet components (DWCs) using the discrete wavelet transformation (DWT), and appropriate DWC sub-series are then used as inputs to the ANN model to develop the WANN model. The BANN model is an ensemble of several ANNs built using bootstrap resamples of the raw datasets, whereas the WBANN model is an ensemble of several ANNs built using bootstrap resamples of the DWCs instead of the raw datasets. The results showed that the hybrid models WBANN and WANN produced significantly better results than the traditional ANN and BANN, whereas the BANN model was found to be more reliable and consistent. The WBANN and WANN models simulated the peak discharges better than the ANN and BANN models, and the overall performance of WBANN, which uses the capabilities of both the bootstrap and wavelet techniques, was found to be more accurate and reliable than that of the remaining three models.
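The bootstrap-ensemble half of the scheme can be sketched generically. Below, ordinary least-squares models stand in for the ANNs, and the data, member count and linear stand-in are assumptions for illustration; in the WBANN variant each input series would first be replaced by its wavelet sub-series before resampling.

```python
import numpy as np

def bootstrap_ensemble_forecast(X, y, X_new, n_models=50, seed=0):
    """Bootstrap ensemble sketch: fit each member on a resample of the
    training data (least squares standing in for an ANN) and average the
    member forecasts; the spread indicates forecast reliability."""
    rng = np.random.default_rng(seed)
    preds = []
    for _ in range(n_models):
        idx = rng.integers(0, len(y), size=len(y))   # resample with replacement
        coef, *_ = np.linalg.lstsq(X[idx], y[idx], rcond=None)
        preds.append(X_new @ coef)
    preds = np.array(preds)
    return preds.mean(axis=0), preds.std(axis=0)

rng = np.random.default_rng(5)
X = rng.standard_normal((200, 2))
y = X @ np.array([2.0, -1.0]) + 0.05 * rng.standard_normal(200)
X_new = np.array([[1.0, 1.0], [0.5, -2.0]])
mean_pred, spread = bootstrap_ensemble_forecast(X, y, X_new)
```

Averaging over resamples reduces the variance of any single fitted model, which is why the bootstrap variants are reported as more reliable and consistent.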
This paper proposes a new strategy for surface roughness analysis and characterisation based on the wavelet transform. After a short review of wavelet-based methods used in the field of surface roughness analysis, results obtained using a new tool of analysis called the frequency normalised wavelet transform (FNWT) are presented.
A fast multilevel wavelet collocation method for the solution of partial differential equations in multiple dimensions is developed. The computational cost of the algorithm is independent of the dimensionality of the problem and is O(N), where N is the total number of collocation points. The method can handle general boundary conditions. The multilevel structure of the algorithm provides a simple way to adapt computational refinements to local demands of the solution. High resolution computations are performed only in regions where singularities or sharp transitions occur. Numerical results demonstrate the ability of the method to resolve localized structures such as shocks, which change their location and steepness in space and time. The present results indicate that the method has clear advantages in comparison with well established numerical algorithms.
The electromyographic (EMG) signal observed at the surface of the skin is the sum of thousands of small potentials generated in the muscle fibers. There are many approaches to analyzing EMG signals with spectral techniques. In this study, the short-time Fourier transform (STFT) and the wavelet transform (WT) were applied to EMG signals and the coefficients were obtained; MATLAB 7.01 was used for the analysis. According to the results obtained, the WT is more useful than the STFT because it avoids the fixed-resolution problem and provides variable resolution during the analysis.
This paper investigates the relationship between energy consumption and economic growth using an Italian dataset spanning more than eight decades. Wavelet analysis is applied to decompose the series into different time scales, whereas the frequency-domain technique is used to examine time-specific shocks. Results of both unit root and stationarity tests indicate that all series are integrated of order one; however, no evidence of a long-run relationship between energy consumption and economic development is found. We observe that the causal flow from economic growth to energy consumption becomes dominant at lower scales (up to 4 years), while at higher scales the strength of causality from energy use to growth declines. Therefore, the influence of energy consumption on economic growth can be significantly detected only at lower scales. If only the original series and lower scales are considered, the causal findings lean towards the feedback mechanism, with a bidirectional causal relationship. This bidirectional causality is reinforced at all frequency bands, although causality from energy consumption to economic growth is observed only at frequencies between 1.3-1.8 (3.49-4.83 years) and 2.2-2.4 (2.61-2.85 years). However, when higher scales are considered, the causality test results are in line with the conservation hypothesis. More precisely, causality from economic growth to energy consumption is reinforced by the frequency technique at higher time scales (8-32 years), but only at frequencies of more than 0.6 (more than 10.47 years). The differences in the applied results provide alternative policy implications, justifying the use of the wavelet approach to decompose the time series into various time scales.
Abnormal autonomic nerve traffic has been associated with a number of peripheral neuropathies and cardiovascular disorders, prompting the development of genetically altered mice to study the genetic and molecular components of these diseases. Autonomic function in mice can be assessed by directly recording sympathetic nerve activity. However, murine sympathetic spikes are typically detected using a manually adjusted voltage threshold, and no unsupervised detection methods have been developed for the mouse. Therefore, we tested the performance of several unsupervised spike detection algorithms on simulated murine renal sympathetic nerve recordings, including an automated amplitude discriminator and wavelet-based detection methods that used both the discrete wavelet transform (DWT) and the stationary wavelet transform (SWT) with several wavelet threshold rules. The parameters of the wavelet methods were optimized by comparing basal sympathetic activity to postmortem recordings and to recordings made during pharmacological suppression and enhancement of sympathetic activity. In general, SWT methods were found to outperform amplitude discriminators and DWT methods with similar wavelet-coefficient thresholding algorithms when presented with simulations of varied mean spike rates and signal-to-noise ratios. An SWT method that estimates the noise level using a "noise-only" wavelet scale and then selectively thresholds the scales containing the physiologically important signal information was found to provide the most robust spike detection. The proposed noise-level estimation method was also successfully validated during pharmacological interventions.
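The noise-estimation idea, read the noise level off a scale that contains essentially no spike energy and then threshold, can be sketched without any wavelet library. A single Haar detail level stands in for the "noise-only" scale here, and the threshold factor is an illustrative choice.

```python
import numpy as np

def detect_spikes(x, k=4.0):
    """Unsupervised spike detection sketch: estimate the noise level from the
    finest-scale (Haar) detail coefficients via the MAD, then threshold."""
    d = (x[0::2] - x[1::2]) / np.sqrt(2)     # finest, mostly noise-only scale
    sigma = np.median(np.abs(d)) / 0.6745    # robust noise-level estimate
    return np.flatnonzero(x > k * sigma)

rng = np.random.default_rng(6)
x = rng.standard_normal(5000)                # background nerve noise
spike_times = [100, 2000, 4000]
x[spike_times] += 10.0                       # large, sparse "spikes"
detected = detect_spikes(x)
```

Because the median is insensitive to the few spike-contaminated coefficients, the threshold tracks the noise floor rather than the spikes, which is the property the SWT method above exploits across multiple scales.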
Wave heights and periods are significant inputs for coastal and ocean engineering applications, which may require information about sea conditions in advance. This study proposes a forecasting scheme that enables forecasts up to a 48 h lead time. A combination of wavelet and fuzzy logic approaches was employed as the forecasting methodology. The wavelet technique was used to separate the time series into its spectral bands, and these spectral bands were then estimated individually by the fuzzy logic approach. This combination of techniques is called the wavelet fuzzy logic (WFL) approach. In addition to the WFL method, fuzzy logic (FL), artificial neural network (ANN), and autoregressive moving average (ARMA) methods were applied to the same data set for comparison purposes. WFL outperforms these methods in all cases, and its superiority in model performance becomes very clear especially at higher lead times such as 48 h. Significant wave height and average wave period series obtained from buoys located off the west coast of the US were used to train and test the proposed models.
The dynamical properties of large-scale, long-term phase synchronization behavior in the alpha range of electroencephalographic signals were investigated. We observed dynamical phase synchronization and present evidence of an underlying spatiotemporal ordering. Fluctuations in the duration of episodes of intermittent synchrony are scale-invariant. Moreover, the exponent used to describe this behavior is stable across different normal subjects. The results provide a new feature of self-organization in human brain activity and constitute a quantitative basis for modeling its dynamics.