
    Silong Peng

    ABSTRACT This paper presents a fast single-image super-resolution approach that involves learning multiple adaptive interpolation kernels. Based on the assumptions that each high-resolution image patch can be sparsely represented by several simple image structures and that each structure can be assigned a suitable interpolation kernel, our approach consists of the following steps. First, we cluster the training image patches into several classes and train a class-specific interpolation kernel for each class. Then, for each input low-resolution image patch, we select a few suitable kernels and combine them into the final interpolation kernel. Since the proposed approach relies mainly on simple linear algebra computations, its efficiency is guaranteed. Experimental comparisons with state-of-the-art super-resolution reconstruction algorithms on simulated and real-life examples validate the performance of the proposed approach.
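    As a rough illustration of the pipeline described above (not the authors' code), the Python sketch below clusters synthetic low-resolution patches with k-means, learns one linear interpolation kernel per cluster by least squares, and blends the kernels of the nearest clusters at test time; the cluster count, patch sizes, and distance-based blending weights are all assumptions.

        # Minimal sketch: per-cluster linear interpolation kernels learned from
        # synthetic LR/HR patch pairs. Cluster count, patch sizes, and the
        # blending rule are illustrative assumptions.
        import numpy as np
        from sklearn.cluster import KMeans

        rng = np.random.default_rng(0)

        # Synthetic training data: LR patches (5x5) and corresponding HR centers (2x2).
        n, lr_dim, hr_dim, k = 2000, 25, 4, 8
        lr_patches = rng.standard_normal((n, lr_dim))
        hr_patches = lr_patches @ rng.standard_normal((lr_dim, hr_dim)) \
            + 0.01 * rng.standard_normal((n, hr_dim))

        # 1) Cluster LR patches into k structure classes.
        km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(lr_patches)

        # 2) Train one linear interpolation kernel (least squares) per class.
        kernels = []
        for c in range(k):
            X = lr_patches[km.labels_ == c]
            Y = hr_patches[km.labels_ == c]
            W, *_ = np.linalg.lstsq(X, Y, rcond=None)  # maps LR patch -> HR patch
            kernels.append(W)

        # 3) At test time, blend the kernels of the nearest clusters for each patch.
        def super_resolve_patch(lr_patch, n_nearest=2):
            d = np.linalg.norm(km.cluster_centers_ - lr_patch, axis=1)
            idx = np.argsort(d)[:n_nearest]
            w = 1.0 / (d[idx] + 1e-8)
            w /= w.sum()
            W_mix = sum(wi * kernels[i] for wi, i in zip(w, idx))
            return lr_patch @ W_mix

        print(super_resolve_patch(lr_patches[0]).shape)  # (4,)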
    ABSTRACT The operator-based signal separation approach uses an adaptive operator to separate a signal into additive subcomponents, and different types of operators capture different properties of a signal. In this paper, we define a new kind of integral operator derived from the Fredholm integral equation of the second kind. We then analyze the properties of the proposed integral operator and discuss its relation to the second condition of the Intrinsic Mode Function (IMF). To demonstrate the robustness and efficacy of the proposed operator, we incorporate it into the Null Space Pursuit algorithm to separate several multicomponent signals, including a real-life signal.
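    To illustrate the kind of operator involved, the sketch below discretizes a generic Fredholm integral equation of the second kind with a Nyström-style quadrature; the kernel, the parameter lambda, and the right-hand side are made up, and neither the paper's specific operator nor the Null Space Pursuit algorithm is reproduced here.

        # Generic Nystrom discretization of a second-kind Fredholm equation,
        # phi(t) - lambda * integral K(t,s) phi(s) ds = f(t).
        # Kernel and right-hand side are illustrative assumptions.
        import numpy as np

        n = 200
        t = np.linspace(0.0, 1.0, n)
        w = np.full(n, 1.0 / n)                       # quadrature weights (rectangle rule)
        lam = 0.5

        K = np.exp(-np.abs(t[:, None] - t[None, :]))  # example kernel K(t, s)
        f = np.sin(2 * np.pi * t)                     # example right-hand side

        # Solve (I - lam * K * diag(w)) phi = f
        phi = np.linalg.solve(np.eye(n) - lam * K * w[None, :], f)
        print("residual:", np.linalg.norm(phi - lam * (K * w[None, :]) @ phi - f))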
    ABSTRACT In recent years, many image deblurring algorithms have been proposed, most of which assume that the noise in the deblurring process follows a Gaussian distribution. However, non-Gaussian noise is often unavoidable in practice, in both non-blind and blind image deblurring, due to errors in the input kernel and outliers in the blurry image. Without properly handling these outliers, the image recovered by previous methods suffers severe artifacts. In this paper, we deal with two kinds of non-Gaussian noise in the image deblurring process, an inaccurate kernel and a compressed blurry image, and find that modeling the noise with a Laplacian distribution gives more robust results in these cases. Based on this observation, new non-blind and blind image deblurring algorithms are proposed to restore the clear image. To obtain more robust deblurred results, we also use gradients of the image in eight directions to estimate the blur kernel. The new minimization problem can be solved efficiently by Iteratively Reweighted Least Squares (IRLS), and experimental results on both synthesized and real-world images show the efficiency and robustness of our algorithm.
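    A minimal 1-D sketch of the IRLS idea is given below, assuming a known blur kernel, an L1 (Laplacian) data term, and a quadratic smoothness prior; the kernel, regularization weight, and outlier model are illustrative, and the paper's 2-D, eight-direction-gradient kernel estimation is not reproduced.

        # IRLS sketch: 1-D non-blind deconvolution with an L1 (Laplacian-noise)
        # data term and a quadratic smoothness prior. Kernel, signal, and
        # lambda are made up.
        import numpy as np

        rng = np.random.default_rng(1)
        n = 128
        x_true = np.cumsum(rng.standard_normal(n) > 1.0).astype(float)   # piecewise-constant signal
        kernel = np.ones(5) / 5.0                                         # known 5-tap blur

        # Circulant blur matrix K and finite-difference matrix D.
        K = np.array([np.roll(np.pad(kernel, (0, n - 5)), i - 2) for i in range(n)])
        D = np.eye(n) - np.roll(np.eye(n), 1, axis=1)

        y = K @ x_true
        y[rng.random(n) < 0.05] += 5.0          # sparse outliers (non-Gaussian noise)

        lam, eps = 1e-2, 1e-4
        x = y.copy()
        for _ in range(30):                      # IRLS: reweight the L1 data term
            r = K @ x - y
            w = 1.0 / np.maximum(np.abs(r), eps)
            A = K.T @ (w[:, None] * K) + lam * D.T @ D
            x = np.linalg.solve(A, K.T @ (w * y))

        print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))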
    ... and Q(t) can be represented as column vectors s, r, p and q, respectively; and 1 and 2 can be represented as the matrices of ... left column: from the second row to the fifth row are the subcomponents extracted by the NSP-AMFM algorithm, and the residual signal is in the last row. ...
    The active contour model and mean shift are both motion detection algorithms, each with its own merits and shortcomings. An active contour tends to be trapped by noise points, resulting in a false boundary, while a mean shift vector always points toward the edge area when the starting point is near the object. With initial curves given near the ...
    Subpixel Edge Location Using Improved LSR. MO Yi, Zhongxuan Liu, Silong Peng, National ... However, continuous edges are always digitized during image digitization. The practical line model turns into a surface model, which leads to edge discontinuity and a zigzag effect. ...
    ABSTRACT We propose an algorithm to recover the latent image from a blurred and compressed input. Although many image deblurring algorithms have been proposed in recent years, most previous methods do not consider the compression effect in blurry images. In practice, however, most real-world images are compressed, and this compression introduces a typical kind of noise, blocking artifacts, which does not follow the Gaussian distribution assumed in most existing algorithms. Without properly handling this non-Gaussian noise, the recovered image suffers severe artifacts. Inspired by the statistical properties of the compression error, we model the non-Gaussian noise with a hyper-Laplacian distribution. Based on this model, an efficient non-blind image deblurring algorithm based on a variable splitting technique is proposed to solve the resulting nonconvex minimization problem. Finally, we also present an effective blind image deblurring algorithm that can handle compressed and blurred images efficiently. Extensive experiments compared with state-of-the-art non-blind and blind deblurring methods demonstrate the effectiveness of the proposed method. (C) 2014 SPIE and IS&T
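    The sketch below illustrates the variable-splitting idea in 1-D with an L1 penalty on the split residual, the simplest member of the hyper-Laplacian family; the blur kernel, weights, and continuation schedule on beta are assumptions, and the paper's 2-D non-blind and blind algorithms are not reproduced.

        # Variable-splitting sketch: the data-fidelity residual is split into an
        # auxiliary variable z with an L1 penalty; alternate a shrinkage step in
        # z with a quadratic solve in x while increasing beta.
        import numpy as np

        rng = np.random.default_rng(6)
        n = 128
        x_true = np.sin(np.linspace(0, 4 * np.pi, n))
        kernel = np.ones(5) / 5.0
        K = np.array([np.roll(np.pad(kernel, (0, n - 5)), i - 2) for i in range(n)])
        D = np.eye(n) - np.roll(np.eye(n), 1, axis=1)          # finite differences
        y = K @ x_true
        y[rng.random(n) < 0.05] += 2.0                          # compression-like outliers

        lam, beta = 1e-2, 1.0
        x = y.copy()
        for _ in range(8):                                      # continuation on beta
            for _ in range(10):
                r = K @ x - y
                z = np.sign(r) * np.maximum(np.abs(r) - 1.0 / beta, 0.0)   # shrinkage step
                A = beta * K.T @ K + lam * D.T @ D                          # quadratic x-step
                x = np.linalg.solve(A, beta * K.T @ (y + z))
            beta *= 2.0

        print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))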
    ABSTRACT The conventional multiple-component image compression approach separates the input image into several components, each of which is predicted and encoded independently. This approach creates redundancy because the prediction methods as well as the residual subcomponents must be transmitted. In this paper, we propose a new multiple-component predictive coding framework. First, we separate the reconstructed image into several subcomponents. Then, we use the previously encoded subcomponent to predict the current block and combine the prediction residuals of each subcomponent. To separate an image into multiple subcomponents, we designed a fast operator-based image separation algorithm. The numerical results demonstrate that the algorithm outperforms the H.264/AVC intra-frame prediction algorithm and the JPEG2000 algorithm on images with ample textures.
    ABSTRACT Empirical mode decomposition (EMD) is an adaptive and data-driven approach for analyzing multicomponent nonlinear and nonstationary signals. The stop criterion, envelope technique, and mode-mixing problem are the most important topics that need to be addressed in order to improve the EMD algorithm. In this paper, we study the envelope technique and the mode-mixing problem caused by separating multicomponent AM-FM signals with the EMD algorithm. We present a new necessary condition on the envelope that questions the current assumption that the envelope passes through the extreme points of an intrinsic mode function (IMF). Then, we present a solution to the mode-mixing problem that occurs when multicomponent AM-FM signals are separated. We experiment on several signals, including simulated signals and real-life signals, to demonstrate the efficacy of the proposed method in resolving the mode-mixing problem.
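    For reference, a bare-bones sketch of standard EMD sifting with cubic-spline envelopes is shown below; it implements the conventional envelope construction that the paper re-examines, not the proposed necessary condition or mode-mixing fix, and the test signal and iteration count are arbitrary.

        # Classical EMD sifting sketch with cubic-spline envelopes.
        import numpy as np
        from scipy.interpolate import CubicSpline

        def local_extrema(x):
            """Indices of interior local maxima and minima."""
            maxima = np.where((x[1:-1] > x[:-2]) & (x[1:-1] > x[2:]))[0] + 1
            minima = np.where((x[1:-1] < x[:-2]) & (x[1:-1] < x[2:]))[0] + 1
            return maxima, minima

        def sift_once(x, t):
            """One sifting step: subtract the mean of the upper and lower envelopes."""
            maxima, minima = local_extrema(x)
            upper = CubicSpline(t[maxima], x[maxima])(t)
            lower = CubicSpline(t[minima], x[minima])(t)
            return x - 0.5 * (upper + lower)

        t = np.linspace(0, 1, 2000)
        signal = np.cos(2 * np.pi * 5 * t) + 0.3 * np.cos(2 * np.pi * 40 * t)

        h = signal.copy()
        for _ in range(10):          # repeat sifting so that h approximates an IMF
            h = sift_once(h, t)
        residue = signal - h         # low-frequency remainder after extracting one IMF

        # h now approximates the fast (40 Hz) oscillation away from the boundaries.
        interior = slice(200, -200)
        print(np.corrcoef(h[interior], np.cos(2 * np.pi * 40 * t)[interior])[0, 1])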
    ABSTRACT In this paper, we propose a more practical and accurate method for calibrating the roadside camera used in traffic surveillance systems. Considering the characteristics of the traffic scenes, we propose a minimum calibration condition that consists of two vanishing points and a vanishing line, which can be easily satisfied in most traffic scenes. Based on the minimum calibration condition, we provide a calibration method to estimate camera intrinsic parameters and rotation angles, which employs least squares optimization instead of closed-form computation. Compared with the existing calibration methods, our method is suitable for more traffic scenes and is able to accurately determine more camera parameters including the principal point. By making full use of video information, multiple observations of the vanishing points are available from different objects. For more accurate calibration, we present a dynamic calibration method using these observations to correct camera parameters. As for the estimation of the camera translation vector, known lengths in the road or known heights above the road are exploited. The experimental results on synthetic data and real traffic images demonstrate the accuracy, robustness, and practicability of the proposed calibration method.
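    As background for the calibration step, the sketch below evaluates the classical closed-form relation between two vanishing points of orthogonal directions and the focal length under a pinhole model with a known principal point; the paper instead estimates the intrinsic parameters, including the principal point, by least-squares optimization, and the coordinates used here are made up.

        # Closed-form focal length from two orthogonal vanishing points,
        # assuming a known principal point (illustrative numbers only).
        import numpy as np

        def focal_from_orthogonal_vps(vp1, vp2, principal_point):
            """f^2 = -(vp1 - c) . (vp2 - c) for orthogonal directions (pinhole model)."""
            d = np.dot(np.asarray(vp1) - principal_point, np.asarray(vp2) - principal_point)
            if d >= 0:
                raise ValueError("vanishing points not consistent with orthogonal directions")
            return np.sqrt(-d)

        c = np.array([960.0, 540.0])              # assumed principal point (image center)
        vp_road = np.array([1010.0, 395.0])       # vanishing point along the road
        vp_cross = np.array([-35000.0, 700.0])    # vanishing point across the road
        print("estimated focal length (px):", focal_from_orthogonal_vps(vp_road, vp_cross, c))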
    ABSTRACT In hyperspectral imaging, acquired images are inherently affected by noise, whose level may vary from band to band. It is not a trivial task to remove this kind of noise while preserving the edges and details of hyperspectral images (HSIs). This paper presents a maximum a posteriori (MAP)-based denoising approach for HSIs corrupted by band-varying noise. Compared with classical MAP-based methods for 2-D degraded image restoration, the proposed approach uses 3-D edge-preserving priors to keep sharp edges while smoothing the 3-D HSIs. To adapt to the band-varying noise statistics and high dynamic range of HSIs, we adaptively estimate the noise variance and scaling parameter at each point. The cost function is minimized with the half-quadratic optimization algorithm. Both denoising and classification experiments confirm the superiority and validity of the proposed method.
    A partitioning-based placement algorithm with a priori wirelength estimation, called HJ-hPl, is presented in this paper. We propose a new methodology for a priori wirelength estimation in a netlist, which is capable of accurately estimating not only short interconnects but also long interconnects. We embed the wirelength estimation into the partitioning tool of our global placement, which guides the placement toward a solution with shorter wirelengths. In addition, we employ a regular-structure clustering technique to reduce the size of the original placement problem, which also yields a tighter placement result. Experimental results show that, compared to Capo10.5, mPL6, and NTUplace, HJ-hPl outperforms them in terms of wirelength and runtime. The improvements in average wirelength over Capo10.5, mPL6, and NTUplace are 13%, 3%, and 9%, with only 19%, 91%, and 99% of their runtime, respectively. By integrating our estimated-wirelength-driven clustering into Capo10.5, we are able to reduce the average wirelength by 3%.
    Analysis of binary mixtures of hydroxyl compounds by attenuated total reflection Fourier transform infrared spectroscopy (ATR FT-IR) and classical least squares (CLS) yields large model errors due to the presence of unmodeled components such as H-bonded components. To accommodate these spectral variations, polynomial-based least squares (LSP) and polynomial-based total least squares (TLSP) are proposed to capture the nonlinear absorbance-concentration relationship. LSP assumes that only absorbance noise exists, while TLSP takes both absorbance noise and concentration noise into consideration. In addition, based on different solution strategies, two optimization algorithms (the limited-memory Broyden-Fletcher-Goldfarb-Shanno (LBFGS) algorithm and the Levenberg-Marquardt (LM) algorithm) are combined with TLSP, yielding two TLSP variants (termed TLSP-LBFGS and TLSP-LM). The optimum order of each nonlinear model is determined by cross-validation. Comparison and ...
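    A minimal sketch of the polynomial least-squares (LSP-style) fit is given below on synthetic data, assuming absorbance noise only; the TLSP variants, which also model concentration noise and require an iterative optimizer such as LBFGS or LM, are not reproduced.

        # Polynomial least-squares fit of a nonlinear absorbance-concentration
        # relation. Data and polynomial order are synthetic assumptions.
        import numpy as np

        rng = np.random.default_rng(2)
        conc = np.linspace(0.1, 1.0, 40)                       # known concentrations
        absorb = 0.9 * conc - 0.15 * conc**2 + 0.01 * rng.standard_normal(conc.size)

        order = 2
        X = np.vander(conc, order + 1, increasing=True)        # [1, c, c^2] design matrix
        coef, *_ = np.linalg.lstsq(X, absorb, rcond=None)      # absorbance-noise-only fit

        pred = X @ coef
        print("polynomial coefficients:", coef)
        print("RMSE:", np.sqrt(np.mean((pred - absorb) ** 2)))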
    ... Data and Results. Case I: Clyde Platform. The real-time hydraulics software was used in both the 12¼-in. and 9⅞-in. sections. ... References: 1. Zamora, M.: “Virtual Rheology and Hydraulics Improve Use of Oil and Synthetic-Based Muds,” Oil & Gas Journal (3 Mar 1997) 43. ...
    In order to eliminate low-order polynomial interference, a new quantitative calibration algorithm, "Baseline Correction Combined Partial Least Squares (BCC-PLS)," which combines baseline correction and conventional PLS, is proposed. By embedding baseline correction constraints into the selection of the PLS weights, the proposed calibration algorithm overcomes the uncertainty in baseline correction and can meet the requirements of on-line attenuated total reflectance Fourier transform infrared (ATR-FTIR) quantitative analysis. The effectiveness of the algorithm is evaluated on glucose and marzipan ATR-FTIR spectra. The BCC-PLS algorithm shows improved prediction performance over PLS. The root mean square error of cross-validation (RMSECV) on the marzipan spectra for the prediction of moisture is 0.53% w/w (range 7-19%), and the sugar content is predicted with an RMSECV of 2.04% w/w (range 33-68%).
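    For context, the sketch below shows a plain two-step baseline: remove a low-order polynomial baseline from each synthetic spectrum and then apply ordinary PLS regression; it is not BCC-PLS, which embeds the baseline-correction constraint directly into the PLS weight selection, and all data here are made up.

        # Detrend-then-PLS baseline for comparison (not BCC-PLS).
        import numpy as np
        from sklearn.cross_decomposition import PLSRegression

        rng = np.random.default_rng(4)
        wavenumbers = np.linspace(0, 1, 200)
        n_samples = 60
        conc = rng.uniform(5, 20, n_samples)                       # e.g. moisture %, w/w

        peak = np.exp(-((wavenumbers - 0.5) ** 2) / 0.002)         # analyte band
        baseline = np.outer(rng.uniform(-1, 1, n_samples), wavenumbers)   # linear drift
        spectra = np.outer(conc, peak) + baseline \
            + 0.05 * rng.standard_normal((n_samples, 200))

        def remove_baseline(X, order=2):
            """Subtract a least-squares polynomial baseline from each spectrum."""
            V = np.vander(wavenumbers, order + 1, increasing=True)
            coefs, *_ = np.linalg.lstsq(V, X.T, rcond=None)
            return X - (V @ coefs).T

        pls = PLSRegression(n_components=3).fit(remove_baseline(spectra), conc)
        pred = pls.predict(remove_baseline(spectra)).ravel()
        print("training RMSE (%, w/w):", np.sqrt(np.mean((pred - conc) ** 2)))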
    This paper presents a wavelet-domain hidden Markov tree (HMT)-based color image super-resolution algorithm using multi-channel data fusion. Because correlations exist among the three channels of an RGB color image, a channel-by-channel super-resolution method almost certainly leads to color distortion. To solve this problem, the low-resolution color image is first converted into a gray-scale image using the ...
    A calibration transfer method for near-infrared (NIR) spectra based on spectral regression is proposed. The spectral regression method can reveal the low-dimensional manifold structure in high-dimensional spectroscopic data and is well suited to transferring NIR spectra between different instruments. A comparative study of the proposed method and piecewise direct standardization (PDS) on two benchmark NIR data sets is presented. Experimental results show that the spectral regression method outperforms PDS and is quite competitive with PDS combined with background correction. When the standardization subset contains sufficient samples, the spectral regression method exhibits excellent performance.
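    For reference, a compact sketch of the PDS baseline the method is compared against is shown below: each master-instrument channel is regressed on a small window of slave-instrument channels using transfer-standard samples; the window size and spectra are illustrative.

        # Piecewise direct standardization (PDS) sketch on synthetic spectra.
        import numpy as np

        rng = np.random.default_rng(5)
        n_std, n_chan, half_win = 15, 100, 2
        master = rng.standard_normal((n_std, n_chan))
        slave = np.roll(master, 1, axis=1) * 1.05 \
            + 0.02 * rng.standard_normal((n_std, n_chan))

        F = np.zeros((n_chan, n_chan))                  # banded transfer matrix
        for j in range(n_chan):
            lo, hi = max(0, j - half_win), min(n_chan, j + half_win + 1)
            b, *_ = np.linalg.lstsq(slave[:, lo:hi], master[:, j], rcond=None)
            F[lo:hi, j] = b

        new_slave_spectrum = rng.standard_normal(n_chan)
        transferred = new_slave_spectrum @ F            # mapped into the master domain
        print("fit error on standards:",
              np.linalg.norm(slave @ F - master) / np.linalg.norm(master))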
    A low-power reconfigurable DCT architecture is proposed, which can run at three transform precision levels to meet different demands. Using the characteristics of the energy distribution of the DCT coefficient matrix after the 2-D DCT operation, we select the DCT bases that achieve considerable power reduction in the DCT operation with minimal image quality degradation. The reconfigurable architecture can achieve power saving ...
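    The sketch below illustrates the underlying idea on a single 8x8 block: keep only the low-frequency DCT bases, where most of the energy concentrates for smooth content, and drop the rest to trade accuracy for fewer operations; the block and the three retained-basis sizes are assumptions, not the architecture's actual precision levels.

        # Truncated 2-D DCT: keep only the top-left coefficient sub-block.
        import numpy as np
        from scipy.fft import dctn, idctn

        rng = np.random.default_rng(3)
        u = np.arange(8)
        block = 16.0 * u[:, None] + 8.0 * u[None, :] + rng.normal(0, 2, (8, 8))  # smooth 8x8 block

        def truncated_dct(block, keep):
            """2-D DCT keeping only the low-frequency keep x keep coefficients."""
            coeffs = dctn(block, norm="ortho")
            mask = np.zeros_like(coeffs)
            mask[:keep, :keep] = 1.0
            return idctn(coeffs * mask, norm="ortho")

        for keep in (8, 4, 2):                                     # three precision levels
            approx = truncated_dct(block, keep)
            mse = np.mean((approx - block) ** 2)
            print(f"keep {keep}x{keep} bases -> MSE {mse:.2f}")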
    Abstract This paper presents a novel framework based on a three-dimensional (3D) grid graph for automatic mosaic construction of multilayer microscopic images. First, the multilayer structure is divided into several slices, and a two-dimensional (2D) scanning manner is ...
    This paper presents a method that deals with two types of alignment error in the construction of mosaics built from large-scale microscopic images. The type I error is defined as the difference between the mapping model used for the alignment and the actual between-image geometric distortion, and the type II error is defined as erroneous alignment due to mismatches. First, a global alignment model based on a two-dimensional (2D) grid graph is proposed to eliminate the error accumulation induced by type I errors. Second, the characteristics of the global alignment error caused by type II errors are analyzed. Finally, a minimum-cycle method is proposed to eliminate the type II error. Iteratively solving the global alignment model leads to globally consistent mosaics for large-scale microscopic images.
    We present a new receiver scheme, termed the wavelet receiver, for doubly-selective channels to combat the detrimental Doppler effect. The key point is to convert the Doppler effect into Doppler diversity, taking advantage of diversity techniques to improve system performance. To this end, a new framework based on multiresolution analysis (MRA) is established. In this framework, we find that RAKE ...
    The size of the training set, as well as how it is used, is an important issue in learning-based super-resolution. In this paper, we present an adaptive learning method for face hallucination using locality preserving projections (LPP). By virtue of its ability to reveal the nonlinear structure hidden in the high-dimensional image space, LPP is an efficient manifold learning method to analyze ...
    Recently, there has been a growing interest in investigating signal-adaptive multirate filter banks. In this paper, we propose a direct method of constructing signal-adaptive orthogonal wavelet filter banks with better energy compaction performance than that of the Daubechies wavelet ...
    ... National ASIC Design Engineering Center, Institute of Automation, Chinese Academy of Sciences, Beijing 100080-2728, China. {tao.chen}{ruosan.guo}{silong.peng}@mail.ia ... A novel algorithm based on the multiscale first fundamental form was presented by Scheunders et al. in (41 ...
    Abstract In this paper the technique of directional empirical mode decomposition (DEMD) and its application to texture segmentation are presented. Empirical mode decomposition (EMD) decomposes signals by sifting and then analyzes the instantaneous frequency of ...

    And 22 more