ABSTRACT In many applications of Independent Component Analysis (ICA) and Blind Source Separation (BSS), the estimated source signals and the mixing or separating matrices have some special structure, or constraints are imposed on the matrices, such as symmetry, orthogonality, non-negativity, sparseness, or a specified invariant norm of the separating matrix. In this paper we present several algorithms and review some known transformations which allow us to preserve several important constraints.
SUMMARY In many applications of Independent Component Analysis (ICA) and Blind Source Separation (BSS), the estimated source signals and the mixing or separating matrices have some special structure, or constraints are imposed on the matrices, such as symmetry, orthogonality, non-negativity, sparseness, or a specified invariant norm of the separating matrix. In this paper we present several algorithms and review some known transformations which allow us to preserve several important constraints. Computer simulation experiments confirmed the validity and usefulness of the developed algorithms.
In many applications of independent component analysis (ICA) and blind source separation (BSS), the mixing or separating matrices have some special structure, or constraints are imposed on the matrices, such as symmetry, orthogonality, non-negativity, sparseness, or a unit (or other specified invariant) norm of the matrix. We present several algorithms and review some known transformations which allow us to preserve such constraints. In particular, we propose algorithms for a blind identification problem with non-negativity constraints.
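Two of the constraint types named above have well-known preserving transformations that can be sketched numerically. The following is a minimal numpy illustration (not code from the paper): symmetric orthogonalization, which maps any full-rank matrix to an orthogonal one via its SVD, and elementwise projection onto the non-negative orthant.

```python
import numpy as np

def sym_orthogonalize(W):
    # Symmetric orthogonalization: W <- (W W^T)^{-1/2} W.
    # With the SVD W = U S V^T this equals U V^T, so afterwards W W^T = I.
    u, _, vt = np.linalg.svd(W, full_matrices=False)
    return u @ vt

def project_nonnegative(A):
    # Elementwise projection onto the non-negative orthant.
    return np.maximum(A, 0.0)

rng = np.random.default_rng(0)
W = sym_orthogonalize(rng.standard_normal((3, 3)))
print(np.allclose(W @ W.T, np.eye(3)))  # True: orthogonality constraint holds

A = project_nonnegative(rng.standard_normal((3, 3)))
print((A >= 0).all())  # True: non-negativity constraint holds
```

Applying such a projection after each update of an unconstrained learning rule is one common way to keep the iterates on the constraint set.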
Preface. List of Contributors.
1. Optimizing Organ Allocation and Acceptance: Oguzhan Alagoz, Andrew J. Schaefer, Mark S. Roberts.
2. Can We Do Better? Optimization Models for Breast Cancer Screening: Julie Simmons Ivy.
3. Optimization Models and Computational Approaches for Three-dimensional Conformal Radiation Treatment Planning: Gino J. Lim.
4. Continuous Optimization of Beamlet Intensities for Intensity Modulated Photon and Proton Radiotherapy: Rembert Reemtsen and Markus Alber.
5. Multicriteria Optimization in Intensity Modulated Radiotherapy Planning: Karl-Heinz Küfer, Michael Monz, Alexander Scherrer, Philipp Süss, Fernando Alonso, Ahmad Saher Azizi Sultan, Thomas Bortfeld, Christian Thieke.
6. Algorithms for Sequencing Multileaf Collimators: Srijit Kamath, Sartaj Sahni, Jatinder Palta, Sanjay Ranka, Jonathan Li.
7. Image Registration and Segmentation Based on Energy Minimization: Michael Hintermüller, Stephen L. Keeling.
8. Optimization Techniques for Data Representations wi...
... of International Conference on Acoustics, Speech, and Signal Processing (ICASSP 2004), Montreal, Canada, May 17-21, 2004. [GCB04] Georgiev P., Cichocki A ... [HKO01] A. Hyvärinen, J. Karhunen and E. Oja, Independent Component Analysis, John Wiley & Sons, 2001. ...
Kernel Principal Component Analysis (KPCA) is a dimension reduction method that is closely related to Principal Component Analysis (PCA). This report gives an overview of kernel PCA and presents an implementation of the method in MATLAB. The implemented method is tested in a transductive setting on two databases. Two methods for labeling data points are considered, the nearest neighbor method and kernel regression, together with some possible improvements of the methods.
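The core KPCA computation can be sketched independently of the report's MATLAB code. The following Python/numpy sketch uses an RBF kernel and invented toy data; the kernel choice, `gamma` value, and data are assumptions for illustration only.

```python
import numpy as np

def kernel_pca(X, n_components, gamma=1.0):
    # Minimal KPCA sketch (RBF kernel), not the report's MATLAB implementation.
    # 1) Gram matrix K_ij = exp(-gamma * ||x_i - x_j||^2)
    sq = np.sum(X**2, axis=1)
    K = np.exp(-gamma * (sq[:, None] + sq[None, :] - 2.0 * X @ X.T))
    # 2) Center the kernel matrix in feature space: Kc = K - 1K - K1 + 1K1
    n = K.shape[0]
    one = np.full((n, n), 1.0 / n)
    Kc = K - one @ K - K @ one + one @ K @ one
    # 3) Eigendecompose; normalize eigenvectors so projections have unit scale
    vals, vecs = np.linalg.eigh(Kc)
    idx = np.argsort(vals)[::-1][:n_components]
    alphas = vecs[:, idx] / np.sqrt(np.maximum(vals[idx], 1e-12))
    # 4) Coordinates of the training points in the leading components
    return Kc @ alphas

# Toy data: two Gaussian blobs in 2D (invented for the demo)
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 1.0, (20, 2)), rng.normal(5.0, 1.0, (20, 2))])
Z = kernel_pca(X, n_components=2)
print(Z.shape)  # (40, 2)
```

In a transductive setting, as in the report, labeled and unlabeled points would all be included in the Gram matrix before the embedding is computed.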
We extend the idea of reproducing kernel Hilbert spaces (RKHS) to Banach spaces, developing a theory of pairs of reproducing kernel Banach spaces (RKBS) without requiring the existence of a semi-inner product (a requirement already explored in another construction of RKBS). We present several natural examples, which involve RKBS of functions with the supremum norm and with the l_p-norm (1 ≤ p ≤ ∞). Special attention is devoted to the case of a pair of RKBS \((B, {B}^{\sharp})\) in which B has the sup-norm and \({B}^{\sharp}\) has the l_1-norm. Namely, we show that if \((B, {B}^{\sharp})\) is generated by a universal kernel and B is furnished with the sup-norm, then \({B}^{\sharp}\), furnished with the l_1-norm, is linearly isomorphically embedded in the dual of B. We reformulate the classical classification problem (support vector machine classifier) in RKBS and suggest that it will have sparse solutions when the RKBS is furnished with the l_1-norm.
... For example, for a mixture of two uniformly distributed sources, independent components and sparse components are quite ... 2. ROBUST SPARSE SIGNAL REPRESENTATIONS Sparse Component Analysis (SCA) and sparse signal representations (SSR) arise in many scientific ...
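One standard way to compute a sparse signal representation over a redundant dictionary is a greedy pursuit. The following Orthogonal Matching Pursuit sketch in numpy is a generic SSR algorithm, not necessarily the method of the paper; the dictionary and sparse vector are invented for the demo.

```python
import numpy as np

def omp(D, x, n_nonzero):
    # Orthogonal Matching Pursuit: greedily select atoms (columns of D,
    # assumed unit-norm) and refit their coefficients by least squares.
    residual = x.copy()
    support = []
    for _ in range(n_nonzero):
        # Atom most correlated with the current residual
        j = int(np.argmax(np.abs(D.T @ residual)))
        support.append(j)
        # Least-squares fit of x on the selected atoms
        coef, *_ = np.linalg.lstsq(D[:, support], x, rcond=None)
        residual = x - D[:, support] @ coef
    s = np.zeros(D.shape[1])
    s[support] = coef
    return s

rng = np.random.default_rng(0)
D = rng.standard_normal((20, 50))
D /= np.linalg.norm(D, axis=0)          # unit-norm atoms
s_true = np.zeros(50)
s_true[[3, 17, 42]] = [1.5, -2.0, 0.7]  # 3-sparse coefficient vector
x = D @ s_true                          # noiseless sparse mixture
s_hat = omp(D, x, n_nonzero=3)
print(np.count_nonzero(s_hat))
```

For a random Gaussian dictionary of this size, a 3-sparse vector is typically recovered exactly, though greedy pursuits carry no unconditional guarantee.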
Abstract: We show how the necessary conditions for a local minimum can be used to obtain a sufficient optimality condition of first order for a global minimum of a locally Lipschitz function on a closed convex set. Using a theorem of Clarke, we obtain a short proof and an extension to Banach spaces of a result of Hiriart-Urruty and Ledyaev.
Abstract We define a fuzzy subspace skeleton of data points and propose an algorithm for finding it. Such a skeleton is connected with data representation: if the data points (represented as columns of a given matrix X) belong exactly to this fuzzy skeleton, then under some ...
Research Interests: Mathematics, Computer Science, Signal Processing, Independent Component Analysis, Convergence, Optimality Theory, Blind Source Separation, Algorithm, Signals, Eigenvalues, Lebesgue measure, Matrix Decomposition, Covariance Matrix, Eigenvalues and Eigenvectors, Mixture, White Noise, Blind Signal Separation, Correlation function, Eigenvalue Decomposition, and Symmetric Matrices
Research Interests: Mathematics, Computer Science, Biophysics, Data Mining, Statistical Analysis, Independent Component Analysis, Blind Source Separation, High Frequency, Sparse Matrices, IEEE, Dictionaries, Identifiability, Matrix Multiplication, Physik, Sparse Matrix, DDC, Boolean Satisfiability, Sparse component analysis, Blind Signal Separation, and permutation matrix
We introduce identifiability conditions for the blind source separation (BSS) problem, combining second- and fourth-order statistics. We prove that under these conditions, well-known methods (such as eigenvalue decomposition and joint diagonalization) can be applied with probability one, i.e. the set of parameters for which such a method does not solve the BSS problem has measure zero.
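The second-order route mentioned above can be illustrated with a generic AMUSE-style two-step procedure: whiten the observations, then diagonalize a time-lagged covariance matrix. This numpy sketch is an illustration of the technique, not the paper's algorithm; the two sources (one temporally smoothed, one white) and the mixing matrix are invented for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000
# Source 1: moving-average filtered noise (strong lag-1 autocorrelation);
# source 2: i.i.d. uniform noise (near-zero lag-1 autocorrelation).
s1 = np.convolve(rng.standard_normal(n + 9), np.ones(10) / 10, mode="valid")
s2 = rng.uniform(-1.0, 1.0, n)
S = np.vstack([s1, s2])
A = np.array([[1.0, 0.6], [0.4, 1.0]])  # mixing matrix (invented)
X = A @ S                                # observed mixtures

# Step 1: whitening via eigendecomposition of the zero-lag covariance.
X = X - X.mean(axis=1, keepdims=True)
d, E = np.linalg.eigh(X @ X.T / n)
W_white = E @ np.diag(1.0 / np.sqrt(d)) @ E.T
Z = W_white @ X

# Step 2: diagonalize the (symmetrized) lag-1 covariance of the whitened data.
tau = 1
C_tau = Z[:, tau:] @ Z[:, :-tau].T / (n - tau)
_, V = np.linalg.eigh((C_tau + C_tau.T) / 2)
Y = V.T @ Z   # estimated sources, up to sign and permutation
```

The method succeeds here precisely because the two sources have distinct lagged-covariance eigenvalues; the identifiability conditions in the paper formalize when such distinctness holds generically.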
In this paper, we consider several kinds of convexity and semicontinuity of multifunctions with respect to ordering cones, and we investigate inherited properties of convexity and semicontinuity through scalarization of multifunctions. We introduce four kinds of scalarizing functions to characterize images of a multifunction by using Tchebyshev scalarization; these (real-valued) scalarizing functions have the same sorts of convexity and semicontinuity which correspond to those of parent multifunctions.
We present two methods for data representations based on matrix factorization: Independent Component Analysis (ICA) and Sparse Component Analysis (SCA). Our presentation focuses on the mathematical foundation of ICA and SCA in optimization theory, which appears sufficient for a rigorous justification of the methods, although ICA methods are usually justified by principles from physics, such as ...
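The shared factorization model behind both methods is X = AS, and it is determined only up to permutation and scaling of the factors, a well-known indeterminacy of BSS. A short numpy demonstration with invented toy data:

```python
import numpy as np

rng = np.random.default_rng(0)
S = rng.uniform(-1.0, 1.0, (3, 1000))  # sources (rows of S)
A = rng.standard_normal((4, 3))        # mixing matrix
X = A @ S                              # observed data: X = A S

# The factors are not unique: for any permutation P and invertible
# diagonal D, (A P^T D^{-1}) and (D P S) give the very same X.
P = np.eye(3)[[2, 0, 1]]               # a permutation matrix
D = np.diag([2.0, -1.0, 0.5])          # a diagonal scaling matrix
A2 = A @ P.T @ np.linalg.inv(D)
S2 = D @ P @ S
print(np.allclose(A2 @ S2, X))  # True: same data, different factorization
```

ICA and SCA resolve this ambiguity only up to permutation and scale, by imposing independence or sparsity on the rows of S.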
Abstract Solutions φ(x) of the functional equation φ(φ(x)) = f(x) are called iterative roots of the given function f(x). They are of interest in dynamical systems, chaos and complexity theory, and also in the modeling of certain industrial and financial processes. The problem of computing this “square root” of a function or operator remains a hard task. While the theory of functional equations provides some insight for real- and complex-valued functions, iterative roots of nonlinear mappings from R^n to R^n are less studied from a theoretical and ...
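A closed-form example makes the definition concrete: for f(x) = x² on the positive reals, φ(x) = x^√2 is an iterative root, since φ(φ(x)) = x^(√2·√2) = x². A quick numerical check (this example is standard and not taken from the paper):

```python
import math

# f(x) = x**2 on (0, inf); phi(x) = x**sqrt(2) satisfies phi(phi(x)) = f(x),
# because exponents multiply under composition: sqrt(2) * sqrt(2) = 2.
f = lambda x: x**2
phi = lambda x: x**math.sqrt(2)

for x in [0.5, 1.0, 2.0, 10.0]:
    assert abs(phi(phi(x)) - f(x)) < 1e-9
print("phi(phi(x)) == x**2 on all test points")
```

For general nonlinear mappings on R^n no such closed form exists, which is what makes the numerical computation of iterative roots hard.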
ABSTRACT We consider four variants of Fan's type inequality for vector-valued multifunctions in topological vector spaces with respect to a cone preorder in the target space. To establish these results, we first prove a two-function result of Simons directly from the scalar Fan inequality; then, with its help, we derive a new two-function result, which is the basis of our proofs.