Pedro Forero

    Using data-driven methods for characterizing spectral and spatiotemporal structures in underwater acoustic environments complements the traditional use of physics-based acoustic propagation models that often focus on capturing average environmental characteristics. One-way clustering of acoustic data in time, space, and frequency can reveal individual, that is, per-mode structure, but fails to identify couplings among the structures in these modes. Co-clustering is a clustering approach that can identify groups of similar elements (co-clusters) across all modes in a tensor. Our proposed co-clustering approach acts on a tensor built from acoustic data captured by a hydrophone array. It identifies the co-clusters, and the spectral and spatiotemporal structure that defines them, by coupling one-way clustering with a Tucker approximation model. One-way clustering reveals the tensor structure per mode, namely time, frequency, and space, while the Tucker model captures trilinear structures ...
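    As a rough illustration of how the two ingredients couple, one can picture an objective of the following form, where the membership matrices M_t, M_f, M_s and the core tensor G are illustrative notation rather than the paper's:

    \min_{\mathbf{M}_t,\mathbf{M}_f,\mathbf{M}_s,\mathcal{G}} \ \bigl\| \mathcal{X} - \mathcal{G} \times_1 \mathbf{M}_t \times_2 \mathbf{M}_f \times_3 \mathbf{M}_s \bigr\|_F^2

    Here X is the hydrophone-array data tensor, each membership matrix assigns the indices of one mode (time, frequency, or space) to clusters, and each entry of the core tensor G acts as the prototype of one co-cluster; the mode products are what tie the per-mode clusterings together.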
    Notwithstanding the popularity of conventional clustering algorithms such as K-means and probabilistic clustering, their clustering results are sensitive to the presence of outliers in the data. Even a few outliers can compromise the ability of these algorithms to identify meaningful hidden structures, rendering their outcome unreliable. This paper develops robust clustering algorithms that not only aim to cluster the data, but also to identify the outliers. The novel approaches rely on the infrequent presence of outliers in the data, which translates to sparsity in a judiciously chosen domain. Capitalizing on the sparsity in the outlier domain, outlier-aware robust K-means and probabilistic clustering approaches are proposed. Their novelty lies in identifying outliers while effecting sparsity in the outlier domain through carefully chosen regularization. A block coordinate descent approach is developed to obtain iterative algorithms with convergence guarantees and small excess com...
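    A minimal sketch of such an outlier-aware K-means objective, with illustrative notation (o_n is the outlier vector of the n-th datum and lambda controls how many data are flagged), is

    \min_{\{\boldsymbol{\mu}_k\},\{b_{nk}\},\{\mathbf{o}_n\}} \ \sum_{n=1}^{N} \sum_{k=1}^{K} b_{nk}\,\|\mathbf{x}_n - \boldsymbol{\mu}_k - \mathbf{o}_n\|_2^2 \ + \ \lambda \sum_{n=1}^{N} \|\mathbf{o}_n\|_2, \qquad b_{nk}\in\{0,1\},\ \ \textstyle\sum_k b_{nk}=1.

    The group-lasso term drives most o_n to zero; a datum whose estimated o_n is nonzero is declared an outlier, and block coordinate descent cycles through assignments, centroids, and outlier vectors.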
    Co-clustering of tensor data is an unsupervised learning task aiming to identify multidimensional structures hidden in a tensor. These structures are critical for understanding interdependencies across variables belonging to different tensor dimensions, often referred to as modes, which are frequently disregarded when tensor data are represented via one- or two-dimensional data structures. This work proposes a new tensor co-clustering algorithm that uses a class of Bregman divergences to measure the coherence of co-clusters on an individual mode basis, while ensuring that the interactions of their prototyping elements capture the tensor intra-modal structure. A co-clustering algorithm based on the alternating-direction method of multipliers is developed. The proposed algorithm decouples the co-clustering problem into an iterative two-step process whose steps are reminiscent of classical one-way clustering and Tucker decomposition problems. The performance of the proposed method is i...
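    One way to picture the two-step decoupling, under an illustrative variable splitting that is not necessarily the paper's, is a consensus ADMM iteration of the form

    \mathcal{A}^{(k+1)} = \arg\min_{\mathcal{A}} \ f_{\mathrm{clust}}(\mathcal{A}) + \tfrac{\rho}{2}\|\mathcal{A}-\mathcal{B}^{(k)}+\mathcal{U}^{(k)}\|_F^2, \quad \mathcal{B}^{(k+1)} = \arg\min_{\mathcal{B}} \ g_{\mathrm{Tucker}}(\mathcal{B}) + \tfrac{\rho}{2}\|\mathcal{A}^{(k+1)}-\mathcal{B}+\mathcal{U}^{(k)}\|_F^2, \quad \mathcal{U}^{(k+1)} = \mathcal{U}^{(k)} + \mathcal{A}^{(k+1)} - \mathcal{B}^{(k+1)},

    where f_clust collects the Bregman-divergence co-cluster coherence terms and g_Tucker enforces the Tucker-type intra-modal structure; the first subproblem then resembles a one-way clustering step and the second a Tucker decomposition step.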
    Advances in technology, systems, computation and storage have led to smaller, more capable underwater vessels with reduced capital and operating costs. Long-lived underwater hubs that can operate with minimal human intervention can be networked and deployed together with other underwater vehicles for diverse uses. In this paper, we propose APOLL, an advanced polling-based medium access control (MAC) algorithm that improves on existing CSMA and TDMA algorithms by intelligently allocating transmission opportunities and exploiting known characteristics of the acoustic environment. The paper also presents initial results from a simulation study on the performance of APOLL. The results demonstrate that APOLL outperforms a CSMA/MACA-based MAC protocol and TDMA in terms of traffic throughput and channel utilization in typical underwater acoustic environments.
    Anomalies in data have traditionally been considered as nuisances whose presence, if ignored, can bring detrimental effects on the output of many data processing tasks. Nevertheless, in many situations anomalies correspond to events of interest and as such should be promptly identified before their presence is masked by the data preprocessing schemes being used to reduce the complexity of the main data processing task. This work develops a robust dictionary learning algorithm that exploits the notions of sparsity and local geometry of the data to identify anomalies while constructing sparse representations for the data. Sparsity is used to model the presence of anomalies in a dataset, and local geometry is exploited to better qualify a datum as an anomaly. The robust dictionary learning problem is cast as a regularized least-squares problem where sparsity-inducing and Laplacian regularization terms are used. Efficient iterative solvers based on block-coordinate descent and proximal gradient are developed to tackle the resulting joint dictionary learning and anomaly detection problems. The proposed framework is extended to address variations of classical dictionary learning and matrix factorization problems. Numerical tests on real datasets with artificial and real anomalies are used to illustrate the performance of the proposed algorithms.
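    A hedged sketch of a composite objective consistent with this description (symbols and weights are illustrative) is

    \min_{\mathbf{D},\mathbf{S},\mathbf{O}} \ \|\mathbf{X}-\mathbf{D}\mathbf{S}-\mathbf{O}\|_F^2 \ + \ \lambda_s \|\mathbf{S}\|_1 \ + \ \lambda_o \sum_{n} \|\mathbf{o}_n\|_2 \ + \ \lambda_g \,\mathrm{tr}\!\left(\mathbf{S}\mathbf{L}\mathbf{S}^{\top}\right),

    where X collects the data, D is the dictionary, S holds the sparse codes, the columns o_n of O absorb anomalies, and L is a graph Laplacian encoding the local geometry of the data; block-coordinate descent and proximal-gradient steps then alternate over D, S, and O.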
    Matched-field processing techniques can achieve localization of undersea acoustic sources in both range and depth when sufficient environmental information is available. Unfortunately, these techniques are sensitive to environmental mismatch and often fail when localizing multiple acoustic sources. This work presents a family of acoustic source-localization techniques that similarly to matched-field processing exploit environmental information for localizing acoustic sources in both range and depth. Unique features of these methods are their explicit use of a sparse representation of the source-localization map and ability to model environmental mismatch. Tools from the areas of compressive sensing and mathematical optimization are leveraged for developing computationally tractable solvers that enable fast processing of high-dimensional source-localization maps. These localization techniques are also extended for tracking multiple acoustic sources. In this case, it is possible to exploit the inherent spar...
    Multidimensional scaling (MDS) seeks an embedding of N objects in a p < N dimensional space such that inter-vector distances approximate pair-wise object dissimilarities. Despite their popularity, MDS algorithms are sensitive to outliers, yielding grossly erroneous embeddings even if only a few outliers contaminate the available dissimilarities. This work introduces a robust MDS approach exploiting the degree of sparsity in the outliers present. Links with compressive sampling lead to a robust MDS solver capable of coping with outliers. The novel algorithm relies on a majorization-minimization (MM) approach to minimize a regularized stress function, whereby an iterative MDS solver involving Lasso operators is obtained. The resulting scheme identifies outliers and obtains the desired embedding at a computational cost comparable to that of non-robust MDS alternatives. Numerical tests illustrate the merits of the proposed algorithm.
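    A minimal sketch of a regularized stress function of this kind, with illustrative notation, is

    \min_{\{\mathbf{x}_i\},\{o_{ij}\}} \ \sum_{i<j} \bigl(\delta_{ij} - \|\mathbf{x}_i-\mathbf{x}_j\|_2 - o_{ij}\bigr)^2 \ + \ \lambda \sum_{i<j} |o_{ij}|,

    where the delta_ij are the given dissimilarities, the x_i are the p-dimensional embedding vectors, and the sparse o_ij absorb outlying dissimilarities; within the MM iterations the o_ij update reduces to a soft-thresholding (Lasso-type) operation, which is where the link to compressive sampling enters.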
    Online outlier detection is fundamental for expediting the processing of data and focusing processing resources on portions of data that may be most informative. This work develops online robust dictionary learning algorithms that are able to identify outliers in the training data. The algorithms are based on lasso updates for computing the vector of expansion coefficients for a new training vector and gradient descent updates for updating the dictionary. An outlier is identified based on the so-called outlier vector. The weight associated with the group lasso regularizer that encourages an outlier vector to be set to zero is computed based on the outlierness score of the corresponding training data vector. Outlier vectors are thus more likely to be nonzero if they feature a high outlierness score. Outlierness scores are obtained from density-based outlier detection algorithms and help to enhance the selection of outliers. Both soft and hard outlier removal algorithms are developed. In the latter case, outliers are identified and a residual obtained after removing the outlier contribution is used to update the dictionary. The performance of the proposed algorithms is illustrated via numerical experiments on real video data.
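    The following is a minimal numpy sketch of one streaming update of this kind; the weighting w_o = lam_o / (1 + score), the step sizes, and the toy outlierness score are illustrative stand-ins, not the paper's exact choices.

    import numpy as np

    def soft(z, t):                       # elementwise soft-thresholding (lasso prox)
        return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

    def group_soft(v, t):                 # block soft-thresholding (group-lasso prox)
        nrm = np.linalg.norm(v)
        return np.zeros_like(v) if nrm <= t else (1.0 - t / nrm) * v

    def online_robust_dl_step(D, x, score, lam_s=0.1, lam_o=0.5, eta=0.01, iters=50):
        """One streaming update: sparse code s, outlier vector o, dictionary D."""
        s = np.zeros(D.shape[1])
        o = np.zeros_like(x)
        L = np.linalg.norm(D, 2) ** 2 + 1e-12        # Lipschitz step size for the s-update
        w_o = lam_o / (1.0 + score)                  # higher outlierness -> weaker penalty on o
        for _ in range(iters):                       # alternate proximal updates of s and o
            s = soft(s - (D.T @ (D @ s + o - x)) / L, lam_s / L)
            o = group_soft(x - D @ s, w_o)           # a nonzero o flags x as an outlier
        D -= eta * np.outer(D @ s + o - x, s)        # gradient step on the dictionary
        D /= np.maximum(np.linalg.norm(D, axis=0), 1.0)  # keep atom norms bounded
        return D, s, o

    rng = np.random.default_rng(0)                   # toy usage on a random data stream
    D = rng.standard_normal((20, 10))
    for t in range(100):
        x = rng.standard_normal(20)
        D, s, o = online_robust_dl_step(D, x, score=float(np.linalg.norm(x) > 6.0))

    A hard-removal variant would use the residual x - o, with the flagged outlier contribution excluded, when updating the dictionary.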
    This work focused on estimating the acoustic source gains of broadband sources in a shallow-water environment. The results show that accurate acoustic source gain estimation is possible if the sources are correctly localized. Knowledge about the environment was a fundamental enabler for acoustic source gain estimation. When dealing with multiple sources, source gain estimation does not perform as well, but if localization is achieved, it can still give accurate results at higher signal-to-noise ratios. Future research will explore how this algorithm performs on real data sets and in the presence of environmental uncertainty.
    This work examines joint anomaly detection and dictionary learning approaches for identifying anomalies in persistent surveillance applications that require data compression. We have developed a sparsity-driven anomaly detector that can be used for learning dictionaries to address these challenges. In our approach, each training datum is modeled as a sparse linear combination of dictionary atoms in the presence of noise. The noise term is modeled as additive Gaussian noise and a deterministic term models the anomalies. However, no model for the statistical distribution of the anomalies is made. An estimator is postulated for a dictionary that exploits the fact that since anomalies by definition are rare, only a few anomalies will be present when considering the entire dataset. From this vantage point, we endow the deterministic noise term (anomaly-related) with a group-sparsity property. A robust dictionary learning problem is postulated where a group-lasso penalty is used to encourage most anomaly-related noise components to be zero. The proposed estimator achieves robustness by both identifying the anomalies and removing their effect from the dictionary estimate. Our approach is applied to the problem of ship detection and tracking from full-motion video with promising results.
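    With that group-sparsity model in place, the anomaly term admits a simple closed-form update; under an illustrative (1/2)\|\mathbf{x}_n-\mathbf{D}\mathbf{s}_n-\mathbf{a}_n\|_2^2+\lambda\|\mathbf{a}_n\|_2 convention for the n-th training datum, minimizing over the anomaly column gives

    \hat{\mathbf{a}}_n = \max\!\Bigl(1-\frac{\lambda}{\|\mathbf{r}_n\|_2},\,0\Bigr)\mathbf{r}_n, \qquad \mathbf{r}_n = \mathbf{x}_n - \mathbf{D}\mathbf{s}_n,

    so a datum is declared anomalous only when its representation residual exceeds the group-lasso threshold, and the flagged contribution is removed before the dictionary is refit.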
    Passive sonar is an attractive technology for underwater acoustic-source localization that enables the localization system to conceal its presence and does not perturb the maritime environment. Notwithstanding its appeal, passive-sonar-based localization is challenging due to the complexities of underwater acoustic propagation. Different from alternatives based on matched-field processing, whose localization performance severely deteriorates when localizing multiple sources and when faced with model mismatch, this work casts the broadband underwater acoustic-source localization problem as a multitask learning (MTL) problem, thereby enabling robust and high-resolution localization. Here, each task refers to a sparse signal approximation problem over a single frequency. MTL provides an elegant framework for exchanging information across the individual regression problems and constructing an aggregate (across frequencies) source localization map. The localization problem is formulated as a stochastic least-squ...
    Underwater source localization via passive sonar is a challenging task due to the dynamic and complex nature of the acoustic environment. Different from approaches based on matched-field processing, this work explores broadband underwater source localization within a multitask learning (MTL) framework. Here, each task refers to a robust signal approximation problem over a single frequency. MTL provides a natural framework for exchanging information across the narrowband signal-approximation problems and constructing an aggregate (across frequencies) source-localization map. Efficient algorithms based on block coordinate descent (BCD) are developed for solving the source-localization problem. Complex-valued predictor screening rules for reducing the computational complexity of the algorithm are also developed. These rules discard map locations from the set of possible source locations prior to using BCD. They reduce the computational complexity of the localization algorithm without compromising the localization results. Tests of these approaches on synthetic and real data for the SWellEX-3 environment compare the performance of the proposed algorithm to that of alternative methods.
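    A minimal numpy sketch of the block-coordinate structure is given below, assuming unit-norm replica columns and a group penalty that couples each grid location across frequencies, with a crude correlation pre-screen standing in for the paper's complex-valued screening rules; the dictionaries, the threshold, and the 0.1 factor are illustrative.

    import numpy as np

    def broadband_bcd(A, y, lam, iters=100, screen=True):
        """A: list of F (M x G) complex replica dictionaries with unit-norm columns;
        y: list of F length-M array snapshots (one per frequency)."""
        F, G = len(A), A[0].shape[1]
        W = np.zeros((F, G), dtype=complex)          # per-frequency map coefficients
        active = np.arange(G)
        if screen:                                   # heuristic pre-screen: discard grid points
            corr = np.array([np.linalg.norm([A[f][:, g].conj() @ y[f] for f in range(F)])
                             for g in range(G)])     # weakly correlated with all snapshots
            active = np.where(corr >= 0.1 * corr.max())[0]
        for _ in range(iters):
            for g in active:                         # update one grid location (block) at a time
                z = np.array([A[f][:, g].conj() @ (y[f] - A[f] @ W[f] + A[f][:, g] * W[f, g])
                              for f in range(F)])
                nrm = np.linalg.norm(z)
                W[:, g] = max(1.0 - lam / nrm, 0.0) * z if nrm > 0 else 0.0
        return np.linalg.norm(W, axis=0)             # aggregate (across-frequency) localization map

    Locations whose across-frequency coefficient vector survives the block soft-threshold appear as peaks in the returned map; unlike the paper's screening rules, the crude pre-screen above is not guaranteed to preserve the solution.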
    Passive sonar is an attractive technology for stealthy underwater source localization. Notwithstanding its appeal, passive-sonar-based localization is challenging due to the complexities of underwater acoustic propagation. This work casts broadband underwater source localization as a multitask learning (MTL) problem, where each task refers to a robust sparse signal approximation problem over a single frequency. MTL provides a framework for exchanging information across the individual regression problems and constructing an aggregate (across frequencies) source localization map. Efficient algorithms based on block coordinate descent are developed for solving the localization problem. Numerical tests on the SWellEX-3 dataset illustrate and compare the localization performance of the proposed algorithm to that of competitive alternatives.
    Using passive sonar for underwater acoustic source localization in a shallow-water environment is challenging due to the complexities of underwater acoustic propagation. Matched-field processing (MFP) exploits both measured and model-predicted acoustic pressures to localize acoustic sources. However, the ambiguity surface obtained through MFP contains artifacts that limit its ability to reveal the location of the acoustic sources. This work introduces a robust scheme for shallow-water source localization that exploits the inherent sparse structure of the localization problem and the use of a model characterizing the acoustic propagation environment. To this end, the underwater acoustic source-localization problem is cast as a sparsity-inducing stochastic optimization problem that is robust to model mismatch. The resulting source-location map (SLM) yields reduced ambiguities and improved resolution, even at low signal-to-noise ratios, when compared to those obtained via classical MFP approaches. An iterative solver based on block-coordinate descent is developed whose computational complexity per iteration is linear with respect to the number of locations considered for the SLM. Numerical tests illustrate the performance of the algorithm.
    Matched-field processing (MFP) is a generalization of classical beamforming that has been traditionally used in underwater source localization problems. However, MFP suffers from low resolution and sensitivity to model mismatch, and it is challenged when more than one source is present. This work develops a robust high-resolution underwater source localization algorithm that capitalizes on the sparsity inherent in the underwater source localization problem. Similar to MFP, the sparsity-cognizant approach developed here capitalizes on a model for the acoustic propagation environment and casts the localization problem as a regularized least-squares (LS) one. The resulting regularizer encourages sparsity on the grid-based source location map. An efficient solver whose computational complexity scales linearly with the grid size is developed and its performance illustrated via numerical tests.
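    A minimal sketch of the grid-based formulation, in illustrative notation, is

    \hat{\mathbf{s}} = \arg\min_{\mathbf{s}} \ \|\mathbf{p} - \mathbf{A}\mathbf{s}\|_2^2 + \lambda \|\mathbf{s}\|_1,

    where p stacks the measured array pressures, each column of A is the model-predicted replica pressure field for one candidate grid location, and the few nonzero entries of the estimate mark the source locations on the map; a proximal-gradient (ISTA-type) solver applied to this problem has a per-iteration cost that grows linearly in the number of grid points for a fixed array size.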
    This paper develops algorithms to train linear support vector machines (SVMs) when training data are distributed across different nodes and their communication to a centralized node is prohibited due to, for example, communication overhead or privacy reasons. To accomplish this ...
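    As a simplified illustration of the setting (not the algorithm developed in the paper), the sketch below trains a linear SVM across nodes by having each node take subgradient steps on its local hinge loss plus a consensus term pulling it toward the average of the exchanged weight vectors, so raw training data never leave the nodes; the averaging topology, step size, and penalty rho are arbitrary illustrative choices.

    import numpy as np

    def local_subgradient(w, b, X, y, C):
        margins = y * (X @ w + b)
        viol = margins < 1.0                          # examples violating the margin
        gw = w - C * (y[viol][:, None] * X[viol]).sum(axis=0)
        gb = -C * y[viol].sum()
        return gw, gb

    def distributed_svm(data, C=1.0, rho=1.0, step=1e-3, rounds=200):
        """data: list of (X_j, y_j) per node, with labels y_j in {-1, +1}."""
        d = data[0][0].shape[1]
        W = np.zeros((len(data), d)); b = np.zeros(len(data))
        for _ in range(rounds):
            w_avg, b_avg = W.mean(axis=0), b.mean()   # nodes exchange weights, not data
            for j, (Xj, yj) in enumerate(data):
                gw, gb = local_subgradient(W[j], b[j], Xj, yj, C)
                W[j] -= step * (gw + rho * (W[j] - w_avg))   # consensus-penalized step
                b[j] -= step * (gb + rho * (b[j] - b_avg))
        return W.mean(axis=0), b.mean()

    rng = np.random.default_rng(1)                    # toy usage: two nodes, disjoint data slices
    X = rng.standard_normal((200, 5)); y = np.sign(X[:, 0] + 0.1)
    w, b0 = distributed_svm([(X[:100], y[:100]), (X[100:], y[100:])])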
    Automatic modulation classification (AMC) is a critical prerequisite for demodulation of communication signals in tactical scenarios. Depending on the number of unknown parameters involved, the complexity of AMC can be prohibitive. Existing maximum-likelihood and feature-based ...