
A Statistically and Numerically Efficient Independence Test Based on Random Projections and Distance Covariance

Author(s):  
Cheng Huang ◽  
Xiaoming Huo

Testing for independence plays a fundamental role in many statistical techniques. Among the nonparametric approaches, distance-based methods (such as distance correlation-based hypothesis testing for independence) have many advantages compared with other alternatives. A known limitation of distance-based methods is their potentially high computational complexity. In general, when the sample size is n, a distance-based method, which typically requires computing all pairwise distances, has computational complexity of order O(n²). Recent advances have shown that in the univariate case, a fast method with O(n log n) computational complexity and O(n) memory requirement exists. In this paper, we introduce a method for testing independence based on random projection and distance correlation, which achieves nearly the same power as the state-of-the-art distance-based approach, works in the multivariate case, and enjoys O(nK log n) computational complexity and O(max{n, K}) memory requirement, where K is the number of random projections. Note that savings are achieved when K < n/log n. We name our method Randomly Projected Distance Covariance (RPDC). The theoretical analysis takes advantage of techniques on random projection that are rooted in contemporary machine learning. Numerical experiments demonstrate the efficiency of the proposed method relative to numerous competitors.
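As a rough illustration of the projection-averaging idea behind RPDC (and not the paper's fast O(n log n) estimator), the sketch below projects multivariate samples onto random directions and averages a naive squared distance covariance over K projections; the function names, the Gaussian projection directions, and the O(n²)-per-projection estimator are our own simplifications.

```python
import numpy as np

def dcov2_uni(x, y):
    """Naive O(n^2) squared distance covariance of two 1-D samples."""
    a = np.abs(x[:, None] - x[None, :])
    b = np.abs(y[:, None] - y[None, :])
    # double-centre the pairwise distance matrices
    A = a - a.mean(axis=0) - a.mean(axis=1)[:, None] + a.mean()
    B = b - b.mean(axis=0) - b.mean(axis=1)[:, None] + b.mean()
    return float((A * B).mean())

def rpdc_stat(X, Y, K=50, seed=None):
    """Average squared distance covariance over K random 1-D projections of X and Y."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    _, q = Y.shape
    vals = []
    for _ in range(K):
        u = rng.standard_normal(p); u /= np.linalg.norm(u)
        v = rng.standard_normal(q); v /= np.linalg.norm(v)
        vals.append(dcov2_uni(X @ u, Y @ v))
    return float(np.mean(vals))

# usage (hypothetical): reject independence when rpdc_stat(X, Y) exceeds a
# permutation threshold obtained by repeatedly shuffling the rows of Y
```

In practice the univariate statistic would be computed with the O(n log n) algorithm and calibrated against its null distribution, which this sketch omits.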

Author(s):  
Stephan Schlupkothen ◽  
Gerd Ascheid

Abstract The localization of multiple wireless agents via, for example, distance and/or bearing measurements is challenging, particularly if relying on beacon-to-agent measurements alone is insufficient to guarantee accurate localization. In these cases, agent-to-agent measurements also need to be considered to improve the localization quality. In the context of particle filtering, the computational complexity of tracking many wireless agents is high when relying on conventional schemes. This is because in such schemes, all agents’ states are estimated simultaneously using a single filter. To overcome this problem, the concept of multiple particle filtering (MPF), in which an individual filter is used for each agent, has been proposed in the literature. However, due to the necessity of considering agent-to-agent measurements, additional effort is required to derive information on each individual filter from the available likelihoods. This is necessary because the distance and bearing measurements naturally depend on the states of two agents, which, in MPF, are estimated by two separate filters. Because the required likelihood cannot be analytically derived in general, an approximation is needed. To this end, this work extends current state-of-the-art likelihood approximation techniques based on Gaussian approximation under the assumption that the number of agents to be tracked is fixed and known. Moreover, a novel likelihood approximation method is proposed that enables efficient and accurate tracking. The simulations show that the proposed method achieves up to 22% higher accuracy with the same computational complexity as that of existing methods. Thus, efficient and accurate tracking of wireless agents is achieved.
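As a hedged illustration of the cross-agent likelihood problem described above (not the approximation proposed in the paper), one simple Monte Carlo approach marginalizes the agent-to-agent range likelihood over the other agent's particle cloud; the names and the Gaussian range model are assumptions.

```python
import numpy as np

def distance_likelihood(particles_i, particles_j, d_meas, sigma):
    """Approximate p(d_meas | x_i) for each particle of agent i by
    marginalizing over agent j's particle cloud.

    particles_i: (N, 2) positions of agent i's particles
    particles_j: (M, 2) positions of agent j's particles
    d_meas: measured agent-to-agent distance
    sigma: standard deviation of the range measurement noise
    """
    # predicted distances between every pair of particles: shape (N, M)
    diff = particles_i[:, None, :] - particles_j[None, :, :]
    pred = np.linalg.norm(diff, axis=-1)
    # Gaussian measurement model, averaged over agent j's particles
    lik = np.exp(-0.5 * ((d_meas - pred) / sigma) ** 2) / (np.sqrt(2 * np.pi) * sigma)
    return lik.mean(axis=1)   # (N,) per-particle likelihood for agent i

# usage (hypothetical): weights_i *= distance_likelihood(p_i, p_j, d_ij, sigma=0.5)
```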


Author(s):  
Xian Wang ◽  
Paula Tarrío ◽  
Ana María Bernardos ◽  
Eduardo Metola ◽  
José Ramón Casar

Many mobile devices nowadays embed inertial sensors. This enables new forms of human-computer interaction through the use of gestures (movements performed with the mobile device) as a way of communication. This paper presents an accelerometer-based gesture recognition system for mobile devices that is able to recognize a collection of 10 different hand gestures. The system was conceived to be lightweight and to operate in a user-independent manner in real time. The recognition system was implemented in a smartphone and evaluated through a collection of user tests, which showed a recognition accuracy similar to other state-of-the-art techniques and a lower computational complexity. The system was also used to build a human-robot interface that enables controlling a wheeled robot with gestures made with the mobile phone.
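For context only, a minimal template-matching recognizer for accelerometer traces can be sketched with dynamic time warping; this is a generic baseline, not the classifier used in the paper, and all names here are illustrative.

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic time warping distance between two accelerometer traces of
    shape (T, 3), using the Euclidean cost between 3-axis samples."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def classify(trace, templates):
    """Assign the label of the nearest template under DTW.
    templates: dict mapping gesture label -> representative (T, 3) trace."""
    return min(templates, key=lambda lbl: dtw_distance(trace, templates[lbl]))
```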


Acta Numerica ◽  
2014 ◽  
Vol 23 ◽  
pp. 369-520 ◽  
Author(s):  
G. Dimarco ◽  
L. Pareschi

In this survey we consider the development and mathematical analysis of numerical methods for kinetic partial differential equations. Kinetic equations represent a way of describing the time evolution of a system consisting of a large number of particles. Due to the high number of dimensions and their intrinsic physical properties, the construction of numerical methods represents a challenge and requires a careful balance between accuracy and computational complexity. Here we review the basic numerical techniques for dealing with such equations, including the case of semi-Lagrangian methods, discrete-velocity models and spectral methods. In addition we give an overview of the current state of the art of numerical methods for kinetic equations. This covers the derivation of fast algorithms, the notion of asymptotic-preserving methods and the construction of hybrid schemes.
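As a toy example of one technique named in the survey, the sketch below performs a single semi-Lagrangian step for the free-transport part f_t + v f_x = 0 on a periodic grid with linear interpolation; the grid layout and interpolation choice are our own assumptions, not a scheme from the survey.

```python
import numpy as np

def semi_lagrangian_step(f, x, v, dt):
    """One semi-Lagrangian step for the free-transport term f_t + v f_x = 0.

    f: (Nx, Nv) distribution values on a tensor grid
    x: (Nx,) uniform spatial grid (periodic, endpoint not duplicated)
    v: (Nv,) velocity grid
    Characteristics are traced back exactly: f^{n+1}(x, v) = f^n(x - v*dt, v),
    evaluated by periodic linear interpolation in x.
    """
    L = x[-1] - x[0] + (x[1] - x[0])   # periodic domain length
    f_new = np.empty_like(f)
    for j, vj in enumerate(v):
        x_back = x - vj * dt           # feet of the characteristics
        f_new[:, j] = np.interp(x_back, x, f[:, j], period=L)
    return f_new
```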


2020 ◽  
Vol 10 (3) ◽  
pp. 24
Author(s):  
Stefania Preatto ◽  
Andrea Giannini ◽  
Luca Valente ◽  
Guido Masera ◽  
Maurizio Martina

High Efficiency Video Coding (HEVC) is the latest video standard developed by the Joint Collaborative Team on Video Coding. HEVC is able to offer better compression results than preceding standards, but it suffers from high computational complexity. In particular, one of the most time-consuming blocks in HEVC is the fractional-sample interpolation filter, which is used in both the encoding and the decoding processes. Integrating different state-of-the-art techniques, this paper presents an architecture for interpolation filters that is able to trade quality for energy and power efficiency by exploiting approximate interpolation filters and by halving the amount of required memory with respect to state-of-the-art implementations.
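To make the interpolation-filter discussion concrete, the sketch below applies the standard 8-tap HEVC luma half-sample filter to a pixel row and contrasts it with a hypothetical shorter filter of the kind an approximate architecture might use; the 4-tap coefficients are illustrative only and are not those proposed in the paper.

```python
import numpy as np

# HEVC 8-tap luma half-sample filter (coefficients sum to 64, hence the >> 6 shift)
HEVC_HALF_PEL = np.array([-1, 4, -11, 40, 40, -11, 4, -1])

# Illustrative shorter "approximate" filter (NOT from the paper): fewer
# multiply-accumulate operations at the cost of interpolation accuracy
APPROX_HALF_PEL = np.array([-4, 36, 36, -4])

def interpolate_half_pel(row, taps):
    """Horizontal half-sample interpolation of one suitably padded pixel row."""
    acc = np.convolve(row.astype(np.int32), taps[::-1], mode='valid')  # correlation
    shift = int(np.log2(taps.sum()))          # 6 for both filters above
    return np.clip((acc + (1 << (shift - 1))) >> shift, 0, 255)
```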


Electronics ◽  
2020 ◽  
Vol 9 (6) ◽  
pp. 1046 ◽  
Author(s):  
Abeer D. Algarni ◽  
Ghada M. El Banby ◽  
Naglaa F. Soliman ◽  
Fathi E. Abd El-Samie ◽  
Abdullah M. Iliyasu

To circumvent the problems associated with traditional security systems' dependence on passwords, Personal Identification Numbers (PINs) and tokens, modern security systems adopt biometric traits that are inimitable to each individual for identification and verification. This study presents two different frameworks for secure person identification using cancellable face recognition (CFR) schemes. Exploiting its ability to guarantee irrevocability and rich diversity, both frameworks utilise Random Projection (RP) to encrypt the biometric traits. In the first framework, a hybrid structure combining Intuitionistic Fuzzy Logic (IFL) with RP is used to accomplish full distortion and encryption of the original biometric traits to be saved in the database, which helps to prevent unauthorised access to the biometric data. The framework involves transformation of spatial-domain greyscale pixel information to a fuzzy domain, where the original biometric images are disfigured and further distorted via random projections that generate the final cancellable traits. In the second framework, cancellable biometric traits are similarly generated via homomorphic transforms that use random projections to encrypt the reflectance components of the biometric traits. Here, the use of reflectance properties is motivated by their ability to retain most image details. The guarantee of non-invertibility of the cancellable biometric traits supports the rationale behind using another RP stage in both frameworks, since the outcomes of the IFL stage and of the reflectance component of the homomorphic transform are not, on their own, enough to recover the original biometric trait. Our CFR schemes are validated on different datasets that exhibit properties expected in actual application settings, such as varying backgrounds, lighting, and motion. Outcomes in terms of standard metrics, including the structural similarity index metric (SSIM) and the area under the receiver operating characteristic curve (AROC), suggest the efficacy of our proposed schemes across many applications that require person identification and verification.
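As a minimal sketch of the random-projection idea underlying cancellable templates (omitting the paper's IFL and homomorphic/reflectance stages), a key-seeded projection can produce a revocable, non-invertible template; the function names and the binarisation step are our assumptions.

```python
import numpy as np

def cancellable_template(features, user_key, out_dim=256):
    """Generate a revocable template by projecting a 1-D feature vector with a
    key-seeded random matrix; issuing a new key yields a new, unlinkable template."""
    rng = np.random.default_rng(user_key)
    R = rng.standard_normal((out_dim, features.size)) / np.sqrt(out_dim)
    return np.sign(R @ features)          # binarised, dimension-reducing projection

def match(template_a, template_b):
    """Normalized agreement (Hamming-style similarity) between two templates."""
    return float(np.mean(template_a == template_b))
```

Revoking a compromised template then amounts to issuing a new user_key and re-enrolling, which is the practical appeal of RP-based cancellable schemes.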


2018 ◽  
Vol 37 (4) ◽  
pp. 860-870 ◽  
Author(s):  
Jian Fang ◽  
Chao Xu ◽  
Pascal Zille ◽  
Dongdong Lin ◽  
Hong-Wen Deng ◽  
...  

2018 ◽  
Vol 14 (7) ◽  
pp. 155014771879075 ◽  
Author(s):  
Chi Yoon Jeong ◽  
Hyun S Yang ◽  
KyeongDeok Moon

In this article, we propose a fast method for detecting the horizon line in maritime scenarios by combining a multi-scale approach and region-of-interest detection. Recently, several methods that adopt a multi-scale approach have been proposed, because edge detection at a single scale is insufficient to detect all edges of various sizes. However, these methods suffer from high processing times, requiring tens of seconds to complete horizon detection. Moreover, the resolution of images captured from cameras mounted on vessels is increasing, which further reduces processing speed. Using a region of interest is an efficient way of reducing the amount of information that must be processed. Thus, we explore a way to efficiently use the region of interest for horizon detection. The proposed method first detects the region of interest using a property of maritime scenes, then performs multi-scale edge detection to extract edges at each scale, and combines the results into a single edge map. The Hough transform and a least-squares method are then applied sequentially to estimate the horizon line accurately. We compared the performance of the proposed method with state-of-the-art methods using two publicly available databases, namely the Singapore Marine Dataset and the buoy dataset. Experimental results show that the proposed method for region-of-interest detection reduces the processing time of horizon detection, and that the accuracy with which the proposed method identifies the horizon is superior to that of state-of-the-art methods.
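A generic version of such a pipeline (multi-scale edges, Hough transform, least-squares refinement) can be sketched with OpenCV as follows; the thresholds, scales, and the omitted region-of-interest step are placeholders rather than the authors' settings.

```python
import cv2
import numpy as np

def detect_horizon(gray, scales=(1.0, 2.0, 4.0), hough_thresh=150):
    """Rough horizon estimate: combine Canny edges computed after Gaussian
    smoothing at several scales, take a Hough line candidate, then refine it
    with a least-squares fit to nearby edge pixels. Returns (slope, intercept)."""
    h, w = gray.shape
    combined = np.zeros((h, w), dtype=np.uint8)
    for sigma in scales:
        blurred = cv2.GaussianBlur(gray, (0, 0), sigma)
        combined = cv2.bitwise_or(combined, cv2.Canny(blurred, 50, 150))

    lines = cv2.HoughLines(combined, 1, np.pi / 180, hough_thresh)
    if lines is None:
        return None
    rho, theta = lines[0][0]                  # first returned candidate line

    # least-squares refinement on edge pixels close to the candidate line
    ys, xs = np.nonzero(combined)
    dist = np.abs(xs * np.cos(theta) + ys * np.sin(theta) - rho)
    xs, ys = xs[dist < 3], ys[dist < 3]
    slope, intercept = np.polyfit(xs, ys, 1)  # y = slope * x + intercept
    return slope, intercept
```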


2018 ◽  
Vol 29 (12) ◽  
pp. 1850119
Author(s):  
Jingming Zhang ◽  
Jianjun Cheng ◽  
Xiaosu Feng ◽  
Xiaoyun Chen

Identifying community structure in networks plays an important role in understanding the network structure and analyzing the network features. Many state-of-the-art algorithms have been proposed to identify the community structure in networks. In this paper, we propose a novel method based on closure extension; it proceeds in two steps. The first step uses the similarity closure or correlation closure to find the initial community structure. In the second step, we merge the initial communities using the modularity Q. The proposed method does not need any prior information, such as the number or sizes of communities, and it obtains the same resulting communities in multiple runs. Moreover, it is noteworthy that our method has low computational complexity because it considers only local information of the network. Some real-world and synthetic graphs are used to test the performance of the proposed method. The results demonstrate that our method can detect deterministic and informative community structure in most cases.
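The second, modularity-based merging step can be illustrated with a greedy sketch using networkx; this is not the authors' closure-extension procedure, only a plain example of merging initial communities while the modularity Q increases.

```python
import networkx as nx
from networkx.algorithms.community import modularity

def merge_by_modularity(G, communities):
    """Greedily merge a list of initial communities (sets of nodes forming a
    partition of G) while any pairwise merge increases the modularity Q."""
    communities = [set(c) for c in communities]
    improved = True
    while improved and len(communities) > 1:
        improved = False
        best_q = modularity(G, communities)
        for i in range(len(communities)):
            for j in range(i + 1, len(communities)):
                trial = [c for k, c in enumerate(communities) if k not in (i, j)]
                trial.append(communities[i] | communities[j])
                q = modularity(G, trial)
                if q > best_q:                 # accept the merge and restart scan
                    best_q, communities, improved = q, trial, True
                    break
            if improved:
                break
    return communities
```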


2019 ◽  
Vol 9 (20) ◽  
pp. 4255 ◽  
Author(s):  
Artem Leichter ◽  
Martin Werner

This work proposes a fast and straightforward method, called natural point correspondences (NaPoCo), for the extraction of road segment shapes from trajectories of vehicles. The algorithm can be expressed in 20 lines of Python code and can be used as a baseline for further extensions or as a heuristic initialization for more complex algorithms. In this paper, we evaluate the performance of the proposed method. We show (1) that the order of the points in a trajectory can be used to cluster points among the trajectories for road segment shape extraction and (2) that preprocessing using polygonal approximation improves the results of the approach. Furthermore, based on the results of the “averaging GPS segments” competition, we show that the algorithm, despite its simplicity and low computational complexity, achieves state-of-the-art performance on the challenge dataset, which is composed of data from several cities and countries.
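A simplified sketch of the index-based ("natural") correspondence idea is shown below: trajectories of one road segment are resampled to a common number of points by arc length and averaged point-wise; the resampling details are our assumption, not necessarily the published implementation.

```python
import numpy as np

def resample(traj, n=50):
    """Resample a polyline of shape (m, 2) to n points, equally spaced in arc length."""
    seg = np.linalg.norm(np.diff(traj, axis=0), axis=1)
    s = np.concatenate([[0.0], np.cumsum(seg)])      # cumulative arc length
    t = np.linspace(0.0, s[-1], n)
    return np.column_stack([np.interp(t, s, traj[:, 0]),
                            np.interp(t, s, traj[:, 1])])

def average_segment(trajectories, n=50):
    """Average several trajectories of one road segment by pairing points with
    the same index after arc-length resampling (natural point correspondences)."""
    return np.mean([resample(t, n) for t in trajectories], axis=0)
```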


Author(s):  
Pengfei Liu ◽  
Xuejun Ma ◽  
Wang Zhou

We construct a high-order conditional distance covariance, which generalizes the notion of conditional distance covariance. The joint conditional distance covariance is defined as a linear combination of conditional distance covariances, which can capture the joint relation of many random vectors given one vector. Furthermore, we develop a new method for testing conditional independence based on the joint conditional distance covariance. Simulation results indicate that the proposed method is very effective. We also apply our method to analyze the relationships of PM2.5 concentrations in five Chinese cities (Beijing, Tianjin, Jinan, Tangshan and Qinhuangdao) using a Gaussian graphical model.

