
linear transformation
Recently Published Documents

Total documents: 850 (five years: 126)
H-index: 34 (five years: 4)

2022 ◽ Vol 3 (1) ◽ pp. 1-26
Author(s): Omid Hajihassani, Omid Ardakanian, Hamzeh Khazaei

The abundance of data collected by sensors in Internet of Things devices and the success of deep neural networks in uncovering hidden patterns in time series data have led to mounting privacy concerns. This is because private and sensitive information can be potentially learned from sensor data by applications that have access to this data. In this article, we aim to examine the tradeoff between utility and privacy loss by learning low-dimensional representations that are useful for data obfuscation. We propose deterministic and probabilistic transformations in the latent space of a variational autoencoder to synthesize time series data such that intrusive inferences are prevented while desired inferences can still be made with sufficient accuracy. In the deterministic case, we use a linear transformation to move the representation of input data in the latent space such that the reconstructed data is likely to have the same public attribute but a different private attribute than the original input data. In the probabilistic case, we apply the linear transformation to the latent representation of input data with some probability. We compare our technique with autoencoder-based anonymization techniques and additionally show that it can anonymize data in real time on resource-constrained edge devices.
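A rough illustration of the deterministic and probabilistic variants described above is sketched below: a latent code is shifted along a direction assumed to separate the private-attribute classes before decoding. The `encode`, `decode`, and `private_direction` names are hypothetical stand-ins for a trained variational autoencoder and a learned latent direction, not the authors' API.

```python
import numpy as np

def obfuscate(x, encode, decode, private_direction, alpha=1.0, p=1.0, rng=None):
    """Sketch of latent-space obfuscation (assumptions noted in the lead-in).

    encode/decode:      hypothetical VAE encoder/decoder callables
    private_direction:  unit vector in latent space separating private-attribute classes
    alpha:              step size of the linear shift
    p:                  probability of applying the shift (p=1 gives the deterministic case)
    """
    rng = rng or np.random.default_rng()
    z = encode(x)                          # low-dimensional latent representation
    if rng.random() < p:                   # probabilistic variant applies the shift with probability p
        z = z - alpha * private_direction  # linear transformation in the latent space
    return decode(z)                       # synthesized (obfuscated) time series
```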


Author(s): Yangzhou Chen, Guangyue Xu, Jingyuan Zhan

This paper studies the leader-following state consensus problem for heterogeneous linear multi-agent systems under fixed directed communication topologies. First, we propose a consensus protocol consisting of four parts for high-order multi-agent systems, in which different agents are allowed to have different gain matrices so as to increase the degree of design freedom. Then, we adopt a state linear transformation, constructed from the incidence matrix of a directed spanning tree of the communication topology, to equivalently transform the state consensus problem into a partial-variable stability problem. Results from partial-variable stability theory are then used to derive a necessary and sufficient consensus criterion, expressed as the Hurwitz stability of a real matrix. This criterion is further expressed as a bilinear matrix inequality condition, and, based on this condition, an iterative algorithm is proposed to find the gain matrices of the protocol. Finally, numerical examples are provided to verify the effectiveness of the proposed protocol design method.
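The state linear transformation built from a spanning tree's incidence matrix can be illustrated with a small sketch. The four-agent tree, per-agent state dimension, and variable names below are assumptions chosen for illustration, not values taken from the paper.

```python
import numpy as np

# Minimal sketch: 4 agents, a directed spanning tree rooted at the leader (agent 0),
# and stacked agent states x = [x_1; ...; x_4].
edges = [(0, 1), (1, 2), (1, 3)]   # (parent, child) pairs of the spanning tree
N, n = 4, 2                        # number of agents, state dimension per agent

# Incidence matrix of the spanning tree: one column per edge.
E = np.zeros((N, len(edges)))
for j, (u, v) in enumerate(edges):
    E[u, j], E[v, j] = 1.0, -1.0

# State linear transformation: e = (E^T kron I_n) x collects the state
# differences along the tree edges. It vanishes exactly when all agents agree,
# so state consensus becomes a stability question for the partial variable e.
T = np.kron(E.T, np.eye(n))
x = np.random.randn(N * n)
e = T @ x
```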


2021 ◽ Vol 37 ◽ pp. 598-612
Author(s): Irwin S. Pressman

This work studies the kernel of a linear operator associated with the generalized k-fold commutator. Given a set $\mathfrak{A}= \left\{ A_{1}, \ldots ,A_{k} \right\}$ of real $n \times n$ matrices, the commutator is denoted by $[A_{1}| \ldots |A_{k}]$. For a fixed set of matrices $\mathfrak{A}$, we introduce a multilinear skew-symmetric linear operator $T_{\mathfrak{A}}(X)=T(A_{1}, \ldots ,A_{k})[X]=[A_{1}| \ldots |A_{k} |X]$. For fixed $n$ and $k \ge 2n-1$, $T_{\mathfrak{A}} \equiv 0$ by the Amitsur--Levitski Theorem [2], which motivated this work. The matrix representation $M$ of the linear transformation $T$ is called the k-commutator matrix. $M$ has interesting properties, e.g., it is a commutator; for $k$ odd, there is a permutation of the rows of $M$ that makes it skew-symmetric. For both $k$ and $n$ odd, a provocative matrix $\mathcal{S}$ appears in the kernel of $T$. By using the Moore--Penrose inverse and introducing a conjecture about the rank of $M$, the entries of $\mathcal{S}$ are shown to be quotients of polynomials in the entries of the matrices in $\mathfrak{A}$. One case of the conjecture has been recently proven by Brassil. The Moore--Penrose inverse provides a full rank decomposition of $M$.
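Assuming the generalized k-fold commutator is the standard polynomial $\sum_{\sigma \in S_k} \operatorname{sgn}(\sigma)\, A_{\sigma(1)} \cdots A_{\sigma(k)}$ (consistent with the Amitsur--Levitski setting), a small numerical sketch of $T_{\mathfrak{A}}$ and its matrix representation $M$ might look as follows; the row-major ordering of the elementary-matrix basis is an arbitrary choice made here.

```python
import numpy as np
from itertools import permutations

def gen_commutator(mats):
    """Generalized k-fold commutator [A_1|...|A_k], taken here to be the
    standard polynomial sum over permutations with alternating signs."""
    k, n = len(mats), mats[0].shape[0]
    total = np.zeros((n, n))
    for perm in permutations(range(k)):
        prod = np.eye(n)
        for i in perm:
            prod = prod @ mats[i]
        # sign of the permutation via its inversion count
        sign = (-1) ** sum(perm[i] > perm[j] for i in range(k) for j in range(i + 1, k))
        total += sign * prod
    return total

def k_commutator_matrix(A):
    """Matrix representation M of X -> [A_1|...|A_k|X], built column by column
    by applying the operator to the elementary matrices E_ij (row-major order)."""
    n = A[0].shape[0]
    M = np.zeros((n * n, n * n))
    for col in range(n * n):
        E = np.zeros((n, n))
        E[col // n, col % n] = 1.0
        M[:, col] = gen_commutator(A + [E]).reshape(-1)
    return M
```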


2021 ◽ Vol 10 (1)
Author(s): Onur Kulce, Deniz Mengu, Yair Rivenson, Aydogan Ozcan

Spatially-engineered diffractive surfaces have emerged as a powerful framework to control light-matter interactions for statistical inference and the design of task-specific optical components. Here, we report the design of diffractive surfaces to all-optically perform arbitrary complex-valued linear transformations between an input (Ni) and output (No), where Ni and No represent the number of pixels at the input and output fields-of-view (FOVs), respectively. First, we consider a single diffractive surface and use a matrix pseudoinverse-based method to determine the complex-valued transmission coefficients of the diffractive features/neurons to all-optically perform a desired/target linear transformation. In addition to this data-free design approach, we also consider a deep learning-based design method to optimize the transmission coefficients of diffractive surfaces by using examples of input/output fields corresponding to the target transformation. We compared the all-optical transformation errors and diffraction efficiencies achieved using data-free designs as well as data-driven (deep learning-based) diffractive designs to all-optically perform (i) arbitrarily-chosen complex-valued transformations including unitary, nonunitary, and noninvertible transforms, (ii) 2D discrete Fourier transformation, (iii) arbitrary 2D permutation operations, and (iv) high-pass filtered coherent imaging. Our analyses reveal that if the total number (N) of spatially-engineered diffractive features/neurons is ≥ Ni × No, both design methods succeed in all-optical implementation of the target transformation, achieving negligible error. However, compared to data-free designs, deep learning-based diffractive designs are found to achieve significantly larger diffraction efficiencies for a given N and their all-optical transformations are more accurate for N < Ni × No. These conclusions are generally applicable to various optical processors that employ spatially-engineered diffractive surfaces.
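A minimal sketch of the data-free, pseudoinverse-based idea for a single surface is given below, under the assumption that propagation to and from the surface is modeled by known complex matrices `P_in` and `P_out` and that the surface acts element-wise as a diagonal transmission matrix. This forward model and all names are illustrative stand-ins, not the paper's exact formulation.

```python
import numpy as np

def design_transmission(P_in, P_out, A_target):
    """Sketch of a pseudoinverse-based design of complex transmission coefficients t.

    Assumed forward model: output = P_out @ diag(t) @ P_in @ input, where
      P_in  (N  x Ni) propagates the input FOV to the surface, and
      P_out (No x N ) propagates the surface to the output FOV.
    The realized transform P_out @ diag(t) @ P_in is linear in t, so a
    least-squares t follows from a Moore-Penrose pseudoinverse.
    """
    N = P_in.shape[0]
    # Each coefficient t_m scales the rank-1 matrix outer(P_out[:, m], P_in[m, :]).
    G = np.stack([np.outer(P_out[:, m], P_in[m, :]).ravel() for m in range(N)], axis=1)
    t = np.linalg.pinv(G) @ A_target.ravel()     # complex-valued transmission coefficients
    realized = P_out @ np.diag(t) @ P_in         # linear transform implemented by the surface
    return t, realized
```

Under this toy model, once the number of independent rank-1 terms reaches Ni × No the least-squares solution can drive the residual toward zero, which mirrors the N ≥ Ni × No condition reported above.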

