An Operational Measure For Squeezing: Home Search Collections Journals About Contact Us My Iopscience
An Operational Measure For Squeezing: Home Search Collections Journals About Contact Us My Iopscience
An Operational Measure For Squeezing: Home Search Collections Journals About Contact Us My Iopscience
This content has been downloaded from IOPscience. Please scroll down to see the full text.
(http://iopscience.iop.org/1751-8121/49/44/445304)
View the table of contents for this issue, or go to the journal homepage for more
Download details:
IP Address: 207.162.240.147
This content was downloaded on 14/10/2016 at 04:18
E-mail: martin.idel@tum.de
Abstract
We propose and analyse a mathematical measure for the amount of squeezing
contained in a continuous variable quantum state. We show that the proposed
measure operationally quantifies the minimal amount of squeezing needed to
prepare a given quantum state and that it can be regarded as a squeezing
analogue of the ‘entanglement of formation’. We prove that the measure is
convex and subadditive and we provide analytic bounds as well as a numerical
convex optimisation algorithm for its computation. By example, we then show
that the amount of squeezing needed for the preparation of certain multi-mode
quantum states can be significantly lower than naive state preparation
suggests.
1. Introduction
The interplay between quantum optics and the field of quantum information processing, in
particular via the subfield of continuous variable quantum information, has been developing
for several decades and is interesting also due to its experimental success (see [KL10] for a
thorough introduction).
Coherent bosonic states and the broader class of Gaussian bosonic states, quantum states
whose Wigner function is characterised by its first and second moments, are of particular
interest in the theory of continuous variable quantum information. Their interest is also due to
the fact that modes of light in optical experiments behave like Gaussian coherent states.
For any bosonic state, its matrix of second moments, the so called covariance matrix,
must fulfil Heisenbergʼs uncertainty principle in all modes. If the state possesses a mode,
where despite this inequality Dx Dp 2 either Dx or Dp is strictly smaller than 2 , it
is called squeezed. The production of squeezed states is experimentally possible, but it
requires the use of nonlinear optical elements [Bra05], which are more difficult to produce
and handle than the usual linear optics (i.e. beam splitters and phase shifters). Nevertheless,
squeezed states play a crucial role in many experiments in quantum information processing
and beyond. Therefore, it is natural both theoretically and practically to investigate the
amount of squeezing which is necessary to create an arbitrary quantum state.
As a qualitative answer, squeezing is known to be an irreducible resource with respect to
linear quantum optics [Bra05]. In the Gaussian case, it is also known to be closely related to
entanglement of states [WEP03] and the non-additivity of quantum channel capacities
[LGW13]. In addition, quantitative measures of squeezing have been provided on multiple
occasions [Kra+03, Lee88], yet none of these measures are operational for more than a single
mode in the sense that they do not measure the minimal amount of squeezing necessary to
prepare a given state.
The goal of this paper is therefore twofold: first, we define and study operational squeezing
measures, especially measures quantifying the amount of squeezing needed to prepare a given
state. Second, we reinvestigate in how far squeezing is a resource in a mathematically rigorous
manner and study the resulting resource theory by defining preparation measures.
In order to give a brief overview of the results, we assume the reader is familiar with
standard notation of the field, which is also gathered in section 2. In particular, let γ denote
covariance matrices. A squeezed state is a state where at least one of the eigenvalues of γ is
smaller than one.
To obtain operational squeezing measures, we first study operational squeezing in section 3:
suppose we want to implement an operation on our quantum state corresponding to some unitary
U. Any such unitary can be implemented as the time-evolution of Hamiltonians. Recall that any
quantum-optical Hamiltonian can be split into ‘passive’ and ‘active’ parts, where the passive
parts are implementable by linear optics and the active parts require nonlinear media. We assume
that the active transformations available are single-mode squeezers with Hamiltonian
Hsqueeze, j = i (aj2 - aj† 2) ,
2
where the j denotes squeezing in the jth mode. We therefore consider any Hamiltonian of the
form
H = Hpassive (t ) + åck (t ) Hsqueeze, j , ( 1)
k
where ck are complex coefficients, which can be seen as the interaction strength of the
medium and Hpassive is an arbitrary passive Hamiltonian. Then, a natural measure of the
squeezing costs to implement this Hamiltonian would be given by
2
J. Phys. A: Math. Theor. 49 (2016) 445304 M Idel et al
functions ci are step functions and later on in the more general case of measurable c
(section 3.2). In particular, the result implies that the minimum amount of squeezing to
implement the symplectic matrix S Î 2n ´ 2n is given by
n
F (S ) ≔ å log si (S), ( 2)
i=1
⎧
⎪
n ⎫
⎪
G (g ) ≔ inf ⎨
⎪
å log sj
( S ) g S T S , S Î Sp (2n) ⎬.
⎪
( 3)
⎩ j=1 ⎭
One of the main results of this paper, which will be proven in section 5, is that this measure is
indeed operational in that it quantifies the minimal amount of single-mode squeezing
necessary to prepare a state with covariance matrix γ, using linear optics with single-mode
squeezers, ancillas, measurements, convex combinations and addition of classical noise.
We also define a second squeezing measure, which is a squeezing-analogue of the
entanglement of formation, the ‘squeezing of formation’, i.e.the amount of single-mode
squeezed resource states needed to prepare a given state using only passive operations and
adding of noise. This is done in section 5.3, where we also prove that this measure is equal
to G.
In addition, we prove several structural facts about G in section 4. In particular, G is
convex, lower semicontinuous everywhere, continuous on the interior and subadditive.
Moreover, we show
1 n
å log (lj (g )) G (g )
2 lj < 1
with the eigenvalues lj of γ. Equality in this lower bound is usually not achievable, albeit
numerical tests have shown that the bound is often very good.
The measure would lose a lot of its appeal, if it could not be computed. Although we
cannot give an efficient analytical formula for more than one mode, we provide a numerical
algorithm to obtain G for any state. To demonstrate that this works in principle, we calculate
G approximately for a state studied in [MK08] (section 6). The calculations also demonstrate
that the preparation procedure obtained from minimising G can greatly lower the squeezing
costs when compared to naive preparation procedures. Finally, we critically discuss the
flexibility and applicability of our measures in section 7. We believe that while we managed
to give reasonable measures and interesting tools to study the resource theory of squeezing
from a theoretical perspective, G might not reflect the experimental reality in all parts. In
particular, it becomes extraordinarily difficult to achieve high squeezing in a single mode
[And+15], which is not reflected by taking the logarithm of the squeezing parameter. We
show that this shortcoming can be easily corrected for a broad class of cost functions. In
addition, the form of the active part of the Hamiltonian (1) might not reflect the form of the
Hamiltonian in the lab. This cannot be corrected as easily but in any case, our measure will
give a lower bound.
3
J. Phys. A: Math. Theor. 49 (2016) 445304 M Idel et al
2. Preliminaries
In this section, we collect basic notions from continuous variable quantum information and
symplectic linear algebra that we need later on. For a broader overview, we refer to
[ARL14, BL05].
Consider a bosonic system with n-modes, each of which is characterised by a pair of cano-
nical variables {Qk , Pk}. Setting R = (Q1, P1, ¼, Qn , Pn )T the canonical commutation relations
(CCRs) take on the form [Rk , Rl ] = iskl with the standard symplectic form
n
s=⨁
i=1
(
0 1 .
-1 0 )
Since it will sometimes be convenient, we also introduce another basis of the canonical
variables: let R˜ = (Q1, Q2, ¼, Qn , P1, P2, ¼, Pn )T , then the symplectic CCRs take on the form
[R˜k , R˜l ] = iJkl with the symplectic form
⎛ 0 n⎞
J=⎜ ⎟.
⎝- n 0 ⎠
Clearly, J and σ differ only by a permutation, since R and R̃ differ only by a permutation.
From functional analysis, it is well-known that the operators Qk and Pk cannot be represented
by bounded operators on a Hilbert space. In order to avoid complications associated to
unbounded operators, it is usually easier to work with a representation of the CCR-relations
on some Hilbert space , instead. The standard representation is known as the Schrödinger
representation and defines the Weyl system, a family of unitaries Wx with x Î 2n and
Wx ≔ exp (ixsR) , x Î 2n
fulfiling the Weyl relations Wx Wh = exp-i 2xsh Wx + h for all x , h . Such a system is unique up to
isomorphism under further assumptions of continuity and irreducibility as obtained by the
Stone–von Neumann theorem. Given Wx it is important to note that
Wx Rk Wx* = Rk + xk " x Î 2n. ( 4)
In this paper, we will not use many properties of the Weyl system, since instead, we can work
with the much simpler moments of the state: given a quantum state r Î 1(L2 (2n)) (trace-
class operators on L2), its first and second centred moments are given by
dk ≔ tr (rRk ) , (5)
gkl ≔ tr (r {Rk - dk , Rl - dl }+ ) (6)
with {·, ·} + the regular anticommutator. We will write Γ instead of γ for the covariance
matrix, if we work with R̃ instead of R. Again, a simple permutation relates the two.
An important question one can ask is when a matrix γ can occur as a covariance matrix of
a quantum state. The answer is given by Heisenberg’s principle, which here takes the form of
a matrix inequality:
Proposition 2.1. Let g Î 2n ´ 2n , then there exists a quantum state r with covariance matrix
g if and only if
g is ,
4
J. Phys. A: Math. Theor. 49 (2016) 445304 M Idel et al
Another question one might ask is when a covariance matrix belongs to a pure quantum
state. This question cannot be answered without more information about the higher order
terms If we however require the state to be uniquely determined by its first and second
moments, i.e.if we consider the so called Gaussian states, we have an answer (see [ASI04]):
Proposition 2.2.Let r be an n -mode Gaussian state (i.e. completely determined by its first
and second moments), then r is pure if and only if det (gr ) = 1.
A very important set of operations on a quantum system are those, that leave the CCRs
invariant, i.e.linear transformations S such that [SRk , SRl ] = iskl . Such transformations are
called symplectic transformations.
Definition 2.3. Given a symplectic form σ on 2n ´ 2n , the set of matrices S Ì 2n ´ 2n such
that S T sS = s is called the linear symplectic group and is denoted by Sp (2n , , s ).
We will usually drop both σ and in the description of the symplectic group since this
will be clear from the context. The linear symplectic group is a Lie group and as such contains
a lot of structure. For more information on the linear symplectic group and its connection to
physics, we refer the reader to [Gos06, MS98] chapter 2. An overview for physicists is also
found in [Arv+95a]. All of the following can be found in that paper:
Definition 2.4. Let O (2n , ) be the real orthogonal group, Then we define the following
three subsets of Sp (2n ):
K (n) ≔ Sp (2n , ) Ç O (2n , ) ,
Z (n) ≔ {2 ( j - 1) Å diag (si , si-1) Å 2 (n - ( j + 1)) ∣s 0, j = 1, ¼, n} ,
P (n) ≔ {S Î Sp (2n , )∣S 0}.
The first subset is the maximally compact subgroup of Sp (2n ), the second subset is the subset
of single-mode-squeezers. It generates the multiplicative subgroup (2n ), a maximally
abelian subgroup of Sp (2n ). The third set is the set of positive definite symplectic matrices.
In addition, since Sp (2n ) is a Lie group, it possesses a Lie algebra. Let us collect a
number of relevant facts about the Lie algebra and some subsets:
5
J. Phys. A: Math. Theor. 49 (2016) 445304 M Idel et al
⎛ ⎞
(3) p (n ) ≔ {A Î 2n ´ 2n∣A = ⎜ a b ⎟ , a = aT , b = bT } the subspace of the Lie algebra
⎝b - a⎠
sp (2n ) corresponding to P (n ).
Since the Lie algebra is a vector space, it is spanned by a set of vectors, the generators. A
standard decomposition is given by taking the generators of k (n ), the so called passive
transformations as one part and the generators of p (n ), the so called active transformations as
the other part. That these two sets together determine the Lie algebra completely can be seen
with the polar decomposition:
A basis for the Lie algebras k (n ) and p (n ) therefore characterises the complete Lie
algebra sp (2n ). Elements of the Lie algebras are also called generators and a basis of
generators therefore fixes the Lie algebra. Via the polar decomposition, this implies that they
also generate the whole Lie group. We will need a set of generators gij( p ) Î k (n ) and
gij(a) Î p (n ) later on, which we will fix via the metaplectic representation:
Since we have the liberty of a phase, this is not really a representation of the symplectic
group, but of its two-fold cover, the metaplectic group. We can also study the generators of
this representation, which are given by 1 2 {Rk , Rl }+.
For the reader familiar with annihilation and creation operators, if we denote by ai , ai† the
annihilation and creation operators of the n bosonic modes, the generators of the metaplectic
representation are given by
Gijp (1) ≔ i (aj† ai - ai† aj ) Gijp (2) ≔ aj† aj + aj† ai , (7)
where the p stands for ‘passive’ and the a for ‘active’. The passive generators are also
frequently called linear transformations in the literature (see [Kok+07]). We can now define
a set of generators of the symplectic group Sp (2n ) by using the set of metaplectic generators
Gij above and take corresponding generators gij in the Lie algebra sp (2n ) in a consistent way.
As one would expect from the name, the passive metaplectic generators correspond to a set of
passive generators of k (n ) and the set of active metaplectic generators corresponds to a set of
active generators of p (n ). The details of the correspondence are irrelevant (they are explicitly
spelled out in equation (6.6b) in [Arv+95a]), except for the fact that the set Giia (3) , i = 1,K,n
corresponds to the generators giia (3) generating matrices in Zn.
Given a Hamiltonian, the associated time evolution corresponds to a path on the Lie
group: for a (sufficiently regular) path g : [0, 1] Sp (2n ) we can find a function
A (t ) Î sp (2n ) such that
6
J. Phys. A: Math. Theor. 49 (2016) 445304 M Idel et al
g ¢ ( t ) = A (t ) g (t ). ( 9)
Instead of directly studying Hamiltonians with time-dependent coefficients as in equation (1),
it is equivalent to study functions A : [0, 1] sp (2n ).
There are a number of decompositions of the Lie group and its subgroup in addition to
the polar decomposition. We will mostly be concerned with the so called Euler decomposition
(sometimes called Bloch–Messiah decomposition) and Williamson’s decomposition:
Proposition 2.8 (Euler decomposition [Arv+95a]). Let S Î Sp (2n ), then there exist
K , K ¢ Î K (n ) and A Î (n ) such that S = KAK ¢.
In particular, for M Î P (n ), this implies that M has a symplectic square root. Since
covariance matrices are always positive definite, this implies also that a Gaussian state is pure
if and only if its covariance matrix is symplectic. Heisenberg’s uncertainty principle has also a
Williamson version:
We have already noted that an important class of operations are those, which leave the CCR-
relations invariant, namely the symplectic transformations. Given a quantum state ρ, the
action of the symplectic group on the canonical variables R descends to a subgroup of unitary
transformations on ρ via the metaplectic representation (see [Arv+95b]). Its action on the
covariance matrix gr of ρ is even easier: Given S Î Sp (2n ),
gr S T gr S. (10)
In quantum optics, symplectic transformations can be implemented by the means of
(1) beam splitters and phase shifters, implementing operations in K(n) ([Rec+94])
(2) single-mode squeezers, implementing operations in Z(n).
Via the Euler decomposition, this implies that any symplectic transformation can be
implemented (approximately) by a combination of those three elements.
Definition 2.11. An n -mode bosonic state r is called squeezed, if its covariance matrix gr
possesses an eigenvalue l < 1.
7
J. Phys. A: Math. Theor. 49 (2016) 445304 M Idel et al
states where the Heisenberg uncertainty relations are satisfied with equality for at least one
mode. These definitions however are well-known to be equivalent (see [SMD94]).
Throughout this section, we will always use σ as our standard symplectic form.
We will now define a first operational squeezing measure for symplectic transformations,
which will later be used to define a measure for operational squeezing.
Note that we sum only over half of the singular values. Restricting this function to
symplectic matrices will yield an operational squeezing measure for symplectic transforma-
tions: recall that the symplectic group is generated by symplectic orthogonal matrices and
single-mode squeezers. The orthogonal matrices are easy to implement and therefore will be
considered a free resource. The squeezers have singular values s and s-1 and they are
experimentally hard to implement and should therefore be assigned a cost that depends on the
squeezing parameter s. Using this, the amount of squeezing seems to be characterised by the
largest singular values. Here, we quantify the amount of squeezing by a cost log(s ), which
can be seen as the interaction strength of the Hamiltonian needed to implement the squeezing.
Let us make this more precise: define the map
D : Sp (2n) ⋃ Sp (2n)´m
mÎ
S ⋃ {(S1, ¼, Sm)∣S = S1 Sm, Si Î K (n) È Z (n)}.
mÎ
The image of Δ for a given symplectic matrix contains all possible ways to construct S as a
product of matrices from K(n) or Z(n). We define:
8
J. Phys. A: Math. Theor. 49 (2016) 445304 M Idel et al
Let us write the last observation in ( * ) as a small lemma for later use:
Up to now, we have only considered products of symplectic matrices, which would corre-
spond to a chain of beam splitters, phase shifters and single-mode squeezers. The goal of this
section is to prove that one cannot improve the results with arbitrary paths on Sp (2n ),
corresponding to general Hamiltonians of the form of equation (1) as described in section 2.
Let r (S ) be the set of absolutely continuous paths a : [0, 1] Sp (2n ) with a derivative
which is bounded almost everywhere such that a (0) = and a (1) = S . Such paths seem to
capture most if not all physically relevant cases.
Recall the set of generators g of sp (2n ) defined in section 2 and order them in a single
vector. Usin equation (9), any a Î r (S ) corresponds to a A Î L¥ ([0, 1], sp (2n)). Since
the generators g form a basis, we can write A (t ) = ca (t ) · g with a function
ca Î L¥ ([0, 1], sp (2n )). Both A or ca together with the condition a (0) = uniquely
define α.
The goal of this section is to prove that this does not give us any better way to avoid
squeezing:
The proof of this theorem is quite lengthy in details, thus we split it up into several
lemmata. The general idea is easy to relate: we first show that paths corresponding to products
of symplectic matrices of type Z(n) or K(n) produce the same outcome in (13) and (12). We
then use an approximation argument: given any path, we can approximate it by a path of
products of symplectic matrices to arbitrary precision.
To start, we prove the following lemma:
9
J. Phys. A: Math. Theor. 49 (2016) 445304 M Idel et al
Proof. First note that F is continuous in S since the singular values are. Using the Trotter-
formula, we obtain:
Let us define yet another version of F which we call F̂ in the following way:
⎧
⎪
N ⎫
2⎪
C N (S ) ≔ ⎨
⎪
(c1a , c1p, ¼, c Na , c Np ) S = exp (c ja g a + c jp g p) , cj Î 4n ⎬
⎪
,
⎩ j=1 ⎭
C (S ) ≔ ⋃ C N (S ) ,
N Î
⎧ ⎫
Fˆ (S ) ≔ inf ⎨åcia 1 c Î C (S )⎬.
⎪ ⎪
⎩ i ⎭
⎪ ⎪
This implies
Fˆ (S ) åcia 1 = åF (exp (cia g a)) = åF (exp ((cia)(i) gia)) = å log s1 (Ai ) = F (S).
i i i i
Here we used that (cia )(i) is also the largest singular value of exp ((cia )(i) gia ) Î Z (n ), as
a a
F (exp ((ci )(i) gi )) = (ci )(i) by normalisation of g.
a
For the other direction Fˆ F , let S be arbitrary. Let c Î C (S ) and consider each vector
ci separately. We drop the index i for readability, since we need to consider the entries of the
vector ci . To make the distinction clear, we denote the jth entry of the vector c by c( j). Recall
that the active generators are exactly those generating the positive matrices. Then:
10
J. Phys. A: Math. Theor. 49 (2016) 445304 M Idel et al
Lemma 3.6
F (exp (cg)) F ( exp (c ag a)) = lim F (( exp (c(ai) gia n))n )
n ¥ i
åF (exp (c(ai) gia)) = å∣c(ai) ∣ = c a1 ,
i i
where we basically redid the calculations we used to prove lemma 3.6, using the continuity of
F and the Trotter formula from matrix analysis. Until now, we have considered only one ci of
c Î C (S ). Now, if we define Si = exp (ci g), then we have i Si = S and hence, using
lemma 3.4, we find:
Lemma 3.4
F (S ) åF (Si) åF (exp (ci g)) åcia 1 " c Î C (S ) .
i i i
where we used that the integral over the interval [0, 1 (n + 2)) and [(n + 1) (n + 2), 1] is
empty due to the fact that all active components are zero. In the last step, we used that for
the Euler decomposition, which takes the minimum in F̂ , this value is exactly
åi∣(cia+ 1)(i) ∣ = åi cia+ 1 1, since (cia+ 1)( j) = 0 for j ¹ i . Taking the infimum on the left-hand
side only decreases the value. ,
11
J. Phys. A: Math. Theor. 49 (2016) 445304 M Idel et al
For the other direction, we need some facts about ordinary differential equations that are
collected in appendix A.
Proof. Let S Î Sp (2n ) be arbitrary. Combining the proof of lemma 3.8 with proposition 3.3
and lemma 3.7 we have already proved:
⎧ 1
F (S ) = inf ⎨
⎩ ò0 caa (t )1 dt a Î r (S ) ,
a
˙ (t ) = (cap (t ) g p (a (t )) , caa (t ) g a (a (t )))T , c step fct.}.
The only thing left to prove is that we can drop the step-function assumption. This will be
done by a standard approximation argument: let F˜ (S ) denote the right-hand side of
equation (15). Let e > 0 and consider an arbitrary A Î L¥ such that
1
ò0 caa (t )1 dt - F˜ (S ) < e (16)
i.e.A corresponds to a path that is close to the infimum in the definition of F̃ . We can now
approximate ca by step-functions ca¢ (corresponding to a function A¢ , see lemma A.2) such
that
1
ò0 ca (t ) - ca¢ 1 dt < e . (17)
Using the fact that the propagators UA, UA ¢ are differentiable almost everywhere (proposition
A.1) and absolutely continuous when one entry is fixed, we can define a function
f (s ) ≔ UA (0, s ) U A¢ (s, t ), which is also differentiable almost everywhere. Furthermore, the
fundamental theorem of calculus holds for f (s) (see [Rud87], theorems 6.10 and 7.8).
d
f (s) = - UA (0, s) A (s) U A¢ (s , t ) + UA (0, s) A¢ (s) U A¢ (s , t )
ds
almost everywhere, which implies:
t d
U A¢ (0, t ) - UA (0, t ) = f (t ) - f (0) = ò0 ds
f (s) ds
t
= ò0 UA (0, s)(A¢ (s) - A (s)) U A¢ (s , t ) ds.
12
J. Phys. A: Math. Theor. 49 (2016) 445304 M Idel et al
1
ò0 ca¢ (t )1 dt - F˜ (S ) < 2e. (19)
Throughout this section, for convenience, we will switch to using J as symplectic form.
Having defined the measure F, we will now proceed to define a squeezing measure for
creating an arbitrary (mixed) state:
Definition 4.1. Let r be an n -mode bosonic quantum state with covariance matrix G. We
then define:
G (r ) º G (G) ≔ inf {F (S )∣G S T S , S Î Sp (2n)}. (23)
Note that G is always finite: for any given covariance matrix Γ, by Williamson’s theorem
and corollary 2.10, we can find S Î Sp (2n ) and D̃ such that G = S T DS ˜ S T S . Fur-
thermore G is also non-negative since F is non-negative for symplectic S. We will prove in
section 5 that this is indeed an operational measure.
13
J. Phys. A: Math. Theor. 49 (2016) 445304 M Idel et al
We will now give several reformulations of the squeezing measure and prove some of its
properties. In particular, G is convex and one of the crucial steps towards proving convexity
of G is given by a reformulation of G with the help of the Cayley transform. For the reader
unfamiliar with the Cayley transform, a definition and basic properties are provided in
appendix B.
= { (
H=
B -A )
A B Î 2m´ 2m ∣ AT = A , BT = B , spec (H ) Ì ( - 1, 1)
} . (26)
Proof. First note that the infimum in all three expressions is actually attained. We can see
this most easily in the definition (23): the matrix inequalities G S T S (iJ ) imply that the set
of feasible S in the minimisation is compact, hence its minimum is attained. To see
(23) = (24), first note that (24) (23) since any S Î Sp (2n ) also fulfils S T S iJ ,
hence G S T S iJ . For equality, note that for any G G0 iJ , using Williamson’s
theorem we can find S Î Sp (2n ) and a diagonal D̃ (via corollary 2.10) such
that G0 = S T DS S T S iJ . But since F (G10 2) F ((S T S )1 2 ) = F (S ) via the Weyl
monotonicity principle, the infimum is achieved on symplectic matrices.
Finally, let us prove equality with (25). First observe that we can replace Sp (2n ) by
using proposition B.1(4).
Using the fact that si (S ) = li (S T S )1 2 = li ( (H ))1 2 and the fact that H is
diagonalised by the same unitary matrices as (H ) = ( + H ) · ( - H )-1 whence its
eigenvalues are
1 + li (H )
li ( (H )) = ,
1 - li (H )
we have:
⎧ 1
⎛ 1 + l (H ) ⎞ 2 ⎫
⎪ n ⎪
inf {F (S )∣G S T S, S Î Sp (2n)} = inf ⎨log
⎪
⎜ i
⎟ ∣G ( H ) , H Î ⎬.
⎪
⎩ i=1 ⎝ 1 - l i (H ) ⎠ ⎭
Next we claim li (H ) = si (A + iB) for i = 1,K,n. To see this note:
= ⎛⎜ 0 A + iB ⎞⎟
(
1 i
2 - i
· A B ·
B -A )(
- i i ⎝ )(
A - iB 0 ⎠
.) (27)
The singular values of the matrix on the right-hand side of equation (27) are the eigenvalues
of diag ((A + iB)† (A + iB), (A + iB)(A + iB)†)1 2 , which are the singular values of A + iB
with double multiplicity. From the structure of H, it is immediate that the eigenvalues of the
right-hand side of equation (27) and thus of H come in pairs si (A + iB). Hence
14
J. Phys. A: Math. Theor. 49 (2016) 445304 M Idel et al
4.2. Convexity
Theorem 4.3. G is convex on the set of covariance matrices {G Î 2n ´ 2n∣G iJ}.
If we restrict f to symmetric matrices A and B such that si (A + iB) < 1 for all i = 1, ¼, n , f
is jointly convex in A, B , i.e.
f (tA + (1 - t ) A¢ , tB + (1 - t ) B ¢) tf (A , B) + (1 - t ) f (A¢ , B ¢) " t Î [0, 1] .
with pp 0 and å p pp = 1. In ( * ), we used that unitaries do not change the spectrum. Now
each summand in equation (28) is the Cayley transform of a singular value. We can use the
15
J. Phys. A: Math. Theor. 49 (2016) 445304 M Idel et al
where in (**) we use that the sum of all eigenvalues is of course not dependent on the order of
the eigenvalues. ,
Proof of theorem 4.3. We can now finish the proof of the convexity of G.
First note that using the definition of f in lemma 4.4 we can reformulate (25) to
Let G iJ , G¢ iJ be two covariance matrices and let H , H ¢ Î be the matrices that attain
the minimum of G (G), G (G¢) respectively. Then, in particular, tH + (1 - t ) H ¢ Î .
Furthermore, since -1(G) H and -1(G¢) H ¢ we have
(*)
-1(t G + (1 - t ) G¢) t -1(G) + (1 - t ) -1(G¢) tH + (1 - t ) H ¢ ,
The convexity now follows directly from lemma 4.4 and the fact that we chose H and H ¢ to
attain G (G) and G (G¢). ,
From the convexity of G on the set of covariance matrices, it follows from general arguments
in convex analysis that G is continuous on the interior of the set of covariance matrices (see
[Roc97], theorem 10.1). What more can we say about the boundary?
16
J. Phys. A: Math. Theor. 49 (2016) 445304 M Idel et al
The ultimate goal is to extend continuity from the interior to the exterior, which we do
not know how to do at present. The proof will need a few notions from set-valued analysis
that we review in appendix C.
G Î ⋂ Cn = (G0) .
n Υ
Therefore, G (G0) lime 0 G (G0 + e ), but since G0 G0 + e for all e > 0 we also have
G (G) lime 0 G (G0 + e ). ,
Now we consider additivity properties of G. We switch our basis again and use γ and σ.
Proposition 4.6. For any covariance matrices gA Î 2n1´ 2n1 and gB Î 2n2 ´ 2n2 , we have
1
(G (gA) + G (gB)) G (gA Å gB) G (gA) + G (gB).
2
In particular, G is subadditive.
Proof. For subadditivity, let S T S gA and S ¢ T S ¢ gB obtain the minimum in G (gA) and
G (gB ) respectively. Then S Å S ¢ is symplectic and (S Å S ¢)T (S Å S ¢) gA Å gB
hence, G (gA Å gB ) G (A) + G (B).
To prove the lower bound, we need the following equation that we will only prove later
on (see equation (46)):
a1: G (gA) G (gA Å a n2). (32)
Assuming this inequality, let a 1 be such that a n2 gB , then
G (gA Å a n2) G (gA Å gB)
hence G (gA) G (gA Å gB ) and since we can do the same reasoning for gB , we have
G (gA) + G (gB ) 2G (gA Å gB ). ,
Corollary 4.7. Let gA Î 2n1´ 2n1 and gB Î Sp (2n 2 ), be two covariance matrices (i.e. gB is a
covariance matrix of a pure state). Then G is additive.
Proof. Subadditivity has already been proven in the lemma. For superadditivity, we use the
second reformulation of the squeezing measure in equation (24): note that there is only one
matrix gB g is , namely gB itself. Now write
⎛ ˜ ⎞
gA Å gB ⎜ A C ⎟ is
⎝C T B˜ ⎠
for A˜ Î 2n1´ 2n1 and B˜ Î 2n2 ´ 2n2 . Then in particular gB - B˜ 0 , but also B˜ is , hence
gB B˜ is and therefore B˜ = gB . But then
⎛ ˜ ⎞ ⎛ g - A˜ C ⎞
gA Å gB - ⎜ A C ⎟ = ⎜ A ⎟
⎝C B˜ ⎠ ⎝ C T
T
0⎠
hence also C = 0 and the matrix that takes the minimum in G (gA Å gB ) must be block-
diagonal. Then gA Å gB A˜ Å gB 0 and à is in the feasible set of G (gA). ,
18
J. Phys. A: Math. Theor. 49 (2016) 445304 M Idel et al
Corollary 4.8. For any covariance matrices gA Î 2n1´ 2n1 and gB Î 2n2 ´ 2n2 ,
⎛⎛ g C ⎞⎞
G (gA) + G (gB) 2G ⎜⎜ ⎜ T ⎟ ⎟⎟.
A
⎝ ⎝C gB ⎠ ⎠
If G is superadditive, then this inequality holds without the factor of two.
Proof.
⎛ ⎛g 0 ⎞ ⎞ ⎛1 ⎛ g C ⎞ 1 ⎛ gA - C ⎞ ⎞
G (gA) + G (gB) 2G ⎜ ⎜ A ⎟ ⎟ = 2G ⎜⎜ ⎜ T ⎟⎟
A
⎟+ ⎜
⎝ ⎝ 0 gB ⎠ ⎠ ⎝ 2 ⎝C gB ⎠ 2 ⎝- C T gB ⎠ ⎟⎠
(*) ⎛ ⎛ gA C ⎞ ⎞ ⎛⎛ g - C ⎞ ⎞ (**) ⎛⎛ g C ⎞⎞
G ⎜⎜ ⎜ T ⎟ ⎟⎟ + G ⎜⎜ ⎜ ⎟ ⎟⎟ = 2G ⎜⎜ ⎜ T ⎟ ⎟.
A A
⎝⎝ C gB⎠ ⎠ ⎝⎝ - C T g B ⎠⎠ ⎝⎝C gB ⎠ ⎟⎠
Here we used proposition 4.6 and then convexity of G in ( * ). Finally, in (**) we used that
for every
⎛ gA C ⎞ ⎛ SA C˜ ⎞ ⎛ SA C˜ ⎞T
⎜ T ⎟ ⎜⎜ ⎟⎟ ⎜⎜ ⎟⎟ Î Sp (2 (n1 + n 2)) (33)
⎝C gB ⎠ ⎝C˜ T SB ⎠ ⎝C˜ T SB ⎠
we also have:
⎛ gA - C ⎞ ⎛ SA - C˜ ⎞ ⎛ SA - C˜ ⎞T
⎜ ⎟ ⎜⎜ ⎟⎟ ⎜⎜ ⎟⎟ Î Sp (2 (n1 + n 2)) (34)
⎝- C T gB ⎠ ⎝- C˜ T SB ⎠ ⎝- C˜ T SB ⎠
and vice versa. Since the two matrices on the right-hand side of equations (33) and (34) have
equal spectrum, the two squeezing measures of the matrices on the left-hand side need to be
equal. ,
4.5. Bounds
Proposition 4.9 (Spectral bounds). Let G iJ be a valid covariance matrix and l (G) be
the vector of eigenvalues in decreasing order. Then:
n
1 1
- å log (li (G)) G (G) 2
2 l i (G) < 1
å log li (G) = F (G1 2) . (35)
i=1
Proof. According to the Euler decomposition, a symplectic positive definite matrix has
positive eigenvalues that come in pairs s, s-1 and we can find O Î SO (2n ) such that for any
STS G
OT GO diag (s1, ¼, sn , s1-1, ¼, sn-1) .
But then, lk (G) lk (diag (s1, ¼, sn, s1-1, ¼, sn-1)) via the Weyl inequalities li (A) li (B)
for all i and A - B 0 (see [Bha96], theorem III.2.3). This implies:
19
J. Phys. A: Math. Theor. 49 (2016) 445304 M Idel et al
n n
G (G) å log (max {si , si-1}) å log li (G)1 2.
i=1 i=1
For the lower bound, given an optimal matrix S with eigenvalues si, we have
Now, - 2 å i2=n 2n - k + 1 li (G) can be upper bounded by restricting to eigenvalues li (G) < 1.
1
This implies
1
- å log (li (G)) G (G)
2 l i (G) < 1
using that the number of eigenvalues li (G) < 1 can at most be n (hence k n in the
inequality of equation (36)), since G S T S and S TS has at least n eigenvalues bigger than
one. ,
Numerics suggest that the lower bound is often very good for low dimensions. In fact, it
can sometimes be achieved:
Proposition 4.10. Let G iJ be a covariance matrix, then G achieves the lower bound in
equation (35) if there exists an orthonormal eigenvector basis {vi}i2=n 1 of Γ with viT Jvj = di, n + j .
Conversely, if G achieves the lower bound, then viT Jvj = 0 for all normalised eigenvectors
vi, vj of G with li, lj < 1.
Proof. Suppose that the lower bound in equation (35) is achieved. Via Weyl’s inequalities
(see [Bha96] theorem III.2.3), for all S T S G in the definition of G we have
li (S T S ) li (G). For the particular S achieving G, this implies that for all li (G) < 1 we
have li (S T S ) = li (G). But then G S T S implies that S TS and Γ share all eigenvectors to the
smallest eigenvalue. Iteratively, every eigenvector of Γ with li (G) < 1 must be an
eigenvector of S TS with the same eigenvalue.
Since the matrix diagonalising S TS also diagonalises -1(S T S ), the eigenvectors of the
two matrices are the same. Now, since -1(S T S ) Î by reformulation (25), for any
eigenvector vi of any eigenvalue -1(li ) < 0 , Jvi is also an eigenvector of -1(S T S ) to the
eigenvalue --1(li ), implying viT Jvj = 0 for all i, j . By definition, this means that {vi, Jvj}
forms a symplectic basis. Above, we already saw that the eigenvectors of Γ for li (G) < 1 are
also eigenvalues of S TS, hence viT Jvj = 0 for all i such that li (G) < 1.
Conversely, suppose we have an orthonormal basis {vi}i2=n 1 such that viT Jvj = di, j + n
(modulo 2n if necessary) for all eigenvectors of Γ, i.e.Γ is diagonalisable by a symplectic
orthonormal matrix O˜ Î U (n ). Then
20
J. Phys. A: Math. Theor. 49 (2016) 445304 M Idel et al
T
O˜ GO˜ = diag (l1, ¼, l2n) .
Since G iJ we have li l2i 1. Assume that li ln + i for all i = 1,K,n and the ln + i are
ordered in decreasing order. Then ln + r < 1 ln + r - 1 for some r n and
S T S = O˜ diag (1, ¼, 1, l- -1
T
r , ¼, l n , 1, ¼, 1, l n + r , ¼, l 2n) O
1 ˜
In contrast to this, the upper bound can be arbitrarily bad. For instance, consider the
thermal state G = (2N + 1) · for increasing N. It can easily be seen that G (G) = 0 , since
G Î P (n ) and F () = 0 , hence G (G) 0 . However, the upper bound in equation (35) is
n 2 log (2N + 1) ¥ for N ¥, therefore arbitrarily bad.
We can achieve better upper bounds by using Williamson’s normal form:
Proposition 4.11 (Williamson bounds). Let G Î 2n ´ 2n be such that G iJ and consider
its Williamson normal form G = S T DS . Then:
Proof. Since D via G iJ , the upper bound follows directly from the definition. Also,
F (S ) F (G1 2), which makes this bound trivially better than the spectral upper bound in
equation (35).
The lower bound follows from:
⎛ 2n ⎞ i = 1 li (G)
n
(36) 1 1
G (G) log ⎜ li (G)-1⎟ = log
i = 1 li (G)
2n
2 ⎝i = n + 1 ⎠ 2
= F (G1 2) - log (det (G)1 2 ) F ((S T S )1 2 ) - log ( det (G) )
= F (S ) - log ( det (G) )
using Weyl’s inequalities once again, implying that since S T S G, we also have
F (S )2 = F (S T S ) F (G). ,
The upper bound here can also be arbitrarily bad. One just has to consider
G ≔ S T (N · ) S with S 2 = diag (N - 1, ¼, N - 1, (N - 1)-1, ¼, (N - 1)-1) Î Sp (2n).
Then G , i.e.G (G) = 0 , but F (S ) ¥ for N ¥.
where p (n ) was defined in proposition 2.5 as the Lie algebra of the positive semidefinite
symplectic matrices. This infimum can be computed efficiently as a semidefinite programme.
Proof. Recall that the logarithm is operator monotone on positive definite matrices. Using
this, we have:
21
J. Phys. A: Math. Theor. 49 (2016) 445304 M Idel et al
⎧ n ⎫
G (G) = log inf ⎨ li (S T S )1 2 ∣G S T S ⎬
⎩i = 1 ⎭
⎧n ⎫
inf ⎨å log li ( exp (g0))1 2 ∣log G g0, g0 Î p (n)⎬
⎩i = 1 ⎭
⎧1 n ⎫
= inf ⎨ åli (g0)∣log G g0, g0 Î p (n)⎬
⎩ 2 i=1 ⎭
⎧1 2 n ⎫
= inf ⎨ åsi (g0)∣log G g0, g0 Î p (n)⎬.
⎩ 4 i=1 ⎭
The last step is valid, because the eigenvalues of matrices g0 Î p (n ) come in pairs li . Since
the sum of all the singular values is just the trace-norm, we are done.
It remains to see that this can be computed by a semidefinite programme. First note that
since the matrices H Î p (n ) are those symmetric matrices with HJ + JH = 0 , the constraints
are already linear semidefinite matrix inequalities. The trace norm is an SDP by standard
reasoning [RFP10, VB96]:
⎧1 ⎛ A g0 ⎞ ⎫
g0 1 = min ⎨ tr (A + B) ⎜ ⎟ 0⎬
⎩2 ⎝ g0 B ⎠ ⎭
which is clearly a semidefinite programme. ,
Numerics for small dimensions suggest that this bound is mostly smaller than the spectral
lower bounds.
We claim that G answers the question: given a state, what is the minimal amount of single-
mode squeezers needed to prepare it? In other words, it quantifies the amount of squeezing
needed for the preparation of a state.
5.1. Operations for state preparation and an operational measure for squeezing
We first specify the preparation procedure. Since we want to quantify squeezing, it seems
natural that we allow to freely draw states from the vacuum or a thermal bath to start with.
Furthermore, we can perform an arbitrary number of the following operations for free:
(1) Add ancillary states also from a thermal bath or the vacuum.
(2) Add Gaussian noise.
(3) Implement any gate from linear optics.
(4) Perform Weyl-translations of the state.
(5) Perform selective or non-selective Gaussian measurements such as homodyne or
heterodyne detection.
(6) Forget part of the state.
(7) Create convex combinations of ensembles.
In addition, the following operation comes with an associated cost:
(8) Implement single-mode squeezers at a cost of log(s ), where s is the squeezing parameter.
22
J. Phys. A: Math. Theor. 49 (2016) 445304 M Idel et al
All these operations are standard operations in quantum optics and they should capture
all important Gaussian operations except for squeezing.
It is well-known that all of these operations are captured by the following set of
operations on the covariance matrix (for a justification, see appendix D):
(O0) We can always draw N-mode states with g Î 2N ´ 2N for any dimension N from the
vacuum g = or a bath fulfiling g .
(O1) We can always add ancillary modes from the vacuum ganc = or a bath g and
consider g Å ganc .
(O2) We can freely add noise with gnoise 0 to our state, which is simply added to the
covariance matrix of a state.
(O3) We can perform any beam splitter or phase shifter and in general any operation
S Î K (n ), which translates to a map g S T gS on covariance matrices of states.
(O4) We can perform any single-mode squeezer S = diag (1, ¼, 1, s, s-1, 1 ¼ ,1) for
some s Î +.
(O5) We can perform any Weyl-translation leaving the covariance matrix invariant.
(O6) Given two states with covariance matrices g1 and g2 , we can always take their convex
combination pg1 + (1 - p ) g2 for any p Î [0, 1].
(O7) At any point, we can perform a selective measurement of the system corresponding to
a projection into a finitely or infinitely squeezed state. Given a state with covariance
⎛ A B⎞
matrix g = ⎜ T ⎟, this maps
⎝B C⎠
g A - C (B - gG ) MP C T ,
MP
where denotes the Moore–Penrose pseudoinverse.
Only operation (O4) comes at a cost of log(s ), all other operations are free.
We are now ready to state our main theorem, which states that the minimal squeezing
cost for any possible preparation procedure consisting of operations (1)–(8).is given by G.
Theorem 5.1. Let r be a quantum state with covariance matrix g . Consider arbitrary
sequences
gN ≔ g0 g1 gN ,
where g0 fulfils (O0) and every arrow corresponds to an arbitrary operation (O1)–(O5) or
(O7). Using (O6), we can merge two sequences g N1 and g N2 to one resulting tree with
g N1+ N2 + 1 = lg N1 + (1 - l ) g N2 for some l Î (0, 1). Iteratively, we can construct trees of any
depth and width using operations (O1)–(O7).
Let ON (g ) be the set of such trees with N operations ending with γ (i.e. gN = g ).
Let O (g ) = ⋃¥ N = 1 ON (g ).
Furthermore, for any tree gˆ Î ON (g ), let s = {si}iM= 1 be the sequence of the largest
singular values of any single-mode squeezer (O4) implemented along the sequence (in
particular, M N ). Then
⎧ ⎫
G (g ) = inf ⎨å log si∣si Î s , gˆ Î O (g )⎬ . (39)
⎩ i ⎭
23
J. Phys. A: Math. Theor. 49 (2016) 445304 M Idel et al
Since we consider many different operations, the proof is rather lengthy, where the main
difficulties will be in showing that measurements do not squeeze. In order to increase
readability, the proof will be split into several lemmata.
(O1) (O2) ( O 3) , ( O 4) T
g0 g0 Å ganc g0 Å ganc + gnoise S (g0 Å ganc + gnoise) S
(O 7)
(S T (g0 Å ganc + gnoise) S ) (41)
⎛ A C⎞
g= ⎜ ⎟; (g ) = A - C (pBp ) MP C T
⎝C T B ⎠
24
J. Phys. A: Math. Theor. 49 (2016) 445304 M Idel et al
= S T AS - S T C (pBp ) MP C T S
hence the final covariance matrices are the same. By the same reasoning as in (3), the
costs are equivalent.
(6) Any sequence g (g ) (g ) + gnoise can be converted into a sequence
g g + g˜noise (g + g˜noise ) by setting g˜noise = gnoise Å 0 , with 0 on the last mode
being measured. Since no symplectic matrices are involved, the costs are equivalent.
(7) Any sequence g (g ) (g ) Å ganc can be changed into a sequence
g g Å ganc ˜ (g Å g ), where the measurement ̃ measures the last mode of
anc
γ, i.e.
⎛⎛ A C 0 ⎞⎞
⎜ ⎟
M˜ ⎜ ⎜C T B 0 ⎟ ⎟ = (A Å ganc) - (C Å 0)(pBp ) MP (C Å 0)T .
⎜⎜ 0 0 g ⎟⎟
⎝⎝ anc ⎠ ⎠
Clearly, the resulting covariance matrices of the two sequences are the same and the costs
are equivalent.
We can now easily prove the lemma. Let g0 gn be an arbitrary sequence with
operations of type (O1)–(O5) or (O7). We can first move all measurements to the right of the
sequence, i.e.we first perform all operations of type (O1)–(O5) and then all measurements.
This is done using the observations above. Note also that this step is similar to the quantum
circuit idea to ‘perform all measurements last’ (see [NC00], chapter 4).
Similarly, we can combine operations of type (O3) and (O4) and rearrange the other
operations to obtain a new sequence as in equation (41) with at most the costs of the sequence
g1 gm we started with. ,
Proof. First note that for any g is , we can find S Î Sp (2n ), g0 Î 2n ´ 2n with g0 and
gnoise Î 2n ´ 2n with gnoise 0 such that g = S T (g0 + gnoise ) S by using Williamson’s
theorem, hence the feasible set is never empty. The lemma is immediate by observing that for
any g = S T (g0 Å ganc + gnoise ) S since (g0 Å ganc + gnoise ) we have g S T S and
conversely, for any g S T S , defining g0 ≔ S-T gS-1 , we have g = S T g0 S . ,
25
J. Phys. A: Math. Theor. 49 (2016) 445304 M Idel et al
Then we have:
Note here, that equation (45) together with the following proposition 5.5 finishes the
proof of proposition 4.6 via:
G (g ) = inf {G (g˜ )∣g = (g˜ ) , g˜ is} G (g Å a n2) (46)
for a 1, using that measuring the last modes we obtain (g Å a n2) = g and therefore,
g Å a n2 is in the feasible set of G˜ (g ) = G (g ).
G˜ (g ) = G (g ) .
Proof. Using lemma 5.4, the proof of this proposition reduces to the question whether:
inf {F (gˆ 1 2)∣g˜ gˆ is , (g˜ ) = g} = inf {F (g 1 2)∣g g is}. (47)
Since we do not need to use measurements, is obvious.
Let g˜ gˆ is for some (g˜ ) = g . Our first claim is that
g (gˆ ) is (48)
(gˆ ) is is clear from the fact that ĝ is a covariance matrix and a measurement takes states
to states. g (gˆ ) is proved using Schur complements. Let be a Gaussian measurement
as in equation (68) with gG = diag (d , 1 d ) with d Î +. It is well-known that
(g ) = ( Å diag (1 d , d ) g ( Å diag (1 d , d )) + 0 Å 2 )S ,
26
J. Phys. A: Math. Theor. 49 (2016) 445304 M Idel et al
where S denotes the Schur complement of the block in the lower-right corner of the matrix.
For homodyne measurements, we take the limit d ¥. Since for any g˜ gˆ 0 , the Schur
complements of the lower right block fulfil g˜ S gˆ S 0 (see [Bha07], exercise 1.5.7), we
have g (gˆ ) as claimed in equation (48).
Next, we claim
To prove this claim, note that via the monotonicity of the exponential function on , it
suffices to prove
m n
sj ( (gˆ )) sj (gˆ )
j=1 j=1
Now we use Cauchy’s interlacing theorem (see [Bha96], corollary III.1.5): as  is a submatrix
of ĝ , we have li (Aˆ ) li (gˆ ) for all i = 1, ¼, 2m . Since at least m eigenvalues of  are
bigger or equal one and at least n eigenvalues of ĝ are bigger or equal one, this implies
m m m n n
sj (Aˆ ) = lj (Aˆ ) lj (gˆ ) lj (gˆ ) = sj (gˆ ) . (50)
j=1 j=1 j=1 j=1 j=1
27
J. Phys. A: Math. Theor. 49 (2016) 445304 M Idel et al
We have now seen that the measure G can be interpreted as a measure of the amount of
single-mode squeezing needed to create a state ρ. Let us now take a different perspective,
which is the analogue of the entanglement of formation for squeezing: consider covariance
matrices of the form
⎛s 0 ⎞
gs ≔ ⎜ ⎟. (51)
⎝ 0 s-1⎠
These are single-mode squeezed states with squeezing parameter s 1. We will now allow
these states as resources and ask the question: given a (Gaussian) state ρ with covariance
matrix γ, what is the minimal amount of these resources needed to construct γ, if we can
freely transform the state by the same operations as before excluding squeezing ((O1)–(O7)
excluding (O4)).
The corresponding measure is once again G:
Theorem 5.6. Let r be an n -mode state with covariance matrix g Î 2n ´ 2n . Then
⎧m 1 m ⎞⎫
G (g ) = inf ⎨å log (sm)∣g = (⨁gsi⎟ ⎬ , (52)
⎩i = 1 2 i=1 ⎠ ⎭
Proof. : Note that for any feasible S Î Sp (2n ) in G (g ), i.e.any S with S T S g , we can
find O Î Sp (2n ) Ç O (2n ) and D = ⨁in= 1 gsi with S T S = OT DO via the Euler
decomposition. Using that the Euler decomposition minimises F, we have
F (S ) = 2 F (D) = å in= 1 2 log (si ). But then, since we can find gnoise 0 such that
1 1
g = OT ⨁in= 1 gsi O + gnoise , we have that D is a feasible resource state to produce γ. This
implies G resource (g ) G (g ).
: For the other direction, the proof proceeds exactly as the proof of theorem 5.1. First,
we exclude convex combinations. Then, we realise that we can change the order of the
different operations (even if we include adding resource states during any stage of the
preparation process) according to lemma 5.2, making sure that any preparation procedure can
be implemented via:
⎛ ⎛m ⎞ ⎞
g = ⎜O ⎜⨁gsi Å 2m¢ + gnoise ⎟ OT ⎟ ,
⎝ ⎝i = 1 ⎠ ⎠
¢ ¢
where O Î Sp (2m + 2m¢) Ç O (2m + 2m¢), gnoise Î 2m + 2m ´ 2m + 2m with gnoise 0 and
a measurement. Now the only difference to proof of 5.1 is that we had the vacuum instead of
⨁im= 1 gsi Å 2m¢ and an arbitrary symplectic matrix S instead of O, but the two ways of writing
the maps are completely interchangeable, so that the proof proceeds as in theorem 5.1. ,
28
J. Phys. A: Math. Theor. 49 (2016) 445304 M Idel et al
We could call this measure the ‘(Gaussian) squeezing of formation’, as it is the analogue
to the Gaussian entanglement of formation. Note also that the measure is similar to the
Gaussian entanglement of formation as defined in [Wol+04]. One natural further question
would be whether ‘distillation of squeezing’ is possible with Gaussian operations. It is
impossible in some sense for the minimal eigenvalue via [Kra+03], while it is possible and
has been investigated for non-Gaussian states in many papers (see [Fil13, Hee+06] and
references therein). In our case, it is not immediately clear whether extraction of single-mode
squeezed states with less squeezing is possible or not. This could be investigated in
future work.
We have seen that the measure G is operational. However, to be useful, we need a way to
compute it.
1
Proposition 6.1. Let n = 1, then G (G) = - mini log (li (G)) for all G Î 2n ´ 2n .
2
Proof. Note that this is the lower bound in proposition 4.9, hence
- 2 mini log (li (G)) G (G). Now consider the diagonalisation G = O diag (l1, l2 ) OT
1
s-1. Since diag (l1, l2 ) O-T S T SO-1, this implies in particular that s-1 l2 by Weyl’s
inequality. Since F (S T S ) = log s , in order to minimise F (S) over S T S G, we need to
maximize s-1. Setting s-1 = l2 we obtain s = l- 2 l1 and diag (l1, l2 ) diag (s , s ).
1 -1
G (G) = F (S ) = 2 log l-
1 1
2 . ,
Proposition 6.2. Let r be a pure, Gaussian state with covariance matrix G Î 2n ´ 2n .
Then G (G) = F (G1 2).
Proof. From proposition 2.2, we know that det (G) = 1 in particular. Therefore, the bounds
in proposition 4.11 are tight and G (G) = F (G1 2). ,
The crucial observation to numerically find the optimal squeezing measure is given in
lemma 4.4: if we use G in the form of equation (25), we know that the function to be
minimised is convex on . In general, convex optimisation with convex constraints is
efficiently implementable and there is a huge literature on the topic (see [BV04] for an
overview).
In our case, a certain number of problems occur when performing convex optimisation:
(1) The function f in equation (28) is highly nonlinear. It is also not differentiable at
eigenvalue crossings of A + iB or H Î . In particular, it is not differentiable when one
of the eigenvalues becomes zero, which is to be expected at the minimum.
29
J. Phys. A: Math. Theor. 49 (2016) 445304 M Idel et al
(2) While the constraints -1(g ) H and > H > - are linear in matrices, they are
nonlinear in simple parameterisations of matrices.
(3) For γ on the boundary of the set of allowed density operators, the set of feasible solutions
might not have an inner point.
The first and second problem imply that most optimisation methods are unsuitable, as
they are either gradient-based or need more problem structure. It also means that there is no
guarantee for good stability of the solutions. The third problem implies that interior point
methods become unsuitable on the boundary, which limits applications. For instance, our
example of the next section (see equation (53)) lies on the boundary. As a proof of principle
implementation, we used the MATLAB-based solver SOLVOPT (for details see the manual
[KK97]). We believe our implementation could be made more efficient and more stable, but it
seems to work well in most cases for less than ten modes. More information on the pro-
gramme is provided in appendix E.
Let us now work with a particular example that has been studied in the quantum information
literature. In [MK08], Mišta Jrand Korolkova define the following three-parameter group of
three-mode states where the modes are labelled A, B, C :
g = gAB Å C + x (q1 q1T + q2 q2T ) (53)
with
⎛ e 2d a 0 - e 2d c 0 ⎞
⎜ - 2d - ⎟
gAB = ⎜ 02d e a 0 e 2d c ⎟ ,
⎜⎜- e c 0 e 2d a 0 ⎟⎟
⎝ 0 -
e c2 d 0 e-2d a ⎠
q1 = (0, sin f , 0, - sin f , 2, 2 )T ,
q2 = (cos f , 0, cos f , 0 2 , 2 )T ,
where a = cosh (2r ), c = sinh (2r ), tan f = e-2r sinh (2d ) + 1 + e-4r sinh2 (2d ) . The
remaining parameters are d r > 0 and x 0 . For
2 sinh (2r )
x = xsep
e 2d sin2 f + e-2d cos2 f
the state becomes fully separable [MK08]. The state as such is a special case of a bigger
family described in [Gie+01]. In [MK08], it was used to entangle two systems at distant
locations using fully separable mediating ancillas (here the system labelled C). Therefore,
Mišta Jr and Korolkova considered also an LOCC procedure to prepare the state characterised
by (53). For our purposes, this is less relevant and we allow for arbitrary preparations of the
state. This was also done in [MK08] by first preparing modes A and B each in a pure
squeezed-state with position quadratures e 2(d - r ) and e 2(d + r ) . A vacuum mode in C was added
and x (q1 q1T + q2 q2T ) was added as random noise. Therefore, the squeezing needed to produce
this state in this protocol is given by
1
c=log (e 2 (d - r ) · e 2 (d + r ) ) = 2d . (54)
2
We numerically approximated the squeezing measure for gABC , choosing x = xsep , which
leaves a two-parameter family of states. We chose parameters d and r according to
30
J. Phys. A: Math. Theor. 49 (2016) 445304 M Idel et al
31
J. Phys. A: Math. Theor. 49 (2016) 445304 M Idel et al
(3) Define g˜ ≔ S-T gABC S-1 + (1 - min {1, l2n}) . Calculate the largest singular value
of S T g˜ S - g .
If S was a feasible point, then S T g˜ S = g . Since it is obvious how to prepare g̃ with
operations specified in section 5, the largest singular value of S T g˜ S - g is an indicator of how
well we can approximate the state we want to prepare by a state with comparably low
squeezing costs.
The results of the numerical computation are shown in figure 1. We computed the
minimum both with the help of numerical and analytical subgradients and took the value with
a better approximation error. At rare occasions, one algorithm failed to obtain a minimum.
Possible reasons for this are discussed in appendix E. The optimal values computed by the
algorithm are close to the lower bound and a lot better than the upper bound and the costs
obtained by equation (54). One can easily see that gABC cannot achieve the spectral lower
bound as the assumptions of lemma 4.10 are not met.
will not be equally hard to prepare although G (g ) = G (g ¢). This is due to the fact that we
quantified the cost of a single-mode squeezer by log s .
To amend this, one could propose an easy modification to the definition of F in
equation (11):
n
Fg (g ) = å log (g (si (S))) (57)
i=1
by inserting another function g : to make sure that for the corresponding measure
Gg (r ) º Gg (g ), we have Gg (g ) ¹ Gg (g ¢) in equation (56). We pose the following natural
restrictions on g:
• We need g (1) = 1 since Gg (r ) should be zero for unsqueezed states.
• Squeezing should get harder with larger parameter, hence g should be monotonously
increasing.
• For simplicity, we assume g to be differentiable.
Let us first consider squeezing operations and the measure Fg. We proved in proposition
3.3 and theorem 3.5 that F is minimised by the Euler decomposition. A crucial part was given
by lemma 3.4. In order to be useful for applications, we must require the same to be true for
Fg, i.e.
32
J. Phys. A: Math. Theor. 49 (2016) 445304 M Idel et al
n n
å log (g (si (SS¢))) å [log (g (si (S))) + log (g (si (S¢)))].
i=1 i=1
This puts quite strong restraints on g: considering n = 1 and assuming that S and S ¢ are
diagonal with ordered singular values, this implies that g must fulfill g (xy ) g (x ) g ( y) for
x, y 1. This submultiplicativity restraint rules out all interesting classes of functions:
Assume for instance that g (2) = c , then g (2n) c n , where equality is attained if
g (x ) = c · x . Therefore, all submultiplicative functions g(x) for x 1 must lie below
g (x ) = c · x at least periodically. Hence, lemma 3.4 does not hold if we consider increasingly
growing functions g. This implies that one could make the measure arbitrarily small by
splitting the single-mode squeezer into many successive single-mode squeezers with smaller
squeezing parameter, which does not reflect experimental reality.
A way to circumvent the failure of lemma 3.4 would be to work with the ‘squeezing of
formation’ measure. Likewise, one could require that there was only one operation of type
(O4) as specified in section 5 in any preparation procedure. In that case we have:
Proof. The first condition replaces the log -convexity of the Cayley transform in the proof of
theorem 4.3, making the measure convex. Using [Bha96], II.3.5 (v), the second condition
makes sure that equation (50) still holds. The second condition can probably be relaxed while
the proof of theorem 5.1 is still applicable. A function g fulfilling these prerequisites is
g (x ) = exp (x ), which would correspond to a squeezing cost increasing linearly in the
squeezing parameter. One could even introduce a cutoff after which g would be infinite. ,
A simpler way to reflect the problems of equation (56) would be to consider the measures
G and GminEig together (calculating GminEig of both the state and the minimal preparation
procedure in G).
Another problem is associated with the form of the Hamiltonian (1). In the lab, the
Hamiltonians that can be implemented might not be single-mode squeezers, but other
operations such as symmetric two-mode squeezers (e.g. [SZ97], chapter 2.8). It is clear how
to define a measure G¢ for these kinds of squeezers. Using the Euler decomposition, G is a
lower bound to G¢, but we did not investigate this any further.
Acknowledgments
MI thanks Konstantin Pieper for discussions about convex optimisation and Alexander
Müller-Hermes for discussions about MATLAB. MI is supported by the Studienstiftung des
deutschen Volkes.
Let us collect facts about ordinary differential equations needed in the proof:
33
J. Phys. A: Math. Theor. 49 (2016) 445304 M Idel et al
$$\partial_t U(s,t) - U(s,t)A(t) = 0, \qquad U(s,s) = \mathbb{1}, \qquad (59)$$
(8) $U(s,t) \in \mathrm{Sp}(2n)$ for all $t, s \in [0,1]$, and $g(t) = U(0,t)$ fulfils equation (9) with $g(0) = \mathbb{1}$.
Proof. The proof of this (except for the part about $U(s,t) \in \mathrm{Sp}(2n)$) can be found in [Son98] (theorem 55 and lemma C.4.1) for the transposed differential equation $\dot{x}(t) = A(t)x(t)$.
For the last part, note that since $U(s,s) = \mathbb{1} \in \mathrm{Sp}(2n)$, we have $U(s,s)^T J U(s,s) = J$. We can now calculate almost everywhere:
$$\partial_t\bigl(U(t,s)^T J U(t,s)\bigr) = -U(t,s)^T\bigl(A^T(t)J + JA(t)\bigr)U(t,s) = 0,$$
since $A(t) \in \mathfrak{sp}(2n)$ and therefore $A^T(t)J + JA(t) = 0$. But this implies $U(t,s)^T J U(t,s) = J$, hence $U$ is symplectic. Obviously, $U(0,t)$ solves equation (9). □
Lemma A.2. Let $A : [0,1] \to \mathfrak{sp}(2n)$, $A \in L^\infty([0,1], \mathbb{R}^{2n\times 2n})$. Then $A$ can be approximated in $\|\cdot\|_1$-norm by step functions, which we can assume to map to $\mathfrak{sp}(2n)$ without loss of generality.
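To illustrate the two statements above numerically, the following sketch propagates $\dot U = U A(t)$ for a piecewise-constant generator in $\mathfrak{sp}(2n)$ and checks symplecticity of the result (the generator, step lengths and seed are arbitrary choices for illustration):

```matlab
% Propagator of dU/dt = U*A(t), U(0) = identity, for a piecewise-constant generator:
% on each step the solution is a matrix exponential, and the result stays symplectic.
n = 2; rng(0);
J  = [zeros(n) eye(n); -eye(n) zeros(n)];
S1 = randn(2*n); S1 = (S1+S1')/2;       % symmetric matrices ...
S2 = randn(2*n); S2 = (S2+S2')/2;
A1 = J*S1; A2 = J*S2;                   % ... so that A = J*S lies in sp(2n): A'*J + J*A = 0
U  = expm(A1*0.3)*expm(A2*0.7);         % U(1) for step lengths 0.3 and 0.7
disp(norm(U'*J*U - J))                  % close to machine precision: U is symplectic
```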
Proposition B.1. Define the Cayley transform and its inverse via:
$$\mathcal{C} : \{H \in \mathbb{R}^{n\times n} \mid \mathrm{spec}(H) \cap \{+1\} = \emptyset\} \to \mathbb{R}^{n\times n}, \qquad H \mapsto \frac{\mathbb{1}+H}{\mathbb{1}-H}, \qquad (60)$$
$$\mathcal{C}^{-1} : \{S \in \mathbb{R}^{n\times n} \mid \mathrm{spec}(S) \cap \{-1\} = \emptyset\} \to \mathbb{R}^{n\times n}, \qquad S \mapsto \frac{S-\mathbb{1}}{S+\mathbb{1}}. \qquad (61)$$
$\mathcal{C}$ is a diffeomorphism onto its image with inverse $\mathcal{C}^{-1}$. Furthermore, it has the following properties:
(1) $\mathcal{C}$ is operator monotone and operator convex on matrices $A$ with $\mathrm{spec}(A) \subset (-1,1)$.
(2) $\mathcal{C}^{-1}$ is operator monotone and operator concave on matrices $A$ with $\mathrm{spec}(A) \subset (-1,\infty)$.
(3) $\mathcal{C} : \mathbb{R} \to \mathbb{R}$ with $\mathcal{C}(x) = (1+x)/(1-x)$ is log-convex on $[0,1)$.
(4) For $n = 2m$ even and $H \in \mathbb{R}^{2m\times 2m}$: $H \in \mathcal{H}$ if and only if $\mathcal{C}(H) \in \mathrm{Sp}(2m,\mathbb{R})$ and $\mathcal{C}(H) \geq iJ$, where $\mathcal{H}$ is defined via:
$$\mathcal{H} = \left\{ H = \begin{pmatrix} A & B \\ B & -A \end{pmatrix} \in \mathbb{R}^{2m\times 2m} \;\middle|\; A^T = A,\ B^T = B,\ \mathrm{spec}(H) \subset (-1,1) \right\}.$$
The definition and the fact that this maps the upper half plane of positive definite matrices to matrices inside the unit circle is present in [AG88] (I.4.2) and [MS98] (proposition 2.51, proof 2). Since no proof is given in the references and they do not cover the whole proposition, we provide them here.
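As a quick numerical illustration of property (4) (a sketch; the matrix size, random seed and rescaling factor are arbitrary choices):

```matlab
% Build a random H in the set of proposition B.1(4) and check that its
% Cayley transform is symplectic and >= iJ (up to rounding).
m = 3; rng(0);
A = randn(m); A = (A+A')/2;                  % symmetric blocks
B = randn(m); B = (B+B')/2;
H = [A B; B -A];
H = 0.9*H/max(abs(eig(H)));                  % rescale so that spec(H) lies in (-1,1)
S = (eye(2*m)+H)/(eye(2*m)-H);               % Cayley transform of H
J = [zeros(m) eye(m); -eye(m) zeros(m)];
disp(norm(S'*J*S - J))                       % close to machine precision: S is symplectic
M = S - 1i*J; M = (M+M')/2;                  % Hermitian part guards against rounding
disp(min(eig(M)))                            % >= 0 up to rounding: S >= iJ
```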
We start with well-definedness:
Lemma B.2. $\mathcal{C}$ and $\mathcal{C}^{-1}$ are well-defined and inverses of each other. Moreover, $\mathcal{C}$ is a diffeomorphism onto its image $\mathrm{dom}(\mathcal{C}^{-1})$.
Writing $H$ in Jordan normal form, the claim can thus be checked block-wise on Jordan blocks $J(n_i, 1+\lambda_i)$ and $J(n_i, 1-\lambda_i)$. For the inverse of the Jordan blocks, we can use the well-known formula:
$$J(n_i, 1-\lambda_i)^{-1} = \begin{pmatrix} 1-\lambda_i & 1 & \cdots & 0 \\ 0 & 1-\lambda_i & \ddots & \vdots \\ \vdots & & \ddots & 1 \\ 0 & 0 & \cdots & 1-\lambda_i \end{pmatrix}^{-1} = \begin{pmatrix} \dfrac{1}{1-\lambda_i} & \dfrac{-1}{(1-\lambda_i)^2} & \cdots & \dfrac{(-1)^{n_i-1}}{(1-\lambda_i)^{n_i}} \\ 0 & \dfrac{1}{1-\lambda_i} & \cdots & \dfrac{(-1)^{n_i-2}}{(1-\lambda_i)^{n_i-1}} \\ \vdots & & \ddots & \vdots \\ 0 & 0 & \cdots & \dfrac{1}{1-\lambda_i} \end{pmatrix}.$$
In particular, this is still upper triangular. Then $J(n_i, 1+\lambda_i)\,J(n_i, 1-\lambda_i)^{-1}$ is still upper triangular with diagonal entries $(1+\lambda_i)/(1-\lambda_i)$. Since $(1+\lambda_i)/(1-\lambda_i) \neq -1$ for all eigenvalues $\lambda_i$, we find that $J(n_i, 1+\lambda_i)\,J(n_i, 1-\lambda_i)^{-1}$ cannot have eigenvalue $-1$ for any $i$, hence $\mathrm{spec}(\mathcal{C}(H)) \cap \{-1\} = \emptyset$.
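For the smallest nontrivial block size $n_i = 2$, for example, the product reads explicitly
$$J(2, 1+\lambda)\,J(2, 1-\lambda)^{-1} = \begin{pmatrix} 1+\lambda & 1 \\ 0 & 1+\lambda \end{pmatrix}\begin{pmatrix} \dfrac{1}{1-\lambda} & \dfrac{-1}{(1-\lambda)^2} \\ 0 & \dfrac{1}{1-\lambda} \end{pmatrix} = \begin{pmatrix} \dfrac{1+\lambda}{1-\lambda} & \dfrac{-2\lambda}{(1-\lambda)^2} \\ 0 & \dfrac{1+\lambda}{1-\lambda} \end{pmatrix},$$
which is upper triangular with the claimed diagonal entries.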
Finally, we observe:
$$\mathcal{C}^{-1}\bigl(\mathcal{C}(H)\bigr) = \frac{\dfrac{\mathbb{1}+H}{\mathbb{1}-H} - \mathbb{1}}{\dfrac{\mathbb{1}+H}{\mathbb{1}-H} + \mathbb{1}} = \frac{\mathbb{1}+H - (\mathbb{1}-H)}{\mathbb{1}+H + (\mathbb{1}-H)} = H.$$
Moreover, set $f_1(A) = -2A - \mathbb{1}$ for all matrices $A \in \mathbb{R}^{m\times m}$, $f_2(A) = A^{-1}$ for all invertible matrices $A \in \mathbb{R}^{m\times m}$ and $f_3(A) = A - \mathbb{1}$ for all matrices $A \in \mathbb{R}^{m\times m}$. Then we have
$$f_1 \circ f_2 \circ f_3(H) = f_1 \circ f_2(H - \mathbb{1}) = f_1\!\left(\frac{\mathbb{1}}{H-\mathbb{1}}\right) = -\frac{2\mathbb{1}}{H-\mathbb{1}} - \mathbb{1} = \mathcal{C}(H). \qquad (64)$$
Since the $f_i$ are differentiable for all $i = 1, 2, 3$, we have that $\mathcal{C}$ is differentiable.
The same considerations with a few signs reversed also lead us to conclude that $\mathcal{C}^{-1}$ is well-defined and indeed the inverse of $\mathcal{C}$. We can similarly decompose $\mathcal{C}^{-1}$ to show that it is differentiable, making $\mathcal{C}$ a diffeomorphism. Here, we define $g_1(A) = 2A + \mathbb{1}$ for all $A \in \mathbb{R}^{m\times m}$, $g_2(A) = -A^{-1}$ for all invertible $A \in \mathbb{R}^{m\times m}$ and $g_3(A) = A + \mathbb{1}$ for all $A \in \mathbb{R}^{m\times m}$. A quick calculation shows
$$g_1 \circ g_2 \circ g_3(S) = \mathcal{C}^{-1}(S). \qquad (65)$$
□
Here, $H < \mathbb{1}$ means that $\mathbb{1} - H$ is positive definite (not just positive semidefinite). We can then prove the Cayley trick:
For $H = \begin{pmatrix} A & B \\ B & -A \end{pmatrix} \in \mathcal{H}$ we first note
$$HJ = \begin{pmatrix} A & B \\ B & -A \end{pmatrix}\begin{pmatrix} 0 & \mathbb{1} \\ -\mathbb{1} & 0 \end{pmatrix} = \begin{pmatrix} -B & A \\ A & B \end{pmatrix} = -\begin{pmatrix} 0 & \mathbb{1} \\ -\mathbb{1} & 0 \end{pmatrix}\begin{pmatrix} A & B \\ B & -A \end{pmatrix} = -JH.$$
Then we can calculate:
$$(\mathbb{1}+H)(\mathbb{1}-H)^{-1}J = -(\mathbb{1}+H)\bigl(J(\mathbb{1}-H)\bigr)^{-1} = -(\mathbb{1}+H)\bigl((\mathbb{1}+H)J\bigr)^{-1} = (\mathbb{1}+H)J(\mathbb{1}+H)^{-1} = J(\mathbb{1}-H)(\mathbb{1}+H)^{-1},$$
hence $\mathcal{C}(H)J = J\mathcal{C}(H)^{-1}$ and as $\mathcal{C}(H)$ is Hermitian, we have $\mathcal{C}(H)^T J\, \mathcal{C}(H) = J$ and $\mathcal{C}(H)$ is symplectic. Via corollary 2.10, as $\mathcal{C}(H)$ is symplectic and positive definite, we can conclude that $\mathcal{C}(H) \geq iJ$.
Conversely, let $S \in \mathrm{Sp}(2n)$ and $S \geq iJ$. Then $S \geq -iJ$ by complex conjugation and $S \geq 0$ after averaging the two inequalities. Since any element of $\mathrm{Sp}(2n)$ is invertible, this implies $S > 0$. From this we obtain:
$$\frac{S-\mathbb{1}}{S+\mathbb{1}} > -\mathbb{1} \quad \text{as } S+\mathbb{1} > \mathbb{1}, \qquad\qquad \frac{S-\mathbb{1}}{S+\mathbb{1}} < \mathbb{1} \quad \text{always}.$$
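Both bounds can be read off directly from the identity (using that $S + \mathbb{1} > 0$ is invertible):
$$\frac{S-\mathbb{1}}{S+\mathbb{1}} = \mathbb{1} - 2(S+\mathbb{1})^{-1} \qquad \text{with} \qquad 0 < (S+\mathbb{1})^{-1} < \mathbb{1}.$$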
Write $(S-\mathbb{1})(S+\mathbb{1})^{-1} = \begin{pmatrix} A & B \\ C & D \end{pmatrix}$. As $S$ is Hermitian, $A^T = A$, $C = B^T$ and $D^T = D$. We have on the one hand (using $JS = S^{-T}J$ and $S = S^T$)
$$\frac{S-\mathbb{1}}{S+\mathbb{1}}\,J = (S-\mathbb{1})\bigl(-S^{-T}J - J\bigr)^{-1} = (S-\mathbb{1})(-J)^{-1}\bigl(S^{-T}+\mathbb{1}\bigr)^{-1} = (SJ - J)\bigl(S^{-T}+\mathbb{1}\bigr)^{-1} = J\bigl(S^{-T}-\mathbb{1}\bigr)\bigl(S^{-T}+\mathbb{1}\bigr)^{-1} = -J\,\frac{S-\mathbb{1}}{S+\mathbb{1}}$$
and on the other hand
$$\begin{pmatrix} A & B \\ B^T & D \end{pmatrix} J = \begin{pmatrix} -B & A \\ -D & B^T \end{pmatrix}, \qquad -J\begin{pmatrix} A & B \\ B^T & D \end{pmatrix} = \begin{pmatrix} -B^T & -D \\ A & B \end{pmatrix}.$$
Put together this implies $B = B^T$ and $D = -A$, hence $\mathcal{C}^{-1}(S) \in \mathcal{H}$, which is what we claimed. □
Proposition B.4. The Cayley transform $\mathcal{C}$ is operator monotone and operator convex on the set of $A = A^T \in \mathbb{R}^{m\times m}$ with $\mathrm{spec}(A) \subset (-1,1)$. $\mathcal{C}^{-1}$ is operator monotone and operator concave on the set of $A = A^T \in \mathbb{R}^{m\times m}$ with $\mathrm{spec}(A) \subset (-1,\infty)$.
Proof. Recall equation (64) and the definition of $f_1, f_2, f_3$. $f_1$ and $f_3$ are affine and thus for all $X \geq Y$: $f_3(X) \geq f_3(Y)$ and $f_1(X) \leq f_1(Y)$. For $X \geq Y > 0$, we also have $f_2(Y) \geq f_2(X) > 0$ since matrix inversion is antimonotone. Now let $-\mathbb{1} \leq Y \leq X < \mathbb{1}$; then $-2\mathbb{1} \leq f_3(Y) \leq f_3(X) < 0$, hence $f_2 \circ f_3(X) \leq f_2 \circ f_3(Y) \leq -\tfrac{1}{2}\mathbb{1} < 0$, and finally $\mathcal{C}(X) \geq \mathcal{C}(Y) \geq 0$, proving monotonicity of $\mathcal{C}$. Similarly, one can prove that $\mathcal{C}^{-1}$ is monotonous using equation (65).
For the convexity of $\mathcal{C}$, we note that since $f_1, f_3$ are affine they are both convex and concave. It is well-known that $1/x$ is operator convex for positive definite and operator concave for negative definite matrices (to prove this, consider convexity/concavity of the
functions $\langle\psi, X^{-1}\psi\rangle$ for all $\psi$). It follows that for $H < \mathbb{1}$ we have $f_3(H) < 0$, hence $f_2 \circ f_3$ is operator concave on $\{H \mid H < \mathbb{1}\}$. As $f_1(A) = -2A - \mathbb{1}$, this implies that $\mathcal{C} = f_1 \circ f_2 \circ f_3$ is operator convex.
For the concavity of $\mathcal{C}^{-1}$, recall equation (65) and the definitions of $g_1, g_2, g_3$. Then, given $-\mathbb{1} < X$, we have that $g_3(X)$ is positive definite and concave as an affine map. $g_2$ is concave on positive definite matrices, as $1/x$ is convex and multiplication by $(-1)$ is order-reversing, hence $-1/x$ is concave on positive definite matrices. Since $g_1$ is concave (and order preserving) as an affine map, $g_1 \circ g_2 \circ g_3 = \mathcal{C}^{-1}$ is operator concave for all $-\mathbb{1} < X$. □
Proof. We need to see that the function $h(x) = \log\frac{1+x}{1-x}$ is convex for $x \in [0,1)$. Since $h$ is differentiable on $[0,1)$, this is true iff the second derivative is non-negative: from $h'(x) = \frac{1}{1+x} + \frac{1}{1-x} = \frac{2}{1-x^2}$ we obtain
$$h''(x) = \frac{4x}{(1-x^2)^2},$$
which is clearly non-negative on $[0,1)$; hence $h$ is convex and $x \mapsto \frac{1+x}{1-x}$ is therefore log-convex. □
Here, we provide some definitions and lemmata from set-valued analysis for the reader's convenience. This branch of mathematics deals with functions $f : X \to 2^Y$ where $X$ and $Y$ are topological spaces and $2^Y$ denotes the power set of $Y$.
In order to state the results of interest to us we define:
Note that the definitions are valid in all topological spaces, but we only need the case of finite-dimensional normed vector spaces. Using the metric, we can give the following characterisation of upper semicontinuity:
Proposition C.3 ([DR79]). Let $Y$ be a complete metric space, $X$ a topological space and $f : X \to 2^Y$ a compact-valued set-valued function. The following statements are equivalent:
• $f$ is upper semicontinuous at $x_0$.
• for each closed $K \subseteq Y$, $K \cap f$ is upper semicontinuous at $x_0$.
An interesting question would be whether the converse is also true. Even if $f(x)$ is always convex, this need not be the case if $K \cap f(x_0)$ has empty interior, as simple counterexamples show. In case the interior is non-empty, another classic result guarantees a converse in many cases:
Proposition C.4 ([Mor75]). Let $X$ be a compact interval and $Y$ a normed space. Let $f : X \to 2^Y$ and $g : X \to 2^Y$ be two convex-valued set-valued functions. Suppose that $\mathrm{diam}(f(t) \cap g(t)) < \infty$ and $f(t) \cap \mathrm{int}(g(t)) \neq \emptyset$ for all $t$. Then if $f, g$ are continuous (in the sense above) so is $f \cap g$.
In this section, we give a justification of why the operations (O0)–(O7) are enough to implement all operations described in section 5. All of this is known, albeit scattered throughout the literature, hence we collect it here.
In order to prepare a state, we could start with the vacuum $\gamma = \mathbb{1}$ or alternatively a thermal state of some bath ($\gamma = (1 + 2N)\mathbb{1}$ with photon number $N$, see e.g. [Oli12]). Of course, we should be able to draw arbitrary ancillary modes of this system, too. The effect of Gaussian noise on the covariance matrix is given in [Lin00]. Since any such $\gamma$ can be decomposed as $\gamma = \mathbb{1} + \gamma_{\mathrm{noise}}$, this implies that the operations (O0)–(O2) are enough to implement all operations 1. and 2.
As with other squeezing measures, passive transformations should not change the squeezing measure, while single-mode squeezers are not free. The effect of symplectic transformations on the covariance matrix has already been observed in equation (10), hence (O3) and (O4) implement operations 3. and 8.
Since we have the Weyl system at our disposal, we can also consider its action on a quantum state (translation in phase space). Direct computation shows that it does not affect the covariance matrix. Including it as operation (O5) is beneficial if we consider a convex combination of states. In an experiment, such a convex combination can be realised by preparing ensembles of the constituent states and mixing them in the ratio prescribed by the convex combination. On the level of covariance matrices, we have the following lemma:
Lemma D.1. Let $\rho$ and $\rho'$ be two states with displacements $d_\rho$ and $d_{\rho'}$ and (centred) covariance matrices $\gamma_\rho$ and $\gamma_{\rho'}$. For $\lambda \in (0,1)$, the covariance matrix of $\tilde\rho := \lambda\rho + (1-\lambda)\rho'$ is given by:
$$\gamma_{\tilde\rho} = \lambda\gamma_\rho + (1-\lambda)\gamma_{\rho'} + 2\lambda(1-\lambda)(d_\rho - d_{\rho'})(d_\rho - d_{\rho'})^T.$$
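For orientation, the formula can be checked from the moments of the mixture, assuming the convention that $\gamma$ is twice the covariance matrix of the Wigner function and $d$ its mean (an assumption on normalisation consistent with the vacuum being $\gamma = \mathbb{1}$): first and second moments mix linearly,
$$\tfrac{1}{2}\gamma_{\tilde\rho} + d_{\tilde\rho}d_{\tilde\rho}^{T} = \lambda\Bigl(\tfrac{1}{2}\gamma_{\rho} + d_{\rho}d_{\rho}^{T}\Bigr) + (1-\lambda)\Bigl(\tfrac{1}{2}\gamma_{\rho'} + d_{\rho'}d_{\rho'}^{T}\Bigr), \qquad d_{\tilde\rho} = \lambda d_{\rho} + (1-\lambda)d_{\rho'},$$
and expanding $d_{\tilde\rho}d_{\tilde\rho}^{T}$ and collecting terms yields exactly the cross term $2\lambda(1-\lambda)(d_\rho - d_{\rho'})(d_\rho - d_{\rho'})^{T}$.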
A proof of this statement can be found in [WW01] (in the proof of proposition 1). Note that for centred states with $d_\rho = 0$ and $d_{\rho'} = 0$, a convex combination of states translates to a convex combination of covariance matrices. Since in particular $2\lambda(1-\lambda)(d_\rho - d_{\rho'})(d_\rho - d_{\rho'})^T \geq 0$, any convex combination of $\rho$ and $\rho'$ is, on the level of covariance matrices, equivalent to
• centring the states (no change in the covariance matrices),
• taking a convex combination of the states (resulting in a convex combination of covariance matrices),
• performing a Weyl translation to undo the centring in the first step (no change in the covariance matrix),
• adding noise $2\lambda(1-\lambda)(d_\rho - d_{\rho'})(d_\rho - d_{\rho'})^T \geq 0$.
This implies that the effect of any convex combination of states (operation 4) on the
covariance matrix can equivalently be obtained from operations (O2), (O5) and (O6). Finally,
we consider measurements. Homodyne detection is the measurement of Q or P in one of the
modes, which corresponds to the measurement of an infinitely squeezed pure state in
lemma D.2. A broader class of measurements known as heterodyne detection measures
arbitrary coherent states [Wee+12]. Let us focus our attention on the even broader class of
projections onto Gaussian pure states.
Lemma D.2. Let $\rho$ be an $(n+1)$-mode quantum state with covariance matrix $\gamma$ and let $|\gamma_G, d\rangle\langle\gamma_G, d|$ be a pure single-mode Gaussian state with covariance matrix $\gamma_G \in \mathbb{R}^{2\times 2}$ and displacement $d$. Let
$$\gamma = \begin{pmatrix} A & C \\ C^T & B \end{pmatrix}, \qquad B \in \mathbb{R}^{2\times 2};$$
then the selective measurement of $|\gamma_G, d\rangle$ in the last mode results in a change of the covariance matrix of $\rho$ according to:
$$\gamma' = A - C(B - \gamma_G)^{\mathrm{MP}}C^T, \qquad (68)$$
where $(\cdot)^{\mathrm{MP}}$ denotes the Moore–Penrose pseudoinverse.
This can most easily be seen on the level of Wigner functions, as demonstrated in [ESP02, GIC02]. The generalisation to multiple modes is straightforward.
Since the covariance matrix of a Gaussian pure state is a symplectic matrix (see proposition 2.2), using the Euler decomposition we can implement a selective Gaussian measurement by
(1) a passive symplectic transformation $S \in K(n+1)$,
(2) a measurement in the Gaussian state $\mathrm{diag}(d, 1/d)$ for some $d \in \mathbb{R}_+$ according to lemma D.2.
A non-selective measurement (forgetting the information obtained from the measurement) would then be a convex combination of such projected states. A measurement of a multi-mode state can be seen as successive measurements of single-mode states, since the Gaussian states we measure are diagonal.
For homodyne detection, since an infinitely squeezed single-mode state is given by the covariance matrix $\lim_{d\to\infty}\mathrm{diag}(1/d, d)$, we have
$$\gamma' = \lim_{d\to\infty}\Bigl(A - C\bigl(B - \mathrm{diag}(1/d, d)\bigr)^{-1}C^T\Bigr) = A - C(\pi B\pi)^{\mathrm{MP}}C^T, \qquad (69)$$
where $\pi = \mathrm{diag}(1,0)$ is the projection onto the measured quadrature.
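As a numerical sanity check of this limit (a sketch; the two-mode covariance matrix below is just an arbitrary valid example):

```matlab
% Check that the d -> infinity limit in equation (69) reproduces the
% pseudoinverse expression A - C*(pi*B*pi)^MP*C' for a random two-mode state.
rng(0);
L = randn(4); gamma = L*L' + eye(4);            % a valid covariance matrix (gamma >= 1 >= iJ)
A = gamma(1:2,1:2); B = gamma(3:4,3:4); C = gamma(1:2,3:4);
d = 1e8;                                        % large squeezing parameter
g_limit = A - C/(B - diag([1/d, d]))*C';        % finite-d expression of equation (69)
P = diag([1 0]);                                % projection onto the measured quadrature
g_hom   = A - C*pinv(P*B*P)*C';                 % homodyne formula with Moore-Penrose inverse
disp(norm(g_limit - g_hom))                     % small, and vanishes as d -> infinity
```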
Lemma D.3. Given a covariance matrix $\gamma = \begin{pmatrix} A & C \\ C^T & B \end{pmatrix}$, a partial trace over the second system translates to the map $\gamma \mapsto A$. The partial trace can then be implemented by measurements and adding noise.
Proof. When measuring the modes $B$, we note that since $C(\pi B\pi)^{\mathrm{MP}}C^T \geq 0$ in equation (69), a partial trace is equivalent to first performing a homodyne detection on the $B$-modes of the system and then adding noise. □
Given the discussion above, lemmas D.2 and D.3 put together imply: on the level of covariance matrices, in order to allow for general Gaussian measurements, it suffices to consider Gaussian measurements of the state $|\gamma_d, 0\rangle\langle\gamma_d, 0|$ with covariance matrix $\gamma_d = \mathrm{diag}(1/d, d)$ for $d \in \mathbb{R}_+ \cup \{+\infty\}$. All Gaussian measurements are then just combinations of these special measurements and operations (O1)–(O6).
The programme tries to minimise the function $f$ defined in equation (28) over the corresponding feasible set. Throughout, suppose we are given a covariance matrix $\gamma$.
Let us first describe the implementation of $f$: as parameterisation of the feasible set, we choose the simplest parameterisation such that for matrices with symplectic eigenvalues larger than one, the set of feasible points has non-empty interior: we parameterise $A, B$ via matrix units $E_i, E_{jk}$ with $i \in \{1,\ldots,n\}$ and $j < k \in \{1,\ldots,n\}$, where $(E_i)_{jk} = \delta_{ij}\delta_{ik}$ and $(E_{jk})_{lm} = \delta_{jl}\delta_{km} + \delta_{jm}\delta_{kl}$. This parameterisation might not be very robust, but it is good enough for our purpose. Instead of working with complex parameters, we compute $s_i(A + iB)$ as $\lambda_i(H)$ for the matrix
$$H = \begin{pmatrix} A & B \\ B & -A \end{pmatrix}. \qquad (70)$$
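To illustrate this identification numerically (a sketch; matrix size, seed and the descending sorting convention are our choices):

```matlab
% Singular values of A + iB versus eigenvalues of H = [A B; B -A]
% for symmetric A, B: the n largest eigenvalues of H coincide with s_i(A+iB).
n = 3; rng(0);
A = randn(n); A = (A+A')/2;
B = randn(n); B = (B+B')/2;
H = [A B; B -A];
s   = sort(svd(A + 1i*B), 'descend');
lam = sort(eig(H), 'descend');
disp([s, lam(1:n)])                     % the two columns agree up to rounding
```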
The evaluation of $f$ is done in function OBJECTIVE.M. Since $f$ is not convex for $(A, B)$ with the corresponding $H$ having eigenvalues outside $(-1, 1)$, the function first checks whether this constraint is satisfied and otherwise outputs a value that is $10^7$ times larger than the value of the objective function at the starting point.
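A minimal sketch of the guard just described, assuming the objective is passed in as a function handle (the actual OBJECTIVE.M may differ in its interface; the function name here is hypothetical, and $f$ itself is the one of equation (28), which we do not restate):

```matlab
% Guarded objective evaluation: outside the region where f is convex,
% return a large fallback value (10^7 times f at the starting point).
function val = objective_guarded(AB_to_H, theta, f_handle, f_at_start)
    H = AB_to_H(theta);                       % assemble H = [A B; B -A] from the parameters
    if max(abs(eig(H))) >= 1                  % eigenvalues outside (-1,1): f not convex there
        val = 1e7 * f_at_start;
    else
        val = f_handle(H);                    % evaluate the actual objective
    end
end
```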
The constraints are implemented in function MAXRESIDUAL.M. Via symmetry of the spectrum, it is enough to check that for any $H$ tested, $\lambda_{2n}(H) \geq -1$. The second constraint is given by $\mathcal{C}^{-1}(\gamma) \leq H$ and this is tested by computing the smallest eigenvalue of the difference.
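A minimal sketch of constraint residuals of this kind (a hypothetical helper, not the actual MAXRESIDUAL.M; ordering and sign conventions may differ):

```matlab
% Constraint residuals: spec(H) within [-1,1] (checked on one side, by symmetry)
% and C^{-1}(gamma) <= H, checked via the smallest eigenvalue of the difference.
function r = max_residual_sketch(H, gamma)
    lam  = eig(H);
    r1   = -1 - min(lam);                    % positive if an eigenvalue drops below -1
    I    = eye(size(gamma));
    Cinv = (gamma - I)/(gamma + I);          % Cayley transform C^{-1}(gamma)
    D    = H - Cinv; D = (D + D')/2;         % symmetrise against rounding
    r2   = -min(eig(D));                     % positive if C^{-1}(gamma) <= H is violated
    r    = max([r1, r2, 0]);                 % positive value signals a violated constraint
end
```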
The function which is most important for users is MINIMUM.M, which takes a covariance matrix $\Gamma \geq iJ$, its dimension $n$ and a number of options as arguments and outputs the minimum. Note that the programme checks whether the covariance matrix is valid. For the minimisation, we use the MATLAB-based solver SOLVOPT ([KK97], latest version 1.1). SOLVOPT uses a subgradient-based method and the method of exact penalisation to compute (local) minima. For convex programming, any minimum found by the solver is therefore an absolute minimum. In order to work, the objective function may fail to be differentiable only on a set of measure zero, and it is allowed to be non-differentiable at the minimum. Since $f$ is differentiable for all $H$ with non-degenerate eigenvalues, this condition is met. In addition, SOLVOPT needs $f$ to be defined everywhere, as it is not an interior point method. Since $f$ is well-defined, but not convex, for $H$ outside the feasible set with $\mathrm{spec}(H) \cap \{1\} = \emptyset$, we remedy this by changing the output of OBJECTIVE.M to be very large when $H$ lies outside the feasible set, as described above. Constraints are handled via the method of exact penalisation. We used SOLVOPT's algorithm to compute the penalisation functions on its own.
It is possible (and for speed purposes advisable) to implement analytical gradients of both the objective and the constraint functions. Following [Mag85], for diagonalisable matrices $A$ with no eigenvalue multiplicities, the derivative of an eigenvalue $\lambda_i(A)$ is given by:
$$\partial_E \lambda_i(A) = v_i(A)^T\, \partial_E A\, v_i(A), \qquad (71)$$
where $v_i(A)$ is the eigenvector corresponding to $\lambda_i(A)$ and $\partial_E A = \lim_{h\to 0}(A + hE - A)/h = E$. Luckily, if $\lambda_i$ is not differentiable, this provides at least one subgradient. An easy calculation shows that a subgradient of the objective function $f$ for matrices $H$ with $-\mathbb{1} < H < \mathbb{1}$ in the parameterisation of the matrix units $E_{ij}$ is given by
$$(\nabla f)_i = \sum_{j=1}^{n} \frac{\partial_i \lambda_j(H)}{(1+\lambda_j(H))(1-\lambda_j(H))^2} = \sum_{j=1}^{n} \frac{v_j^T F^{(i)} v_j}{(1+\lambda_j(H))(1-\lambda_j(H))^2} \qquad (72)$$
with $F^{(i)}$ being the matrices corresponding to the chosen parameterisation. The gradient of the constraint function is very similar and given by equation (71) for $A = \gamma - H$ or $A = \mathbb{1} - H$, depending on which constraint is violated. This is implemented in functions OBJECTIVEGRAD.M and MAXRESIDUALGRAD.M.
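As a small numerical check of equation (71) (illustrative only; matrix size, seed and the chosen eigenvalue index are arbitrary):

```matlab
% Compare the analytic eigenvalue derivative v_i' * E * v_i with a central
% finite difference of the i-th (sorted) eigenvalue along the direction E.
n = 4; rng(1);
A = randn(n); A = (A+A')/2;             % symmetric matrix with (generically) simple spectrum
E = randn(n); E = (E+E')/2;             % symmetric perturbation direction
[V, D] = eig(A); [~, idx] = sort(diag(D)); V = V(:, idx);
i = 2; h = 1e-6;
analytic = V(:,i)'*E*V(:,i);            % equation (71) with d_E A = E
ep = sort(eig(A + h*E)); em = sort(eig(A - h*E));
numeric  = (ep(i) - em(i))/(2*h);       % central finite difference
disp([analytic, numeric])               % the two values agree up to discretisation error
```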
References
[AG88] Arnol’d V I and Givental’ A B 1988 Symplectic geometry Dynamical Systems IV (Berlin:
Springer)
[And+15] Andersen U L et al 2016 30 years of squeezed light generation Phys. Scr. 91 053001
[ARL14] Adesso G, Ragy S and Lee A R 2014 Continuous variable quantum information: Gaussian
states and beyond Open Syst. Inf. Dyn. 21 1440001
[Arv+95a] Arvind et al 1995 The real symplectic groups in quantum mechanics and optics Pramana
45 471–97
[Arv+95b] Arvind et al 1995 Two-mode quantum systems: invariant classification of squeezing
transformations and squeezed states Phys. Rev. A 52 1609–20
[ASI04] Adesso G, Serafini A and Illuminati F 2004 Extremal entanglement and mixedness in
continuous variable systems Phys. Rev. A 70 022318
[Bha07] Bhatia R 2007 Positive Definite Matrices (Princeton, NJ: Princeton University Press)
[Bha96] Bhatia R 1996 Matrix Analysis (Berlin: Springer)
[BL05] Braunstein S L and van Loock P 2005 Quantum information with continuous variables Rev.
Mod. Phys. 77 513–77
[Bra05] Braunstein S L 2005 Squeezing as an irreducible resource Phys. Rev. A 71 055801
[BV04] Boyd S and Vandenberghe L 2004 Convex Optimization (New York: Cambridge University
Press) ISBN 0521833787
[DR79] Dolecki S and Rolewicz S 1979 Metric characterizations of upper semicontinuity J. Math.
Anal. Appl. 69 146–52
[ESP02] Eisert J, Scheel S and Plenio M B 2002 Distilling Gaussian states with Gaussian operations is
impossible Phys. Rev. Lett. 89 137903
[Fil13] Filip R 2013 Distillation of quantum squeezing Phys. Rev. A 88 063837
[GB14] Grant M and Boyd S 2014 CVX: Matlab Software for Disciplined Convex Programming,
version 2.1 (http://cvxr.com/cvx)
[GB08] Grant M and Boyd S 2008 Graph implementations for nonsmooth convex programs Recent
Advances in Learning and Control (Lecture Notes in Control and Information Sciences) ed
V Blondel, S Boyd and H Kimura (Springer) pp 95–110
[GIC02] Giedke G and Cirac J I 2002 Characterization of Gaussian operations and distillation of
Gaussian states Phys. Rev. A 66 032316
[Gie+01] Giedke G et al 2001 Separability properties of three-mode Gaussian states Phys. Rev. A 64
052303
[Gos06] de Gosson M A 2006 Symplectic Geometry and Quantum Mechanics (Operator Theory:
Advances and Applications/Advances in Partial Differential Equations) (Basel: Birkhäuser) ISBN
9783764375751
[Hee+06] Heersink J et al 2006 Distillation of squeezing from non-Gaussian quantum states Phys. Rev.
Lett. 96 253601
[KK97] Kuntsevich A and Kappel F 1997 SolvOpt: the solver for local nonlinear optimization
problems (manual) (Institute for Mathematics, Karl-Franzens University of Graz)
[KL10] Kok P and Lovett B W 2010 Introduction to Optical Quantum Information Processing
(Cambridge: Cambridge University Press)
[Kok+07] Kok P et al 2007 Linear optical quantum computing with photonic qubits Rev. Mod. Phys.
79 135–74
[Kra+03] Kraus B et al 2003 Entanglement generation and Hamiltonian simulation in continuous-
variable systems Phys. Rev. A 67 042314
[Lee88] Lee C T 1988 Wehrlʼs entropy as a measure of squeezing Opt. Commun. 66 52–4
[LGW13] Lercher D, Giedke G and Wolf M M 2013 Standard super-activation for Gaussian channels
requires squeezing New J. Phys. 15 123003
[Lin00] Lindblad G 2000 Cloning the quantum oscillator J. Phys. A: Math. Gen. 33 5059
[Lvo15] Lvovsky A I 2015 Squeezed light Photonics Volume 1: Fundamentals of Photonics and
Physics ed D Andrews (New York: Wiley) pp 121–164
[Mag85] Magnus J R 1985 On differentiating eigenvalues and eigenvectors Econometric Theor. 1
179–91
[MK08] Mišta L and Korolkova N 2008 Distribution of continuous-variable entanglement by separable
Gaussian states Phys. Rev. A 77 050302
[Mor75] Moreau J J 1975 Intersection of moving convex sets in a normed space Math. Scand. 36
159–73
[MS98] McDuff D and Salamon D 1998 Introduction to Symplectic Topology (Oxford: Oxford
University Press)
[NC00] Nielsen M and Chuang I 2000 Quantum Computation and Quantum Information (Cambridge:
Cambridge University Press) (doi:10.1017/CBO9780511976667)
[Oli12] Olivares S 2012 Quantum optics in the phase space Eur. Phys. J. Spec. Top. 203 3–24
[Rec+94] Reck M et al 1994 Experimental realization of any discrete unitary operator Phys. Rev. Lett.
73 58–61
[RFP10] Recht B, Fazel M and Parrilo P A 2010 Guaranteed minimum-rank solutions of linear matrix
equations via nuclear norm minimization SIAM Rev. 52 471–501
[Roc97] Rockafellar R T 1997 Convex Analysis (Princeton, NJ: Princeton University Press) ISBN
9780691015866
[Rud87] Rudin W 1987 Real and Complex Analysis (Mathematics Series) (New York: McGraw-Hill)
ISBN 9780070542341
[SCS99] Simon R, Chaturvedi S and Srinivasan V 1999 Congruences and canonical forms for a
positive matrix: application to the Schweinler–Wigner extremum principle J. Math. Phys. 40
3632–42
[SMD94] Simon R, Mukunda N and Dutta B 1994 Quantum-noise matrix for multimode systems: U(n )
invariance, squeezing, and normal forms Phys. Rev. A 49 1567–83
[Son98] Sontag E D 1998 Mathematical Control Theory: Deterministic Finite Dimensional Systems
2nd edn (New York: Springer)
[SZ97] Scully M O and Zubairy M S 1997 Quantum Optics (Cambridge: Cambridge University Press)
ISBN 9780521435956
[Tho76] Thompson R C 1976 Convex and concave functions of singular values of matrix sums Pac. J.
Math. 66 285–90
[VB96] Vandenberghe L and Boyd S 1996 Semidefinite programming SIAM Rev. 38 49–95
[Wee+12] Weedbrook C et al 2012 Gaussian quantum information Rev. Mod. Phys. 84 621
[WEP03] Wolf M M, Eisert J and Plenio M B 2003 Entangling power of passive optical elements Phys.
Rev. Lett. 90 047904
[Wil36] Williamson J 1936 On the algebraic problem concerning the normal forms of linear dynamical
systems Am. J. Math. 58 141–63
[Wol+04] Wolf M M et al 2004 Gaussian entanglement of formation Phys. Rev. A 69 052320
[WW01] Werner R F and Wolf M M 2001 Bound entangled Gaussian states Phys. Rev. Lett. 86
3658–61