
Journal of Physics A: Mathematical and Theoretical
J. Phys. A: Math. Theor. 49 (2016) 445304 (45pp) doi:10.1088/1751-8113/49/44/445304

An operational measure for squeezing


Martin Idel, Daniel Lercher and Michael M Wolf
Zentrum Mathematik, Technische Universität München, Germany

E-mail: martin.idel@tum.de

Received 5 July 2016, revised 26 August 2016


Accepted for publication 14 September 2016
Published 13 October 2016

Abstract
We propose and analyse a mathematical measure for the amount of squeezing
contained in a continuous variable quantum state. We show that the proposed
measure operationally quantifies the minimal amount of squeezing needed to
prepare a given quantum state and that it can be regarded as a squeezing
analogue of the ‘entanglement of formation’. We prove that the measure is
convex and subadditive and we provide analytic bounds as well as a numerical
convex optimisation algorithm for its computation. By example, we then show
that the amount of squeezing needed for the preparation of certain multi-mode
quantum states can be significantly lower than naive state preparation
suggests.

Keywords: squeezing, continuous variable quantum information, operational measure, Euler decomposition, bosonic systems

(Some figures may appear in colour only in the online journal)

1. Introduction

The interplay between quantum optics and the field of quantum information processing, in
particular via the subfield of continuous variable quantum information, has been developing
for several decades and is interesting also due to its experimental success (see [KL10] for a
thorough introduction).
Coherent bosonic states and the broader class of Gaussian bosonic states, quantum states
whose Wigner function is characterised by its first and second moments, are of particular
interest in the theory of continuous variable quantum information. Their interest is also due to
the fact that modes of light in optical experiments behave like Gaussian coherent states.
For any bosonic state, its matrix of second moments, the so-called covariance matrix, must fulfil Heisenberg's uncertainty principle in all modes. If the state possesses a mode where, despite this inequality $\Delta x\,\Delta p \geq \hbar/2$, either $\Delta x$ or $\Delta p$ is strictly smaller than $\sqrt{\hbar/2}$, it is called squeezed. The production of squeezed states is experimentally possible, but it

1751-8113/16/445304+45$33.00 © 2016 IOP Publishing Ltd Printed in the UK 1



requires the use of nonlinear optical elements [Bra05], which are more difficult to produce
and handle than the usual linear optics (i.e. beam splitters and phase shifters). Nevertheless,
squeezed states play a crucial role in many experiments in quantum information processing
and beyond. Therefore, it is natural both theoretically and practically to investigate the
amount of squeezing which is necessary to create an arbitrary quantum state.
As a qualitative answer, squeezing is known to be an irreducible resource with respect to
linear quantum optics [Bra05]. In the Gaussian case, it is also known to be closely related to
entanglement of states [WEP03] and the non-additivity of quantum channel capacities
[LGW13]. In addition, quantitative measures of squeezing have been provided on multiple
occasions [Kra+03, Lee88], yet none of these measures are operational for more than a single
mode in the sense that they do not measure the minimal amount of squeezing necessary to
prepare a given state.
The goal of this paper is therefore twofold: first, we define and study operational squeezing
measures, especially measures quantifying the amount of squeezing needed to prepare a given
state. Second, we reinvestigate to what extent squeezing is a resource, in a mathematically rigorous manner, and study the resulting resource theory by defining preparation measures.
In order to give a brief overview of the results, we assume the reader is familiar with
standard notation of the field, which is also gathered in section 2. In particular, let γ denote
covariance matrices. A squeezed state is a state where at least one of the eigenvalues of γ is
smaller than one.
To obtain operational squeezing measures, we first study operational squeezing in section 3:
suppose we want to implement an operation on our quantum state corresponding to some unitary
U. Any such unitary can be implemented as the time-evolution of Hamiltonians. Recall that any
quantum-optical Hamiltonian can be split into ‘passive’ and ‘active’ parts, where the passive
parts are implementable by linear optics and the active parts require nonlinear media. We assume
that the active transformations available are single-mode squeezers with Hamiltonian

$H_{\text{squeeze},j} = \tfrac{i}{2}\,(a_j^2 - a_j^{\dagger 2}),$

where the index $j$ denotes squeezing in the $j$th mode. We therefore consider any Hamiltonian of the
form
$H = H_{\text{passive}}(t) + \sum_k c_k(t)\, H_{\text{squeeze},k}, \qquad (1)$

where ck are complex coefficients, which can be seen as the interaction strength of the
medium and Hpassive is an arbitrary passive Hamiltonian. Then, a natural measure of the
squeezing costs to implement this Hamiltonian would be given by

$f_{\text{squeeze}}(H) = \int \sum_k |c_k(t)|\, \mathrm{d}t.$


Our squeezing measure for the operation $U$ is then defined as the minimum of $f_{\text{squeeze}}(H)$ over all Hamiltonians of the form (1) implementing the operation $U$. With this definition, we have
an operational measure answering the question: given an operation U, what is the most
efficient way (in terms of squeezing) to implement it using passive operations and single-
mode squeezers?
Instead of working with the generators, which are unbounded operators and therefore
introduce a lot of analytic problems, we will work on the level of Wigner functions and
therefore with the symplectic group. The unitary U then corresponds to a symplectic matrix S
and we prove that the most efficient way to implement it is by using the Euler decomposition,
also known as Bloch–Messiah decomposition. We show this result first in the case where the


functions ci are step functions and later on in the more general case of measurable c
(section 3.2). In particular, the result implies that the minimum amount of squeezing needed to implement the symplectic matrix $S \in \mathbb{R}^{2n \times 2n}$ is given by

$F(S) := \sum_{i=1}^{n} \log s_i(S), \qquad (2)$

where $s_i$ denotes the $i$th singular value of $S$, ordered decreasingly.


With this in mind, we define a squeezing measure for preparation procedures where one
starts out with a covariance matrix of an unsqueezed state and then performs symplectic (and
possibly other) operations to obtain the state. More precisely, we define



$G(\gamma) := \inf\left\{ \sum_{j=1}^{n} \log s_j(S) \;\middle|\; \gamma \geq S^T S,\; S \in \mathrm{Sp}(2n) \right\}. \qquad (3)$

One of the main results of this paper, which will be proven in section 5, is that this measure is
indeed operational in that it quantifies the minimal amount of single-mode squeezing
necessary to prepare a state with covariance matrix γ, using linear optics with single-mode
squeezers, ancillas, measurements, convex combinations and addition of classical noise.
We also define a second squeezing measure, which is a squeezing analogue of the entanglement of formation, the 'squeezing of formation', i.e. the amount of single-mode squeezed resource states needed to prepare a given state using only passive operations and addition of noise. This is done in section 5.3, where we also prove that this measure is equal to G.
In addition, we prove several structural facts about G in section 4. In particular, G is
convex, lower semicontinuous everywhere, continuous on the interior and subadditive.
Moreover, we show

$-\frac{1}{2} \sum_{\lambda_j < 1} \log(\lambda_j(\gamma)) \;\leq\; G(\gamma)$

with the eigenvalues $\lambda_j$ of $\gamma$. Equality in this lower bound is usually not achievable, although numerical tests have shown that the bound is often very good.
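For illustration (our own sketch, using the sign of the bound as reconstructed above), the lower bound is a one-liner over the eigenvalues of $\gamma$; for a single-mode squeezed vacuum it is even tight:

```python
import numpy as np

def lower_bound(gamma):
    """-1/2 * sum of log(lambda_j) over eigenvalues lambda_j < 1 of gamma."""
    lam = np.linalg.eigvalsh(gamma)
    return float(-0.5 * np.sum(np.log(lam[lam < 1.0])))

# Single-mode squeezed vacuum gamma = S^T S with S = diag(1/2, 2):
# here G(gamma) = log 2 and the bound reaches it.
gamma = np.diag([0.25, 4.0])
print(lower_bound(gamma))  # ~0.6931 = log 2
```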
The measure would lose a lot of its appeal if it could not be computed. Although we cannot give an efficient analytical formula for more than one mode, we provide a numerical algorithm to obtain G for any state. To demonstrate that this works in principle, we calculate
G approximately for a state studied in [MK08] (section 6). The calculations also demonstrate
that the preparation procedure obtained from minimising G can greatly lower the squeezing
costs when compared to naive preparation procedures. Finally, we critically discuss the
flexibility and applicability of our measures in section 7. We believe that while we managed
to give reasonable measures and interesting tools to study the resource theory of squeezing
from a theoretical perspective, G might not reflect the experimental reality in all parts. In
particular, it becomes extraordinarily difficult to achieve high squeezing in a single mode
[And+15], which is not reflected by taking the logarithm of the squeezing parameter. We
show that this shortcoming can be easily corrected for a broad class of cost functions. In
addition, the form of the active part of the Hamiltonian (1) might not reflect the form of the
Hamiltonian in the lab. This cannot be corrected as easily but in any case, our measure will
give a lower bound.


2. Preliminaries

In this section, we collect basic notions from continuous variable quantum information and
symplectic linear algebra that we need later on. For a broader overview, we refer to
[ARL14, BL05].

2.1. Phase space in quantum physics

Consider a bosonic system with $n$ modes, each of which is characterised by a pair of canonical variables $\{Q_k, P_k\}$. Setting $R = (Q_1, P_1, \ldots, Q_n, P_n)^T$, the canonical commutation relations (CCRs) take on the form $[R_k, R_l] = i\sigma_{kl}$ with the standard symplectic form

$\sigma = \bigoplus_{i=1}^{n} \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix}.$
Since it will sometimes be convenient, we also introduce another basis of the canonical variables: let $\tilde{R} = (Q_1, Q_2, \ldots, Q_n, P_1, P_2, \ldots, P_n)^T$; then the CCRs take on the form $[\tilde{R}_k, \tilde{R}_l] = iJ_{kl}$ with the symplectic form

$J = \begin{pmatrix} 0 & \mathbb{1}_n \\ -\mathbb{1}_n & 0 \end{pmatrix}.$
Clearly, J and σ differ only by a permutation, since R and R̃ differ only by a permutation.
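To make the two orderings concrete, here is a short sketch (ours, not from the paper) that builds $\sigma$, $J$ and the permutation matrix $P$ with $\tilde{R} = PR$, and checks $J = P\sigma P^T$:

```python
import numpy as np

def sigma(n):
    """Standard symplectic form for the ordering R = (Q1, P1, ..., Qn, Pn)."""
    block = np.array([[0.0, 1.0], [-1.0, 0.0]])
    return np.kron(np.eye(n), block)

def J(n):
    """Symplectic form for the ordering R~ = (Q1, ..., Qn, P1, ..., Pn)."""
    return np.block([[np.zeros((n, n)), np.eye(n)],
                     [-np.eye(n), np.zeros((n, n))]])

def permutation(n):
    """Permutation matrix P with R~ = P R, hence J = P sigma P^T."""
    P = np.zeros((2 * n, 2 * n))
    for k in range(n):
        P[k, 2 * k] = 1.0          # picks Q_{k+1}
        P[n + k, 2 * k + 1] = 1.0  # picks P_{k+1}
    return P

n = 3
P = permutation(n)
assert np.allclose(P @ sigma(n) @ P.T, J(n))
```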
From functional analysis, it is well known that the operators $Q_k$ and $P_k$ cannot be represented by bounded operators on a Hilbert space. In order to avoid complications associated with unbounded operators, it is usually easier to work with a representation of the CCR relations on some Hilbert space $\mathcal{H}$ instead. The standard representation is known as the Schrödinger representation and defines the Weyl system, a family of unitaries $W_\xi$ with $\xi \in \mathbb{R}^{2n}$ and

$W_\xi := \exp(i\,\xi^T \sigma R), \qquad \xi \in \mathbb{R}^{2n},$

fulfilling the Weyl relations $W_\xi W_\eta = e^{-\frac{i}{2}\xi^T \sigma \eta}\, W_{\xi+\eta}$ for all $\xi, \eta$. Such a system is unique up to isomorphism under further assumptions of continuity and irreducibility, as obtained by the Stone–von Neumann theorem. Given $W_\xi$, it is important to note that

$W_\xi R_k W_\xi^{\ast} = R_k + \xi_k \mathbb{1} \qquad \forall \xi \in \mathbb{R}^{2n}. \qquad (4)$
In this paper, we will not use many properties of the Weyl system, since instead we can work with the much simpler moments of the state: given a quantum state $\rho$ (a trace-class operator on $L^2$), its first and second centred moments are given by

$d_k := \mathrm{tr}(\rho R_k), \qquad (5)$

$\gamma_{kl} := \mathrm{tr}(\rho\,\{R_k - d_k \mathbb{1},\, R_l - d_l \mathbb{1}\}_+) \qquad (6)$

with $\{\cdot,\cdot\}_+$ the regular anticommutator. We will write $\Gamma$ instead of $\gamma$ for the covariance matrix if we work with $\tilde{R}$ instead of $R$. Again, a simple permutation relates the two.
An important question one can ask is when a matrix $\gamma$ can occur as the covariance matrix of a quantum state. The answer is given by Heisenberg's principle, which here takes the form of a matrix inequality:

Proposition 2.1. Let $\gamma \in \mathbb{R}^{2n \times 2n}$. Then there exists a quantum state $\rho$ with covariance matrix $\gamma$ if and only if

$\gamma \geq i\sigma,$


where  denotes the standard partial order on matrices (i.e. g  is if g - is is positive


semidefinite). Note that we leave out the usual factor of  2 to simplify notation.

Another question one might ask is when a covariance matrix belongs to a pure quantum state. This question cannot be answered without more information about the higher moments. If, however, we require the state to be uniquely determined by its first and second moments, i.e. if we consider the so-called Gaussian states, we have an answer (see [ASI04]):

Proposition 2.2. Let $\rho$ be an $n$-mode Gaussian state (i.e. completely determined by its first and second moments). Then $\rho$ is pure if and only if $\det(\gamma_\rho) = 1$.
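A quick numerical check of this purity criterion (our own example): a squeezed vacuum has $\det \gamma = 1$, while a thermal state does not.

```python
import numpy as np

# Pure Gaussian state: squeezed vacuum, gamma = S^T S with S symplectic.
S = np.diag([0.5, 2.0])
gamma_pure = S.T @ S
assert np.isclose(np.linalg.det(gamma_pure), 1.0)

# Mixed Gaussian state: a thermal state has gamma = nu * identity with nu > 1.
gamma_thermal = 3.0 * np.eye(2)
assert np.linalg.det(gamma_thermal) > 1.0
```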

2.2. The linear symplectic group and squeezing

A very important set of operations on a quantum system are those that leave the CCRs invariant, i.e. linear transformations $S$ such that $[(SR)_k, (SR)_l] = i\sigma_{kl}$. Such transformations are called symplectic transformations.

Definition 2.3. Given a symplectic form $\sigma$ on $\mathbb{R}^{2n} \times \mathbb{R}^{2n}$, the set of matrices $S \in \mathbb{R}^{2n \times 2n}$ such that $S^T \sigma S = \sigma$ is called the linear symplectic group and is denoted by $\mathrm{Sp}(2n, \mathbb{R}, \sigma)$.

We will usually drop both $\sigma$ and $\mathbb{R}$ in the notation for the symplectic group, since they will be clear from the context. The linear symplectic group is a Lie group and as such carries a lot of structure. For more information on the linear symplectic group and its connection to physics, we refer the reader to [Gos06] and [MS98], chapter 2. An overview for physicists is also found in [Arv+95a]. All of the following can be found in that paper:

Definition 2.4. Let $O(2n, \mathbb{R})$ be the real orthogonal group. Then we define the following three subsets of $\mathrm{Sp}(2n)$:

$K(n) := \mathrm{Sp}(2n, \mathbb{R}) \cap O(2n, \mathbb{R}),$
$Z(n) := \{\mathbb{1}_{2(j-1)} \oplus \mathrm{diag}(s, s^{-1}) \oplus \mathbb{1}_{2(n-j)} \mid s > 0,\; j = 1, \ldots, n\},$
$P(n) := \{S \in \mathrm{Sp}(2n, \mathbb{R}) \mid S > 0\}.$

The first subset is the maximal compact subgroup of $\mathrm{Sp}(2n)$; the second is the set of single-mode squeezers. It generates the multiplicative group $\mathcal{Z}(n)$, a maximal abelian subgroup of $\mathrm{Sp}(2n)$. The third set is the set of positive definite symplectic matrices.

In addition, since Sp (2n ) is a Lie group, it possesses a Lie algebra. Let us collect a
number of relevant facts about the Lie algebra and some subsets:

Proposition 2.5. The Lie algebra $\mathrm{sp}(2n)$ of $\mathrm{Sp}(2n)$ is given by

$\mathrm{sp}(2n) := \{T \in \mathbb{R}^{2n \times 2n} \mid \sigma T + T^T \sigma = 0\}$


together with the commutator as Lie bracket. Certain other Lie algebras or subsets of Lie
algebras are of relevance to us:
(1) $\mathrm{so}(2n) := \{A \in \mathbb{R}^{2n \times 2n} \mid A + A^T = 0\}$, the Lie algebra of $SO(2n)$.
(2) $\mathrm{k}(n) := \left\{A \in \mathbb{R}^{2n \times 2n} \,\middle|\, A = \begin{pmatrix} a & b \\ -b & a \end{pmatrix},\; a = -a^T,\; b = b^T \right\}$, the Lie algebra of $K(n)$.


(3) $\mathrm{p}(n) := \left\{A \in \mathbb{R}^{2n \times 2n} \,\middle|\, A = \begin{pmatrix} a & b \\ b & -a \end{pmatrix},\; a = a^T,\; b = b^T \right\}$, the subspace of the Lie algebra $\mathrm{sp}(2n)$ corresponding to $P(n)$.

Since the Lie algebra is a vector space, it is spanned by a set of vectors, the generators. A standard decomposition is given by taking the generators of $\mathrm{k}(n)$, the so-called passive transformations, as one part and the generators of $\mathrm{p}(n)$, the so-called active transformations, as the other part. That these two sets together determine the Lie algebra completely can be seen via the polar decomposition:

Proposition 2.6 (Polar decomposition [Arv+95a]). For every symplectic matrix $S \in \mathrm{Sp}(2n)$ there exist a unique $U \in K(n)$ and a unique $P \in P(n)$ such that $S = UP$.
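This factorisation can be checked numerically with `scipy.linalg.polar`; in the sketch below (ours, with an arbitrarily chosen example matrix) both polar factors of a symplectic matrix come out symplectic, as the proposition asserts:

```python
import numpy as np
from scipy.linalg import polar

sig = np.kron(np.eye(2), np.array([[0.0, 1.0], [-1.0, 0.0]]))

# A two-mode symplectic matrix: single-mode squeezers times a passive rotation.
Z = np.diag([2.0, 0.5, 1.5, 1.0 / 1.5])
c, s = np.cos(0.7), np.sin(0.7)
K = np.kron(np.eye(2), np.array([[c, -s], [s, c]]))  # phase shift in each mode
S = K @ Z
assert np.allclose(S.T @ sig @ S, sig)  # S is symplectic

U, P = polar(S)  # S = U P with U orthogonal, P positive definite
assert np.allclose(U.T @ sig @ U, sig)  # U lies in K(n)
assert np.allclose(P.T @ sig @ P, sig)  # P is positive definite symplectic
```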

A basis for the Lie algebras $\mathrm{k}(n)$ and $\mathrm{p}(n)$ therefore characterises the complete Lie algebra $\mathrm{sp}(2n)$. Elements of the Lie algebras are also called generators, and a basis of generators therefore fixes the Lie algebra. Via the polar decomposition, this implies that they also generate the whole Lie group. We will need a set of generators $g_{ij}^{(p)} \in \mathrm{k}(n)$ and $g_{ij}^{(a)} \in \mathrm{p}(n)$ later on, which we will fix via the metaplectic representation:

Proposition 2.7 (Metaplectic representation [Arv+95a]). Let $W_\xi$ be the continuous irreducible Weyl system defined above and let $S \in \mathrm{Sp}(2n)$. Then there exists a unitary $U_S$, unique up to a phase, with

$\forall \xi:\; U_S W_\xi U_S^{\dagger} = W_{S\xi}.$

Since we have the liberty of a phase, this is not really a representation of the symplectic group, but of its two-fold cover, the metaplectic group. We can also study the generators of this representation, which are given by $\frac{1}{2}\{R_k, R_l\}_+$.
For the reader familiar with annihilation and creation operators: if we denote by $a_i, a_i^{\dagger}$ the annihilation and creation operators of the $n$ bosonic modes, the generators of the metaplectic representation are given by

$G_{ij}^{p(1)} := i(a_j^{\dagger} a_i - a_i^{\dagger} a_j), \qquad G_{ij}^{p(2)} := a_i^{\dagger} a_j + a_j^{\dagger} a_i, \qquad (7)$

$G_{ij}^{a(3)} := i(a_j^{\dagger} a_i^{\dagger} - a_i a_j), \qquad G_{ij}^{a(4)} := a_i^{\dagger} a_j^{\dagger} + a_i a_j, \qquad (8)$

where the $p$ stands for 'passive' and the $a$ for 'active'. The passive generators are also frequently called linear transformations in the literature (see [Kok+07]). We can now define a set of generators of the symplectic group $\mathrm{Sp}(2n)$ by taking the set of metaplectic generators $G_{ij}$ above and choosing corresponding generators $g_{ij}$ in the Lie algebra $\mathrm{sp}(2n)$ in a consistent way. As one would expect from the name, the passive metaplectic generators correspond to a set of passive generators of $\mathrm{k}(n)$, and the set of active metaplectic generators corresponds to a set of active generators of $\mathrm{p}(n)$. The details of the correspondence are irrelevant (they are explicitly spelled out in equation (6.6b) of [Arv+95a]), except for the fact that the set $G_{ii}^{a(3)}$, $i = 1, \ldots, n$, corresponds to the generators $g_{ii}^{a(3)}$ generating the matrices in $Z(n)$.
Given a Hamiltonian, the associated time evolution corresponds to a path on the Lie group: for a (sufficiently regular) path $g: [0,1] \to \mathrm{Sp}(2n)$ we can find a function $A(t) \in \mathrm{sp}(2n)$ such that

$g'(t) = A(t)\, g(t). \qquad (9)$

Instead of directly studying Hamiltonians with time-dependent coefficients as in equation (1), it is equivalent to study functions $A: [0,1] \to \mathrm{sp}(2n)$.
There are a number of decompositions of the Lie group and its subgroups in addition to the polar decomposition. We will mostly be concerned with the so-called Euler decomposition (sometimes called Bloch–Messiah decomposition) and Williamson's decomposition:

Proposition 2.8 (Euler decomposition [Arv+95a]). Let $S \in \mathrm{Sp}(2n)$. Then there exist $K, K' \in K(n)$ and $A \in \mathcal{Z}(n)$ such that $S = K A K'$.

Proposition 2.9 (Williamson's theorem [Wil36]). Let $M \in \mathbb{R}^{2n \times 2n}$ be a positive definite matrix. Then there exist a symplectic matrix $S \in \mathrm{Sp}(2n, \mathbb{R})$ and a diagonal matrix $D \in \mathbb{R}^{n \times n}$ such that

$M = S^T \tilde{D} S,$

where $\tilde{D} = \mathrm{diag}(D, D)$ is diagonal. The entries of $D$ are also called symplectic eigenvalues.

In particular, for $M \in P(n)$ this implies that $M$ has a symplectic square root. Since covariance matrices are always positive definite, it also implies that a Gaussian state is pure if and only if its covariance matrix is symplectic. Heisenberg's uncertainty principle also has a Williamson version:

Corollary 2.10. A positive definite matrix $M$ is a covariance matrix of a quantum state if and only if all of its symplectic eigenvalues are greater than or equal to one.

The proof is simple and therefore omitted.
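In practice the symplectic eigenvalues can be read off from the ordinary eigenvalues of $\sigma M$, which come in pairs $\pm i\nu_j$; the helpers below are our own sketch of corollary 2.10:

```python
import numpy as np

def symplectic_eigenvalues(M):
    """Williamson symplectic eigenvalues of a positive definite 2n x 2n matrix.

    The eigenvalues of sigma @ M are +/- i*nu_j, so their moduli give each
    symplectic eigenvalue nu_j twice."""
    n = M.shape[0] // 2
    sig = np.kron(np.eye(n), np.array([[0.0, 1.0], [-1.0, 0.0]]))
    nus = np.sort(np.abs(np.linalg.eigvals(sig @ M)))
    return nus[::2]  # drop the doubled copies

def is_covariance_matrix(M, tol=1e-12):
    """Heisenberg in Williamson form: all symplectic eigenvalues >= 1."""
    return bool(np.all(symplectic_eigenvalues(M) >= 1.0 - tol))

# A squeezed vacuum is a valid state (nu = 1); shrinking it violates Heisenberg.
gamma = np.diag([0.25, 4.0])
assert is_covariance_matrix(gamma)
assert not is_covariance_matrix(0.5 * gamma)
```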

2.3. Quantum optical operations and squeezing

We have already noted that an important class of operations are those which leave the CCR relations invariant, namely the symplectic transformations. Given a quantum state $\rho$, the action of the symplectic group on the canonical variables $R$ descends to a subgroup of unitary transformations on $\rho$ via the metaplectic representation (see [Arv+95b]). Its action on the covariance matrix $\gamma_\rho$ of $\rho$ is even simpler: given $S \in \mathrm{Sp}(2n)$,

$\gamma_\rho \mapsto S^T \gamma_\rho S. \qquad (10)$

In quantum optics, symplectic transformations can be implemented by means of
(1) beam splitters and phase shifters, implementing operations in $K(n)$ ([Rec+94]),
(2) single-mode squeezers, implementing operations in $Z(n)$.
Via the Euler decomposition, this implies that any symplectic transformation can be implemented (approximately) by a combination of these three elements.
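A small worked example of the action (10) (our own construction, not from the paper): sending two oppositely squeezed vacua through a 50:50 beam splitter produces a two-mode squeezed state whose reduced single-mode covariances are no longer squeezed; the squeezing has been converted into correlations between the modes.

```python
import numpy as np

omega = np.array([[0.0, 1.0], [-1.0, 0.0]])
sig = np.kron(np.eye(2), omega)

# 50:50 beam splitter on (q1, p1, q2, p2): passive, i.e. an element of K(2).
M = np.array([[1.0, 1.0], [-1.0, 1.0]]) / np.sqrt(2.0)
B = np.kron(M, np.eye(2))
assert np.allclose(B.T @ sig @ B, sig)

# Mode 1 squeezed in q, mode 2 squeezed in p.
gamma = np.diag([0.25, 4.0, 4.0, 0.25])
gamma_out = B.T @ gamma @ B  # action (10) on the covariance matrix

# The output is still pure and globally squeezed ...
assert np.isclose(np.linalg.det(gamma_out), 1.0)
assert np.linalg.eigvalsh(gamma_out).min() < 1.0
# ... but neither reduced single-mode covariance is squeezed any more:
assert np.linalg.eigvalsh(gamma_out[:2, :2]).min() > 1.0
assert np.linalg.eigvalsh(gamma_out[2:, 2:]).min() > 1.0
```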

Definition 2.11. An $n$-mode bosonic state $\rho$ is called squeezed if its covariance matrix $\gamma_\rho$ possesses an eigenvalue $\lambda < 1$.

Especially in the early literature, squeezing is usually defined differently: a state $\rho$ is squeezed if there exists a unitary transformation $K \in K(n)$ such that $K^T \gamma_\rho K$ has a diagonal entry smaller than one. This again comes from the physical definition of squeezed states as states where the Heisenberg uncertainty relation is satisfied with equality for at least one mode. These definitions, however, are well known to be equivalent (see [SMD94]).

3. An operational squeezing measure for symplectic transformations

Throughout this section, we will always use σ as our standard symplectic form.

3.1. Definition and basic properties

We will now define a first operational squeezing measure for symplectic transformations,
which will later be used to define a measure for operational squeezing.

Definition 3.1. Define the function $F: \mathbb{R}^{2n \times 2n} \to \mathbb{R}$,

$F(A) = \sum_{i=1}^{n} \log(s_i(A)), \qquad (11)$

where $s_i$ are the decreasingly ordered singular values of $A$.

Note that we sum only over half of the singular values. Restricting this function to symplectic matrices will yield an operational squeezing measure for symplectic transformations: recall that the symplectic group is generated by symplectic orthogonal matrices and single-mode squeezers. The orthogonal matrices are easy to implement and will therefore be considered a free resource. The squeezers have singular values $s$ and $s^{-1}$; they are experimentally hard to implement and should therefore be assigned a cost that depends on the squeezing parameter $s$. From this perspective, the amount of squeezing is characterised by the largest singular values. Here, we quantify the amount of squeezing by a cost $\log(s)$, which can be seen as the interaction strength of the Hamiltonian needed to implement the squeezing.
Let us make this more precise: define the map
$\Delta: \mathrm{Sp}(2n) \to \bigcup_{m \in \mathbb{N}} \mathrm{Sp}(2n)^{\times m},$

$S \mapsto \bigcup_{m \in \mathbb{N}} \{(S_1, \ldots, S_m) \mid S = S_1 \cdots S_m,\; S_i \in K(n) \cup Z(n)\}.$

The image of Δ for a given symplectic matrix contains all possible ways to construct S as a
product of matrices from K(n) or Z(n). We define:

Definition 3.2. Let $\mathcal{F}: \mathrm{Sp}(2n) \to \mathbb{R}$ be the map defined via

$\mathcal{F}(S) := \log \inf \left\{ \prod_{i=1}^{m} s_1(S_i) \,\middle|\, (S_1, \ldots, S_m) \in \Delta(S) \right\}. \qquad (12)$

Proposition 3.3. If $S \in \mathrm{Sp}(2n)$ then $\mathcal{F}(S) = F(S)$.

Proof. Let $S = K A K'$ be the Euler decomposition of $S$ with $K, K' \in K(n)$ and $A \in \mathcal{Z}(n)$. Assume without loss of generality that $A = \mathrm{diag}(a_1, a_1^{-1}, \ldots, a_n, a_n^{-1})$ with $a_1 \geq a_2 \geq \ldots \geq a_n \geq 1$, and define $A_i = \mathrm{diag}(1, \ldots, 1, a_i, a_i^{-1}, 1, \ldots, 1)$. By construction $A = A_1 \cdots A_n$ and $A_i \in Z(n)$. Since $K, K' \in K(n)$, we have $(K, A_1, \ldots, A_n, K') \in \Delta(S)$. Using that $s_i(K) = s_i(K') = 1$ and the fact that the Euler decomposition is actually equivalent to the


singular value decomposition of $S$, we obtain:

$\mathcal{F}(S) \leq \log\left( s_1(K) \prod_{i=1}^{n} s_1(A_i)\; s_1(K') \right) = \log \prod_{i=1}^{n} s_i(S) = F(S).$

Conversely, consider $(S_1, \ldots, S_m) \in \Delta(S)$. Using that by definition $\prod_{i=1}^{n} s_i(S_j) = s_1(S_j)$ for each $S_j \in K(n) \cup Z(n)$, we conclude:

$F(S) = \log\left( \prod_{i=1}^{n} s_i(S) \right) \overset{(*)}{\leq} \log\left( \prod_{j=1}^{m} \prod_{i=1}^{n} s_i(S_j) \right) = \log\left( \prod_{j=1}^{m} s_1(S_j) \right),$

where in $(*)$ we used a special case of a theorem by Gel'fand and Naimark ([Bha96], theorem III.4.5 and equation (III.19)). Taking the infimum on the right-hand side gives $F(S) \leq \mathcal{F}(S)$. □

Let us record the observation in $(*)$ as a small lemma for later use:

Lemma 3.4. Let $S, S' \in \mathrm{Sp}(2n)$. Then $F(SS') \leq F(S) + F(S')$.
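Lemma 3.4 is easy to probe numerically: random symplectic matrices can be generated as $\exp(\sigma W)$ with $W$ symmetric, since $\sigma W \in \mathrm{sp}(2n)$. The sketch below (ours, not from the paper) checks subadditivity of $F$ on random samples:

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
n = 2
sig = np.kron(np.eye(n), np.array([[0.0, 1.0], [-1.0, 0.0]]))

def random_symplectic():
    """exp(sigma @ W) with W symmetric lies in Sp(2n)."""
    W = rng.standard_normal((2 * n, 2 * n))
    return expm(sig @ (W + W.T) / 4)

def F(S):
    s = np.linalg.svd(S, compute_uv=False)
    return float(np.sum(np.log(s[:n])))

for _ in range(100):
    S, T = random_symplectic(), random_symplectic()
    assert np.allclose(S.T @ sig @ S, sig)      # indeed symplectic
    assert F(S @ T) <= F(S) + F(T) + 1e-9       # subadditivity (lemma 3.4)
```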

3.2. Lie algebraic definition

Up to now, we have only considered products of symplectic matrices, which correspond to a chain of beam splitters, phase shifters and single-mode squeezers. The goal of this section is to prove that one cannot improve on these results with arbitrary paths on $\mathrm{Sp}(2n)$, corresponding to general Hamiltonians of the form of equation (1) as described in section 2.

Let $\mathcal{P}_r(S)$ be the set of absolutely continuous paths $\alpha: [0,1] \to \mathrm{Sp}(2n)$ with a derivative which is bounded almost everywhere, such that $\alpha(0) = \mathbb{1}$ and $\alpha(1) = S$. Such paths seem to capture most if not all physically relevant cases.

Recall the set of generators $g$ of $\mathrm{sp}(2n)$ defined in section 2 and order them in a single vector $\vec{g}$. Using equation (9), any $\alpha \in \mathcal{P}_r(S)$ corresponds to an $A \in L^{\infty}([0,1], \mathrm{sp}(2n))$. Since the generators form a basis, we can write $A(t) = \vec{c}_\alpha(t) \cdot \vec{g}$ with a coefficient function $\vec{c}_\alpha \in L^{\infty}([0,1], \mathbb{R}^{d})$, $d = \dim \mathrm{sp}(2n)$. Both $A$ and $\vec{c}_\alpha$, together with the condition $\alpha(0) = \mathbb{1}$, uniquely determine $\alpha$.
The goal of this section is to prove that this does not give us any better way to avoid
squeezing:

Theorem 3.5. For any $S \in \mathrm{Sp}(2n)$, we have

$F(S) = \inf\left\{ \int_0^1 \|\vec{c}^{\,a}_\alpha(t)\|_1 \, \mathrm{d}t \,\middle|\, \alpha \in \mathcal{P}_r(S),\; \dot{\alpha}(t) = \big(\vec{c}^{\,p}_\alpha(t) \cdot \vec{g}^{\,p} + \vec{c}^{\,a}_\alpha(t) \cdot \vec{g}^{\,a}\big)\, \alpha(t) \right\}, \qquad (13)$

where the vectors $\vec{g}^{\,p}$ and $\vec{g}^{\,a}$ collect the passive and active generators, respectively, and $\vec{c}^{\,p}_\alpha, \vec{c}^{\,a}_\alpha$ are the corresponding coefficient functions, which may differ for each generator.

The proof of this theorem is quite lengthy in its details, so we split it up into several lemmata. The general idea is easy to relate: we first show that paths corresponding to products of symplectic matrices of type $Z(n)$ or $K(n)$ produce the same value in (13) as in (12). We then use an approximation argument: given any path, we can approximate it by a path of products of symplectic matrices to arbitrary precision.
To start, we prove the following lemma:


Lemma 3.6. Let $A \in \mathrm{sp}(2n)$ and write $A = \frac{1}{2}(A + A^T) + \frac{1}{2}(A - A^T) =: A_+ + A_-$. Then $A_+ \in \mathrm{p}(n)$ and $A_- \in \mathrm{k}(n)$, and we have $F(\exp(A)) \leq F(\exp(A_+))$.

Proof. First note that $F$ is continuous in $S$, since the singular values are. Using the Trotter formula, we obtain:

$F(\exp(A)) = F\left( \lim_{n \to \infty} \big( \exp(A_+/n) \exp(A_-/n) \big)^n \right)$
$\overset{(*)}{\leq} \lim_{n \to \infty} \big( n F(\exp(A_+/n)) + n F(\exp(A_-/n)) \big)$
$= \lim_{n \to \infty} n F(\exp(A_+/n)) = F(\exp(A_+)),$

where we used that $F(\exp(A_-)) = 0$ since $A_- \in \mathrm{k}(n)$, and in $(*)$ we used a version of a theorem by Gel'fand and Naimark again (see [Bha96], equation (III.20)). □

Let us define yet another version of $F$, which we call $\hat{F}$, in the following way:

$C_N(S) := \left\{ (\vec{c}^{\,a}_1, \vec{c}^{\,p}_1, \ldots, \vec{c}^{\,a}_N, \vec{c}^{\,p}_N) \,\middle|\, S = \prod_{j=1}^{N} \exp(\vec{c}^{\,a}_j \cdot \vec{g}^{\,a} + \vec{c}^{\,p}_j \cdot \vec{g}^{\,p}),\; \vec{c}_j \text{ real} \right\},$

$C(S) := \bigcup_{N \in \mathbb{N}} C_N(S),$

$\hat{F}(S) := \inf\left\{ \sum_i \|\vec{c}^{\,a}_i\|_1 \,\middle|\, \vec{c} \in C(S) \right\}.$

This definition is of course reminiscent of the definition of $\mathcal{F}$ in equation (12):

Lemma 3.7. For $S \in \mathrm{Sp}(2n)$, we have $\hat{F}(S) = F(S)$.

Proof. To prove $\hat{F} \leq F$, consider the Euler decomposition $S = K_1 A_1 \cdots A_n K_2$ with $A_i \in Z(n)$ and $K_1, K_2 \in K(n)$. Since $K(n)$ is compact, the exponential map is surjective, so there exist $\vec{c}^{\,p}_1$ and $\vec{c}^{\,p}_2$ such that $\exp(\vec{c}^{\,p}_1 \cdot \vec{g}^{\,p}) = K_1$ and $\exp(\vec{c}^{\,p}_2 \cdot \vec{g}^{\,p}) = K_2$. Recall that we ordered the vector $\vec{g}^{\,a}$ in such a way that the generators $g^{\,a}_i$ generate the matrices in $Z(n)$ for $i = 1, \ldots, n$; hence there exist $\vec{c}^{\,a}_i = (0, \ldots, 0, (c^{\,a}_i)_{(i)}, 0, \ldots, 0)$ for $i = 1, \ldots, n$ such that

$S = \exp(\vec{c}^{\,p}_1 \cdot \vec{g}^{\,p}) \prod_{i=1}^{n} \exp(\vec{c}^{\,a}_i \cdot \vec{g}^{\,a}) \exp(\vec{c}^{\,p}_2 \cdot \vec{g}^{\,p}).$

This implies

$\hat{F}(S) \leq \sum_i \|\vec{c}^{\,a}_i\|_1 = \sum_i F(\exp(\vec{c}^{\,a}_i \cdot \vec{g}^{\,a})) = \sum_i F(\exp((c^{\,a}_i)_{(i)}\, g^{\,a}_i)) = \sum_i \log s_1(A_i) = F(S).$

Here we used that $(c^{\,a}_i)_{(i)}$ is also the logarithm of the largest singular value of $\exp((c^{\,a}_i)_{(i)}\, g^{\,a}_i) \in Z(n)$, i.e. $F(\exp((c^{\,a}_i)_{(i)}\, g^{\,a}_i)) = (c^{\,a}_i)_{(i)}$, by the normalisation of the generators.

For the other direction, $\hat{F} \geq F$, let $S$ be arbitrary, let $\vec{c} \in C(S)$ and consider each vector $\vec{c}_i$ separately. We drop the index $i$ for readability, since we need to consider the entries of the vector $\vec{c}_i$; to make the distinction clear, we denote the $j$th entry of the vector $\vec{c}$ by $c_{(j)}$. Recall that the active generators are exactly those generating the positive matrices. Then:


$F(\exp(\vec{c} \cdot \vec{g})) \overset{\text{Lemma 3.6}}{\leq} F(\exp(\vec{c}^{\,a} \cdot \vec{g}^{\,a})) = \lim_{n \to \infty} F\left( \left( \prod_i \exp(c^{\,a}_{(i)}\, g^{\,a}_i / n) \right)^{n} \right)$
$\leq \sum_i F(\exp(c^{\,a}_{(i)}\, g^{\,a}_i)) = \sum_i |c^{\,a}_{(i)}| = \|\vec{c}^{\,a}\|_1,$

where we basically redid the calculations we used to prove lemma 3.6, using the continuity of $F$ and the Trotter formula from matrix analysis. Until now, we have considered only one $\vec{c}_i$ of $\vec{c} \in C(S)$. Now, if we define $S_i = \exp(\vec{c}_i \cdot \vec{g})$, then we have $\prod_i S_i = S$ and hence, using lemma 3.4, we find:

$F(S) \leq \sum_i F(S_i) = \sum_i F(\exp(\vec{c}_i \cdot \vec{g})) \leq \sum_i \|\vec{c}^{\,a}_i\|_1 \qquad \forall\, \vec{c} \in C(S).$

But this means $F(S) \leq \hat{F}(S)$, as we claimed. □

We can now prove the first half of the theorem:

Lemma 3.8. For $S \in \mathrm{Sp}(2n)$ we have

$F(S) \geq \inf\left\{ \int_0^1 \|\vec{c}^{\,a}_\alpha(t)\|_1 \, \mathrm{d}t \,\middle|\, \alpha \in \mathcal{P}_r(S) \right\}.$

Proof. Let $S \in \mathrm{Sp}(2n)$ and consider the Euler decomposition $S = K_1 S_1 \cdots S_n K_2$. We can define a function $A: [0,1] \to \mathrm{sp}(2n)$ via

$A(t) := \begin{cases} (n+2) \cdot \vec{c}^{\,p}_{n+2} \cdot \vec{g}^{\,p} & t \in [0, 1/(n+2)), \\ (n+2) \cdot (c^{\,a}_{i+1})_{(i)}\, g^{\,a}_i & t \in [i/(n+2), (i+1)/(n+2)),\; i = 1, \ldots, n, \\ (n+2) \cdot \vec{c}^{\,p}_1 \cdot \vec{g}^{\,p} & t \in [(n+1)/(n+2), 1], \end{cases} \qquad (14)$

where $(0, \vec{c}^{\,p}_1, \vec{c}^{\,a}_2, 0, \ldots, \vec{c}^{\,a}_{n+1}, 0, 0, \vec{c}^{\,p}_{n+2})$ denotes the element of $C_{n+2}(S)$ corresponding to the Euler decomposition, and vector indices are denoted by a subscript $(i)$ as before. Let $U(t,s)$ be the propagator corresponding to $A$; then for $t \in [0, 1/(n+2))$, according to proposition A.1 and since $A$ does not depend on $t$ on this interval, it is given by $U(t,s) = \exp((t-s) A)$. In particular, $U(1/(n+2), 0) = \exp(\vec{c}^{\,p}_{n+2} \cdot \vec{g}^{\,p}) = K_2$.

Iterating this procedure and composing the propagators of the successive intervals, we see that by construction $U(1,0) = K_1 S_1 \cdots S_n K_2 = S$, where we use that the single-mode squeezers act on distinct modes and hence commute. Thus $A$ defines a continuous path on $\mathrm{Sp}(2n)$ via $U(t,s)$. We can calculate:

$\int_0^1 \|\vec{c}^{\,a}(t)\|_1 \, \mathrm{d}t = \sum_{i=1}^{n} \int_{i/(n+2)}^{(i+1)/(n+2)} |(n+2) \cdot (c^{\,a}_{i+1})_{(i)}| \, \mathrm{d}t = \sum_{i=1}^{n} |(c^{\,a}_{i+1})_{(i)}| \overset{\text{Lemma 3.7}}{=} F(S),$

where we used that the integrand vanishes on the intervals $[0, 1/(n+2))$ and $[(n+1)/(n+2), 1]$, since all active components are zero there. In the last step, we used that for the Euler decomposition, which attains the infimum in $\hat{F}$, this value is exactly $\sum_i |(c^{\,a}_{i+1})_{(i)}| = \sum_i \|\vec{c}^{\,a}_{i+1}\|_1$, since $(c^{\,a}_{i+1})_{(j)} = 0$ for $j \neq i$. Taking the infimum on the left-hand side only decreases the value. □


For the other direction, we need some facts about ordinary differential equations that are
collected in appendix A.

Lemma 3.9. For $S \in \mathrm{Sp}(2n)$ we have

$F(S) \leq \inf\left\{ \int_0^1 \|\vec{c}^{\,a}_\alpha(t)\|_1 \, \mathrm{d}t \,\middle|\, \alpha \in \mathcal{P}_r(S) \right\}. \qquad (15)$

Proof. Let $S \in \mathrm{Sp}(2n)$ be arbitrary. Combining the proof of lemma 3.8 with proposition 3.3 and lemma 3.7, we have already proved:

$F(S) = \inf\left\{ \int_0^1 \|\vec{c}^{\,a}_\alpha(t)\|_1 \, \mathrm{d}t \,\middle|\, \alpha \in \mathcal{P}_r(S),\; \dot{\alpha}(t) = \big(\vec{c}^{\,p}_\alpha(t) \cdot \vec{g}^{\,p} + \vec{c}^{\,a}_\alpha(t) \cdot \vec{g}^{\,a}\big)\, \alpha(t),\; \vec{c}_\alpha \text{ a step function} \right\}.$

The only thing left to prove is that we can drop the step-function assumption. This will be done by a standard approximation argument: let $\tilde{F}(S)$ denote the right-hand side of equation (15). Let $\epsilon > 0$ and consider an arbitrary $A \in L^{\infty}$ such that

$\int_0^1 \|\vec{c}^{\,a}_\alpha(t)\|_1 \, \mathrm{d}t - \tilde{F}(S) < \epsilon, \qquad (16)$

i.e. $A$ corresponds to a path that is close to the infimum in the definition of $\tilde{F}$. We can now approximate $\vec{c}_\alpha$ by step functions $\vec{c}_{\alpha'}$ (corresponding to a function $A'$, see lemma A.2) such that

$\int_0^1 \|\vec{c}_\alpha(t) - \vec{c}_{\alpha'}(t)\|_1 \, \mathrm{d}t < \epsilon. \qquad (17)$

Using the fact that the propagators $U_A, U_{A'}$ are differentiable almost everywhere (proposition A.1) and absolutely continuous when one entry is fixed, we can define a function $f(s) := U_A(0,s)\, U_{A'}(s,t)$, which is also differentiable almost everywhere. Furthermore, the fundamental theorem of calculus holds for $f(s)$ (see [Rud87], theorems 6.10 and 7.8). We have

$$\frac{\mathrm d}{\mathrm ds} f(s) = -U_A(0,s)\, A(s)\, U_{A'}(s,t) + U_A(0,s)\, A'(s)\, U_{A'}(s,t)$$

almost everywhere, which implies:

$$U_{A'}(0,t) - U_A(0,t) = f(t) - f(0) = \int_0^t \frac{\mathrm d}{\mathrm ds} f(s)\,\mathrm ds = \int_0^t U_A(0,s)\big(A'(s) - A(s)\big)\, U_{A'}(s,t)\,\mathrm ds.$$

Since $U$ and $g$ are bounded in $\|\cdot\|_\infty$, we obtain

$$\|U_{A'}(0,t) - U_A(0,t)\|_1 \leq M \int_0^t \|c_{A'}(s) - c_A(s)\|_1\,\mathrm ds \leq M\varepsilon. \qquad (18)$$

$M$ can be computed explicitly from the bounds given in proposition A.1.


Up to now, we have taken a path $\alpha$ to $S$ close to the infimum and approximated it by a path $\alpha'$. It is immediate by equations (16) and (17) that


$$\int_0^1 \|c^a_{A'}(t)\|_1\,\mathrm dt - \tilde F(S) < 2\varepsilon. \qquad (19)$$

Since $c_{A'} \in C^N(S')$ for some $N \in \mathbb N$ and $S' = U_{A'}(0,1)$, we would be done if $S' = S$. To remedy this, we want to extend $A'$ to a path $\tilde A$ such that it ends at $S$. This is where equation (18) enters: set $\tilde S := U_{A'}(0,1)^{-1}\, U_A(0,1)$; then

$$\|\tilde S - \mathbb 1\|_1 \leq \frac{M}{\|S\|_1}\,\varepsilon, \qquad (20)$$

hence $\tilde S \approx \mathbb 1$ for $\varepsilon$ small enough. Using the polar decomposition, we can write $\tilde S = \exp(c^p_{N+1}\, g^p)\, \exp(c^a_{N+2}\, g^a)$. A quick calculation yields

$$\|\log \tilde S\|_1 \leq n \log\Big(1 + \frac{M}{\|S\|_1}\,\varepsilon\Big) \leq n\,\frac{M}{\|S\|_1}\,\varepsilon =: C\varepsilon. \qquad (21)$$

This lets us construct a new $\tilde A : [0,2] \to \mathfrak{sp}(2n)$:

$$t \mapsto \begin{cases} A'(t) & t \in [0,1], \\ 2\, c^p_{N+1}\, g^p & t \in (1, 3/2), \\ 2\, c^a_{N+2}\, g^a & t \in (3/2, 2]. \end{cases}$$

By construction, for the corresponding propagator we have $U_{\tilde A}(0,2) = S$, and $\tilde A$ is a feasible path for $\tilde F(S)$ (at least after reparameterisation), fulfilling:

$$\int_0^2 \|c^a_{\tilde A}(t)\|_1\,\mathrm dt - \tilde F(S) \leq \int_0^1 \|c^a_{\tilde A}(t)\|_1\,\mathrm dt - \tilde F(S) + \int_1^2 \|c^a_{\tilde A}(t)\|_1\,\mathrm dt \overset{(19)+(21)}{\leq} (2 + C)\,\varepsilon.$$

Since $c_{\tilde A} \in C^{N+2}(S)$, $\tilde A$ is a valid path for $\hat F(S)$, which implies that for any $\delta > 0$, choosing $\varepsilon := \delta/(2 + C)$, we have:

$$\hat F(S) < \tilde F(S) + \delta. \qquad (22)$$

For $\delta \to 0$, $\hat F(S) \leq \tilde F(S)$, which implies the lemma via lemma 3.7. □

4. A mathematical measure for squeezing of arbitrary states

Throughout this section, for convenience, we will switch to using J as symplectic form.
Having defined the measure F, we will now proceed to define a squeezing measure for
creating an arbitrary (mixed) state:

Definition 4.1. Let $\rho$ be an $n$-mode bosonic quantum state with covariance matrix $\Gamma$. We then define:

$$G(\rho) \equiv G(\Gamma) := \inf\{ F(S) \mid \Gamma \geq S^T S,\ S \in \mathrm{Sp}(2n) \}. \qquad (23)$$

Note that $G$ is always finite: for any given covariance matrix $\Gamma$, by Williamson's theorem and corollary 2.10, we can find $S \in \mathrm{Sp}(2n)$ and $\tilde D \geq \mathbb 1$ such that $\Gamma = S^T \tilde D S \geq S^T S$. Furthermore, $G$ is non-negative, since $F$ is non-negative for symplectic $S$. We will prove in section 5 that this is indeed an operational measure.
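For concreteness, the cost function $F$ can be evaluated numerically. The following numpy sketch assumes that $F(S)$ equals the sum of the logarithms of the $n$ largest singular values of the $2n \times 2n$ matrix $S$ (for symplectic $S$, the singular values come in pairs $s, s^{-1}$, so this counts exactly the squeezing directions):

```python
import numpy as np

def F(S):
    # F(S): sum of the logs of the n largest singular values of the
    # 2n x 2n matrix S; for symplectic S the singular values come in
    # pairs (s, 1/s), so only the "squeezing" directions contribute
    n = S.shape[0] // 2
    sv = np.sort(np.linalg.svd(S, compute_uv=False))[::-1]
    return float(np.sum(np.log(sv[:n])))

# a single-mode squeezer S = diag(s, 1/s) costs exactly log(s)
s = 2.0
S = np.diag([s, 1.0 / s])
cost = F(S)
```

The infimum in (23) over all feasible $S$ is, of course, a harder optimisation; the sections below reduce it to a convex programme.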


4.1. Different reformulations of the measure

We will now give several reformulations of the squeezing measure and prove some of its
properties. In particular, G is convex and one of the crucial steps towards proving convexity
of G is given by a reformulation of G with the help of the Cayley transform. For the reader
unfamiliar with the Cayley transform, a definition and basic properties are provided in
appendix B.

Proposition 4.2. Let G  iJ and G Î 2n ´ 2n symmetric. Then:

G (G) = inf {F (G10 2)∣G  G0  iJ} (24)


⎧1 n ⎛ 1 + s (A + iB) ⎞ ⎫
= inf ⎨
⎩2
å log ⎜⎝ 1 - si (A + iB) ⎟⎠ ∣  -1(G)  H , H Î ⎬ ,

(25)
i=1 i

where  is defined via:

= { (
H=
B -A )
A B Î 2m´ 2m ∣ AT = A , BT = B , spec (H ) Ì ( - 1, 1)
} . (26)

Proof. First note that the infimum in all three expressions is actually attained. This is most easily seen in the definition (23): the matrix inequalities $\Gamma \geq S^T S \geq iJ$ imply that the set of feasible $S$ in the minimisation is compact, hence its minimum is attained. To see (23) = (24), first note that (24) $\leq$ (23), since any $S \in \mathrm{Sp}(2n)$ also fulfils $S^T S \geq iJ$, hence $\Gamma \geq S^T S \geq iJ$. For equality, note that for any $\Gamma \geq \Gamma_0 \geq iJ$, using Williamson's theorem we can find $S \in \mathrm{Sp}(2n)$ and a diagonal $\tilde D \geq \mathbb 1$ (via corollary 2.10) such that $\Gamma_0 = S^T \tilde D S \geq S^T S \geq iJ$. But since $F(\Gamma_0^{1/2}) \geq F((S^T S)^{1/2}) = F(S)$ via the Weyl monotonicity principle, the infimum is achieved on symplectic matrices.

Finally, let us prove equality with (25). First observe that we can replace the minimisation over $\mathrm{Sp}(2n)$ by one over $\mathcal H$ using proposition B.1(4). Using the fact that $s_i(S) = \lambda_i(S^T S)^{1/2} = \lambda_i(\mathcal C(H))^{1/2}$ and the fact that $H$ is diagonalised by the same matrices as $\mathcal C(H) = (\mathbb 1 + H)(\mathbb 1 - H)^{-1}$, whence its eigenvalues are

$$\lambda_i(\mathcal C(H)) = \frac{1 + \lambda_i(H)}{1 - \lambda_i(H)},$$

we have:

$$\inf\{ F(S) \mid \Gamma \geq S^T S,\ S \in \mathrm{Sp}(2n) \} = \inf\left\{ \log \prod_{i=1}^n \left(\frac{1 + \lambda_i^\downarrow(H)}{1 - \lambda_i^\downarrow(H)}\right)^{1/2} \;\middle|\; \Gamma \geq \mathcal C(H),\ H \in \mathcal H \right\}.$$

Next we claim $\lambda_i^\downarrow(H) = s_i(A + iB)$ for $i = 1, \ldots, n$. To see this, note:

$$\frac12 \begin{pmatrix} \mathbb 1 & i\mathbb 1 \\ \mathbb 1 & -i\mathbb 1 \end{pmatrix} \begin{pmatrix} A & B \\ B & -A \end{pmatrix} \begin{pmatrix} \mathbb 1 & \mathbb 1 \\ -i\mathbb 1 & i\mathbb 1 \end{pmatrix} = \begin{pmatrix} 0 & A + iB \\ A - iB & 0 \end{pmatrix}. \qquad (27)$$

The singular values of the matrix on the right-hand side of equation (27) are the eigenvalues of $\mathrm{diag}\big((A + iB)^\dagger (A + iB),\ (A + iB)(A + iB)^\dagger\big)^{1/2}$, which are the singular values of $A + iB$ with double multiplicity. From the structure of $H$, it is immediate that the eigenvalues of the right-hand side of equation (27), and thus of $H$, come in pairs $\pm s_i(A + iB)$. Hence $\lambda_i^\downarrow(H) = s_i(A + iB)$ for $i = 1, \ldots, n$ and we have:

$$\inf\{ F(S) \mid \Gamma \geq S^T S,\ S \in \mathrm{Sp}(2n) \} = \inf\left\{ \frac12 \sum_{i=1}^n \log\left(\frac{1 + s_i(A + iB)}{1 - s_i(A + iB)}\right) \;\middle|\; \Gamma \geq \mathcal C(H),\ H \in \mathcal H \right\}.$$

To see that the right-hand side equals (25), we only need the fact that $\Gamma \geq \mathcal C(H) \Leftrightarrow \mathcal C^{-1}(\Gamma) \geq H$ for all $H \in \mathcal H$ and $\Gamma \geq iJ$, since the Cayley transform and its inverse are operator monotone. □
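The eigenvalue relation for the Cayley transform used above is easy to probe numerically; a small sketch (the symmetric matrix $H$ below is a generic test matrix, not taken from the paper):

```python
import numpy as np

def cayley(H):
    # C(H) = (1 + H)(1 - H)^(-1), defined for spec(H) ⊂ (-1, 1)
    I = np.eye(H.shape[0])
    return (I + H) @ np.linalg.inv(I - H)

rng = np.random.default_rng(0)
X = rng.standard_normal((4, 4))
Hsym = X + X.T
H = 0.4 * Hsym / np.abs(np.linalg.eigvalsh(Hsym)).max()  # symmetric, spec ⊂ (-1, 1)
lam_H = np.linalg.eigvalsh(H)            # ascending eigenvalues of H
lam_C = np.linalg.eigvalsh(cayley(H))    # should equal (1 + λ)/(1 - λ)
```

Since $x \mapsto (1+x)/(1-x)$ is increasing on $(-1,1)$, the ascending order of the eigenvalues is preserved under the transform.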

4.2. Convexity

The reformulation (25) will allow us to prove:

Theorem 4.3. $G$ is convex on the set of covariance matrices $\{\Gamma \in \mathbb R^{2n \times 2n} \mid \Gamma \geq iJ\}$.

The crucial part of the proof is the following lemma:

Lemma 4.4. Consider the map $f : \mathbb R^{n \times n} \times \mathbb R^{n \times n} \to \mathbb R$:

$$f(A, B) = \frac12 \sum_{i=1}^n \log\left(\frac{1 + s_i(A + iB)}{1 - s_i(A + iB)}\right). \qquad (28)$$

If we restrict $f$ to symmetric matrices $A$ and $B$ such that $s_i(A + iB) < 1$ for all $i = 1, \ldots, n$, then $f$ is jointly convex in $A, B$, i.e.

$$f(tA + (1-t)A',\ tB + (1-t)B') \leq t f(A, B) + (1-t) f(A', B') \quad \forall\, t \in [0, 1].$$

Proof. Let $\tilde A := tA + (1-t)A'$ and $\tilde B := tB + (1-t)B'$. Note that $\tilde A$ and $\tilde B$ are also symmetric, and the largest singular value of $\tilde A + i\tilde B$ fulfils $s_1(\tilde A + i\tilde B) \leq t\, s_1(A + iB) + (1-t)\, s_1(A' + iB')$. Therefore, the singular values of any convex combination of $A + iB$ and $A' + iB'$ also lie in the interval $[0, 1)$. This makes our restriction well defined under convex combinations.

For any $j = 1, \ldots, n$, by Thompson's theorem (see [Tho76]), which states that for every pair of complex matrices $A, B$ there exist unitary matrices $U, V$ such that $|A + B| \leq U|A|U^* + V|B|V^*$, we have

$$s_j(\tilde A + i\tilde B) = \lambda_j(|\tilde A + i\tilde B|) \leq \lambda_j\big(U|t(A + iB)|U^* + V|(1-t)(A' + iB')|V^*\big).$$

Using Lidskii's theorem ([Bha96], chapter III, with explicit formulation in exercise III.4.3), we have

$$s_j(\tilde A + i\tilde B) \leq \lambda_j\big(U|t(A + iB)|U^*\big) + \sum_\pi p_\pi\, \lambda_{\pi(j)}\big(V|(1-t)(A' + iB')|V^*\big) \overset{(*)}{=} \lambda_j\big(|t(A + iB)|\big) + \sum_\pi p_\pi\, \lambda_{\pi(j)}\big(|(1-t)(A' + iB')|\big) = t\,\lambda_j(|A + iB|) + (1-t) \sum_\pi p_\pi\, \lambda_{\pi(j)}(|A' + iB'|) \qquad (29)$$

with $p_\pi \geq 0$ and $\sum_\pi p_\pi = 1$. In $(*)$, we used that unitary conjugation does not change the spectrum. Now each summand in equation (28) is the logarithm of the Cayley transform $\mathcal C(x) = (1+x)/(1-x)$ of a singular value. We can use the convexity and monotonicity of $\log \mathcal C$ on $[0,1)$ to prove the joint convexity of $f$:

$$\begin{aligned}
f(\tilde A, \tilde B) &= \sum_{i=1}^n \log \mathcal C\big[s_i(\tilde A + i\tilde B)\big] \\
&\leq \sum_{i=1}^n \log \mathcal C\Big[t\,\lambda_i(|A + iB|) + (1-t)\sum_\pi p_\pi\,\lambda_{\pi(i)}(|A' + iB'|)\Big] \\
&\leq \sum_{i=1}^n \Big( t \log \mathcal C\big[\lambda_i(|A + iB|)\big] + (1-t)\sum_\pi p_\pi \log \mathcal C\big[\lambda_{\pi(i)}(|A' + iB'|)\big] \Big) \\
&= t \sum_{i=1}^n \log \mathcal C\big[\lambda_i(|A + iB|)\big] + (1-t) \sum_\pi p_\pi \Big( \sum_{i=1}^n \log \mathcal C\big[\lambda_{\pi(i)}(|A' + iB'|)\big] \Big) \\
&\leq t \sum_{i=1}^n \log \mathcal C\big[\lambda_i(|A + iB|)\big] + (1-t) \sum_\pi p_\pi \cdot \max_\pi \Big( \sum_{i=1}^n \log \mathcal C\big[\lambda_{\pi(i)}(|A' + iB'|)\big] \Big) \\
&\overset{(**)}{=} t \sum_{i=1}^n \log \mathcal C\big[\lambda_i(|A + iB|)\big] + (1-t) \sum_{i=1}^n \log \mathcal C\big[\lambda_i(|A' + iB'|)\big] \\
&= t f(A, B) + (1-t) f(A', B'),
\end{aligned}$$

where in $(**)$ we use that the sum over all eigenvalues is of course not dependent on the order of the eigenvalues. □

This lemma will later allow us to calculate G as a convex programme.
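The joint convexity of $f$ can also be probed numerically; a sketch with random symmetric matrices scaled so that all singular values of $A + iB$ stay below one (a single random instance, illustrating lemma 4.4, not proving it):

```python
import numpy as np

def f(A, B):
    # f(A, B) = 1/2 Σ_i log((1 + s_i(A + iB)) / (1 - s_i(A + iB)))   (28)
    s = np.linalg.svd(A + 1j * B, compute_uv=False)
    return float(0.5 * np.sum(np.log((1 + s) / (1 - s))))

rng = np.random.default_rng(1)
def rand_sym(n, scale=0.3):
    # random symmetric matrix with spectrum in [-scale, scale]
    X = rng.standard_normal((n, n))
    S = X + X.T
    return scale * S / np.abs(np.linalg.eigvalsh(S)).max()

A, B, Ap, Bp = (rand_sym(3) for _ in range(4))
t = 0.37
lhs = f(t * A + (1 - t) * Ap, t * B + (1 - t) * Bp)
rhs = t * f(A, B) + (1 - t) * f(Ap, Bp)
# lemma 4.4: lhs <= rhs
```

The scaling guarantees $s_1(A + iB) \leq \|A\| + \|B\| \leq 0.6 < 1$, so $f$ is well defined on all convex combinations.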

Proof of theorem 4.3. We can now finish the proof of the convexity of $G$. First note that, using the definition of $f$ in lemma 4.4, we can reformulate (25) as

$$G(\Gamma) = \inf\{ f(A, B) \mid \mathcal C^{-1}(\Gamma) \geq H,\ H \in \mathcal H \}. \qquad (30)$$

Let $\Gamma \geq iJ$, $\Gamma' \geq iJ$ be two covariance matrices and let $H, H' \in \mathcal H$ be the matrices that attain the minima of $G(\Gamma)$ and $G(\Gamma')$ respectively. Then, in particular, $tH + (1-t)H' \in \mathcal H$. Furthermore, since $\mathcal C^{-1}(\Gamma) \geq H$ and $\mathcal C^{-1}(\Gamma') \geq H'$, we have

$$\mathcal C^{-1}\big(t\Gamma + (1-t)\Gamma'\big) \overset{(*)}{\geq} t\,\mathcal C^{-1}(\Gamma) + (1-t)\,\mathcal C^{-1}(\Gamma') \geq tH + (1-t)H',$$

where we used the operator concavity of $\mathcal C^{-1}$ in $(*)$. This means that $tH + (1-t)H'$ is a feasible matrix for the minimisation in $G$, which implies, using equation (30),

$$G\big(t\Gamma + (1-t)\Gamma'\big) \leq f\big(tA + (1-t)A',\ tB + (1-t)B'\big).$$

The convexity now follows directly from lemma 4.4 and the fact that we chose $H$ and $H'$ to attain $G(\Gamma)$ and $G(\Gamma')$. □

4.3. Continuity properties

From the convexity of G on the set of covariance matrices, it follows from general arguments
in convex analysis that G is continuous on the interior of the set of covariance matrices (see
[Roc97], theorem 10.1). What more can we say about the boundary?


Theorem 4.5. $G$ is lower semicontinuous on the set of covariance matrices $\{\Gamma \in \mathbb R^{2n \times 2n} \mid \Gamma \geq iJ\}$ and continuous on its interior. Moreover, $G(\Gamma + \varepsilon\mathbb 1) \to G(\Gamma)$ for $0 < \varepsilon \to 0$ for any $\Gamma \geq iJ$.

The ultimate goal would be to extend continuity from the interior to the boundary, which we do not know how to do at present. The proof will need a few notions from set-valued analysis that we review in appendix C.

Proof of theorem 4.5. As already observed, $G$ is continuous on the interior. Let $\Gamma_0 \geq iJ$ be arbitrary and define

$$\mathcal K(\Gamma) := \{\hat\Gamma \mid \Gamma - 2\Gamma_0 \leq \hat\Gamma \leq \Gamma\}.$$

By definition, $\mathcal K(\Gamma)$ is compact and convex for any $\Gamma$. Moreover, it defines a set-valued function on the set of covariance matrices with non-empty values. Let $\varepsilon > 0$; then for all $\Gamma \geq iJ$ with $\|\Gamma - \Gamma_0\| < \varepsilon$, we have that for any $\hat\Gamma \in \mathcal K(\Gamma)$, $\tilde\Gamma := \hat\Gamma + (\Gamma_0 - \Gamma) \in \mathcal K(\Gamma_0)$ and $\|\hat\Gamma - \tilde\Gamma\| < \varepsilon$. This is the condition in lemma C.2, hence the set-valued function defined by $\mathcal K$ is upper semicontinuous at $\Gamma_0$, which implies that $\mathcal K(\Gamma) \cap \{X \mid iJ \leq X\}$ is also upper semicontinuous by proposition C.3. If $\varepsilon$ is small enough (e.g. $\varepsilon < 1$), this implies

$$\mathcal K(\Gamma) \cap \{X \mid iJ \leq X\} = \{X \mid iJ \leq X \leq \Gamma\} =: \mathcal F(\Gamma),$$

hence this set is upper semicontinuous at $\Gamma_0$.
Since $F$ is continuous on positive definite matrices, it is uniformly continuous if we restrict it to a small neighbourhood of the covariance matrix $\Gamma_0$. This means that for every $\varepsilon > 0$ there exists an $\eta > 0$ such that

$$F(\tilde\Gamma^{1/2}) - \varepsilon < F(\hat\Gamma^{1/2}) < F(\tilde\Gamma^{1/2}) + \varepsilon \qquad (31)$$

for all $\|\tilde\Gamma - \hat\Gamma\| < \eta$ and all $\tilde\Gamma, \hat\Gamma \in \bigcup_{\|\Gamma - \Gamma_0\| < 1} \mathcal F(\Gamma)$.

Assuming without loss of generality that $\|\Gamma - \Gamma_0\| < 1$, the set $\mathcal F(\Gamma)$ is exactly the feasible set of the minimisation in the definition of $G$. The upper semicontinuity of $\mathcal F(\Gamma)$ implies by lemma C.2 that for every $\eta > 0$ there exists a $\delta > 0$ such that for all $\|\Gamma - \Gamma_0\| < \delta$ we have: for all $\hat\Gamma \in \mathcal F(\Gamma)$ there exists a $\tilde\Gamma \in \mathcal F(\Gamma_0)$ such that $\|\hat\Gamma - \tilde\Gamma\| < \eta$. In particular, this is true for all minimisers $\hat\Gamma$ with $G(\Gamma) = F(\hat\Gamma^{1/2})$, where $\hat\Gamma$ and $\tilde\Gamma \in \bigcup_{\|\Gamma - \Gamma_0\| < 1} \mathcal F(\Gamma)$. Using equation (31) we obtain: for every $\varepsilon > 0$ there exists a $\delta > 0$ such that for all $\|\Gamma - \Gamma_0\| < \delta$ we have a pair $\hat\Gamma, \tilde\Gamma$ with $\hat\Gamma \in \mathcal F(\Gamma)$ minimising $G(\Gamma)$ and $\tilde\Gamma \in \mathcal F(\Gamma_0)$ such that

$$F(\tilde\Gamma^{1/2}) - \varepsilon < F(\hat\Gamma^{1/2}) = G(\Gamma).$$

Since $\tilde\Gamma$ is feasible for $G(\Gamma_0)$, i.e. $G(\Gamma_0) \leq F(\tilde\Gamma^{1/2})$, this implies that for all $\varepsilon > 0$ there exists a $\delta > 0$ such that

$$G(\Gamma_0) \leq G(\Gamma) + \varepsilon$$

for all $\|\Gamma - \Gamma_0\| < \delta$. Taking the limit inferior on both sides implies that $G$ is lower semicontinuous at $\Gamma_0$.
Upper semicontinuity would follow, for instance, if the feasible set $\mathcal F(\Gamma_0) = \{X \mid iJ \leq X \leq \Gamma_0\}$ were also lower semicontinuous.
Finally, let us prove that G (G0 + e )  0 for e  0 . To see this, consider the closed sets
Cn ≔ ⋃  (G0 + x )
0x1 n

for any n Î  . It is easy to see that Cn + 1 Í Cn and that ⋂n Î¥ Cn =  (G0). Moreover, C1 is


compact. Now let Gn be the sequence of minimisers for G (G0 + 1 n ), then Gn Î Cn for all
n Î  . By compactness, a subsequence will converge to
17
J. Phys. A: Math. Theor. 49 (2016) 445304 M Idel et al

G Î ⋂ Cn =  (G0) .
n Υ

Therefore, G (G0)  lime 0 G (G0 + e ), but since G0  G0 + e  for all e > 0 we also have
G (G)  lime 0 G (G0 + e ). ,

4.4. Additivity properties

Now we consider additivity properties of G. We switch our basis again and use γ and σ.

Proposition 4.6. For any covariance matrices $\gamma_A \in \mathbb R^{2n_1 \times 2n_1}$ and $\gamma_B \in \mathbb R^{2n_2 \times 2n_2}$, we have

$$\frac12\big(G(\gamma_A) + G(\gamma_B)\big) \leq G(\gamma_A \oplus \gamma_B) \leq G(\gamma_A) + G(\gamma_B).$$

In particular, $G$ is subadditive.

Proof. For subadditivity, let $S^T S \leq \gamma_A$ and $S'^T S' \leq \gamma_B$ attain the minima in $G(\gamma_A)$ and $G(\gamma_B)$ respectively. Then $S \oplus S'$ is symplectic and $(S \oplus S')^T (S \oplus S') \leq \gamma_A \oplus \gamma_B$, hence $G(\gamma_A \oplus \gamma_B) \leq G(\gamma_A) + G(\gamma_B)$.

To prove the lower bound, we need the following inequality, which we will only prove later on (see equation (46)):

$$\forall\, a \geq 1:\quad G(\gamma_A) \leq G(\gamma_A \oplus a\mathbb 1_{2n_2}). \qquad (32)$$

Assuming this inequality, let $a \geq 1$ be such that $a\mathbb 1_{2n_2} \geq \gamma_B$; then

$$G(\gamma_A \oplus a\mathbb 1_{2n_2}) \leq G(\gamma_A \oplus \gamma_B),$$

hence $G(\gamma_A) \leq G(\gamma_A \oplus \gamma_B)$, and since the same reasoning applies to $\gamma_B$, we have $G(\gamma_A) + G(\gamma_B) \leq 2\,G(\gamma_A \oplus \gamma_B)$. □

We do not know whether $G$ is also superadditive, which would make it additive. At present, we can only prove:

Corollary 4.7. Let $\gamma_A \in \mathbb R^{2n_1 \times 2n_1}$ be a covariance matrix and $\gamma_B \in \mathrm{Sp}(2n_2)$ a covariance matrix of a pure state. Then $G(\gamma_A \oplus \gamma_B) = G(\gamma_A) + G(\gamma_B)$.

Proof. Subadditivity has already been proven in proposition 4.6. For superadditivity, we use the reformulation of the squeezing measure in equation (24): note that there is only one matrix $\gamma$ with $\gamma_B \geq \gamma \geq i\sigma$, namely $\gamma_B$ itself. Now write

$$\gamma_A \oplus \gamma_B \geq \begin{pmatrix} \tilde A & C \\ C^T & \tilde B \end{pmatrix} \geq i\sigma$$

for $\tilde A \in \mathbb R^{2n_1 \times 2n_1}$ and $\tilde B \in \mathbb R^{2n_2 \times 2n_2}$. Then in particular $\gamma_B - \tilde B \geq 0$, but also $\tilde B \geq i\sigma$, hence $\gamma_B \geq \tilde B \geq i\sigma$ and therefore $\tilde B = \gamma_B$. But then

$$\gamma_A \oplus \gamma_B - \begin{pmatrix} \tilde A & C \\ C^T & \tilde B \end{pmatrix} = \begin{pmatrix} \gamma_A - \tilde A & -C \\ -C^T & 0 \end{pmatrix} \geq 0,$$

hence also $C = 0$, and any matrix attaining the minimum in $G(\gamma_A \oplus \gamma_B)$ must be block diagonal. Then $\gamma_A \oplus \gamma_B \geq \tilde A \oplus \gamma_B \geq i\sigma$ and $\tilde A$ is in the feasible set of $G(\gamma_A)$, which yields $G(\gamma_A \oplus \gamma_B) \geq G(\gamma_A) + G(\gamma_B)$. □


Corollary 4.8. For any covariance matrices $\gamma_A \in \mathbb R^{2n_1 \times 2n_1}$ and $\gamma_B \in \mathbb R^{2n_2 \times 2n_2}$,

$$G(\gamma_A) + G(\gamma_B) \leq 2\,G\!\begin{pmatrix} \gamma_A & C \\ C^T & \gamma_B \end{pmatrix}.$$

If $G$ is superadditive, then this inequality holds without the factor of two.

Proof.

$$G(\gamma_A) + G(\gamma_B) \leq 2\,G\!\begin{pmatrix} \gamma_A & 0 \\ 0 & \gamma_B \end{pmatrix} = 2\,G\left( \frac12 \begin{pmatrix} \gamma_A & C \\ C^T & \gamma_B \end{pmatrix} + \frac12 \begin{pmatrix} \gamma_A & -C \\ -C^T & \gamma_B \end{pmatrix} \right) \overset{(*)}{\leq} G\!\begin{pmatrix} \gamma_A & C \\ C^T & \gamma_B \end{pmatrix} + G\!\begin{pmatrix} \gamma_A & -C \\ -C^T & \gamma_B \end{pmatrix} \overset{(**)}{=} 2\,G\!\begin{pmatrix} \gamma_A & C \\ C^T & \gamma_B \end{pmatrix}.$$

Here we used proposition 4.6 and then the convexity of $G$ in $(*)$. Finally, in $(**)$ we used that for every

$$\begin{pmatrix} \gamma_A & C \\ C^T & \gamma_B \end{pmatrix} \geq \begin{pmatrix} S_A & \tilde C \\ \tilde C^T & S_B \end{pmatrix} \begin{pmatrix} S_A & \tilde C \\ \tilde C^T & S_B \end{pmatrix}^T, \qquad \begin{pmatrix} S_A & \tilde C \\ \tilde C^T & S_B \end{pmatrix} \in \mathrm{Sp}(2(n_1 + n_2)), \qquad (33)$$

we also have

$$\begin{pmatrix} \gamma_A & -C \\ -C^T & \gamma_B \end{pmatrix} \geq \begin{pmatrix} S_A & -\tilde C \\ -\tilde C^T & S_B \end{pmatrix} \begin{pmatrix} S_A & -\tilde C \\ -\tilde C^T & S_B \end{pmatrix}^T, \qquad \begin{pmatrix} S_A & -\tilde C \\ -\tilde C^T & S_B \end{pmatrix} \in \mathrm{Sp}(2(n_1 + n_2)), \qquad (34)$$

and vice versa. Since the two matrices on the right-hand sides of equations (33) and (34) have equal spectrum, the two squeezing measures of the matrices on the left-hand side must be equal. □

4.5. Bounds

Let us give a few simple bounds on $G$.

Proposition 4.9 (Spectral bounds). Let $\Gamma \geq iJ$ be a valid covariance matrix and $\lambda(\Gamma)$ the vector of its eigenvalues in decreasing order. Then:

$$-\frac12 \sum_{\lambda_i(\Gamma) < 1} \log\big(\lambda_i(\Gamma)\big) \;\leq\; G(\Gamma) \;\leq\; \frac12 \sum_{i=1}^n \log \lambda_i(\Gamma) = F(\Gamma^{1/2}). \qquad (35)$$

Proof. According to the Euler decomposition, a positive definite symplectic matrix has positive eigenvalues that come in pairs $s, s^{-1}$, and we can find $O \in \mathrm{SO}(2n)$ such that for any $S^T S \leq \Gamma$

$$O^T \Gamma O \geq \mathrm{diag}(s_1, \ldots, s_n, s_1^{-1}, \ldots, s_n^{-1}).$$

But then $\lambda_k(\Gamma) \geq \lambda_k\big(\mathrm{diag}(s_1, \ldots, s_n, s_1^{-1}, \ldots, s_n^{-1})\big)$ via the Weyl inequalities $\lambda_i(A) \geq \lambda_i(B)$ for all $i$ if $A - B \geq 0$ (see [Bha96], theorem III.2.3). This implies:

$$G(\Gamma) \leq \sum_{i=1}^n \log\big(\max\{s_i, s_i^{-1}\}\big) \leq \sum_{i=1}^n \log \lambda_i(\Gamma)^{1/2}.$$

For the lower bound, given an optimal matrix $S$ with singular values $s_i$, we have

$$G(\Gamma) = \sum_i \log \max\{s_i, s_i^{-1}\}.$$

If $S^T S = O^T \mathrm{diag}(s_1^2, \ldots, s_n^2, s_1^{-2}, \ldots, s_n^{-2})\, O$ with $O \in \mathrm{SO}(2n)$ is the diagonalisation of $S^T S$, we can write

$$O^{-T} \Gamma^{-1} O^{-1} \leq \mathrm{diag}(s_1^2, \ldots, s_n^2, s_1^{-2}, \ldots, s_n^{-2}),$$

and again by Weyl's inequalities, we find for all $k \leq n$:

$$-\frac12 \sum_{i=2n-k+1}^{2n} \log\big(\lambda_i(\Gamma)\big) \leq \frac12 \sum_{i=1}^{k} \log \lambda_i\big(\mathrm{diag}(s_1^2, \ldots, s_n^2, s_1^{-2}, \ldots, s_n^{-2})\big) \leq G(\Gamma). \qquad (36)$$

Now, $-\frac12 \sum_{i=2n-k+1}^{2n} \log \lambda_i(\Gamma)$ is maximised by restricting to the eigenvalues $\lambda_i(\Gamma) < 1$. This implies

$$-\frac12 \sum_{\lambda_i(\Gamma) < 1} \log\big(\lambda_i(\Gamma)\big) \leq G(\Gamma),$$

using that the number of eigenvalues $\lambda_i(\Gamma) < 1$ can be at most $n$ (hence $k \leq n$ in the inequality of equation (36)), since $\Gamma \geq S^T S$ and $S^T S$ has at least $n$ eigenvalues greater than or equal to one. □
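The spectral bounds of (35) require nothing more than an eigenvalue decomposition. A numpy sketch (for the pure squeezed state below both bounds coincide, while for a thermal state the lower bound is zero and the upper bound grows without limit, as discussed after proposition 4.10):

```python
import numpy as np

def spectral_bounds(Gamma):
    # equation (35): lower bound from the eigenvalues of Γ below 1,
    # upper bound F(Γ^(1/2)) from the n largest eigenvalues
    lam = np.sort(np.linalg.eigvalsh(Gamma))[::-1]   # decreasing order
    n = Gamma.shape[0] // 2
    lower = -0.5 * float(np.sum(np.log(lam[lam < 1])))
    upper = 0.5 * float(np.sum(np.log(lam[:n])))
    return lower, upper

s = 3.0
Gamma = np.diag([s**2, s**-2])          # single-mode squeezed vacuum
lo, up = spectral_bounds(Gamma)         # both equal log(s) here

N = 10
lo_th, up_th = spectral_bounds((2 * N + 1) * np.eye(2))  # thermal state
```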

Numerics suggest that the lower bound is often very good for low dimensions. In fact, it
can sometimes be achieved:

Proposition 4.10. Let G  iJ be a covariance matrix, then G achieves the lower bound in
equation (35) if there exists an orthonormal eigenvector basis {vi}i2=n 1 of Γ with viT Jvj = di, n + j .
Conversely, if G achieves the lower bound, then viT Jvj = 0 for all normalised eigenvectors
vi, vj of G with li, lj < 1.

Proof. Suppose that the lower bound in equation (35) is achieved. Via Weyl’s inequalities
(see [Bha96] theorem III.2.3), for all S T S  G in the definition of G we have
li (S T S )  li (G). For the particular S achieving G, this implies that for all li (G) < 1 we
have li (S T S ) = li (G). But then G  S T S implies that S TS and Γ share all eigenvectors to the
smallest eigenvalue. Iteratively, every eigenvector of Γ with li (G) < 1 must be an
eigenvector of S TS with the same eigenvalue.
Since the matrix diagonalising S TS also diagonalises -1(S T S ), the eigenvectors of the
two matrices are the same. Now, since -1(S T S ) Î  by reformulation (25), for any
eigenvector vi of any eigenvalue -1(li ) < 0 , Jvi is also an eigenvector of -1(S T S ) to the
eigenvalue --1(li ), implying viT Jvj = 0 for all i, j . By definition, this means that {vi, Jvj}
forms a symplectic basis. Above, we already saw that the eigenvectors of Γ for li (G) < 1 are
also eigenvalues of S TS, hence viT Jvj = 0 for all i such that li (G) < 1.
Conversely, suppose we have an orthonormal basis {vi}i2=n 1 such that viT Jvj = di, j + n
(modulo 2n if necessary) for all eigenvectors of Γ, i.e.Γ is diagonalisable by a symplectic
orthonormal matrix O˜ Î U (n ). Then

20
J. Phys. A: Math. Theor. 49 (2016) 445304 M Idel et al

T
O˜ GO˜ = diag (l1, ¼, l2n) .

Since G  iJ we have li l2i  1. Assume that li  ln + i for all i = 1,K,n and the ln + i are
ordered in decreasing order. Then ln + r < 1  ln + r - 1 for some r  n and

S T S = O˜ diag (1, ¼, 1, l- -1
T
r , ¼, l n , 1, ¼, 1, l n + r , ¼, l 2n) O
1 ˜

fulfils S T S  G and obviously achieves the lower bound in equation (35). ,

In contrast to this, the upper bound can be arbitrarily bad. For instance, consider the thermal state $\Gamma = (2N + 1)\cdot\mathbb 1$ for increasing $N$. It is easy to see that $G(\Gamma) = 0$: since $\Gamma \geq \mathbb 1$ and $F(\mathbb 1) = 0$, we have $G(\Gamma) \leq 0$. However, the upper bound in equation (35) is $\frac n2 \log(2N + 1) \to \infty$ for $N \to \infty$, and is therefore arbitrarily bad.
We can achieve better upper bounds by using Williamson's normal form:

Proposition 4.11 (Williamson bounds). Let $\Gamma \in \mathbb R^{2n \times 2n}$ be such that $\Gamma \geq iJ$ and consider its Williamson normal form $\Gamma = S^T D S$. Then:

$$F(S) - \log\big(\sqrt{\det(\Gamma)}\big) \;\leq\; G(\Gamma) \;\leq\; F(S). \qquad (37)$$

Proof. Since $D \geq \mathbb 1$ via $\Gamma \geq iJ$, the upper bound follows directly from the definition. Also, $F(S) \leq F(\Gamma^{1/2})$, which makes this bound trivially better than the spectral upper bound in equation (35). The lower bound follows from:

$$G(\Gamma) \overset{(36)}{\geq} \frac12 \log\left( \prod_{i=n+1}^{2n} \lambda_i(\Gamma)^{-1} \right) = \frac12 \log \frac{\prod_{i=1}^n \lambda_i(\Gamma)}{\prod_{i=1}^{2n} \lambda_i(\Gamma)} = F(\Gamma^{1/2}) - \log\big(\det(\Gamma)^{1/2}\big) \geq F\big((S^T S)^{1/2}\big) - \log\big(\sqrt{\det(\Gamma)}\big) = F(S) - \log\big(\sqrt{\det(\Gamma)}\big),$$

using Weyl's inequalities once again: since $\Gamma \geq S^T S$, we also have $F(\Gamma^{1/2}) \geq F\big((S^T S)^{1/2}\big) = F(S)$. □

The upper bound here can also be arbitrarily bad. One just has to consider $\Gamma := S^T (N \cdot \mathbb 1) S$ with $S^2 = \mathrm{diag}\big(N-1, \ldots, N-1,\ (N-1)^{-1}, \ldots, (N-1)^{-1}\big) \in \mathrm{Sp}(2n)$. Then $\Gamma \geq \mathbb 1$, i.e. $G(\Gamma) = 0$, but $F(S) \to \infty$ for $N \to \infty$.

Proposition 4.12. Let G  iJ be a covariance matrix. Then


1
G (G)  inf {g0 1 ∣log G  g0, g0 Î p (n)} , (38)
4

where p (n ) was defined in proposition 2.5 as the Lie algebra of the positive semidefinite
symplectic matrices. This infimum can be computed efficiently as a semidefinite programme.

Proof. Recall that the logarithm is operator monotone on positive definite matrices. Using this, we have:

$$G(\Gamma) = \log \inf\left\{ \prod_{i=1}^n \lambda_i(S^T S)^{1/2} \;\middle|\; \Gamma \geq S^T S \right\} \geq \inf\left\{ \sum_{i=1}^n \log \lambda_i\big(\exp(g_0)\big)^{1/2} \;\middle|\; \log \Gamma \geq g_0,\ g_0 \in p(n) \right\} = \inf\left\{ \frac12 \sum_{i=1}^n \lambda_i(g_0) \;\middle|\; \log \Gamma \geq g_0,\ g_0 \in p(n) \right\} = \inf\left\{ \frac14 \sum_{i=1}^{2n} s_i(g_0) \;\middle|\; \log \Gamma \geq g_0,\ g_0 \in p(n) \right\}.$$

The last step is valid because the eigenvalues of matrices $g_0 \in p(n)$ come in pairs $\pm\lambda_i$. Since the sum of all the singular values is just the trace norm, we are done.

It remains to see that this can be computed by a semidefinite programme. First note that, since the matrices $H \in p(n)$ are exactly the symmetric matrices with $HJ + JH = 0$, the constraints are already linear semidefinite matrix inequalities. The trace norm is an SDP by standard reasoning [RFP10, VB96]:

$$\|g_0\|_1 = \min\left\{ \frac12 \mathrm{tr}(A + B) \;\middle|\; \begin{pmatrix} A & g_0 \\ g_0 & B \end{pmatrix} \geq 0 \right\},$$

which is clearly a semidefinite programme. □

Numerics for small dimensions suggest that this bound is usually smaller than the spectral lower bound.
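In practice, the SDP in (38) would be handed to a semidefinite solver (e.g. via cvxpy). As a solver-free sketch, note that whenever $\log \Gamma$ itself lies in $p(n)$ — for instance for a pure Gaussian state $\Gamma = S^T S$ — it is feasible in (38), so the bound $\frac14 \|\log \Gamma\|_1$ can be evaluated directly; for the single-mode example below it is even tight:

```python
import numpy as np

s = 2.5
Gamma = np.diag([s**2, s**-2])       # pure single-mode squeezed state, Γ = SᵀS
lam, U = np.linalg.eigh(Gamma)
logG = U @ np.diag(np.log(lam)) @ U.T
J = np.array([[0.0, 1.0], [-1.0, 0.0]])
in_p = np.allclose(logG @ J + J @ logG, 0)   # g₀ = log Γ fulfils g₀J + Jg₀ = 0
bound = 0.25 * np.sum(np.abs(np.linalg.eigvalsh(logG)))   # ¼ ‖g₀‖₁
# for this state G(Γ) = log(s), which the bound reproduces exactly
```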

5. An operational definition of the squeezing measure

We claim that $G$ answers the question: given a state, what is the minimal amount of single-mode squeezing needed to prepare it? In other words, it quantifies the amount of squeezing needed for the preparation of a state.

5.1. Operations for state preparation and an operational measure for squeezing

We first specify the preparation procedure. Since we want to quantify squeezing, it seems natural to allow drawing states from the vacuum or a thermal bath for free. Furthermore, we can perform an arbitrary number of the following operations for free:
(1) Add ancillary states also from a thermal bath or the vacuum.
(2) Add Gaussian noise.
(3) Implement any gate from linear optics.
(4) Perform Weyl-translations of the state.
(5) Perform selective or non-selective Gaussian measurements such as homodyne or
heterodyne detection.
(6) Forget part of the state.
(7) Create convex combinations of ensembles.
In addition, the following operation comes with an associated cost:
(8) Implement single-mode squeezers at a cost of log(s ), where s is the squeezing parameter.


All these operations are standard operations in quantum optics and they should capture
all important Gaussian operations except for squeezing.
It is well-known that all of these operations are captured by the following set of
operations on the covariance matrix (for a justification, see appendix D):
(O0) We can always draw $N$-mode states with $\gamma \in \mathbb R^{2N \times 2N}$ for any dimension $N$ from the vacuum, $\gamma = \mathbb 1$, or from a bath fulfilling $\gamma \geq \mathbb 1$.
(O1) We can always add ancillary modes from the vacuum, $\gamma_{\mathrm{anc}} = \mathbb 1$, or from a bath, $\gamma_{\mathrm{anc}} \geq \mathbb 1$, and consider $\gamma \oplus \gamma_{\mathrm{anc}}$.
(O2) We can freely add noise with $\gamma_{\mathrm{noise}} \geq 0$ to our state, which is simply added to the covariance matrix of the state.
(O3) We can perform any beam splitter or phase shifter, and in general any operation $S \in K(n)$, which translates to a map $\gamma \mapsto S^T \gamma S$ on covariance matrices of states.
(O4) We can perform any single-mode squeezer $S = \mathrm{diag}(1, \ldots, 1, s, s^{-1}, 1, \ldots, 1)$ for some $s \in \mathbb R_+$.
(O5) We can perform any Weyl translation, leaving the covariance matrix invariant.
(O6) Given two states with covariance matrices $\gamma_1$ and $\gamma_2$, we can always take their convex combination $p\gamma_1 + (1-p)\gamma_2$ for any $p \in [0, 1]$.
(O7) At any point, we can perform a selective measurement of part of the system, corresponding to a projection onto a finitely or infinitely squeezed state. Given a state with covariance matrix

$$\gamma = \begin{pmatrix} A & C \\ C^T & B \end{pmatrix},$$

this maps

$$\gamma \mapsto A - C\,(B + \gamma_G)^{\mathrm{MP}}\, C^T,$$

where $\gamma_G$ is the covariance matrix associated with the measurement and $(\cdot)^{\mathrm{MP}}$ denotes the Moore–Penrose pseudoinverse.
Only operation (O4) comes at a cost of $\log(s)$; all other operations are free.
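Operation (O7) on the covariance-matrix level can be sketched as follows — a heterodyne example ($\gamma_G = \mathbb 1$) on one arm of a two-mode squeezed state, assuming the mode ordering $(x_1, p_1, x_2, p_2)$:

```python
import numpy as np

def measure_last_mode(gamma, gamma_G):
    # (O7): γ = (A C; Cᵀ B) ↦ A − C (B + γ_G)⁺ Cᵀ, measuring the last mode
    k = gamma.shape[0] - 2
    A, C, B = gamma[:k, :k], gamma[:k, k:], gamma[k:, k:]
    return A - C @ np.linalg.pinv(B + gamma_G) @ C.T

r = 1.0
ch, sh = np.cosh(2 * r), np.sinh(2 * r)
Z = np.diag([1.0, -1.0])
gamma = np.block([[ch * np.eye(2), sh * Z],
                  [sh * Z, ch * np.eye(2)]])     # two-mode squeezed vacuum
out = measure_last_mode(gamma, np.eye(2))        # heterodyne detection
# out is the vacuum covariance matrix: no local squeezing survives
```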
We are now ready to state our main theorem: the minimal squeezing cost over all possible preparation procedures consisting of operations (1)–(8) is given by $G$.

Theorem 5.1. Let $\rho$ be a quantum state with covariance matrix $\gamma$. Consider arbitrary sequences

$$\vec\gamma_N := \gamma_0 \to \gamma_1 \to \cdots \to \gamma_N,$$

where $\gamma_0$ fulfils (O0) and every arrow corresponds to an arbitrary operation (O1)–(O5) or (O7). Using (O6), we can merge two sequences $\vec\gamma_{N_1}$ and $\vec\gamma_{N_2}$ into one resulting tree with $\gamma_{N_1 + N_2 + 1} = \lambda\gamma_{N_1} + (1 - \lambda)\gamma_{N_2}$ for some $\lambda \in (0, 1)$. Iteratively, we can construct trees of any depth and width using operations (O1)–(O7).

Let $O_N(\gamma)$ be the set of such trees with $N$ operations ending with $\gamma$ (i.e. $\gamma_N = \gamma$), and let $O(\gamma) = \bigcup_{N=1}^\infty O_N(\gamma)$.

Furthermore, for any tree $\hat\gamma \in O_N(\gamma)$, let $\vec s = \{s_i\}_{i=1}^M$ be the sequence of the largest singular values of the single-mode squeezers (O4) implemented along the sequence (in particular, $M \leq N$). Then

$$G(\gamma) = \inf\left\{ \sum_i \log s_i \;\middle|\; s_i \in \vec s,\ \hat\gamma \in O(\gamma) \right\}. \qquad (39)$$


5.2. Proof of the main theorem

Since we consider many different operations, the proof is rather lengthy; the main difficulty lies in showing that measurements do not squeeze. In order to increase readability, the proof will be split into several lemmata.

Lemma 5.2. Let g Î 2n ´ 2n be a covariance matrix, g0   , let N Î  and


g0  g1    gN = g (40)
be any sequence of actions (O1)–(O5) or (O7). If we denote the cost (sum of the logarithm of
the largest singular values of any symplectic matrix involved) of this sequence by c, then one
can replace this sequence by:

(O1) (O2) ( O 3) , ( O 4) T
g0  g0 Å ganc  g0 Å ganc + gnoise  S (g0 Å ganc + gnoise) S
(O 7)
  (S T (g0 Å ganc + gnoise) S ) (41)

with ganc   , gnoise  0 , S Î Sp (2n ) and  a partial Gaussian measurement of type


specified in (O7). For this sequence, c  F (S ).

Proof. We prove the lemma by showing that, given any chain $\gamma_0 \to \gamma_1 \to \cdots \to \gamma_N = \gamma$ as in (40), we can interchange all operations and obtain a chain as in equation (41). For readability, we will not always specify the size of the matrices, and we will assume $\gamma \geq i\sigma$, $\gamma_{\mathrm{anc}} \geq \mathbb 1$, $\gamma_{\mathrm{noise}} \geq 0$, and $S$ a symplectic matrix, whenever the symbols arise:

(1) We can combine any sequence $\gamma_i \to \gamma_{i+1} \to \cdots \to \gamma_{i+m}$ for some $m \in \mathbb N$, where each of the arrows corresponds to a symplectic transformation $S_j$, $j = 1, \ldots, m$, as in (O3) or (O4), into a single symplectic matrix $S \in \mathrm{Sp}(2n)$ such that $\gamma_{i+m} = S^T \gamma_i S$. Furthermore, lemma 3.4 implies $F(S) \leq \sum_j \log s_1(S_j)$, hence this recombination of steps only lowers the amount of squeezing.

(2) Any sequence $\gamma \to S^T \gamma S \to S^T \gamma S + \gamma_{\mathrm{noise}}$ can be converted into a sequence $\gamma \to \gamma + \tilde\gamma_{\mathrm{noise}} \to S^T(\gamma + \tilde\gamma_{\mathrm{noise}})S$ with the same $S$ and hence the same cost by setting $\tilde\gamma_{\mathrm{noise}} := S^{-T} \gamma_{\mathrm{noise}} S^{-1} \geq 0$.

(3) Any sequence $\gamma \to S^T \gamma S \to S^T \gamma S \oplus \gamma_{\mathrm{anc}}$ can be converted into a sequence $\gamma \to \gamma \oplus \gamma_{\mathrm{anc}} \to \tilde S^T(\gamma \oplus \gamma_{\mathrm{anc}})\tilde S$ by setting $\tilde S = S \oplus \mathbb 1$ with $\mathbb 1$ of the same dimension as $\gamma_{\mathrm{anc}}$. Since we only add the identity, we have $F(\tilde S) = \sum_i \log s_i(\tilde S) = F(S)$ and the cost does not increase.

(4) Any sequence $\gamma \to \gamma + \gamma_{\mathrm{noise}} \to (\gamma + \gamma_{\mathrm{noise}}) \oplus \gamma_{\mathrm{anc}}$ can be converted into a sequence $\gamma \to \gamma \oplus \gamma_{\mathrm{anc}} \to \gamma \oplus \gamma_{\mathrm{anc}} + \tilde\gamma_{\mathrm{noise}}$ by setting $\tilde\gamma_{\mathrm{noise}} = \gamma_{\mathrm{noise}} \oplus 0 \geq 0$, which is again a valid noise matrix. As no operation of type (O4) is involved, the squeezing cost does not change.

In a next step we consider measurements. We will only consider homodyne detection, since the proof is exactly the same for arbitrary Gaussian measurements of type (O7). Given a covariance matrix $\gamma$, we assume a decomposition

$$\gamma = \begin{pmatrix} A & C \\ C^T & B \end{pmatrix}; \qquad \mathcal M(\gamma) = A - C\,(\pi B \pi)^{\mathrm{MP}}\, C^T$$

as in the definition of (O7), with $\pi = \mathrm{diag}(1, 0)$.


(5) Any sequence g   (g )  S T  (g ) S can be converted into a sequence


g  S˜T gS˜   (S˜T gS˜ ) by setting S˜ = S Å 2 . To see this, write
S T  (g ) S = S T AS - S T C (pBp )MP C T S and
⎛ T ⎛ A C⎞ S 0 ⎞ ⎛ ⎛ S T AS S T C ⎞ ⎞
⎜ S 0
⎝ 0 ( ) ⎜ ⎟
( ) ⎟ = ⎜⎜ T
⎝C T B ⎠ 0  ⎠ ⎝⎝ C S B ⎠⎠
⎟⎟

= S T AS - S T C (pBp ) MP C T S
hence the final covariance matrices are the same. By the same reasoning as in (3), the
costs are equivalent.
(6) Any sequence g   (g )   (g ) + gnoise can be converted into a sequence
g  g + g˜noise   (g + g˜noise ) by setting g˜noise = gnoise Å 0 , with 0 on the last mode
being measured. Since no symplectic matrices are involved, the costs are equivalent.
(7) Any sequence g   (g )   (g ) Å ganc can be changed into a sequence
g  g Å ganc   ˜ (g Å g ), where the measurement ̃ measures the last mode of
anc
γ, i.e.
⎛⎛ A C 0 ⎞⎞
⎜ ⎟
M˜ ⎜ ⎜C T B 0 ⎟ ⎟ = (A Å ganc) - (C Å 0)(pBp ) MP (C Å 0)T .
⎜⎜ 0 0 g ⎟⎟
⎝⎝ anc ⎠ ⎠

Clearly, the resulting covariance matrices of the two sequences are the same and the costs
are equivalent.
We can now easily prove the lemma. Let g0   gn be an arbitrary sequence with
operations of type (O1)–(O5) or (O7). We can first move all measurements to the right of the
sequence, i.e.we first perform all operations of type (O1)–(O5) and then all measurements.
This is done using the observations above. Note also that this step is similar to the quantum
circuit idea to ‘perform all measurements last’ (see [NC00], chapter 4).
Similarly, we can combine operations of type (O3) and (O4) and rearrange the other
operations to obtain a new sequence as in equation (41) with at most the costs of the sequence
g1    gm we started with. ,

We can now slowly work towards theorem 5.1:

Lemma 5.3. Let g Î 2n ´ 2n be a covariance matrix, then


G (g ) = inf {F (S )∣g = S T (g0 Å ganc + gnoise) S , S Î Sp (2n) , g0 Å ganc   , gnoise  0}.
(42)

Proof. First note that for any g  is , we can find S Î Sp (2n ), g0 Î 2n ´ 2n with g0   and
gnoise Î 2n ´ 2n with gnoise  0 such that g = S T (g0 + gnoise ) S by using Williamson’s
theorem, hence the feasible set is never empty. The lemma is immediate by observing that for
any g = S T (g0 Å ganc + gnoise ) S since (g0 Å ganc + gnoise )   we have g  S T S and
conversely, for any g  S T S , defining g0 ≔ S-T gS-1   , we have g = S T g0 S . ,


As an intermediate step we introduce the following notation:

$$\tilde G(\gamma) := \inf\{ F(S) \mid \gamma = \mathcal M\big(S^T(\gamma_0 \oplus \gamma_{\mathrm{anc}} + \gamma_{\mathrm{noise}})S\big),\ S \in \mathrm{Sp}(2n),\ \gamma_0 \oplus \gamma_{\mathrm{anc}} \geq \mathbb 1_{2n},\ \gamma_{\mathrm{noise}} \geq 0,\ \mathcal M \text{ a measurement} \}. \qquad (43)$$

Then we have:

Lemma 5.4. For a covariance matrix $\gamma \in \mathbb R^{2n \times 2n}$, we have

$$\tilde G(\gamma) = \inf\{ F(\hat\gamma^{1/2}) \mid \gamma = \mathcal M(\tilde\gamma),\ \tilde\gamma \geq \hat\gamma \geq i\sigma,\ \mathcal M \text{ a measurement} \}. \qquad (44)$$

Proof. This follows from lemma 5.3:

$$\begin{aligned}
\tilde G(\gamma) &= \inf\{ F(S) \mid \gamma = \mathcal M\big(S^T(\gamma_0 \oplus \gamma_{\mathrm{anc}} + \gamma_{\mathrm{noise}})S\big) \} \\
&= \inf\{ F(S) \mid \gamma = \mathcal M(\tilde\gamma),\ \tilde\gamma = S^T(\gamma_0 \oplus \gamma_{\mathrm{anc}} + \gamma_{\mathrm{noise}})S \geq i\sigma \} \\
&\overset{\text{lemma 5.3}}{=} \inf\{ G(\tilde\gamma) \mid \gamma = \mathcal M(\tilde\gamma),\ \tilde\gamma \geq i\sigma \} \\
&\overset{\text{prop. 4.2}}{=} \inf\{ F(\hat\gamma^{1/2}) \mid \gamma = \mathcal M(\tilde\gamma),\ \tilde\gamma \geq \hat\gamma \geq i\sigma \} \qquad (45)
\end{aligned}$$

by taking the infimum over all measurements last. □

Note here that equation (45), together with the following proposition 5.5, finishes the proof of proposition 4.6 via:

$$G(\gamma) = \inf\{ G(\tilde\gamma) \mid \gamma = \mathcal M(\tilde\gamma),\ \tilde\gamma \geq i\sigma \} \leq G(\gamma \oplus a\mathbb 1_{2n_2}) \qquad (46)$$

for $a \geq 1$, using that by measuring the last modes we obtain $\mathcal M(\gamma \oplus a\mathbb 1_{2n_2}) = \gamma$; therefore, $\gamma \oplus a\mathbb 1_{2n_2}$ is in the feasible set of $\tilde G(\gamma) = G(\gamma)$.

Proposition 5.5. For γ ∈ ℝ^{2n×2n} a covariance matrix we have

G̃(γ) = G(γ).

This proposition shows that G is operational if we exclude convex combinations (and
therefore also non-selective measurements).

Proof. Using lemma 5.4, the proof of this proposition reduces to the question whether:

inf{F(γ̂^{1/2}) | γ̃ ≥ γ̂ ≥ iσ, 𝓜(γ̃) = γ} = inf{F(γ′^{1/2}) | γ ≥ γ′ ≥ iσ}.   (47)

Since we do not need to use measurements, ≤ is obvious.
Let γ̃ ≥ γ̂ ≥ iσ for some 𝓜(γ̃) = γ. Our first claim is that

γ ≥ 𝓜(γ̂) ≥ iσ.   (48)

𝓜(γ̂) ≥ iσ is clear from the fact that γ̂ is a covariance matrix and a measurement takes states
to states. γ ≥ 𝓜(γ̂) is proved using Schur complements. Let 𝓜 be a Gaussian measurement
as in equation (68) with γ_G = diag(d, 1/d), d ∈ ℝ₊. It is well known that

𝓜(γ) = ((𝟙 ⊕ diag(1/d, d)) γ (𝟙 ⊕ diag(1/d, d)) + 0 ⊕ 𝟙₂)_S,


where S denotes the Schur complement of the block in the lower-right corner of the matrix.
For homodyne measurements, we take the limit d  ¥. Since for any g˜  gˆ  0 , the Schur
complements of the lower right block fulfil g˜ S  gˆ S  0 (see [Bha07], exercise 1.5.7), we
have g   (gˆ ) as claimed in equation (48).
Next, we claim

F(𝓜(γ̂)^{1/2}) ≤ F(γ̂^{1/2}).   (49)

To prove this claim, note that via the monotonicity of the exponential function on ℝ, it
suffices to prove

∏_{j=1}^{m} s_j(𝓜(γ̂)) ≤ ∏_{j=1}^{n} s_j(γ̂)

when we assume γ̂ ∈ ℝ^{2n×2n} and 𝓜(γ̂) ∈ ℝ^{2m×2m} with m ≤ n. If we write

γ̂ = ( Â  Ĉ ; Ĉ^T  B̂ ),

then the state after measurement is given by 𝓜(γ̂) = Â − Ĉ(B̂ + diag(d, 1/d))^{-1}Ĉ^T, or the
limit d → ∞ for homodyne measurements. In any case Ĉ(B̂ + diag(d, 1/d))^{-1}Ĉ^T ≥ 0 and
𝓜(γ̂) ≤ Â and therefore, by Weyl's inequalities, also

∏_{j=1}^{m} s_j(𝓜(γ̂)) ≤ ∏_{j=1}^{m} s_j(Â).

Now we use Cauchy's interlacing theorem (see [Bha96], corollary III.1.5): as Â is a submatrix
of γ̂, we have λ_i(Â) ≤ λ_i(γ̂) for all i = 1, …, 2m. Since at least m eigenvalues of Â are
bigger than or equal to one and at least n eigenvalues of γ̂ are bigger than or equal to one, this
implies

∏_{j=1}^{m} s_j(Â) = ∏_{j=1}^{m} λ_j(Â) ≤ ∏_{j=1}^{m} λ_j(γ̂) ≤ ∏_{j=1}^{n} λ_j(γ̂) = ∏_{j=1}^{n} s_j(γ̂).   (50)

In particular, this proves equation (49).


We can then complete the proof: let γ̃ ≥ γ̂ ≥ iσ for some 𝓜(γ̃) = γ in equation (47).
We have just seen that this implies γ ≥ 𝓜(γ̂) ≥ iσ via equation (48) and furthermore that
F(γ̂^{1/2}) ≥ F(𝓜(γ̂)^{1/2}) via equation (49). But this means that we have found γ′ ≔ 𝓜(γ̂)
such that γ ≥ γ′ ≥ iσ. Hence γ′ is in the feasible set of the right-hand side of (47) and
F(γ̂^{1/2}) ≥ F(γ′^{1/2}), which implies ≥ in equation (47). □
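The two matrix-analytic facts this proof relies on, monotonicity of Schur complements under the matrix order and Cauchy's interlacing theorem, are easy to check numerically. A minimal sketch in Python/NumPy (an illustration, not part of the original text; the matrix sizes and entries are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 6, 2  # total size and size of the block that is "measured away"

# Two ordered positive definite matrices: gamma_tilde >= gamma_hat > 0
M = rng.standard_normal((n, n))
gamma_hat = M @ M.T + np.eye(n)
N = rng.standard_normal((n, n))
gamma_tilde = gamma_hat + N @ N.T

def schur(g, k):
    """Schur complement of the lower-right k x k block."""
    a, b, c = g[:-k, :-k], g[:-k, -k:], g[-k:, -k:]
    return a - b @ np.linalg.inv(c) @ b.T

# Monotonicity: gamma_tilde >= gamma_hat implies the same for the Schur complements
diff = schur(gamma_tilde, k) - schur(gamma_hat, k)
assert np.linalg.eigvalsh(diff).min() >= -1e-9

# Cauchy interlacing: descending eigenvalues of a principal submatrix
# are dominated by those of the full matrix
ev_full = np.sort(np.linalg.eigvalsh(gamma_hat))[::-1]
ev_sub = np.sort(np.linalg.eigvalsh(gamma_hat[:-k, :-k]))[::-1]
assert np.all(ev_sub <= ev_full[:n - k] + 1e-12)
```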

Finally, we can prove theorem 5.1 by also covering convex combinations:

Proof. Let γ ∈ ℝ^{2n×2n} be a covariance matrix. First consider only sequences of
operations: we replace any sequence by the special type of sequence of lemma 5.3. For these
sequences, we have seen that the minimum cost is given by G(γ) in proposition 5.5.
However, we explicitly excluded convex combinations (O6) by considering only
sequences and not trees: consider a tree of operations (O1)–(O7) which has γ at its root and
γ₀ = 𝟙 as leaves. Let us consider any node closest to the leaves. At such a node, we start with
two covariance matrices γ₁ and γ₂ that were previously constructed without using convex
combinations and with costs G(γ₁) and G(γ₂). The combined matrix would be
γ̃ ≔ λγ₁ + (1 − λ)γ₂ for some λ ∈ (0, 1), and the cost would be λG(γ₁) + (1 − λ)G(γ₂).


By convexity of G (see theorem 4.3):

G(λγ₁ + (1 − λ)γ₂) ≤ λG(γ₁) + (1 − λ)G(γ₂),

which means that we can find a sequence (without any convex combinations) producing
λγ₁ + (1 − λ)γ₂ which is at most as expensive as first producing γ₁ and γ₂ and then taking a
convex combination. Iterating this argument, we can eliminate every such node and replace
the tree by a sequence of operations (O1)–(O5) and (O7) which is at most as expensive as the
tree; hence trees do not matter. □

5.3. The squeezing measure as a resource measure

We have now seen that the measure G can be interpreted as a measure of the amount of
single-mode squeezing needed to create a state ρ. Let us now take a different perspective,
which is the analogue of the entanglement of formation for squeezing: consider covariance
matrices of the form
γ_s ≔ diag(s, s^{-1}).   (51)
These are single-mode squeezed states with squeezing parameter s ≥ 1. We will now allow
these states as resources and ask the question: given a (Gaussian) state ρ with covariance
matrix γ, what is the minimal amount of these resources needed to construct γ if we can
freely transform the state by the same operations as before, excluding squeezing ((O1)–(O7)
excluding (O4))?

Theorem 5.6. Let ρ be an n-mode state with covariance matrix γ ∈ ℝ^{2n×2n}. Then

G(γ) = inf{ Σ_{i=1}^{m} ½ log(s_i) | γ = 𝓔(⊕_{i=1}^{m} γ_{s_i}) },   (52)

where 𝓔 : ℝ^{2m×2m} → ℝ^{2n×2n} is a combination of the operations (1)–(6) above.

Proof. ≤: Note that for any feasible S ∈ Sp(2n) in G(γ), i.e. any S with S^T S ≤ γ, we can
find O ∈ Sp(2n) ∩ O(2n) and D = ⊕_{i=1}^{n} γ_{s_i} with S^T S = O^T D O via the Euler
decomposition. Using that the Euler decomposition minimises F, we have
F(S) = ½F(D) = Σ_{i=1}^{n} ½ log(s_i). But then, since we can find γ_noise ≥ 0 such that
γ = O^T(⊕_{i=1}^{n} γ_{s_i})O + γ_noise, we have that D is a feasible resource state to produce γ. This
implies G_resource(γ) ≤ G(γ).

≥: For the other direction, the proof proceeds exactly as the proof of theorem 5.1. First,
we exclude convex combinations. Then, we realise that we can change the order of the
different operations (even if we include adding resource states during any stage of the
preparation process) according to lemma 5.2, making sure that any preparation procedure can
be implemented via:

γ = 𝓜( O(⊕_{i=1}^{m} γ_{s_i} ⊕ 𝟙_{2m′} + γ_noise)O^T ),

where O ∈ Sp(2m + 2m′) ∩ O(2m + 2m′), γ_noise ∈ ℝ^{(2m+2m′)×(2m+2m′)} with γ_noise ≥ 0, and 𝓜
a measurement. Now the only difference to the proof of theorem 5.1 is that there we had the
vacuum 𝟙 instead of ⊕_{i=1}^{m} γ_{s_i} ⊕ 𝟙_{2m′} and an arbitrary symplectic matrix S instead of O, but
the two ways of writing the maps are completely interchangeable, so the proof proceeds as in
theorem 5.1. □
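As a small sanity check for the cost appearing in theorem 5.6, the following sketch (an illustration, not part of the original text; a single mode for simplicity) builds S from free rotations and one squeezer Z = diag(e^r, e^{−r}) in Euler form and recovers F(S) = Σ_i ½ log(s_i), with the s_i read off from the eigenvalues of S^T S:

```python
import numpy as np

def rot(t):
    """Rotations are symplectic and orthogonal for one mode (free operations)."""
    return np.array([[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]])

r = 0.7                                  # squeezing parameter
Z = np.diag([np.exp(r), np.exp(-r)])     # single-mode squeezer
S = rot(0.3) @ Z @ rot(1.1)              # Euler (Bloch-Messiah) form

# the eigenvalues of S^T S are s and 1/s with s = e^{2r}
s = np.linalg.eigvalsh(S.T @ S).max()
F = 0.5 * np.log(s)                      # = sum_i (1/2) log s_i for n = 1
assert np.isclose(F, r)
```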


We could call this measure the '(Gaussian) squeezing of formation', as it is the analogue
of the Gaussian entanglement of formation as defined in [Wol+04]. One natural further
question would be whether 'distillation of squeezing' is possible with Gaussian operations.
For the minimal-eigenvalue measure it is impossible in a certain sense by [Kra+03], while for
non-Gaussian states it is possible and has been investigated in many papers (see
[Fil13, Hee+06] and references therein). In our case, it is not immediately clear whether
extraction of single-mode squeezed states with less squeezing is possible or not. This could
be investigated in future work.

6. Calculating the squeezing measure

We have seen that the measure G is operational. However, to be useful, we need a way to
compute it.

6.1. Analytical solutions

Proposition 6.1. Let n = 1. Then G(Γ) = −½ min_i log(λ_i(Γ)) for all covariance matrices Γ ∈ ℝ^{2n×2n}.

Proof. Note that this is the lower bound in proposition 4.9, hence
−½ min_i log(λ_i(Γ)) ≤ G(Γ). Now consider the diagonalisation Γ = O diag(λ₁, λ₂) O^T
with O ∈ SO(2) and assume λ₁ ≥ λ₂. Then λ₂^{-1} ≤ λ₁, since otherwise Γ ≱ iJ.

Consider diag(λ₁, λ₂) ≥ O^{-T} S^T S O^{-1} for some S ∈ Sp(2), where S^T S has eigenvalues
s ≥ 1 and s^{-1}. This implies in particular that s^{-1} ≤ λ₂ by Weyl's inequality. Since
F(S^T S) = log s, in order to minimise F(S) over S^T S ≤ Γ we need to maximise s^{-1}.
Setting s^{-1} = λ₂, we obtain s = λ₂^{-1} ≤ λ₁ and diag(λ₁, λ₂) ≥ diag(s, s^{-1}). Since
SO(2) = K(1), the matrix S^T S ≔ O diag(s, s^{-1}) O^T ≤ Γ is the minimiser in G and

G(Γ) = F(S) = ½ log(λ₂^{-1}) = −½ min_i log(λ_i(Γ)). □

Proposition 6.2. Let ρ be a pure Gaussian state with covariance matrix Γ ∈ ℝ^{2n×2n}.
Then G(Γ) = F(Γ^{1/2}).

Proof. From proposition 2.2 we know in particular that det(Γ) = 1. Therefore, the bounds
in proposition 4.11 are tight and G(Γ) = F(Γ^{1/2}). □
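Both analytic solutions can be verified numerically. In the sketch below (an illustration, not part of the original text), F(Γ^{1/2}) is taken to mean ½ Σ_{λ_i > 1} log λ_i(Γ), i.e. the sum of the logarithms of the singular values of Γ^{1/2} exceeding one; this reading is our assumption, chosen to be consistent with the single-mode formula of proposition 6.1:

```python
import numpy as np

def G_single_mode(gamma):
    """Proposition 6.1: G = -1/2 min_i log(lambda_i) for a single mode."""
    return -0.5 * np.log(np.linalg.eigvalsh(gamma).min())

def F_sqrt(gamma):
    """Assumed reading of F(Gamma^{1/2}): sum of (1/2) log lambda_i over lambda_i > 1."""
    lam = np.linalg.eigvalsh(gamma)
    return 0.5 * np.sum(np.log(lam[lam > 1]))

s = 3.0
gamma = np.diag([s, 1.0 / s])        # pure single-mode squeezed state
# both formulas give (1/2) log s, as proposition 6.2 demands for pure states
assert np.isclose(G_single_mode(gamma), 0.5 * np.log(s))
assert np.isclose(F_sqrt(gamma), 0.5 * np.log(s))
```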

6.2. Numerical calculations using Matlab

The crucial observation for computing the squeezing measure numerically is given in
lemma 4.4: if we use G in the form of equation (25), the function to be minimised is convex
on 𝔥. In general, convex optimisation with convex constraints is efficiently implementable
and there is a huge literature on the topic (see [BV04] for an overview).
In our case, a number of problems occur when performing the convex optimisation:
(1) The function f in equation (28) is highly nonlinear. It is also not differentiable at
eigenvalue crossings of A + iB or H ∈ 𝔥. In particular, it is not differentiable when one
of the eigenvalues becomes zero, which is to be expected at the minimum.
(2) While the constraints 𝒞^{-1}(γ) ≥ H and 𝟙 > H > −𝟙 are linear in matrices, they are
nonlinear in simple parameterisations of matrices.
(3) For γ on the boundary of the set of allowed density operators, the set of feasible solutions
might not have an inner point.
The first and second problem imply that most optimisation methods are unsuitable, as
they are either gradient-based or need more problem structure. It also means that there is no
guarantee for good stability of the solutions. The third problem implies that interior point
methods become unsuitable on the boundary, which limits applications. For instance, our
example of the next section (see equation (53)) lies on the boundary. As a proof-of-principle
implementation, we used the MATLAB-based solver SOLVOPT (for details see the manual
[KK97]). We believe our implementation could be made more efficient and more stable, but it
seems to work well in most cases for fewer than ten modes. More information on the
programme is provided in appendix E.

6.3. Squeezing-optimal preparation for certain three-mode separable states

Let us now work with a particular example that has been studied in the quantum information
literature. In [MK08], Mišta Jr and Korolkova define the following three-parameter family of
three-mode states, where the modes are labelled A, B, C:

γ = γ_AB ⊕ 𝟙_C + x(q₁q₁^T + q₂q₂^T)   (53)
with

γ_AB = ( e^{2d}a    0          −e^{2d}c   0
         0          e^{−2d}a   0          e^{−2d}c
         −e^{2d}c   0          e^{2d}a    0
         0          e^{−2d}c   0          e^{−2d}a ),

q₁ = (0, sin φ, 0, −sin φ, √2, √2)^T,
q₂ = (cos φ, 0, cos φ, 0, √2, √2)^T,

where a = cosh(2r), c = sinh(2r) and tan φ = e^{−2r} sinh(2d) + √(1 + e^{−4r} sinh²(2d)). The
remaining parameters are d ≥ r > 0 and x ≥ 0. For

x ≥ x_sep ≔ 2 sinh(2r) / (e^{2d} sin²φ + e^{−2d} cos²φ)

the state becomes fully separable [MK08]. The state as such is a special case of a bigger
family described in [Gie+01]. In [MK08], it was used to entangle two systems at distant
locations using fully separable mediating ancillas (here the system labelled C). Therefore,
Mišta Jr and Korolkova also considered an LOCC procedure to prepare the state characterised
by (53). For our purposes, this is less relevant and we allow for arbitrary preparations of the
state. This was also done in [MK08] by first preparing modes A and B each in a pure
squeezed state with position quadratures e^{2(d−r)} and e^{2(d+r)}. A vacuum mode in C was added
and x(q₁q₁^T + q₂q₂^T) was added as random noise. Therefore, the squeezing needed to produce
this state in this protocol is given by

c = ½ log(e^{2(d−r)} · e^{2(d+r)}) = 2d.   (54)
We numerically approximated the squeezing measure for gABC , choosing x = xsep , which
leaves a two-parameter family of states. We chose parameters d and r according to


Figure 1. Results of numerical calculations (formulas for d and r in equation (55)). In
the upper figure, the lower range of points (green in the online version) is the best
lower bound, the middle points (blue) denote the value of the objective function at
the minimum found by SOLVOPT, and the upper points (red) denote the squeezing
costs of the preparation protocol of [MK08] (equation (54)). The lower figure shows the
preparation error, which is mostly below 10⁻⁶.

r = 0.1 + j · 0.05,  d = r + i · 0.03   (55)

with i, j ∈ {1, …, 30} for a total of 900 data points. Since the algorithm is not an interior
point algorithm, as described above, we reprepared the state in the following way to check the
result:
(1) Let S be the symplectic matrix at the optimum found by SOLVOPT for a covariance
matrix γ_ABC.
(2) Calculate S^{-T}γ_ABC S^{-1} and its lowest eigenvalue λ_{2n}.


(3) Define γ̃ ≔ S^{-T}γ_ABC S^{-1} + (1 − min{1, λ_{2n}})𝟙 ≥ 𝟙 and calculate the largest singular
value of S^T γ̃ S − γ.

If S was a feasible point, then S^T γ̃ S = γ. Since it is obvious how to prepare γ̃ with the
operations specified in section 5, the largest singular value of S^T γ̃ S − γ is an indicator of how
well we can approximate the state we want to prepare by a state with comparably low
squeezing costs.
The results of the numerical computation are shown in figure 1. We computed the
minimum both with the help of numerical and analytical subgradients and took the value with
the better approximation error. On rare occasions, one of the algorithms failed to obtain a
minimum; possible reasons for this are discussed in appendix E. The optimal values computed
by the algorithm are close to the lower bound and much better than both the upper bound and
the costs obtained from equation (54). One can easily see that γ_ABC cannot achieve the
spectral lower bound, as the assumptions of lemma 4.10 are not met.
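The spectral bounds shown in figure 1 can be reproduced with a few lines of NumPy. The sketch below is an illustration, not the SOLVOPT computation; the mode-C entries of q₁ and q₂ are an assumption (the printed vectors are ambiguous), but the assertions, validity of the state and ordering of the bounds, hold for any choice of q₁, q₂, since x(q₁q₁^T + q₂q₂^T) is positive semidefinite:

```python
import numpy as np

d, r = 0.5, 0.2                       # parameters with d >= r > 0
a, c = np.cosh(2 * r), np.sinh(2 * r)
phi = np.arctan(np.exp(-2 * r) * np.sinh(2 * d)
                + np.sqrt(1 + np.exp(-4 * r) * np.sinh(2 * d) ** 2))

gAB = np.array([
    [np.exp(2 * d) * a, 0, -np.exp(2 * d) * c, 0],
    [0, np.exp(-2 * d) * a, 0, np.exp(-2 * d) * c],
    [-np.exp(2 * d) * c, 0, np.exp(2 * d) * a, 0],
    [0, np.exp(-2 * d) * c, 0, np.exp(-2 * d) * a]])

# noise vectors; the mode-C entries are an assumed reconstruction
q1 = np.array([0, np.sin(phi), 0, -np.sin(phi), np.sqrt(2), np.sqrt(2)])
q2 = np.array([np.cos(phi), 0, np.cos(phi), 0, np.sqrt(2), np.sqrt(2)])
x = 2 * np.sinh(2 * r) / (np.exp(2 * d) * np.sin(phi) ** 2
                          + np.exp(-2 * d) * np.cos(phi) ** 2)

gamma = np.block([[gAB, np.zeros((4, 2))], [np.zeros((2, 4)), np.eye(2)]])
gamma = gamma + x * (np.outer(q1, q1) + np.outer(q2, q2))

# validity: gamma + iJ >= 0 in the (x_A,p_A,x_B,p_B,x_C,p_C) ordering
J = np.kron(np.eye(3), np.array([[0, 1], [-1, 0]]))
assert np.linalg.eigvalsh(gamma + 1j * J).min() >= -1e-9

# spectral bounds (tight only for pure states); naive preparation cost is 2*d
lam = np.linalg.eigvalsh(gamma)
lower = -0.5 * np.sum(np.log(lam[lam < 1]))
upper = 0.5 * np.sum(np.log(lam[lam > 1]))
assert lower <= upper + 1e-12
```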

7. Discussion of modifications to allowed operations

In experiments, squeezing of a state is most commonly measured by the logarithm of the
smallest eigenvalue (up to a constant), and the unit is usually referred to as decibel (dB)
[Lvo15]. We know of no operational interpretation of this measure similar to the
interpretation given in section 5, and the measure is not natural for multimode states.
In contrast, G is a natural measure for multimode states. However, squeezing is not just
experimentally challenging, it gets much harder if we want to achieve a larger amount of
single-mode squeezing. Currently, the highest amount of squeezing obtained in quantum
optical systems seems to be about 13 dB (see [And+15]). In other words, the two states ρ and
ρ′ with covariance matrices

γ = diag(s, s^{-1}, s, s^{-1}),  γ′ = diag(s², s^{-2}, 1, 1)   (56)

will not be equally hard to prepare, although G(γ) = G(γ′). This is due to the fact that we
quantified the cost of a single-mode squeezer by log s.
To amend this, one could propose an easy modification of the definition of F in
equation (11):

F_g(S) = Σ_{i=1}^{n} log(g(s_i(S)))   (57)

by inserting another function g : ℝ → ℝ to make sure that for the corresponding measure
G_g(ρ) ≡ G_g(γ) we have G_g(γ) ≠ G_g(γ′) in equation (56). We pose the following natural
restrictions on g:
• We need g(1) = 1, since G_g(ρ) should be zero for unsqueezed states.
• Squeezing should get harder with larger parameter, hence g should be monotonically
increasing.
• For simplicity, we assume g to be differentiable.
Let us first consider squeezing operations and the measure F_g. We proved in proposition
3.3 and theorem 3.5 that F is minimised by the Euler decomposition. A crucial part was given
by lemma 3.4. In order to be useful for applications, we must require the same to be true for
F_g, i.e.

Σ_{i=1}^{n} log(g(s_i(SS′))) ≤ Σ_{i=1}^{n} [log(g(s_i(S))) + log(g(s_i(S′)))].

This puts quite strong constraints on g: considering n = 1 and assuming that S and S′ are
diagonal with ordered singular values, this implies that g must fulfil g(xy) ≤ g(x)g(y) for
x, y ≥ 1. This submultiplicativity constraint rules out all interesting classes of functions:
Assume for instance that g(2) = c; then g(2^n) ≤ c^n, where equality is attained for the power
law g(x) = x^{log₂ c}. Therefore, every submultiplicative function g must, at least on a
subsequence of points, lie below this power law for x ≥ 1. Hence, lemma 3.4 does not hold
for cost functions g that grow faster than such power laws. This implies that one could make
the measure arbitrarily small by splitting the single-mode squeezer into many successive
single-mode squeezers with smaller squeezing parameters, which does not reflect
experimental reality.
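The effect can be made concrete: for a cost function that is not submultiplicative, e.g. g(s) = e^{s−1} (our choice for illustration; it satisfies g(1) = 1), splitting one squeezer of parameter s into k successive squeezers of parameter s^{1/k} strictly lowers the total cost Σ log g:

```python
import numpy as np

g = lambda s: np.exp(s - 1.0)     # illustrative cost function with g(1) = 1

def total_cost(s, k):
    """Cost of realising squeezing s as k successive squeezers of parameter s**(1/k)."""
    return k * np.log(g(s ** (1.0 / k)))

s = 4.0
costs = [total_cost(s, k) for k in (1, 2, 4, 8)]
# splitting strictly reduces the total cost (it tends to log s as k grows)
assert all(c1 > c2 for c1, c2 in zip(costs, costs[1:]))
assert costs[-1] > np.log(s)
```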
A way to circumvent the failure of lemma 3.4 would be to work with the 'squeezing of
formation' measure. Likewise, one could require that there is only one operation of type
(O4), as specified in section 5, in any preparation procedure. In that case we have:

Proposition 7.1. If g : ℝ → ℝ fulfils

(1) log ∘ g ∘ 𝒞 is convex on (1, ∞),
(2) log(g(exp(t))) is convex and monotonically increasing in t,

then the squeezing of formation measure G_g is still operational, i.e. theorem 5.1 still
holds.

Proof. The first condition replaces the log-convexity of the Cayley transform in the proof of
theorem 4.3, making the measure convex. Using [Bha96], II.3.5 (v), the second condition
makes sure that equation (50) still holds. The second condition can probably be relaxed while
keeping the proof of theorem 5.1 applicable. A function g fulfilling these prerequisites is
g(x) = exp(x), which would correspond to a squeezing cost increasing linearly in the
squeezing parameter. One could even introduce a cutoff beyond which g would be infinite. □

A simpler way to reflect the problem of equation (56) would be to consider the measures
G and G_minEig together (calculating G_minEig of both the state and the minimal preparation
procedure in G).

Another problem is associated with the form of the Hamiltonian (1). In the lab, the
Hamiltonians that can be implemented might not be single-mode squeezers but other
operations such as symmetric two-mode squeezers (e.g. [SZ97], chapter 2.8). It is clear how
to define a measure G′ for these kinds of squeezers. Using the Euler decomposition, G is a
lower bound to G′, but we did not investigate this any further.

Acknowledgments

MI thanks Konstantin Pieper for discussions about convex optimisation and Alexander
Müller-Hermes for discussions about MATLAB. MI is supported by the Studienstiftung des
deutschen Volkes.

Appendix A. Preliminaries for the proof of theorem 3.5

Let us collect facts about ordinary differential equations needed in the proof:


Proposition A.1. Consider the following system of differential equations for
x : [0, 1] → ℝ^{2n}:

ẋ(t)^T = x(t)^T A(t)  ∀ t ∈ [0, 1],
x(s) = x_s  for some x_s ∈ ℝ^{2n}, s ∈ [0, 1],   (58)

where A ∈ L^∞([0, 1], sp(2n)). Then this system has a unique solution, which is linear in x_s
and defined on all of [0, 1], such that we can define a map

(s, t) ↦ U(s, t) ∈ Gl(2n)  ∀ s, t ∈ [0, 1]

via x(t)^T = x_s^T U(s, t), called the propagator of (58), which fulfils:

(1) U is continuous and differentiable almost everywhere.
(2) U(s, ·) is absolutely continuous in t.
(3) U(t, t) = 𝟙 and U(s, r)U(r, t) = U(s, t) for all r, s, t ∈ [0, 1].
(4) U(s, t)^{-1} = U(t, s) for all s, t ∈ [0, 1].
(5) U is the unique generalised (i.e. almost everywhere) solution to the initial value problem

∂_t U(s, t) − U(s, t)A(t) = 0,  U(s, s) = 𝟙   (59)

on C([0, 1]², ℝ^{2n×2n}).
(6) If A(t) = A does not depend on t, then S(r) = exp(rA) solves equation (59)
with U(s, t) ≔ S(t − s).
(7) For all s, t ∈ [0, 1]:

‖U(s, t)‖_∞ ≤ exp(∫_s^t ‖A(τ)‖₁ dτ).

(8) U(s, t) ∈ Sp(2n) for all s, t ∈ [0, 1], and g(t) = U(0, t) fulfils equation (9)
with g(0) = 𝟙.

Proof. The proof of this (except for the part about U(s, t) ∈ Sp(2n)) can be found in
[Son98] (theorem 55 and lemma C.4.1) for the transposed differential
equation ẋ(t) = A(t)x(t).

For the last part, note that since U(s, s) = 𝟙 ∈ Sp(2n), we have U(s, s)^T J U(s, s) = J.
We can now calculate almost everywhere:

∂_t(U(t, s)^T J U(t, s)) = −U(t, s)^T (A^T(t)J + JA(t)) U(t, s) = 0,

since A(t) ∈ sp(2n) and therefore A^T(t)J + JA(t) = 0.
But this implies U(t, s)^T J U(t, s) = J, hence U is symplectic. Obviously, U(0, t) solves
equation (9). □
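For a constant A ∈ sp(2n), items (3), (6) and (8) can be verified directly with U(s, t) = exp((t − s)A); a sketch (an illustration, not part of the original text; the symmetric generator H below is an arbitrary choice):

```python
import numpy as np
from scipy.linalg import expm

n = 2
J = np.kron(np.eye(n), np.array([[0.0, 1.0], [-1.0, 0.0]]))

# a generic element of sp(2n): A = J H with H symmetric (arbitrary choice)
rng = np.random.default_rng(1)
H = rng.standard_normal((2 * n, 2 * n)); H = (H + H.T) / 2
A = J @ H
assert np.allclose(A.T @ J + J @ A, 0)   # A lies in the symplectic Lie algebra

s, t = 0.2, 0.9
U = expm((t - s) * A)                    # propagator for constant A, item (6)
assert np.allclose(U.T @ J @ U, J)       # U(s, t) is symplectic, item (8)

r_ = 0.5                                 # group property U(s,r)U(r,t) = U(s,t), item (3)
assert np.allclose(expm((r_ - s) * A) @ expm((t - r_) * A), U)
```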

We will also need another well-known lemma from functional analysis:

Lemma A.2. Let A : [0, 1] → sp(2n), A ∈ L^∞([0, 1], ℝ^{2n×2n}). Then A can be
approximated in ‖·‖₁-norm by step functions, which we can assume to map to sp(2n)
without loss of generality.

The approximation by step functions can be found e.g. in [Rud87] (chapter 2, exercise 24).


Appendix B. The Cayley trick for matrices

In this appendix, we give an introduction to the Cayley transform. The definition and
properties needed in the main text are summarised by the following proposition:

Proposition B.1. Define the Cayley transform and its inverse via:

𝒞 : {H ∈ ℂ^{n×n} | spec(H) ∩ {+1} = ∅} → ℂ^{n×n},  H ↦ (𝟙 + H)(𝟙 − H)^{-1},   (60)
𝒞^{-1} : {S ∈ ℂ^{n×n} | spec(S) ∩ {−1} = ∅} → ℂ^{n×n},  S ↦ (S − 𝟙)(S + 𝟙)^{-1}.   (61)

𝒞 is a diffeomorphism onto its image with inverse 𝒞^{-1}. Furthermore, it has the following
properties:

(1) 𝒞 is operator monotone and operator convex on matrices A with spec(A) ⊂ (−1, 1).
(2) 𝒞^{-1} is operator monotone and operator concave on matrices A with spec(A) ⊂ (−1, ∞).
(3) 𝒞 : ℝ → ℝ with 𝒞(x) = (1 + x)/(1 − x) is log-convex on [0, 1).
(4) For n = 2m even and H ∈ ℝ^{2m×2m}, we have H ∈ 𝔥 if and only if 𝒞(H) ∈ Sp(2m, ℝ)
and 𝒞(H) ≥ iJ,

where 𝔥 is defined via:

𝔥 = { H = ( A  B ; B  −A ) ∈ ℝ^{2m×2m} | A^T = A, B^T = B, spec(H) ⊂ (−1, 1) }.

The definition and the fact that the Cayley transform maps the upper half plane of positive
definite matrices to matrices inside the unit circle can be found in [AG88] (I.4.2) and [MS98]
(proposition 2.51, proof 2). Since no proof is given in the references and they do not cover the
whole proposition, we provide the proofs here.
We start with well-definedness:

Lemma B.2. 𝒞 and 𝒞^{-1} are well-defined and inverses of each other. Moreover, 𝒞 is a
diffeomorphism onto its image dom(𝒞^{-1}).

Proof. If spec(H) ∩ {+1} = ∅, then 𝟙 − H is invertible and H ↦ (𝟙 + H)(𝟙 − H)^{-1} is
well-defined, as [𝟙 + H, 𝟙 − H] = 0. Now let H ∈ ℂ^{m×m} be such that
spec(H) ∩ {+1} = ∅. We will show that 𝒞(H) has no eigenvalue −1. To see this, let

H = T (⊕_i J(n_i, λ_i)) T^{-1}   (62)

be the Jordan normal form with block sizes n_i and eigenvalues λ_i. Let us here consider the
complex Jordan decomposition, i.e. the λ_i are allowed to be complex. Then:

𝟙 + H = T (⊕_i J(n_i, 1 + λ_i)) T^{-1},  𝟙 − H = T (⊕_i J(n_i, 1 − λ_i)) T^{-1}   (63)

and thus

𝒞(H) = T (⊕_i J(n_i, 1 + λ_i) · J(n_i, 1 − λ_i)^{-1}) T^{-1}.

For the inverse of a Jordan block we can use the well-known formula: J(n_i, 1 − λ_i)^{-1} is
upper triangular with entries

(J(n_i, 1 − λ_i)^{-1})_{jk} = (−1)^{k−j}/(1 − λ_i)^{k−j+1}  for k ≥ j.

Hence J(n_i, 1 + λ_i)J(n_i, 1 − λ_i)^{-1} is still upper triangular, with diagonal entries
(1 + λ_i)/(1 − λ_i). Since (1 + λ_i)/(1 − λ_i) ≠ −1 for all λ_i ∈ ℂ, we find that
J(n_i, 1 + λ_i)J(n_i, 1 − λ_i)^{-1} cannot have eigenvalue −1 for any i, hence
spec(𝒞(H)) ∩ {−1} = ∅.

Finally, we observe:

𝒞^{-1}(𝒞(H)) = (𝒞(H) − 𝟙)(𝒞(H) + 𝟙)^{-1} = [(𝟙 + H) − (𝟙 − H)][(𝟙 + H) + (𝟙 − H)]^{-1} = (2H)(2𝟙)^{-1} = H.

Moreover, set f₁(A) = −2A − 𝟙 for all matrices A ∈ ℂ^{m×m}, f₂(A) = A^{-1} for all invertible
matrices A ∈ ℂ^{m×m} and f₃(A) = A − 𝟙 for all matrices A ∈ ℂ^{m×m}. Then we have

f₁ ∘ f₂ ∘ f₃(H) = f₁ ∘ f₂(H − 𝟙) = f₁((H − 𝟙)^{-1}) = −2(H − 𝟙)^{-1} − 𝟙 = 𝒞(H).   (64)

Since the f_i are differentiable for i = 1, 2, 3, 𝒞 is differentiable.

The same considerations with a few signs reversed also lead us to conclude that 𝒞^{-1} is
well-defined and indeed the inverse of 𝒞. We can similarly decompose 𝒞^{-1} to show that it is
differentiable, making 𝒞 a diffeomorphism. Here, we define g₁(A) = 2A + 𝟙 for all
A ∈ ℂ^{m×m}, g₂(A) = −A^{-1} for all invertible A ∈ ℂ^{m×m} and g₃(A) = A + 𝟙 for all
A ∈ ℂ^{m×m}. A quick calculation shows

g₁ ∘ g₂ ∘ g₃(S) = 𝒞^{-1}(S).   (65)
□

Denote by 𝔥 the set

𝔥 ≔ { H = ( A  B ; B  −A ) ∈ ℝ^{2n×2n} | A^T = A, B^T = B, −𝟙 < H < 𝟙 },   (66)

where H < 𝟙 means that 𝟙 − H is positive definite (not just positive semidefinite). We can
then prove the Cayley trick:

Proposition B.3. Let H ∈ ℝ^{2n×2n}. Then H ∈ 𝔥 ⟺ (𝒞(H) ∈ Sp(2n) and 𝒞(H) ≥ iJ).

Proof. Note that for H ∈ 𝔥, 1 ∉ spec(H), hence 𝒞(H) is always well-defined. Moreover,
𝒞(H) = (𝟙 + H)(𝟙 − H)^{-1} > 0, since 𝟙 + H > 0 and (𝟙 − H)^{-1} > 0 as −𝟙 < H < 𝟙.
Observe:

HJ = ( A  B ; B  −A )( 0  𝟙 ; −𝟙  0 ) = ( −B  A ; A  B ) = −JH.

Then we can calculate, using J(𝟙 − H) = (𝟙 + H)J and J^{-1} = −J:

(𝟙 + H)(𝟙 − H)^{-1} J = −(𝟙 + H)(J(𝟙 − H))^{-1} = −(𝟙 + H)((𝟙 + H)J)^{-1}
                      = (𝟙 + H)J(𝟙 + H)^{-1} = J(𝟙 − H)(𝟙 + H)^{-1},

hence 𝒞(H)J = J𝒞(H)^{-1}, and as 𝒞(H) is symmetric, we have 𝒞(H)^T J 𝒞(H) = J and 𝒞(H) is
symplectic. Via corollary 2.10, as 𝒞(H) is symplectic and positive definite, we can conclude
that 𝒞(H) ≥ iJ.

Conversely, let S ∈ Sp(2n) and S ≥ iJ. Then S ≥ −iJ by complex conjugation and
S ≥ 0 after averaging the two inequalities. Since any element of Sp(2n) is invertible, this
implies S > 0. From this we obtain:

(S − 𝟙)(S + 𝟙)^{-1} > −𝟙,  as S + 𝟙 > 𝟙,
(S − 𝟙)(S + 𝟙)^{-1} < 𝟙,  always.

Write (S − 𝟙)(S + 𝟙)^{-1} = ( A  B ; C  D ). As S is symmetric, A^T = A, C = B^T and
D^T = D. On the one hand we have

(S − 𝟙)(S + 𝟙)^{-1} J = (S − 𝟙)J(S^{-T} + 𝟙)^{-1} = J(S^{-T} − 𝟙)(S^{-T} + 𝟙)^{-1}
                      = J(𝟙 − S)(𝟙 + S)^{-1} = −J(S − 𝟙)(S + 𝟙)^{-1},

where we used JS = S^{-T}J and SJ = JS^{-T} (symplecticity) and, in the second-to-last step,
S^{-T} = S^{-1} (symmetry of S). On the other hand,

( A  B ; B^T  D ) J = ( −B  A ; −D  B^T ),
−J ( A  B ; B^T  D ) = ( −B^T  −D ; A  B ).

Put together, this implies B = B^T and D = −A, hence 𝒞^{-1}(S) ∈ 𝔥, which is what we
claimed. □
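Proposition B.3 lends itself to a quick numerical test; the following sketch (an illustration, not part of the original text) draws a random H ∈ 𝔥 and checks the anticommutation relation, symplecticity of 𝒞(H) and 𝒞(H) ≥ iJ:

```python
import numpy as np

m = 2
rng = np.random.default_rng(2)
sym = lambda M: (M + M.T) / 2
A = 0.05 * sym(rng.standard_normal((m, m)))
B = 0.05 * sym(rng.standard_normal((m, m)))
H = np.block([[A, B], [B, -A]])          # element of the set h (norm < 1 by scaling)
assert np.linalg.norm(H, 2) < 1

I = np.eye(2 * m)
S = (I + H) @ np.linalg.inv(I - H)       # Cayley transform C(H)

J = np.block([[np.zeros((m, m)), np.eye(m)], [-np.eye(m), np.zeros((m, m))]])
assert np.allclose(H @ J, -J @ H)        # the anticommutation used in the proof
assert np.allclose(S.T @ J @ S, J)       # C(H) is symplectic
assert np.linalg.eigvalsh(S - 1j * J).min() >= -1e-8   # C(H) >= iJ
```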

Proposition B.4. The Cayley transform 𝒞 is operator monotone and operator convex on the
set of A = A^T ∈ ℂ^{m×m} with spec(A) ⊂ (−1, 1). 𝒞^{-1} is operator monotone and operator
concave on the set of A = A^T ∈ ℂ^{m×m} with spec(A) ⊂ (−1, ∞).

Proof. Recall equation (64) and the definitions of f₁, f₂, f₃. f₁ and f₃ are affine, f₃ is
order-preserving and f₁ is order-reversing. Matrix inversion is antimonotone on positive
definite matrices and likewise on negative definite matrices. Now let −𝟙 < Y ≤ X < 𝟙. Then
−2𝟙 < f₃(Y) ≤ f₃(X) < 0, hence f₂ ∘ f₃(X) ≤ f₂ ∘ f₃(Y) ≤ −½𝟙, and applying the
order-reversing map f₁ finally gives 𝒞(X) ≥ 𝒞(Y) ≥ 0, proving monotonicity of 𝒞. Similarly,
one can prove that 𝒞^{-1} is monotone using equation (65).

For the convexity of 𝒞, we note that since f₁, f₃ are affine, they are both convex and
concave. It is well known that x ↦ x^{-1} is operator convex for positive definite and operator
concave for negative definite matrices (to prove this, consider convexity/concavity of the
functions ⟨ψ, X^{-1}ψ⟩ for all ψ). It follows that for −𝟙 < H < 𝟙 we have f₃(H) < 0, hence
f₂ ∘ f₃ is operator concave on −𝟙 < H < 𝟙. As f₁(A) = −2A − 𝟙 is affine and order-reversing,
𝒞 = f₁ ∘ f₂ ∘ f₃ is operator convex.

For the concavity of 𝒞^{-1}, recall equation (65) and the definitions of g₁, g₂, g₃. Given
X > −𝟙, g₃(X) is positive definite, and g₃ is concave as an affine map. g₂ is concave on
positive definite matrices, as x ↦ x^{-1} is operator convex and multiplication by (−1) is
order-reversing, hence g₂(A) = −A^{-1} is operator concave. Since g₁ is affine and
order-preserving, g₁ ∘ g₂ ∘ g₃ = 𝒞^{-1} is operator concave for all X > −𝟙. □

Lemma B.5. 𝒞 : ℝ → ℝ is log-convex on [0, 1).

Proof. We need to see that the function h(x) = log((1 + x)/(1 − x)) is convex for x ∈ [0, 1).
Since h is twice differentiable on [0, 1), this is true iff the second derivative is non-negative:

h″(x) = 4x/(1 − x²)²

is clearly non-negative on [0, 1), and 𝒞 is therefore log-convex. □
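The second derivative can also be confirmed symbolically, e.g. with SymPy (a quick check, not part of the original text):

```python
import sympy as sp

x = sp.symbols('x')
h = sp.log((1 + x) / (1 - x))
h2 = sp.simplify(sp.diff(h, x, 2))
# h'' agrees with the closed form 4x/(1 - x^2)^2
assert sp.simplify(h2 - 4 * x / (1 - x**2) ** 2) == 0
```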

Appendix C. Continuity of set-valued functions

Here, we provide some definitions and lemmata from set-valued analysis for the reader’s
convenience. This branch of mathematics deals with functions f : X  2Y where X and Y are
topological spaces and 2Y denotes the power set of Y.
In order to state the results interesting to us we define:

Definition C.1. Let X, Y ⊆ ℝ^{n×m} and f : X → 2^Y be a set-valued function. We say
that f is upper semicontinuous (often also called upper hemicontinuous to
distinguish it from other notions of continuity) at x₀ ∈ X if for every open neighbourhood Q of
f(x₀) there exists an open neighbourhood W of x₀ such that W ⊆ {x ∈ X | f(x) ⊂ Q}.
Likewise, we call f lower semicontinuous (often called lower hemicontinuous) at a point x₀ if
for any open set V intersecting f(x₀), we can find a neighbourhood U of x₀ such that
f(x) ∩ V ≠ ∅ for all x ∈ U.

Note that the definitions are valid in all topological spaces, but we only need the case of
finite dimensional normed vector spaces. Using the metric, we can give the following
characterisation of upper semicontinuity:

Lemma C.2. Let X, Y ⊆ ℝ^{n×m} and f : X → 2^Y be a set-valued function such that f(x) is
compact for all x. Then f is upper semicontinuous at x₀ if and only if for all ε > 0 there
exists a δ > 0 such that for all x ∈ X with ‖x − x₀‖ < δ we have: for all y ∈ f(x) there
exists a ỹ ∈ f(x₀) such that ‖y − ỹ‖ < ε.

Proof. ⟹: Let f be upper semicontinuous at x₀. For any ε > 0 the set

B(ε, f(x₀)) ≔ ⋃_{y ∈ f(x₀)} {ŷ ∈ Y | ‖y − ŷ‖ < ε}   (67)


is an open neighbourhood of f(x₀). Hence there exists an open neighbourhood W of x₀
which contains a ball of radius δ > 0 such that B_δ(x₀) ⊆ W ⊆ {x ∈ X | f(x) ⊂ B(ε, f(x₀))}.
Clearly this implies the statement.

⇐: Let Q be a neighbourhood of f(x₀). Since f(x₀) is compact, this implies that there is
an ε > 0 such that B(ε, f(x₀)) ⊆ Q, where this set is defined as in equation (67). If this were
not the case, for every n ∈ ℕ there would be a y_n ∈ Y∖Q such that
inf_{ŷ ∈ f(x₀)} ‖y_n − ŷ‖ < 1/n. Since by construction this implies that y_n ∈ B(1, f(x₀)),
whose closure is compact, a subsequence of the y_n must converge to some y. As Y∖Q is
closed (Q being open), y ∈ Y∖Q. However, inf_{ŷ ∈ f(x₀)} ‖y − ŷ‖ = 0 by construction, and
since f(x₀) is compact, the infimum is attained, which implies y ∈ f(x₀). This contradicts the
fact that Q is a neighbourhood of f(x₀).

Hence we know that for any open Q containing f(x₀) there exists an ε > 0 such that
B(ε, f(x₀)) ⊆ Q. By assumption, this implies that there exists a δ > 0 such that
B_δ(x₀) ⊆ {x ∈ X | f(x) ⊂ B(ε, f(x₀))}. Since clearly {x ∈ X | f(x) ⊂ B(ε, f(x₀))} ⊆
{x ∈ X | f(x) ⊂ Q}, we can choose W ≔ B_δ(x₀) to finish the proof. □

This second characterisation is sometimes called upper Hausdorff semicontinuity and it


can equally be defined in any metric space. Clearly, the notions can differ for set-valued
functions with non-compact values or in spaces which are not finite dimensional. With these
two definitions, we can state the following classic result:

Proposition C.3 ([DR79]). Let Y be a complete metric space, X a topological space and
f : X → 2^Y a compact-valued set-valued function. The following statements are equivalent:
• f is upper semicontinuous at x₀.
• For each closed K ⊆ Y, the map x ↦ K ∩ f(x) is upper semicontinuous at x₀.

An interesting question is whether the converse is also true. Even if f(x) is always
convex, this need not be the case if K ∩ f(x₀) has empty interior, as simple counterexamples
show. In case the interior is non-empty, another classic result guarantees a converse in
many cases:

Proposition C.4 ([Mor75]). Let X be a compact interval and Y a normed space. Let
f : X → 2^Y and g : X → 2^Y be two convex-valued set-valued functions. Suppose that
diam(f(t) ∩ g(t)) < ∞ and f(t) ∩ int(g(t)) ≠ ∅ for all t. Then if f, g are continuous (in
the sense above), so is f ∩ g.

Appendix D. Reduction of the set of necessary operations for state preparation

In this section, we justify why the operations (O0)–(O7) are enough to
implement all operations described in section 5. All of this is known, albeit scattered
throughout the literature; we collect it here for convenience.
In order to prepare a state, we could start with the vacuum γ = 𝟙 or alternatively a
thermal state of some bath (γ = (1 + 2N)𝟙 with photon number N, see e.g. [Oli12]). Of
course, we should be able to draw arbitrary ancillary modes of this kind, too. The effect of
Gaussian noise on the covariance matrix is given in [Lin00]. Since any γ ≥ 𝟙 can be
decomposed as γ = 𝟙 + γ_noise with γ_noise ≥ 0, the operations (O0)–(O2) are enough to
implement operations (1) and (2).

39
J. Phys. A: Math. Theor. 49 (2016) 445304 M Idel et al

As with other squeezing measures, passive transformations should not change the
squeezing measure, while single-mode squeezers are not free. The effect of symplectic
transformations on the covariance matrix was already given in equation (10), hence
(O3) and (O4) implement operations (3) and (8).
Since we have the Weyl system at our disposal, we can also consider its action on a
quantum state (a translation in phase space). A direct computation shows that it does not affect
the covariance matrix. Including it as operation (O5) is beneficial when we consider convex
combinations of states. In an experiment, a convex combination can be realised by preparing
ensembles of the constituent states and forming a new ensemble in which the different states
occur with the weights of the convex combination. On the level of covariance matrices, we have the
following lemma:

Lemma D.1. Let ρ and ρ′ be two states with displacements d_ρ and d_ρ′ and (centred)
covariance matrices γ_ρ and γ_ρ′. For λ ∈ (0, 1), the covariance matrix of
ρ̃ ≔ λρ + (1 − λ)ρ′ is given by:
γ_ρ̃ = λγ_ρ + (1 − λ)γ_ρ′ + 2λ(1 − λ)(d_ρ − d_ρ′)(d_ρ − d_ρ′)ᵀ.

A proof of this statement can be found in [WW01] (in the proof of proposition 1). Note
that for centred states with d_ρ = 0 and d_ρ′ = 0, a convex combination of states translates
into a convex combination of covariance matrices. Since
2λ(1 − λ)(d_ρ − d_ρ′)(d_ρ − d_ρ′)ᵀ ≥ 0, any convex combination of ρ and ρ′ is, on the level of
covariance matrices, equivalent to
• centring the states (no change in the covariance matrices),
• taking a convex combination of the centred states (resulting in a convex combination of
covariance matrices),
• performing a Weyl translation to undo the centring in the first step (no change in the
covariance matrices),
• adding the noise 2λ(1 − λ)(d_ρ − d_ρ′)(d_ρ − d_ρ′)ᵀ ≥ 0.
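Lemma D.1 itself can be verified by elementary moment algebra. The following NumPy sketch (ours, assuming the convention γ = 2·Cov under which γ_vac = 𝟙) computes the mixture covariance from the first and second moments of the classical mixture and compares it with the closed form of the lemma:

```python
import numpy as np

rng = np.random.default_rng(1)
lam = 0.3

def random_state(n=2):
    """Random displacement d and covariance matrix gamma (convention gamma = 2*Cov)."""
    m = rng.standard_normal((n, n))
    return rng.standard_normal(n), np.eye(n) + m @ m.T

d1, g1 = random_state()
d2, g2 = random_state()

# independent route: Cov = E[x x^T] - E[x] E[x]^T for the mixture distribution
C1, C2 = g1 / 2, g2 / 2
second_moment = lam * (C1 + np.outer(d1, d1)) + (1 - lam) * (C2 + np.outer(d2, d2))
mean = lam * d1 + (1 - lam) * d2
gamma_mix = 2 * (second_moment - np.outer(mean, mean))

# closed form of lemma D.1
delta = d1 - d2
lemma = lam * g1 + (1 - lam) * g2 + 2 * lam * (1 - lam) * np.outer(delta, delta)

assert np.allclose(gamma_mix, lemma)
```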
This implies that the effect of any convex combination of states (operation (4)) on the
covariance matrix can equivalently be obtained from operations (O2), (O5) and (O6). Finally,
we consider measurements. Homodyne detection is the measurement of Q or P in one of the
modes, which corresponds to the measurement of an infinitely squeezed pure state in
lemma D.2. A broader class of measurements, known as heterodyne detection, measures
arbitrary coherent states [Wee+12]. Let us focus our attention on the even broader class of
projections onto Gaussian pure states.

Lemma D.2. Let ρ be an (n + 1)-mode quantum state with covariance matrix γ and let
|γ_G, d⟩⟨γ_G, d| be a pure single-mode Gaussian state with covariance matrix γ_G ∈ ℝ^{2×2} and
displacement d. Let
γ = ( A C ; Cᵀ B ),  B ∈ ℝ^{2×2},
then the selective measurement of |γ_G, d⟩ in the last mode results in a change of the
covariance matrix of ρ according to:
γ′ = A − C (B + γ_G)^{MP} Cᵀ, (68)

where MP denotes the Moore–Penrose pseudoinverse. Homodyne detection corresponds to
the case where γ_G is an infinitely squeezed state.

This can most easily be seen on the level of Wigner functions, as demonstrated in
[ESP02, GIC02]. The generalisation to multiple modes is straightforward.
Since the covariance matrix of a Gaussian pure state is a symplectic matrix (see
proposition 2.2), using the Euler decomposition we can implement a selective Gaussian
measurement by
(1) a passive symplectic transformation S ∈ K(n + 1),
(2) a measurement in the Gaussian state diag(d, 1/d) for some d ∈ ℝ₊ according to
lemma D.2.
A non-selective measurement (forgetting the information obtained from the measurement)
is then a convex combination of such projected states. A measurement of a multi-mode
state can be seen as successive measurements of single-mode states, since the Gaussian
states we measure are diagonal.
For homodyne detection, since an infinitely squeezed single-mode state is given by the
covariance matrix lim_{d→∞} diag(1/d, d), we have
γ′ = lim_{d→∞} (A − C (B + diag(1/d, d))⁻¹ Cᵀ) = A − C (πBπ)^{MP} Cᵀ, (69)
where π = diag(1, 0) is a projection and MP denotes the Moore–Penrose pseudoinverse. It
has been shown (see [Wee+12] E.2 and E.3 as well as [ESP02, GIC02]) that any (partial or
total) Gaussian measurement is a combination of passive transformations, discarding
subsystems, projections onto Gaussian states and homodyne detection.
Therefore, we should also allow discarding part of the system, i.e. taking the partial trace.
However, this can be expressed as a combination of operations (O1)–(O6) and homodyne
detection:

Lemma D.3. Given a covariance matrix γ = ( A C ; Cᵀ B ), the partial trace over the second
system translates to the map γ ↦ A. The partial trace can then be implemented by measurements and
adding noise.

Proof. When measuring the modes of B, we note that since C (πBπ)^{MP} Cᵀ ≥ 0 in
equation (69), a partial trace is equivalent to first performing a homodyne detection on the
B-modes of the system and then adding the noise C (πBπ)^{MP} Cᵀ. □

Given the discussion above, lemmas D.2 and D.3 together imply: on the level of
covariance matrices, in order to allow for general Gaussian measurements, it suffices to
consider Gaussian measurements of the state |γ_d, 0⟩⟨γ_d, 0| with covariance matrix
γ_d = diag(1/d, d) for d ∈ ℝ₊ ∪ {+∞}. All Gaussian measurements are then just combinations
of these special measurements and operations (O1)–(O6).

Appendix E. Numerical implementation and documentation

Here, we provide a short documentation of the programme, written in MATLAB version
R2014a and used for the numerical computations in section 6. The source code can be found
on GitHub at https://github.com/Martin-Idel/operationalsqueezing.
The programme tries to minimise the function f defined in equation (28) over the corresponding
feasible set. Throughout, suppose we are given a covariance matrix γ.
Let us first describe the implementation of f: we choose the simplest parameterisation
such that, for matrices with symplectic eigenvalues larger than one,
the set of feasible points has non-empty interior: we parameterise A, B via matrix units E_i, E_jk
with i ∈ {1, …, n} and 1 ≤ j < k ≤ n, where (E_i)_{jk} = δ_ij δ_ik and
(E_jk)_{lm} = δ_jl δ_km + δ_jm δ_kl. This parameterisation might not be very robust, but it is good
enough for our purpose. Instead of working with complex parameters, we compute
s_i(A + iB) as λ_i(H) for the matrix
H = ( A B ; B −A ). (70)
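The identity behind this doubling (for symmetric A and B, the spectrum of H is symmetric around zero and its positive half consists of the singular values of A + iB) can be confirmed numerically; a NumPy sketch of ours, not part of the MATLAB code:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 3

def sym(m):
    """Symmetrise a square matrix."""
    return (m + m.T) / 2

A = sym(rng.standard_normal((n, n)))
B = sym(rng.standard_normal((n, n)))

# eigenvalues of the real symmetric doubling H = [[A, B], [B, -A]]
H = np.block([[A, B], [B, -A]])
eig = np.sort(np.linalg.eigvalsh(H))

# singular values of the complex symmetric matrix A + iB, sorted ascending
sv = np.sort(np.linalg.svd(A + 1j * B, compute_uv=False))

# the spectrum of H is symmetric about zero ...
assert np.allclose(eig, np.sort(-eig))
# ... and its positive half reproduces the singular values of A + iB
assert np.allclose(eig[n:], sv)
```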

The evaluation of f is done in the function OBJECTIVE.M. Since f is not convex for (A, B) whose
corresponding H has eigenvalues 1 or −1, the function first checks whether this
constraint is satisfied and otherwise outputs a value 10⁷ times larger than the value of the
objective function at the starting point.
The constraints are implemented in the function MAXRESIDUAL.M. By symmetry, it is
enough to check that, for any H tested, λ_{2n}(H) ≤ 1. The second constraint bounds H from
above by a matrix computed from γ; it is tested by computing the smallest eigenvalue of the difference.
The function most important for users is MINIMUM.M, which takes a covariance
matrix Γ ≥ iJ, its number of modes n and a number of options as arguments and outputs the
minimum. Note that the programme checks whether the covariance matrix is valid. For the
minimisation, we use the MATLAB-based solver SOLVOPT ([KK97], latest version 1.1).
SOLVOPT uses a subgradient-based method and the method of exact penalisation to compute
(local) minima. For convex programmes, any minimum found by the solver is therefore a
global minimum. SOLVOPT only requires the objective function to be differentiable outside a set
of measure zero, and the objective is even allowed to be non-differentiable at the minimum. Since f is
differentiable for all H with non-degenerate eigenvalues, this condition is met. In addition,
SOLVOPT needs f to be defined everywhere, as it is not an interior point method. Since f is
well-defined but not convex for infeasible H with spec(H) ∩ {1} = ∅, we remedy this by
changing the output of OBJECTIVE.M to be very large for such H, as described above.
Constraints are handled via the method of exact penalisation; we let SOLVOPT
compute the penalisation functions on its own.
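Exact penalisation replaces the constrained problem min f(x) subject to g(x) ≤ 0 by the unconstrained problem min f(x) + c·max(0, g(x)); for a penalty weight c exceeding the relevant Lagrange multiplier, both problems have the same minimiser. A toy illustration of ours (not SOLVOPT itself, which adjusts the weight internally):

```python
import numpy as np

# toy problem: minimise f(x) = (x - 2)^2 subject to g(x) = x - 1 <= 0;
# the constrained minimiser is x = 1 with Lagrange multiplier 2
f = lambda x: (x - 2.0) ** 2
g = lambda x: x - 1.0

def exact_penalty_min(c, grid=np.linspace(-3, 3, 600001)):
    """Minimise f + c * max(0, g) by brute force on a fine grid."""
    vals = f(grid) + c * np.maximum(0.0, g(grid))
    return grid[np.argmin(vals)]

# for c larger than the multiplier, the penalty is exact: the unconstrained
# minimiser coincides with the constrained one
assert abs(exact_penalty_min(c=10.0) - 1.0) < 1e-3
# for too small c the penalised minimiser violates the constraint g <= 0
assert abs(exact_penalty_min(c=0.5) - 1.75) < 1e-3
```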
It is possible (and for speed purposes advisable) to implement analytic gradients of both
the objective and the constraint functions. Following [Mag85], for diagonalisable matrices A
with no eigenvalue multiplicities, the derivative of the eigenvalue λ_i(A) in the direction E is given by:
∂_E λ_i(A) = v_i(A)ᵀ (∂_E A) v_i(A), (71)
where v_i(A) is the normalised eigenvector corresponding to λ_i(A) and ∂_E A = lim_{h→0} ((A + hE) − A)/h = E.
Luckily, if λ_i is not differentiable at A, this still provides a subgradient. An easy
calculation shows that a subgradient of the objective function f for matrices H with
−𝟙 < H < 𝟙, in the parameterisation by the matrix units E_ij, is given by
n ¶i lj (H ) n vjT, k F (i) vk , j
(f )i = å (1 + l (H ))(1 - l (H ))2 = å + lj (H ))(1 - lj (H ))2
(72)
j=1 j j j , k = 1 (1

with the F^{(i)} being the matrices corresponding to the chosen parameterisation and v_j the
eigenvectors of H. The gradient of the constraint function is very similar and is given by
equation (71), with A the difference matrix of whichever constraint is violated. This is
implemented in the functions OBJECTIVEGRAD.M and MAXRESIDUALGRAD.M.
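Equation (71) is easily validated against finite differences. The following NumPy sketch (ours) compares the analytic directional derivatives with a central difference quotient for a generic symmetric matrix, whose eigenvalues are simple with probability one:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 5

def sym(m):
    """Symmetrise a square matrix."""
    return (m + m.T) / 2

A = sym(rng.standard_normal((n, n)))
E = sym(rng.standard_normal((n, n)))   # direction of the perturbation

lam, V = np.linalg.eigh(A)             # ascending eigenvalues, orthonormal eigenvectors

# analytic directional derivatives per equation (71): v_i^T E v_i
analytic = np.array([V[:, i] @ E @ V[:, i] for i in range(n)])

# central finite difference; the generic spectrum is simple, so lambda_i is smooth
h = 1e-6
numeric = (np.linalg.eigvalsh(A + h * E) - np.linalg.eigvalsh(A - h * E)) / (2 * h)

assert np.allclose(analytic, numeric, atol=1e-5)
```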


SOLVOPT needs a starting point. Given Γ, Williamson's theorem yields Γ = SᵀDS ≥ SᵀS,
hence SᵀS provides a good starting point. The function WILLIAMSON.M computes the
Williamson normal form of Γ and returns S, D and SᵀS, the latter of which is used as the starting
point. It computes S and D essentially by computing the Schur decomposition of Γ^{−1/2} J Γ^{−1/2}
(in the σ-basis instead of the J-basis). S is then given by Sᵀ = Γ^{1/2} K D^{−1/2} (see the proof of
[SCS99]), where K is the Schur transformation matrix.
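As a cross-check of this machinery, the symplectic eigenvalues can be obtained directly as the moduli of the eigenvalues of iJγ. A NumPy sketch of ours (not WILLIAMSON.M) for two squeezed thermal modes:

```python
import numpy as np

n = 2  # number of modes
J = np.block([[np.zeros((n, n)), np.eye(n)],
              [-np.eye(n), np.zeros((n, n))]])

def symplectic_eigenvalues(gamma):
    """Symplectic (Williamson) eigenvalues of gamma > 0: the moduli of the
    eigenvalues of i*J*gamma, each of which occurs twice."""
    ev = np.sort(np.abs(np.linalg.eigvals(1j * J @ gamma)))
    return ev[::2]  # drop the duplicates

# two equally squeezed thermal modes: squeezing changes gamma but leaves the
# symplectic spectrum {nu, nu} invariant
nu, r = 1.7, 0.4
gamma = np.diag([nu * np.exp(2 * r), nu * np.exp(2 * r),
                 nu * np.exp(-2 * r), nu * np.exp(-2 * r)])

d = symplectic_eigenvalues(gamma)
assert np.allclose(d, [nu, nu])
# gamma >= iJ iff all symplectic eigenvalues are >= 1; then D >= identity in
# the Williamson form gamma = S^T D S, so S^T S is a feasible starting point
assert d.min() >= 1 - 1e-12
```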
A number of comments are in order:
(1) All functions use global variables instead of function handles. This is necessary because
SOLVOPT has not been adapted to the use of function handles. The user should
therefore always reset all variables before running the programme.
(2) SOLVOPT is not an interior point method, i.e. the results can at times violate the constraints.
We use the default value for the accuracy of constraints, which is 10⁻⁸ and can be
modified via option six. The preparation error should be of the same order as the
accuracy of constraints, as long as the largest eigenvalue of the minimising symplectic
matrix is of order one.
(3) For our numerical tests, we used bounds on the minimal step size and the minimal error
in f (SOLVOPT options two and three) of the order 10⁻⁶ and 10⁻⁸, which seemed
sufficient.
(4) All functions called by SOLVOPT (the functions OBJECTIVE.M, OBJECTIVEGRAD.M,
MAXRESIDUAL.M, MAXRESIDUALGRAD.M and XTOH.M) are properly vectorised to
ensure maximal speed.
Finally, BOUNDS.M contains all lower and upper bounds described in section 4.5. The
semidefinite programme was solved using CVX, a MATLAB toolbox for disciplined convex
programming including semidefinite programming [GB08, GB14], with the solver SDPT3 4.0.
The third bound is not described in section 4.5: it is an iteration of corollary 4.8
assuming superadditivity, hence in principle it could be violated. A violation would
immediately disprove superadditivity; we have never observed one in our tests.
Issues and further suggestions: it sometimes occurs that the algorithm does not
converge to a minimum inside or near the feasible set. We believe that this is due to
instabilities in the parameterisation and its implementation. The behaviour can occur with
numerical as well as analytic subgradients, although it occurs more often with analytic
ones. For every example where we observed a failure with either numerical or analytic
subgradients, one of the other methods (numerical subgradients, analytic subgradients,
or a mixture thereof) worked fine. In cases of failure, the routine issues several warnings and
the result usually lies below the lower bound. A different implementation might lead
to a more stable algorithm, but we did not pursue this any further. It might also be
worthwhile to compute the penalty function analytically.
In terms of performance, the algorithm is generally fast for small numbers of
modes. When analytic subgradients are not implemented, the performance bottleneck is
the function XTOH.M, which is called most often. When analytic subgradients are
provided, performance is naturally much better. This is particularly important when the
number of modes increases. While for five modes the calculation finishes within seconds,
for ten modes, depending on the matrix, it can take a minute on an ordinary laptop (the
algorithm then spends most of its time on eigenvalue computations, which seems
unavoidable). For even larger matrices it might be advisable to switch from the MATLAB
function EIG to EIGS, but for our examples this did not lead to a time gain.


References

[AG88] Arnol’d V I and Givental’ A B 1988 Symplectic geometry Dynamical Systems IV (Berlin:
Springer)
[And+15] Andersen U L et al 2016 30 years of squeezed light generation Phys. Scr. 91 053001
[ARL14] Adesso G, Ragy S and Lee A R 2014 Continuous variable quantum information: Gaussian
states and beyond Open Syst. Inf. Dyn. 21 1440001
[Arv+95a] Arvind et al 1995 The real symplectic groups in quantum mechanics and optics Pramana
45 471–97
[Arv+95b] Arvind et al 1995 Two-mode quantum systems: invariant classification of squeezing
transformations and squeezed states Phys. Rev. A 52 1609–20
[ASI04] Adesso G, Serafini A and Illuminati F 2004 Extremal entanglement and mixedness in
continuous variable systems Phys. Rev. A 70 022318
[Bha07] Bhatia R 2007 Positive Definite Matrices (Princeton, NJ: Princeton University Press)
[Bha96] Bhatia R 1996 Matrix Analysis (Berlin: Springer)
[BL05] Braunstein S L and van Loock P 2005 Quantum information with continuous variables Rev.
Mod. Phys. 77 513–77
[Bra05] Braunstein S L 2005 Squeezing as an irreducible resource Phys. Rev. A 71 055801
[BV04] Boyd S and Vandenberghe L 2004 Convex Optimization (New York: Cambridge University
Press) ISBN 0521833787
[DR79] Dolecki S and Rolewicz S 1979 Metric characterizations of upper semicontinuity J. Math.
Anal. Appl. 69 146–52
[ESP02] Eisert J, Scheel S and Plenio M B 2002 Distilling Gaussian states with Gaussian operations is
impossible Phys. Rev. Lett. 89 137903
[Fil13] Filip R 2013 Distillation of quantum squeezing Phys. Rev. A 88 063837
[GB14] Grant M and Boyd S 2014 CVX: Matlab Software for Disciplined Convex Programming,
version 2.1 (http://cvxr.com/cvx)
[GB08] Grant M and Boyd S 2008 Graph implementations for nonsmooth convex programs Recent
Advances in Learning and Control (Lecture Notes in Control and Information Sciences) ed
V Blondel, S Boyd and H Kimura (Springer) pp 95–110
[GIC02] Giedke G and Cirac J I 2002 Characterization of Gaussian operations and distillation of
Gaussian states Phys. Rev. A 66 032316
[Gie+01] Giedke G et al 2001 Separability properties of three-mode Gaussian states Phys. Rev. A 64
052303
[Gos06] de Gosson M A 2006 Symplectic Geometry and Quantum Mechanics (Operator Theory:
Advances and Applications/Advances in Partial Differential Equations) (Basel: Birkhäuser) ISBN
9783764375751
[Hee+06] Heersink J et al 2006 Distillation of squeezing from non-Gaussian quantum states Phys. Rev.
Lett. 96 253601
[KK97] Kuntsevich A and Kappel F 1997 SolvOpt: the solver for local nonlinear optimization
problems (manual) (Institute for Mathematics, Karl-Franzens University of Graz)
[KL10] Kok P and Lovett B W 2010 Introduction to Optical Quantum Information Processing
(Cambridge: Cambridge University Press)
[Kok+07] Kok P et al 2007 Linear optical quantum computing with photonic qubits Rev. Mod. Phys.
79 135–74
[Kra+03] Kraus B et al 2003 Entanglement generation and Hamiltonian simulation in continuous-
variable systems Phys. Rev. A 67 042314
[Lee88] Lee C T 1988 Wehrlʼs entropy as a measure of squeezing Opt. Commun. 66 52–4
[LGW13] Lercher D, Giedke G and Wolf M M 2013 Standard super-activation for gaussian channels
requires squeezing New J. Phys. 15 123003
[Lin00] Lindblad G 2000 Cloning the quantum oscillator J. Phys. A: Math. Gen. 33 5059
[Lvo15] Lvovsky A I 2015 Squeezed light Photonics Volume 1: Fundamentals of Photonics and
Physics ed D Andrews (New York: Wiley) pp 121–164
[Mag85] Magnus J R 1985 On differentiating eigenvalues and eigenvectors Econometric Theor. 1
179–91
[MK08] Mišta L and Korolkova N 2008 Distribution of continuous-variable entanglement by separable
Gaussian states Phys. Rev. A 77 050302


[Mor75] Moreau J J 1975 Intersection of moving convex sets in a normed space Math. Scand. 36
159–73
[MS98] McDuff D and Salamon D 1998 Introduction to Symplectic Topology (Oxford: Oxford
University Press)
[NC00] Nielsen M and Chuang I 2000 Quantum Computation and Quantum Information (Cambridge:
Cambridge University Press) (doi:10.1017/CBO9780511976667)
[Oli12] Olivares S 2012 Quantum optics in the phase space Eur. Phys. J. Spec. Top. 203 3–24
[Rec+94] Reck M et al 1994 Experimental realization of any discrete unitary operator Phys. Rev. Lett.
73 58–61
[RFP10] Recht B, Fazel M and Parrilo P A 2010 Guaranteed minimum-rank solutions of linear matrix
equations via nuclear norm minimization SIAM Rev. 52 471–501
[Roc97] Rockafellar R T 1997 Convex Analysis (Princeton, NJ: Princeton University Press) ISBN
9780691015866
[Rud87] Rudin W 1987 Real and Complex Analysis (Mathematics Series) (New York: McGraw-Hill)
ISBN 9780070542341
[SCS99] Simon R, Chaturvedi S and Srinivasan V 1999 Congruences and canonical forms for a
positive matrix: application to the Schweinler–Wigner extremum principle J. Math. Phys. 40
3632–42
[SMD94] Simon R, Mukunda N and Dutta B 1994 Quantum-noise matrix for multimode systems: U(n )
invariance, squeezing, and normal forms Phys. Rev. A 49 1567–83
[Son98] Sontag E D 1998 Mathematical Control Theory: Deterministic Finite Dimensional Systems
2nd edn (New York: Springer)
[SZ97] Scully M O and Zubairy M S 1997 Quantum Optics (Cambridge: Cambridge University Press)
ISBN 9780521435956
[Tho76] Thompson R C 1976 Convex and concave functions of singular values of matrix sums Pac. J.
Math. 66 285–90
[VB96] Vandenberghe L and Boyd S 1996 Semidefinite programming SIAM Rev. 38 49–95
[Wee+12] Weedbrook C et al 2012 Gaussian quantum information Rev. Mod. Phys. 84 621
[WEP03] Wolf M M, Eisert J and Plenio M B 2003 Entangling power of passive optical elements Phys.
Rev. Lett. 90 047904
[Wil36] Williamson J 1936 On the algebraic problem concerning the normal forms of linear dynamical
systems Am. J. Math. 58 141–63
[Wol+04] Wolf M M et al 2004 Gaussian entanglement of formation Phys. Rev. A 69 052320
[WW01] Werner R F and Wolf M M 2001 Bound entangled Gaussian states Phys. Rev. Lett. 86
3658–61
