
Optimiz. & Comp. Vision

Convex and Non-Convex Optimization

Christoph Schnörr
University of Mannheim, Germany

VISIONTRAIN Thematic School


Optimization Methods in Computer Vision
Les Houches, March 2006

C. Schnörr — CVGPR Group, Dept. Math. & Comp. Science


Optimiz. & Comp. Vision

Table of Contents

1 – Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . 3
2 – Literature . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
3 – Convex Sets . . . . . . . . . . . . . . . . . . . . . . . . . 13
4 – Convex Functions . . . . . . . . . . . . . . . . . . . . . . 27
5 – Subgradients and Optimality . . . . . . . . . . . . . . . . 44
6 – Conjugate Duality . . . . . . . . . . . . . . . . . . . . . . 51
7 – Convex Optimization . . . . . . . . . . . . . . . . . . . . . 55
8 – Non-Convex Optimization . . . . . . . . . . . . . . . . . . 71

C. Schnörr — CVGPR Group, Dept. Math. & Comp. Science


Optimiz. & Comp. Vision 1 Introduction

1 – Introduction

Modelling in computer vision research

• problem representation (observations, states, decisions, ...)

• criterion, (visual) task

• prior knowledge

Optimization point-of-view

• variables, domain

• objective function

• constraints

C. Schnörr — CVGPR Group, Dept. Math. & Comp. Science page 3


Optimiz. & Comp. Vision 1 Introduction

Case studies: Convex optimization

(i) Robust estimation


Non-quadratic optimization → quadratic programming
Huber’81, Mangasarian-Musicant (PAMI’00)

(ii) Total-variation denoising


Non-smooth optimization
Rudin-Osher-Fatemi (Physica’92)
Aujol-Chambolle (JMIV’04, IJCV’05)

(iii) Non-negative ℓ1-norm approximation


Linear programming
similar: sparse basis pursuit
Chen-Donoho-Saunders (SIAM Rev.’01)

C. Schnörr — CVGPR Group, Dept. Math. & Comp. Science page 4


Optimiz. & Comp. Vision 1 Introduction

Case studies: Convex optimization (cont’d)

(iv) Non-negative sparse factorization


Second-Order Cone Programming
Heiler-Schnörr (ICCV’05, ECCV’06, JMLR to appear)

(v) Low-dimensional flat euclidean embedding


Semidefinite Programming
Weinberger-Saul (CVPR’04, IJCV in press)

C. Schnörr — CVGPR Group, Dept. Math. & Comp. Science page 5


Optimiz. & Comp. Vision 1 Introduction

Case studies: Non-convex optimization


(i) Solvable binary problems
Linear Programming, Min-Cut/Max-Flow
Belongie-Malik-Puzicha (PAMI’02), Kolmogorov-Zabih (PAMI’04)
(ii) Perceptual grouping
Binary Optimization & Semidefinite Relaxation
Keuchel-Schnörr-Schellewald-Cremers (PAMI’03)
(iii) Auxiliary variables
Cohen (JMIV’96)
similar: Geman-Reynolds (PAMI’92)
(iv) DC-programming
Dinh-Hoai-An (SIAM J. Opt.’98)
Schüle-Weber-Schnörr (Discr. Appl. Math.’05)
Neumann-Schnörr-Steidl (Mach. Learning’05)

C. Schnörr — CVGPR Group, Dept. Math. & Comp. Science page 6


Optimiz. & Comp. Vision 1 Introduction

Optimization problem

    inf_{x∈C} f(x) ,   C ⊆ X

Convex optimization problem

(i) function f is convex,

(ii) set C is convex.

Properties assured in advance: Let f be a convex (extended) function over R^n. Then

(a) every locally optimal solution is globally optimal,


(b) argmin f is convex,

(c) if f is strictly convex and proper, there is at most one optimal solution.

C. Schnörr — CVGPR Group, Dept. Math. & Comp. Science page 7


Optimiz. & Comp. Vision 1 Introduction

Computational aspects
Convex problems can be solved reliably and efficiently.
Non-linearity does not imply that a problem is difficult, but non-convexity does in general.

Modelling aspects
Many competing models exist in computer vision ...
⇒ optimization theory helps to classify the field.
It is important to recognize problems that can be formulated as
convex optimization problems.

Non-convex optimization (→ part II)


Sound approaches to non-convex optimization rely on convex optimization as a basic component.

C. Schnörr — CVGPR Group, Dept. Math. & Comp. Science page 8


Optimiz. & Comp. Vision 1 Introduction

Optimization Tree

http://www-fp.mcs.anl.gov/otc/Guide/OptWeb/index.html

C. Schnörr — CVGPR Group, Dept. Math. & Comp. Science page 9


Optimiz. & Comp. Vision 2 Literature

2 – Literature

[1] R.T. Rockafellar. Convex Analysis. Princeton Univ. Press, Princeton, NJ, 2nd edition, 1972.

[2] R.T. Rockafellar and R.J-B. Wets. Variational Analysis, volume 317 of
Grundlehren der math. Wissenschaften. Springer, 1998.

[3] J.-B. Hiriart-Urruty and C. Lemaréchal. Convex Analysis and Minimization Algorithms I, II. Springer-Verlag, 1993.

[4] H. Wolkowicz, R. Saigal, and L. Vandenberghe, editors. Handbook of Semidefinite Programming. Kluwer Acad. Publ., Boston, 2000.

[5] A. Ben-Tal and A. Nemirovski. Lectures on Modern Convex Optimization. SIAM, 2001.

[6] S. Boyd and L. Vandenberghe. Convex Optimization. Cambridge University Press, 2004.

C. Schnörr — CVGPR Group, Dept. Math. & Comp. Science page 10


Optimiz. & Comp. Vision 2 Literature

[7] H.D. Mittelmann. An independent benchmarking of SDP and SOCP solvers. Math. Programming, Series B, 95(2):407–430, 2003.
[8] Y. Nesterov and A. Nemirovskii. Interior Point Polynomial Methods in Convex Programming. SIAM, 1994.
[9] S.J. Wright. Primal–Dual Interior–Point Methods. SIAM, 1996.
[10] Y. Ye. Interior Point Algorithms: Theory and Analysis. Wiley, 1997.
[11] C. Roos, T. Terlaky, and J. Vial. Interior Point Methods for Linear Optimization. Springer, 2nd edition, 2006.
[12] D.P. Bertsekas. Nonlinear Programming. Athena Scientific, Belmont, Mass., 1999.
[13] A.R. Conn, N.I.M. Gould, and P.L. Toint. Trust-Region Methods. SIAM, Philadelphia, 2000.
[14] G.L. Nemhauser and L.A. Wolsey. Integer and Combinatorial Optimization. J. Wiley, New York, 1988.

C. Schnörr — CVGPR Group, Dept. Math. & Comp. Science page 11


Optimiz. & Comp. Vision 2 Literature

[15] M. Grötschel, L. Lovász, and A. Schrijver. Geometric Algorithms and Combinatorial Optimization. Springer, 2nd edition, 1993.
[16] B. Korte and J. Vygen. Combinatorial Optimization: Theory and Algorithms. Springer, 2000.
[17] D. Bertsimas and R. Weismantel. Optimization over Integers. Dynamic Ideas, Belmont, Mass., 2005.

C. Schnörr — CVGPR Group, Dept. Math. & Comp. Science page 12


Optimiz. & Comp. Vision 3 Convex Sets

3 – Convex Sets

Convex subset C ⊂ R^n:

    (1 − t)x_0 + t x_1 ∈ C   ∀x_0, x_1 ∈ C ,  ∀t ∈ (0, 1)

C. Schnörr — CVGPR Group, Dept. Math. & Comp. Science page 13


Optimiz. & Comp. Vision 3 Convex Sets

Invariant operations

Minkowski sum:   C_1 + C_2

Scaling:   λC ,  λ ∈ R

Set product:   C_1 × C_2

Intersection:   ∩_{i∈I} C_i

Images under affine mappings F(x) = Ax + b:

    F(C) = { y | y = F(x) , x ∈ C }

Inverse images under affine mappings F(x) = Ax + b:

    F^{-1}(D) = { x | F(x) ∈ D }

C. Schnörr — CVGPR Group, Dept. Math. & Comp. Science page 14


Optimiz. & Comp. Vision 3 Convex Sets

Hyperplane

    C = { x | ⟨a, x⟩ = α }

Half-space

    C = { x | ⟨a, x⟩ ≤ α }

Box

    C = { x | a_i ≤ x_i ≤ b_i }

Ball

    B(x̄, r) = { x | ‖x − x̄‖ ≤ r }

C. Schnörr — CVGPR Group, Dept. Math. & Comp. Science page 15


Optimiz. & Comp. Vision 3 Convex Sets

Cone K ⊆ R^n

    0 ∈ K   and   λx ∈ K ,  ∀x ∈ K , ∀λ ≥ 0

K is convex if

    K + K ⊂ K

Convex cones include:

    zero cone K = {0}
    full cone K = R^n
    linear subspaces
    half-spaces

C. Schnörr — CVGPR Group, Dept. Math. & Comp. Science page 16


Optimiz. & Comp. Vision 3 Convex Sets

Standard cone

    K = R^n_+

C. Schnörr — CVGPR Group, Dept. Math. & Comp. Science page 17


Optimiz. & Comp. Vision 3 Convex Sets

Lorentz cone (second-order cone)

    K = L^n = { (x, t)^T | ‖x‖ ≤ t } ⊂ R^n

C. Schnörr — CVGPR Group, Dept. Math. & Comp. Science page 18


Optimiz. & Comp. Vision 3 Convex Sets

Semidefinite cone

    K = S^n_+ = { S ∈ R^{n×n} | S = S^T ,  S ⪰ 0 }

C. Schnörr — CVGPR Group, Dept. Math. & Comp. Science page 19


Optimiz. & Comp. Vision 3 Convex Sets

Examples

Standard cone R^n_+:

    Ax ≤ b  ⇔  b − Ax ∈ R^n_+

Second-order cone L^n:

    ‖Dx − d‖ ≤ ⟨p, x⟩ − q  ⇔  Ax − b ∈ L^n ,   A := ( D ; p^T ) ,  b := ( d ; q )

Semidefinite cone S^n_+:
Learning/optimization of inner-product and kernel matrices

    A = ( ⟨x_i, x_j⟩ )_{i,j=1,...,n} ,   B = ( k(x_i, x_j) )_{i,j=1,...,n}

→ Mercer kernels

C. Schnörr — CVGPR Group, Dept. Math. & Comp. Science page 20


Optimiz. & Comp. Vision 3 Convex Sets

Polar cones

    K* = { v | ⟨v, w⟩ ≤ 0 ,  ∀w ∈ K }

K closed convex:

    K** = K

Important special cases:

    M* = M^⊥   (subspaces)

    {0}* = R^n

[Figure: a cone K and its polar cone K*]

C. Schnörr — CVGPR Group, Dept. Math. & Comp. Science page 21


Optimiz. & Comp. Vision 3 Convex Sets

Normal cone (C convex)

    N_C(x̄) = { v | ⟨v, x − x̄⟩ ≤ 0 ,  ∀x ∈ C }

Tangent cone (C convex)

    T_C(x̄) = cl { w | ∃λ > 0 ,  x̄ + λw ∈ C }

Note:
Interior points x̄ ∈ C:   N_C(x̄) = {0}
Exterior points x̄ ∉ C:   N_C(x̄) = ∅

[Figure: tangent cone T_C(x̄) and normal cone N_C(x̄) at a boundary point of C]

C. Schnörr — CVGPR Group, Dept. Math. & Comp. Science page 22


Optimiz. & Comp. Vision 3 Convex Sets

Constraint functions
Defining C by inequalities (f_i , i ∈ I, convex) and equalities (f_j , j ∈ J, affine):

    C = { x | f_i(x) ≤ 0 , i ∈ I ;  f_j(x) = 0 , j ∈ J }

C ⊂ R^n is called polyhedral if it can be expressed as the intersection of a finite family of closed half-spaces or hyperplanes (f_i , f_j affine).
C ⊂ R^n is called affine if it can be expressed as the intersection of a finite family of hyperplanes (affine equality constraints).

Constraint qualification conditions

Various conditions, e.g. linear independence of the ∇f_j(x), exist that ensure regularity of C.

C. Schnörr — CVGPR Group, Dept. Math. & Comp. Science page 23


Optimiz. & Comp. Vision 3 Convex Sets

Convex combination of x_0, x_1, . . . , x_p ∈ R^n:

    ∑_{i=0}^p λ_i x_i ,   λ_i ≥ 0 ,  ∑_{i=0}^p λ_i = 1

Typical interpretation: expected value E_λ[x]

Convex hull of C (not necessarily convex)

    conv C = { ∑_{i=0}^p λ_i x_i | x_i ∈ C ,  λ_i ≥ 0 ,  ∑_{i=0}^p λ_i = 1 ,  p ≥ 0 }

Consists of all convex combinations of elements of C

C. Schnörr — CVGPR Group, Dept. Math. & Comp. Science page 24


Optimiz. & Comp. Vision 3 Convex Sets

Convex hull of a finite subset of R^n

    C = conv {x_0, x_1, . . . , x_p}

If {x_0, x_1, . . . , x_p} are affinely independent (i.e. the x_i − x_0 are linearly independent), then C is called a p-simplex.

C. Schnörr — CVGPR Group, Dept. Math. & Comp. Science page 25


Optimiz. & Comp. Vision 3 Convex Sets

Example (↔ matching two n-sets of objects)

A permutation is a one-to-one mapping of {1, 2, . . . , n} onto itself.
Permutation matrix (shown with the corresponding bipartite matching in the original figure):

    0 0 0 1 0
    1 0 0 0 0
    0 0 0 0 1
    0 1 0 0 0
    0 0 1 0 0

A matrix P is called doubly stochastic if

    P_{i,j} ≥ 0   and   ∑_i P_{i,j} = ∑_j P_{i,j} = 1

Birkhoff’s theorem. P is doubly stochastic iff it can be represented as a convex combination of permutation matrices.
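A quick numerical illustration of the easy direction (a sketch in numpy; the random weights and permutations are illustrative only): any convex combination of permutation matrices is doubly stochastic.

import numpy as np

rng = np.random.default_rng(0)
n, k = 5, 4
lam = rng.random(k)
lam /= lam.sum()                      # convex weights: lam_i >= 0, sum(lam) = 1

P = np.zeros((n, n))
for l in lam:
    P += l * np.eye(n)[rng.permutation(n)]   # random permutation matrix

assert np.all(P >= 0)
assert np.allclose(P.sum(axis=0), 1) and np.allclose(P.sum(axis=1), 1)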

C. Schnörr — CVGPR Group, Dept. Math. & Comp. Science page 26


Optimiz. & Comp. Vision 4 Convex Functions

4 – Convex Functions
Convex function f : C → R relative to (the convex set) C:

    f( (1 − t)x_0 + t x_1 ) ≤ (1 − t)f(x_0) + t f(x_1) ,   ∀x_0, x_1 ∈ C ,  ∀t ∈ (0, 1)

C. Schnörr — CVGPR Group, Dept. Math. & Comp. Science page 27


Optimiz. & Comp. Vision 4 Convex Functions

Alternative criteria (f sufficiently smooth):

(a) ⟨∇f(y) − ∇f(x), y − x⟩ ≥ 0 ,   ∀x, y

(b) f(y) − f(x) ≥ ⟨∇f(x), y − x⟩ ,   ∀x, y

(c) ∇²f(x) ⪰ 0 ,   ∀x

Jensen’s inequality
f convex, C convex. For any convex combination:

    f( ∑_{i=0}^p λ_i x_i ) ≤ ∑_{i=0}^p λ_i f(x_i)

Numerous applications, e.g. EM iteration

C. Schnörr — CVGPR Group, Dept. Math. & Comp. Science page 28


Optimiz. & Comp. Vision 4 Convex Functions

Invariant operations

Addition and (positive) scalar multiplication:   ∑_{i=1}^m λ_i f_i ,  λ_i ≥ 0

Composition:   f(x) = g(Ax + b) ,   g : R^m → R convex

Pointwise supremum:   sup_i f_i

inf-projection:

    h(x) = inf_y f(x, y)

C. Schnörr — CVGPR Group, Dept. Math. & Comp. Science page 29


Optimiz. & Comp. Vision 4 Convex Functions

Example: Huber’s robust error function

    ρ(t) = { (1/2)t²            if |t| ≤ γ
           { γ|t| − (1/2)γ²     if |t| > γ

Representation by inf-projection:

    ρ(t) = inf_s { (1/2)s² + γ|t − s| }
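A small numerical check (numpy assumed; the grid over s is for illustration only) that the inf-projection reproduces the closed form:

import numpy as np

gamma = 1.5
t = np.linspace(-4, 4, 81)
s = np.linspace(-6, 6, 2401)[:, None]        # minimize over s on a fine grid

rho_inf = np.min(0.5 * s**2 + gamma * np.abs(t - s), axis=0)
rho_closed = np.where(np.abs(t) <= gamma,
                      0.5 * t**2,
                      gamma * np.abs(t) - 0.5 * gamma**2)
assert np.allclose(rho_inf, rho_closed, atol=1e-3)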

C. Schnörr — CVGPR Group, Dept. Math. & Comp. Science page 30


Optimiz. & Comp. Vision 4 Convex Functions

Example (cont’d)

Note:
Additional dimensions often simplify problem representations.

C. Schnörr — CVGPR Group, Dept. Math. & Comp. Science page 31


Optimiz. & Comp. Vision 4 Convex Functions

Affine function

    f(x) = ⟨a, x⟩ + β

Quadratic function

    f(x) = (1/2)⟨x, Ax⟩ + ⟨a, x⟩ + β ,   A ⪰ 0

ℓ_p-norm

    ‖x‖_p = ( ∑_{i=1}^n |x_i|^p )^{1/p} ,   1 ≤ p < ∞

    ‖x‖_∞ = max_i |x_i|

C. Schnörr — CVGPR Group, Dept. Math. & Comp. Science page 32


Optimiz. & Comp. Vision 4 Convex Functions

ℓ_p-balls, p ∈ {1, 2, ∞}

Application: sparsity measure

    (1/√n) ‖x‖_1 ≤ ‖x‖_2 ≤ ‖x‖_1

    sp(x) := (1/(√n − 1)) ( √n − ‖x‖_1 / ‖x‖_2 ) ∈ [0, 1]
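A direct transcription of the measure (numpy; the function name sp is ours):

import numpy as np

def sp(x):
    # sp(x) = (sqrt(n) - ||x||_1 / ||x||_2) / (sqrt(n) - 1), values in [0, 1]
    n = x.size
    return (np.sqrt(n) - np.linalg.norm(x, 1) / np.linalg.norm(x, 2)) / (np.sqrt(n) - 1)

print(sp(np.ones(10)))          # 0.0: maximally dense
x = np.zeros(10); x[3] = 1.0
print(sp(x))                    # 1.0: maximally sparse (a single spike)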

C. Schnörr — CVGPR Group, Dept. Math. & Comp. Science page 33


Optimiz. & Comp. Vision 4 Convex Functions

Entropy
The entropy function is concave, i.e. the negative entropy is convex:

    h : C ⊂ R^n → R ,   h(p) = ∑_{i=1}^n p_i log p_i

    C = R^n_+ ∩ { p | ⟨e, p⟩ = 1 }   (probability simplex)

Global minimum: the uniform distribution

    p_i = 1/n ,  ∀i

C. Schnörr — CVGPR Group, Dept. Math. & Comp. Science page 34


Optimiz. & Comp. Vision 4 Convex Functions

Indicator functions
C closed convex; extended function δ_C : R^n → R̄ = [−∞, +∞]:

    δ_C(x) := { 0    if x ∈ C
              { +∞   if x ∉ C

Allows constraints to be included in the objective function:

    inf_{x∈C} f(x)  ⇔  inf_{x∈R^n} { f(x) + δ_C(x) }

This is convenient for calculations.

C. Schnörr — CVGPR Group, Dept. Math. & Comp. Science page 35


Optimiz. & Comp. Vision 4 Convex Functions

Proper, lsc, convex functions

Important class of functions f : R^n → R̄

Proper functions
Effective domain:   dom f = { x | f(x) < +∞ }   (e.g. dom δ_C = C)
f is called proper if dom f ≠ ∅

Lower semicontinuous (lsc) functions
All level sets { x | f(x) ≤ α } are closed.
Example: δ_C is lsc iff C is closed.

Attainment of minima
If f : R^n → R̄ is proper, lsc and level-bounded, then inf f is finite and argmin f is nonempty and compact.

C. Schnörr — CVGPR Group, Dept. Math. & Comp. Science page 36


Optimiz. & Comp. Vision 4 Convex Functions

Support functions

    σ_C(v) := sup_{x∈C} ⟨x, v⟩

C. Schnörr — CVGPR Group, Dept. Math. & Comp. Science page 37


Optimiz. & Comp. Vision 4 Convex Functions

Support functions
C ≠ ∅ closed convex:

    σ_C(v) := sup_{x∈C} ⟨x, v⟩

Support functions are proper, lsc, and

positively homogeneous:

    σ_C(λv) = λσ_C(v) ,   λ > 0 ,  ∀v

sublinear (subadditive):

    σ_C(v + w) ≤ σ_C(v) + σ_C(w) ,   ∀v, w

One-to-one correspondence between proper, lsc, sublinear functions f and nonempty, closed, convex subsets C:

    f = σ_C   and   C = { x | ⟨x, v⟩ ≤ f(v) ,  ∀v ∈ R^n }

C. Schnörr — CVGPR Group, Dept. Math. & Comp. Science page 38


Optimiz. & Comp. Vision 4 Convex Functions

Example

    C = { x | √⟨x, Ax⟩ ≤ 1 } ,   A pos. definite

    σ_C(v) = √⟨v, A^{-1}v⟩

Note:

    λ_min ‖x‖_2² ≤ ⟨x, Ax⟩ ≤ λ_max ‖x‖_2²

Thus, C is the unit ball w.r.t. the norm

    ‖x‖_{A,2} := √⟨x, Ax⟩

Then

    σ_C(v) = ‖v‖_{A^{-1},2}

C. Schnörr — CVGPR Group, Dept. Math. & Comp. Science page 39


Optimiz. & Comp. Vision 4 Convex Functions

Examples
Euclidean norm
Support function of the unit ball:

    ‖v‖_2 = sup_{x∈B(0,1)} ⟨x, v⟩ = σ_{B(0,1)}(v)

Polar (dual) norms

For any norm ‖·‖ with unit ball B_X , the corresponding support function is the polar (dual) norm

    ‖v‖° := σ_{B_X}(v) = sup_{x∈B_X} ⟨x, v⟩

Examples:

    ‖·‖°_p = ‖·‖_q ,   1 < p, q < ∞ ,  p^{-1} + q^{-1} = 1

    ‖·‖°_1 = ‖·‖_∞ ,   ‖·‖°_∞ = ‖·‖_1
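A numerical sketch (numpy) of ‖·‖°_p = ‖·‖_q: the Hölder-optimal point on the unit p-ball attains the supremum.

import numpy as np

p, q = 3.0, 1.5                        # conjugate exponents: 1/p + 1/q = 1
rng = np.random.default_rng(1)
v = rng.standard_normal(7)

x = np.sign(v) * np.abs(v) ** (q - 1)  # Hoelder-optimal direction
x /= np.linalg.norm(x, p)              # normalize onto the unit p-ball
assert np.isclose(x @ v, np.linalg.norm(v, q))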

C. Schnörr — CVGPR Group, Dept. Math. & Comp. Science page 40


Optimiz. & Comp. Vision 4 Convex Functions

Polar (dual) norms (cont’d)

The norm ‖·‖ =: σ_{B°_X} is the support function of the polar set

    B°_X = { v | σ_{B_X}(v) ≤ 1 }
         = { v | ⟨v, x⟩ ≤ σ_{B°_X}(x) ,  ∀x ∈ R^n }

Conversely, by definition

    B_X = { x | σ_{B°_X}(x) ≤ 1 }
        = { x | ⟨x, v⟩ ≤ σ_{B_X}(v) ,  ∀v ∈ R^n }

Seminorms
For seminorms |·|, analogous definitions are useful:

    C  = { x | σ_{C°}(x) ≤ 1 } ,   σ_{C°}(x) = |x|

    C° = { v | σ_C(v) ≤ 1 } ,   σ_C(v) = |v|°

C. Schnörr — CVGPR Group, Dept. Math. & Comp. Science page 41


Optimiz. & Comp. Vision 4 Convex Functions

Application: Total Variation TV(u)

Discontinuity-preserving smoothing of g by minimizing

    ∫_Ω { (1/2)(u − g)² + λ TV(u) } dx

[Figure: noisy input and TV-denoised result]

C. Schnörr — CVGPR Group, Dept. Math. & Comp. Science page 42


Optimiz. & Comp. Vision 4 Convex Functions

Definition (Total Variation measure):

    TV(u) = sup_η { ∫_Ω ⟨u, div η⟩ dx | η ∈ C_o^1(Ω; R^d) ,  ‖η‖_∞ ≤ 1 }

          = ∫_Ω |∇u| dx   for u sufficiently regular

→ discrete total variation by finite differences

Definition (space of oscillating patterns, Meyer 2002; discrete version):

    G = { v | ∃w ∈ Y ,  v = div(w) } ,   ‖v‖_G = inf_w { ‖w‖_∞ | v = div(w) }

⇒ TV(u) = σ_{C°}(u) ,   C° = { v | ‖v‖_G ≤ 1 }

⇒ ‖v‖_G = σ_C(v) ,   C = { u | TV(u) ≤ 1 }

→ abstraction helps to recognize familiar situations

C. Schnörr — CVGPR Group, Dept. Math. & Comp. Science page 43


Optimiz. & Comp. Vision 5 Subgradients and Optimality

5 – Subgradients and Optimality

Recall the convexity criterion for smooth f:

    f(y) − f(x) ≥ ⟨∇f(x), y − x⟩ ,   ∀x, y

A subgradient v of a proper convex function f (possibly non-smooth) at x̄ is the gradient of an affine function supporting f at x̄:

    f(x) ≥ f(x̄) + ⟨v, x − x̄⟩ ,   ∀x

The set of all subgradients of f at x̄ is denoted by

    ∂f(x̄)

C. Schnörr — CVGPR Group, Dept. Math. & Comp. Science page 44


Optimiz. & Comp. Vision 5 Subgradients and Optimality

Examples

Indicator functions

    ∂δ_C(x) = N_C(x)

Support functions

    ∂σ_C(v) = argmax_{x∈C} ⟨x, v⟩ = { x ∈ C | v ∈ N_C(x) }

Norm

    ∂‖x‖_2 = { x/‖x‖_2 }   if x ≠ 0 ;      ∂‖x‖_2 = B(0, 1)   if x = 0

Follows also from ‖x‖_2 = σ_{B(0,1)}(x)

C. Schnörr — CVGPR Group, Dept. Math. & Comp. Science page 45


Optimiz. & Comp. Vision 5 Subgradients and Optimality

Generalized Fermat’s rule

A proper, lsc, convex function f has a global minimum at x̄ if and only if

    0 ∈ ∂f(x̄)

Special case: f differentiable, C closed convex

    inf_{x∈C} f(x) = inf_{x∈X} { f(x) + δ_C(x) }

Global optimality:

    0 ∈ ∇f(x̄) + N_C(x̄)

C. Schnörr — CVGPR Group, Dept. Math. & Comp. Science page 46


Optimiz. & Comp. Vision 5 Subgradients and Optimality

Application (C closed convex, x ∉ C)

Distance function

    d_C(x) = inf_y { ‖y − x‖ + δ_C(y) }

Projection

    Π_C(x) = argmin_{y∈C} ‖y − x‖

Global optimality: Π_C(x) = {x̄} (notation: Π_C(x) = x̄) and

    (x − x̄)/‖x − x̄‖ ∈ N_C(x̄)

Note:

    v ∈ N_C(x̄)  ⇔  x̄ = Π_C(x̄ + v)
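Two elementary projections and a check of the last equivalence (a numpy sketch; the sets are a box and the unit ball):

import numpy as np

def proj_box(x, a, b):
    return np.clip(x, a, b)                    # componentwise projection

def proj_ball(x, center, r):
    d = x - center
    nd = np.linalg.norm(d)
    return x if nd <= r else center + r * d / nd

xbar = np.array([0.6, 0.8])                    # boundary point of B(0, 1)
v = 2.0 * xbar                                 # an outward normal at xbar
assert np.allclose(proj_ball(xbar + v, np.zeros(2), 1.0), xbar)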

C. Schnörr — CVGPR Group, Dept. Math. & Comp. Science page 47


Optimiz. & Comp. Vision 5 Subgradients and Optimality

Application (cont’d)
Total variation denoising (Aujol, Chambolle)

    inf_u { (1/2)‖u − g‖² + λ TV(u) }

Optimality:   0 ∈ u − g + ∂σ_{λC°}(u) ,   λC° = { v | ‖v‖_G ≤ λ }

We already know

    ∂σ_{λC°}(u) = { v ∈ λC° | u ∈ N_{λC°}(v) }

    u ∈ N_{λC°}(v)  ⇔  v = Π_{λC°}(v + u)

Hence

    g − u ∈ ∂σ_{λC°}(u)  ⇔  g − u ∈ λC° ,  u ∈ N_{λC°}(g − u)

    ⇒ g − u = Π_{λC°}(g − u + u)  ⇔  u = g − Π_{λC°}(g)

C. Schnörr — CVGPR Group, Dept. Math. & Comp. Science page 48


Optimiz. & Comp. Vision 5 Subgradients and Optimality

Example

C. Schnörr — CVGPR Group, Dept. Math. & Comp. Science page 49


Optimiz. & Comp. Vision 5 Subgradients and Optimality

Optimization by basic fixed-point iteration

Applicable if
(i) f is strictly convex,
(ii) C is sufficiently simple (Π_C is easy to compute).

Recall:   x̄ ∈ argmin { f(x) + δ_C(x) }  ⇔  0 ∈ ∇f(x̄) + N_C(x̄)
                                         ⇔  x̄ = Π_C( x̄ − ∇f(x̄) )

Ass.:   ⟨∇f(x) − ∇f(y), x − y⟩ ≥ c ‖x − y‖²   (strong monotonicity)
        ‖∇f(x) − ∇f(y)‖ ≤ L ‖x − y‖           (Lipschitz constant)

Then, for τ > L²/(2c), the following iteration converges to x̄:

    x^{k+1} = Π_C( x^k − τ^{-1} ∇f(x^k) )

→ Literature: numerous extensions under weaker assumptions
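A minimal sketch (numpy) of this iteration for a strongly convex quadratic f(x) = (1/2)⟨x, Qx⟩ − ⟨r, x⟩ over the box C = [0, 1]^n; here c and L are the extreme eigenvalues of Q:

import numpy as np

rng = np.random.default_rng(2)
n = 5
M = rng.standard_normal((n, n))
Q = M @ M.T + n * np.eye(n)
r = rng.standard_normal(n)

ev = np.linalg.eigvalsh(Q)
c, L = ev[0], ev[-1]
tau = L**2 / (2 * c) + 1.0                     # step size 1/tau, as on this slide

x = np.zeros(n)
for _ in range(5000):
    x = np.clip(x - (Q @ x - r) / tau, 0.0, 1.0)   # x = Pi_C(x - tau^{-1} grad f(x))

# check the optimality condition x = Pi_C(x - grad f(x)), approximately:
assert np.allclose(x, np.clip(x - (Q @ x - r), 0.0, 1.0), atol=1e-6)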

C. Schnörr — CVGPR Group, Dept. Math. & Comp. Science page 50


Optimiz. & Comp. Vision 6 Conjugate Duality

6 – Conjugate Duality
Legendre-Fenchel Transform
One-to-one transform of the class of proper, lsc, convex functions:

    f*(v) = sup_x { ⟨v, x⟩ − f(x) }

    f(x) = sup_v { ⟨v, x⟩ − f*(v) }

Interpretation:

    f*(v) ≥ ⟨v, x⟩ − f(x)  ⇔  f(x) ≥ ⟨v, x⟩ − f*(v)

f*(v) gives the “largest” affine function ⟨v, ·⟩ − f*(v) majorized by f.
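The transform can be explored numerically (a numpy sketch on a grid; ranges chosen so the suprema are attained inside them) by checking f** = f for a convex f:

import numpy as np

x = np.linspace(-2, 2, 401)
v = np.linspace(-32, 32, 1601)                 # covers f'([-2, 2]) for f(x) = x^4
f = x**4

fstar = np.max(v[:, None] * x[None, :] - f[None, :], axis=1)          # f*(v)
fstarstar = np.max(v[:, None] * x[None, :] - fstar[:, None], axis=0)  # f**(x)
assert np.allclose(fstarstar, f, atol=1e-2)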

C. Schnörr — CVGPR Group, Dept. Math. & Comp. Science page 51


Optimiz. & Comp. Vision 6 Conjugate Duality

Subgradient rule
f proper, lsc, convex:

    ∂f(x) = argmax_v { ⟨v, x⟩ − f*(v) }

    ∂f*(v) = argmax_x { ⟨v, x⟩ − f(x) }

Furthermore

    v ∈ ∂f(x)  ⇔  x ∈ ∂f*(v)

Example:

    δ*_C = σ_C  ⇒  ( v ∈ N_C(x) ⇔ x ∈ ∂σ_C(v) )

Note:
inf_x f(x) = − sup_x { −f(x) } , thus

    inf_x f(x) = −f*(0) ,   argmin_x f(x) = ∂f*(0)

C. Schnörr — CVGPR Group, Dept. Math. & Comp. Science page 52


Optimiz. & Comp. Vision 6 Conjugate Duality

Piecewise linear-quadratic penalties

Y ⊆ R^m ,  B ∈ S^m_+ ,  A ∈ R^{m×n} ,  b ∈ R^m ,  z = b − Ax

    θ_{Y,B}(z) = sup_{y∈Y} { ⟨y, z⟩ − (1/2)⟨y, By⟩ }

    dom θ_{Y,B} = ( Y^∞ ∩ N(B) )*   (Y^∞: horizon cone of all directions of Y)

Important special cases:

    B = 0:                     θ_{Y,0} = σ_Y

    B = 0 ,  Y = K a cone:     θ_{K,0} = σ_K = δ_{K*}

    Y = R^m:                   θ_{Y,B} = ( (1/2)⟨·, B·⟩ )*

C. Schnörr — CVGPR Group, Dept. Math. & Comp. Science page 53


Optimiz. & Comp. Vision 6 Conjugate Duality

Dual convex optimization schemes

k : R^n → R̄, h : R^m → R̄ proper, lsc, convex:

    min_{x∈R^n} φ(x) ,   φ(x) = ⟨c, x⟩ + k(x) + h(b − Ax)

    max_{y∈R^m} ψ(y) ,   ψ(y) = ⟨b, y⟩ − h*(y) − k*(A^T y − c)

It holds inf_x φ(x) = sup_y ψ(y) provided

    b ∈ int( A dom k + dom h ) ,   c ∈ int( A^T dom h* − dom k* )

Extended linear-quadratic programming (special case)

    min_{x∈X⊂R^n} { ⟨c, x⟩ + (1/2)⟨x, Cx⟩ + θ_{Y,B}(b − Ax) } ,   C ∈ S^n_+

    max_{y∈Y⊂R^m} { ⟨b, y⟩ − (1/2)⟨y, By⟩ − θ_{X,C}(A^T y − c) } ,   B ∈ S^m_+

C. Schnörr — CVGPR Group, Dept. Math. & Comp. Science page 54


Optimiz. & Comp. Vision 7 Convex Optimization

7 – Convex Optimization

Linear Programming (LP)

    min_{x∈R^n} ⟨c, x⟩ ,   Ax ≥ b

corresponds to X = R^n , Y = R^m_+ , and

    b − Ax ≤ 0  ⇔  b − Ax ∈ Y*  ↔  δ_{Y*}(b − Ax) = σ_Y(b − Ax)

    min_{x∈X} { ⟨c, x⟩ + θ_{Y,0}(b − Ax) } ,    max_{y∈Y} { ⟨b, y⟩ − θ_{X,0}(A^T y − c) }

σ_X(A^T y − c) = δ_{{0}}(A^T y − c), thus

    max_{y∈R^m} ⟨b, y⟩ ,   A^T y = c ,  y ≥ 0

C. Schnörr — CVGPR Group, Dept. Math. & Comp. Science page 55


Optimiz. & Comp. Vision 7 Convex Optimization

Example: ℓ1-norm approximation

Unknown positive superposition with an unknown step and noise:

[Figure: the observed signal g]

    min_{α∈R^p_+} ‖g − Φα‖_1  ⇔  min_{r∈R^n, α∈R^p} ⟨e, r⟩ ,   −r ≤ g − Φα ≤ r ,  α ≥ 0

As a block LP:

    min_{r,α} ⟨ (e, 0), (r, α) ⟩ ,   ( I  Φ ; I  −Φ ; 0  I ) (r ; α) ≥ ( g ; −g ; 0 )

C. Schnörr — CVGPR Group, Dept. Math. & Comp. Science page 56


Optimiz. & Comp. Vision 7 Convex Optimization

Example (cont’d)

[Figure: results of the ℓ1-norm approximation]

C. Schnörr — CVGPR Group, Dept. Math. & Comp. Science page 57


Optimiz. & Comp. Vision 7 Convex Optimization

Convex Quadratic Programming (QP)

Definition: convex quadratic objective function subject to affine constraints.
Example: Huber’s function (cf. page 30)

    ρ(t) = { (1/2)t²            if |t| ≤ γ      =  inf_s { (1/2)s² + γ|t − s| }
           { γ|t| − (1/2)γ²     if |t| > γ

Application to robust estimation (convex QP):

    min_{x∈R^n, z∈R^m} { (1/2)‖z‖_2² + γ‖Ax − b − z‖_1 }

    ⇔  min_{x,z,r} { (1/2)‖z‖_2² + γ⟨e, r⟩ } ,   −r ≤ Ax − b − z ≤ r
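A sketch of the robust estimation QP with cvxpy (assumed available; the data is toy, with a few gross outliers):

import cvxpy as cp
import numpy as np

rng = np.random.default_rng(4)
m, n, gamma = 50, 3, 1.0
A = rng.standard_normal((m, n))
b = A @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.standard_normal(m)
b[::10] += 5.0                                 # outliers

x = cp.Variable(n)
z = cp.Variable(m)
obj = 0.5 * cp.sum_squares(z) + gamma * cp.norm(A @ x - b - z, 1)
cp.Problem(cp.Minimize(obj)).solve()
print(x.value)                                 # close to (1, -2, 0.5) despite outliers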

C. Schnörr — CVGPR Group, Dept. Math. & Comp. Science page 58


Optimiz. & Comp. Vision 7 Convex Optimization

Example: Support Vector Machine (SVM)

Linearly separable case:

    min_{w∈R^n, b∈R} (1/2)‖w‖²

    y_i( ⟨w, x_i⟩ + b ) ≥ 1 ,   i = 1, . . . , m

[Figure: separating hyperplane with maximal margin]

Vector-matrix notation:

    X = ( x_1 , . . . , x_m ) ∈ R^{n×m} ,   D = diag( y_1 , . . . , y_m ) ∈ R^{m×m}

Thus

    min_{w∈R^n, b∈R} (1/2)‖w‖² ,   D( X^T w + b e ) − e ≥ 0

C. Schnörr — CVGPR Group, Dept. Math. & Comp. Science page 59


Optimiz. & Comp. Vision 7 Convex Optimization

Example (cont’d)
Corresponds to X = R^{n+1} , Y = R^m_+ , δ_{Y*} = σ_Y = θ_{Y,0}:

    min_{w,b} { (1/2) (w, b)^T C (w, b) + θ_{Y,0}( e − ( DX^T  De ) (w ; b) ) } ,   C := ( I  0 ; 0  0 )

    max_{y∈Y} { ⟨e, y⟩ − θ_{X,C}( ( XD ; e^T D ) y ) }

    θ_{X,C}( ( XD ; e^T D ) y ) = sup_{x∈R^n, x_{n+1}∈R} { ⟨ (x, x_{n+1}) , ( XD ; e^T D ) y ⟩ − (1/2)‖x‖² }

    ⇒  ⟨De, y⟩ = 0 ,   θ_{X,C}( · ) = (1/2)‖XDy‖²

C. Schnörr — CVGPR Group, Dept. Math. & Comp. Science page 60


Optimiz. & Comp. Vision 7 Convex Optimization

Example (cont’d)
Dual program:

    max_{y∈R^m} { ⟨e, y⟩ − (1/2)‖XDy‖² } ,   ⟨De, y⟩ = 0 ,  y ≥ 0

Straightforward generalization to the nonlinear, non-separable case.
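A sketch of the dual program with cvxpy on toy separable data (names are ours):

import cvxpy as cp
import numpy as np

rng = np.random.default_rng(5)
m = 20
X = np.hstack([rng.standard_normal((2, m // 2)) + 2.0,
               rng.standard_normal((2, m // 2)) - 2.0])   # columns x_1..x_m
yvec = np.concatenate([np.ones(m // 2), -np.ones(m // 2)])
D = np.diag(yvec)

y = cp.Variable(m)
obj = cp.sum(y) - 0.5 * cp.sum_squares(X @ D @ y)
cp.Problem(cp.Maximize(obj), [yvec @ y == 0, y >= 0]).solve()
w = X @ D @ y.value                            # primal weights w = X D y
print(w)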

C. Schnörr — CVGPR Group, Dept. Math. & Comp. Science page 61


Optimiz. & Comp. Vision 7 Convex Optimization

Conic Programming

    LP ⊂ SOCP ⊂ SDP

Linear program (LP): K = R^m_+

    min_{x∈R^n} ⟨c, x⟩ ,  Ax − b ≥_K 0        max_{y∈R^m} ⟨b, y⟩ ,  A^T y = c ,  y ≥_K 0

Second-order cone program (SOCP): K = L^m

    min_{x∈R^n} ⟨c, x⟩ ,  Ax − b ≥_K 0        max_{y∈R^m} ⟨b, y⟩ ,  A^T y = c ,  y ≥_K 0

Semidefinite program (SDP): K = S^m_+ ,  A : R^n → S^m

    min_{x∈R^n} ⟨c, x⟩ ,  Ax − B ≥_K 0        max_{Y∈S^m} tr(BY) ,  A^T Y = c ,  Y ≥_K 0
C. Schnörr — CVGPR Group, Dept. Math. & Comp. Science page 62


Optimiz. & Comp. Vision 7 Convex Optimization

Second-order cone programming (SOCP)

    min_{x∈R^n} ⟨c, x⟩ ,   ‖Dx − d‖ ≤ ⟨p, x⟩ − q

    ⇔  min_{x∈R^n} ⟨c, x⟩ ,   Ax − b ≥_K 0 ,   A := ( D ; p^T ) ,  b := ( d ; q )

Dual program:   max_{y∈R^m} ⟨b, y⟩ ,  A^T y = c ,  y ≥_K 0

Put y = (ỹ^T, z)^T:

    max_{ỹ,z} ⟨d, ỹ⟩ + qz ,   D^T ỹ + z p = c ,   ‖ỹ‖ ≤ z

LP ⊂ SOCP:

    ‖0‖ ≤ ⟨A_{i,•}, x⟩ − b_i ,   ∀i

C. Schnörr — CVGPR Group, Dept. Math. & Comp. Science page 63


Optimiz. & Comp. Vision 7 Convex Optimization

Example: Sparse non-negative factorization

[Figure: example data for sparse non-negative factorization]

How to determine both basis functions and coefficient characteristics in an unsupervised way?

C. Schnörr — CVGPR Group, Dept. Math. & Comp. Science page 64


Optimiz. & Comp. Vision 7 Convex Optimization

Example (cont’d)

    min_{W,H} ‖V − WH‖² ,   s_w^min ≤ sp(W) ≤ s_w^max ,   s_h^min ≤ sp(H^T) ≤ s_h^max

Sparsity measure (applied column-wise to the matrices above):

    sp(x) := (1/(√n − 1)) ( √n − ‖x‖_1 / ‖x‖_2 ) ∈ [0, 1]

x ∈ R^n_+ ⇒ ‖x‖_1 = ⟨e, x⟩. Thus

    sp(x) ≤ s  ⇔  ( √n − (√n − 1)s ) ‖x‖_2 ≤ ‖x‖_1

So x ∈ R^n_+ , sp(x) ≤ s equals x ∈ R^n_+ ∩ C(s) with

    C(s) = { x ∈ R^n | ( x , c_{n,s}^{-1} ⟨e, x⟩ ) ∈ L^{n+1} } ,   c_{n,s} = √n − (√n − 1)s

Optimizing a single factor under the upper sparsity bound ⇒ SOCP (see the sketch below)
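A sketch with cvxpy of fitting one non-negative factor h under the upper sparsity bound, written as the second-order cone constraint c_{n,s} ‖h‖_2 ≤ ⟨e, h⟩ (toy data; W and v are illustrative):

import cvxpy as cp
import numpy as np

rng = np.random.default_rng(6)
m, n, s = 40, 10, 0.5
W = np.abs(rng.standard_normal((m, n)))
v = np.abs(rng.standard_normal(m))

c_ns = np.sqrt(n) - (np.sqrt(n) - 1) * s
h = cp.Variable(n)
cons = [h >= 0, c_ns * cp.norm(h, 2) <= cp.sum(h)]   # h in R^n_+ cap C(s)
cp.Problem(cp.Minimize(cp.norm(v - W @ h, 2)), cons).solve()
print(np.round(h.value, 3))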

C. Schnörr — CVGPR Group, Dept. Math. & Comp. Science page 65


Optimiz. & Comp. Vision 7 Convex Optimization

Example (cont’d)
[Figure: factorization results under sparsity constraints]

C. Schnörr — CVGPR Group, Dept. Math. & Comp. Science page 66


Optimiz. & Comp. Vision 7 Convex Optimization

Semidefinite programming (SDP)

    min_{x∈R^n} ⟨c, x⟩ ,  Ax − B ≥_K 0   ⇔   min_{x∈R^n} ⟨c, x⟩ ,  ∑_{i=1}^n x_i A_i − B ⪰ 0

Dual program:

    max_{Y∈S^m} tr(BY) ,   A^T Y = c ,   Y ≥_K 0

    ⇔  max_{Y∈S^m} tr(BY) ,   ( tr(A_1 Y), . . . , tr(A_n Y) )^T = c ,   Y ⪰ 0

SOCP ⊂ SDP:

    ( x ; t ) ∈ L^n   ⇔   ( tI_{n−1}  x ; x^T  t ) ⪰ 0
C. Schnörr — CVGPR Group, Dept. Math. & Comp. Science page 67


Optimiz. & Comp. Vision 7 Convex Optimization

Example: Low-dimensional flat euclidean embedding

C. Schnörr — CVGPR Group, Dept. Math. & Comp. Science page 68


Optimiz. & Comp. Vision 7 Convex Optimization

Example: Low-dimensional flat euclidean embedding

Given a neighborhood graph G = (V, E), compute an embedding

    R^n ∋ x_i → y_i ∈ R^m ,   m < n ,  i = 1, . . . , p

such that

    K ⪰ 0 ,  K_{i,j} = ⟨y_i, y_j⟩              (euclidean embedding)

    ‖y_i − y_j‖ = ‖x_i − x_j‖ ,  ∀ij ∈ E       (local isometry)

    ∑_{i,j} ‖y_i − y_j‖² → max ,  i, j ∈ V     (unfolding)

Additionally, impose ∑_i y_i = 0 for centering the y_i.

C. Schnörr — CVGPR Group, Dept. Math. & Comp. Science page 69


Optimiz. & Comp. Vision 7 Convex Optimization

Example (cont’d)
Put d_{ij} = ‖x_i − x_j‖² and express the constraints in terms of K:

    ‖y_i − y_j‖² = K_{i,i} − 2K_{i,j} + K_{j,j} = d_{ij}

    (1/2p) ∑_{i,j} ‖y_i − y_j‖² = ∑_i ‖y_i‖² = ∑_i K_{i,i}

    ∑_{i,j} ⟨y_i, y_j⟩ = ∑_{i,j} K_{i,j} = 0   (centering)

SDP in dual form:

    max_{K∈S^p} tr K ,   K ⪰ 0

    K_{i,i} − 2K_{i,j} + K_{j,j} = d_{ij} ,   ∀ij ∈ E

    ∑_{i,j=1}^p K_{i,j} = 0
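A sketch of this SDP with cvxpy on a tiny path graph as the neighborhood graph (all names are ours):

import cvxpy as cp
import numpy as np

x = np.array([[0.0], [1.0], [2.0], [3.0]])     # p = 4 points
E = [(0, 1), (1, 2), (2, 3)]
p = len(x)

K = cp.Variable((p, p), PSD=True)
cons = [cp.sum(K) == 0]
cons += [K[i, i] - 2 * K[i, j] + K[j, j] == np.sum((x[i] - x[j])**2)
         for (i, j) in E]
cp.Problem(cp.Maximize(cp.trace(K)), cons).solve()

# read off the embedding from the top eigenvector of the optimal Gram matrix
w, U = np.linalg.eigh(K.value)
print(np.round(U[:, -1] * np.sqrt(max(w[-1], 0.0)), 3))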

C. Schnörr — CVGPR Group, Dept. Math. & Comp. Science page 70


Optimiz. & Comp. Vision 8 Non-Convex Optimization

8 – Non-Convex Optimization
Solvable binary problems

    min_x c(x) ,  x ∈ F (discrete!)               (general problem formulation)

    min_x ⟨c, x⟩ ,  Ax = b ,  x ∈ {0, 1}^n        (typical formulation; A, b integral)

    min_x ⟨c, x⟩ ,  Ax = b ,  x ∈ [0, 1]^n        (LP-relaxation)

Ideal situation: the linear relaxation amounts to min_x c(x) ,  x ∈ conv F

Example: Linear assignment problem (cf. page 26)

    min_X ∑_{i,j} c_{i,j} x_{i,j} = min_X tr( C^T X )

    ∑_i x_{i,j} = ∑_j x_{i,j} = 1 ,   x_{i,j} ∈ {0, 1}

C. Schnörr — CVGPR Group, Dept. Math. & Comp. Science page 71


Optimiz. & Comp. Vision 8 Non-Convex Optimization

Integral polyhedra
A ∈ Z^{m×n} , b ∈ Z^m ; ideal situation:

    { x ∈ R^n_+ | Ax ≤ b } = conv { x ∈ Z^n_+ | Ax ≤ b }

if and only if A is totally unimodular, i.e. the determinant of each square submatrix is 0, 1, or −1.

    Ex.:   ( −1  1 ; −1  0 ) x ≤ ( 2 ; −3 )

Example: Matching in bipartite graphs (A: incidence matrix)

    max_{x∈R^n_+} ⟨c, x⟩ ,   Ax ≤ 1

C. Schnörr — CVGPR Group, Dept. Math. & Comp. Science page 72


Optimiz. & Comp. Vision 8 Non-Convex Optimization

Submodular functions, flows/cuts in networks

Global optimum of a binary functional (regular grid-graph G = (V, E)):

    J(x) = ∑_{i∈V} ( (1 − x_i)u_0 + x_i u_1 − y_i )² + λ ∑_{ij∈E} (x_i − x_j)²

C. Schnörr — CVGPR Group, Dept. Math. & Comp. Science page 73


Optimiz. & Comp. Vision 8 Non-Convex Optimization

Approach (overview)

• Regard functionals of the class

    J(x) = ∑_{i∈V} J^i(x_i) + ∑_{ij∈E} J^{ij}(x_i, x_j) ,   x_i ∈ {0, 1} ,  ∀i ∈ V

  as set functions J : 2^V → R_+ ,   V = { i ∈ V | x_i = 0 } ∪ { i ∈ V | x_i = 1 } = S ∪ (V \ S)

• Design J to be submodular:

    J^{ij}(0, 0) + J^{ij}(1, 1) ≤ J^{ij}(0, 1) + J^{ij}(1, 0) ,   ∀ij ∈ E

• Construct a network (D, c, s, t) having J as cut function.

• Solve for the minimum-cut edges defining the optimal partition of V, i.e. the global optimum x* of J.

C. Schnörr — CVGPR Group, Dept. Math. & Comp. Science page 74


Optimiz. & Comp. Vision 8 Non-Convex Optimization

Networks, cuts
A network (D, c, s, t) is a digraph D = (V, E) with edge capacities c : E → R_+ and two specified vertices s ∈ V (source) and t ∈ V \ {s} (sink, target).
Example. Vertices s, t are marked in black, and the capacities c(e) , e ∈ E, are depicted as edge labels:

[Figure: example network with edge capacities]

C. Schnörr — CVGPR Group, Dept. Math. & Comp. Science page 75


Optimiz. & Comp. Vision 8 Non-Convex Optimization

An LP with the TU-matrix A gives the maximum flow from s to t subject to the capacity constraints. Efficient dedicated algorithms exist.
For any S ⊂ V, the edge set

    δ⁺(S) = { ij ∈ E | i ∈ S ,  j ∉ S }

is called a directed cut. A minimum cut δ⁺(S) minimizes the cut function

    f(S) = c( δ⁺(S) ) := ∑_{e∈δ⁺(S)} c(e)

Cut functions are submodular:

    f(S ∪ T) + f(S ∩ T) ≤ f(S) + f(T) ,   ∀S, T ⊆ V

Equivalent condition (V finite):

    f( S ∪ {i, j} ) + f(S) ≤ f( S ∪ {i} ) + f( S ∪ {j} ) ,   ∀S ⊆ V ,  ∀i, j ∈ V \ S

C. Schnörr — CVGPR Group, Dept. Math. & Comp. Science page 76


Optimiz. & Comp. Vision 8 Non-Convex Optimization

Theorem: Max-Flow Min-Cut (Ford-Fulkerson 1956)

Maximum value of a flow = minimum capacity of a cut.
As a result, any maximum-flow algorithm can be used to partition V: the parents and children of the minimum-cut edges partition V (see the sketch below).

[Figure: minimum cut in the example network]

Construction of the network D from the functional J: Kolmogorov-Zabih’04
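A sketch with networkx (assumed available): a maximum-flow computation yields a minimum cut and thereby the optimal binary partition.

import networkx as nx

G = nx.DiGraph()
G.add_edge("s", "a", capacity=3.0)
G.add_edge("s", "b", capacity=2.0)
G.add_edge("a", "b", capacity=1.0)
G.add_edge("a", "t", capacity=2.0)
G.add_edge("b", "t", capacity=3.0)

flow_value, _ = nx.maximum_flow(G, "s", "t")
cut_value, (S, T) = nx.minimum_cut(G, "s", "t")
print(flow_value, cut_value, S, T)             # equal values, by the theorem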

C. Schnörr — CVGPR Group, Dept. Math. & Comp. Science page 77


Optimiz. & Comp. Vision 8 Non-Convex Optimization

Binary (quadratic) optimization and semidefinite relaxation

Saliency measure

    E(p) = − ∑_{i,j} w(i, j) p_i p_j + λ ( ∑_i p_i )² ,   p ∈ {0, 1}^n

Cannot (in general) be optimized with graph cuts!

C. Schnörr — CVGPR Group, Dept. Math. & Comp. Science page 78


Optimiz. & Comp. Vision 8 Non-Convex Optimization

Substituting x = 2p − e ∈ {−1, +1}^n:

    J(x) = (1/4)⟨x, (λee^T − W)x⟩ + (1/2)⟨e, (λnI − W)x⟩

Lifting problem variables into matrix space:

    J(x) = ⟨x, Qx⟩ + 2⟨b, x⟩ = (x, 1)^T ( Q  b ; b^T  0 ) (x, 1) = x̃^T L x̃ = tr( L x̃ x̃^T )

Semidefinite relaxation (x_i² = 1):

    min_{X∈S^{n+1}} tr(LX) ,   X ⪰ 0 ,   diag(X) = e
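A sketch of the relaxation with cvxpy (a random symmetric L stands in for the lifted matrix; rounding by the sign of the last column of X is one common heuristic):

import cvxpy as cp
import numpy as np

rng = np.random.default_rng(7)
n = 8
M = rng.standard_normal((n + 1, n + 1))
L = 0.5 * (M + M.T)

X = cp.Variable((n + 1, n + 1), PSD=True)
prob = cp.Problem(cp.Minimize(cp.trace(L @ X)), [cp.diag(X) == 1])
prob.solve()

x_tilde = np.sign(X.value[:, -1])              # heuristic rounding to {-1, +1}
print(prob.value, x_tilde)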

C. Schnörr — CVGPR Group, Dept. Math. & Comp. Science page 79


Optimiz. & Comp. Vision 8 Non-Convex Optimization

The feasible set for

    X = ( 1  a  b ; a  1  c ; b  c  1 ) ⪰ 0

is non-polyhedral but convex!

[Figure: the set of feasible (a, b, c), a convex non-polyhedral body]

C. Schnörr — CVGPR Group, Dept. Math. & Comp. Science page 80


Optimiz. & Comp. Vision 8 Non-Convex Optimization

Auxiliary variables
Non-convex local data term and convex non-local regularization: step-size, convergence, ...?
Example (coherent particle matching in fluids)
C. Schnörr — CVGPR Group, Dept. Math. & Comp. Science page 81


Optimiz. & Comp. Vision 8 Non-Convex Optimization

Approach
Represent the non-convex function f(x) through auxiliary variables y:

    f(x) = (1/λ) inf_y φ(x, y)

such that both the local minimization inf_y φ and the non-local minimization inf_x f = λ^{-1} inf_x { inf_y φ } (+ other problem terms in x) are convex.
Ass.: ∃λ ∈ R_+ such that (1/2)‖x‖² − λf(x) is convex. Ansatz:

    λf(x) = inf_y { (1/2)‖x − y‖² + g(y) } = inf_y { (1/2)‖x‖² − ⟨x, y⟩ + (1/2)‖y‖² + g(y) }

          = (1/2)‖x‖² − sup_y { ⟨x, y⟩ − ( (1/2)‖y‖² + g(y) ) } ,   h(y) := (1/2)‖y‖² + g(y)

          = (1/2)‖x‖² − h*(x)

C. Schnörr — CVGPR Group, Dept. Math. & Comp. Science page 82


Optimiz. & Comp. Vision 8 Non-Convex Optimization

Thus, h*(x) = (1/2)‖x‖² − λf(x). g(y), in turn, is determined by (1/2)‖y‖² + g(y) = h**(y) = sup_x { ⟨x, y⟩ − h*(x) }. Result:

    φ(x, y) = (1/2)‖x‖² − ⟨x, y⟩ + h**(y)

f(x) as inf-projection of the biconvex (≠ convex!) function λ^{-1}φ(x, y):

C. Schnörr — CVGPR Group, Dept. Math. & Comp. Science page 83


Optimiz. & Comp. Vision 8 Non-Convex Optimization

Example

(Ruhnau et al., Measurement Science and Technology 16:1449–1458, 2005)

C. Schnörr — CVGPR Group, Dept. Math. & Comp. Science page 84


Optimiz. & Comp. Vision 8 Non-Convex Optimization

DC-Programming
Example: Robust linear classification with feature selection

    inf_{w∈R^n, b∈R} (1 − λ) ∑_{i=1}^n ( 1 − y_i(⟨w, x_i⟩ + b) )_+ + λ‖w‖_0

‖w‖_0 := |{ i | w_i ≠ 0 }| “counts” features. Concave approximation:

    ∑_i ( 1 − e^{−αv_i} ) ,   |w_i| ≤ v_i

DC-program:

    min_{w,b,ξ,v} (1 − λ)⟨e, ξ⟩ + λ⟨ e, e − exp(−αv) ⟩

    ξ_i ≥ 1 − y_i( ⟨w, x_i⟩ + b ) ,  ∀i
    ξ ≥ 0 ,   −v ≤ w ≤ v

C. Schnörr — CVGPR Group, Dept. Math. & Comp. Science page 85


Optimiz. & Comp. Vision 8 Non-Convex Optimization

Example: Binary transmission tomography

Large-scale binary quadratic optimization (graph cuts do not apply):

    inf_{x∈{0,1}^n} { (1/2)‖Ax − b‖² + λ⟨x, Lx⟩ }

Relax {0, 1}^n → [0, 1]^n and impose the concave penalty

    −(μ/2) ⟨x, x − e⟩

DC-program:

    inf_{x∈[0,1]^n} { (1/2)‖Ax − b‖² + λ⟨x, Lx⟩ − (μ/2)⟨x, x − e⟩ }

C. Schnörr — CVGPR Group, Dept. Math. & Comp. Science page 86


Optimiz. & Comp. Vision 8 Non-Convex Optimization

DC-Program: g, h proper, lsc, convex

    inf_{x∈X} f(x) ,   f(x) = g(x) − h(x)

Given x^k, compute the closest affine majorization of the concave part:

    −h(x) ≤ −⟨y, x⟩ + h*(y) ,  ∀y ;     y^k ∈ argmin_y { h*(y) − ⟨y, x^k⟩ } = ∂h(x^k)

Convex program:

    x^{k+1} ∈ argmin_x { g(x) − ( h(x^k) + ⟨y^k, x − x^k⟩ ) } = ∂g*(y^k)

Decreasing sequence (the inf above is attained):

    inf_x { g(x) − h(x) } ≤ g(x^{k+1}) − h(x^{k+1}) ≤ g(x^{k+1}) − ( ⟨y^k, x^{k+1}⟩ − h*(y^k) )

                          ≤ g(x^{k+1}) − ⟨y^k, x^{k+1}⟩ + ⟨y^k, x^k⟩ − h(x^k)

                          ≤ g(x^k) − h(x^k)
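A sketch of the iteration (scipy assumed) for the relaxed tomography-type model with λ = 0, i.e. g(x) = (1/2)‖Ax − b‖² + δ_{[0,1]^n}(x) and h(x) = (μ/2)⟨x, x − e⟩; the data is toy:

import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(8)
m, n, mu = 40, 15, 0.5
A = rng.standard_normal((m, n))
x_true = (rng.random(n) > 0.5).astype(float)
b = A @ x_true

x = 0.5 * np.ones(n)
for _ in range(20):
    yk = mu * x - 0.5 * mu * np.ones(n)        # y^k = grad h(x^k)

    def fg(z, yk=yk):                          # convex subproblem g(x) - <y^k, x>
        r = A @ z - b
        return 0.5 * r @ r - yk @ z, A.T @ r - yk

    x = minimize(fg, x, jac=True, bounds=[(0.0, 1.0)] * n,
                 method="L-BFGS-B").x

print(np.round(x), x_true)                     # iterates are pushed toward {0, 1}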

C. Schnörr — CVGPR Group, Dept. Math. & Comp. Science page 87


Optimiz. & Comp. Vision 8 Non-Convex Optimization

Example
Tomographic reconstruction of a binary 256³ volume from 5 projections:

C. Schnörr — CVGPR Group, Dept. Math. & Comp. Science page 88
