Approximation Algorithms: Slides by Kevin Wayne. All Rights Reserved
Approximation
Algorithms
Approximation Algorithms
ρ-approximation algorithm.
Guaranteed to run in poly-time.
Guaranteed to solve arbitrary instance of the problem.
Guaranteed to find solution within ratio ρ of true optimum.
11.1 Load Balancing
Load Balancing

Input. m identical machines; n jobs, job j has processing time tj.
Job j must run contiguously on one machine.
A machine can process at most one job at a time.

Def. Let J(i) be the subset of jobs assigned to machine i. The load of machine i is Li = Σ_{j ∈ J(i)} tj.
Def. The makespan is the maximum load on any machine: L = max_i Li.

Load balancing. Assign each job to a machine so as to minimize the makespan.
Load Balancing: List Scheduling
List-scheduling algorithm.
Consider n jobs in some fixed order.
Assign job j to the machine whose load is smallest so far.

List-Scheduling(m, n, t1, t2, …, tn) {
   for i = 1 to m {
      Li ← 0                        load on machine i
      J(i) ← ∅                      jobs assigned to machine i
   }
   for j = 1 to n {
      i = argmin_k Lk               machine i has smallest load
      J(i) ← J(i) ∪ {j}             assign job j to machine i
      Li ← Li + tj                  update load of machine i
   }
   return J(1), …, J(m)
}
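As a concrete sketch, the pseudocode maps directly onto Python. The heap for the argmin step and all names below are implementation choices of this sketch, not part of the slides:

```python
import heapq

def list_scheduling(m, times):
    """Greedy list scheduling: assign each job to the least-loaded machine."""
    heap = [(0, i) for i in range(m)]  # (load, machine index) pairs
    J = [[] for _ in range(m)]         # J[i] = jobs assigned to machine i
    load = [0] * m
    for j, t in enumerate(times):
        L, i = heapq.heappop(heap)     # machine i has smallest load
        J[i].append(j)                 # assign job j to machine i
        load[i] = L + t                # update load of machine i
        heapq.heappush(heap, (load[i], i))
    return J, max(load)                # assignment and makespan

# Hard instance from the analysis below: m = 10, 90 unit jobs, then one job of length 10.
_, makespan = list_scheduling(10, [1] * 90 + [10])
print(makespan)  # 19, while the optimal makespan is 10
```

Each of the n jobs costs one heap pop and push, so the sketch runs in O(n log m) time.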
Load Balancing: List Scheduling Analysis

Theorem. [Graham, 1966] Greedy list scheduling is a 2-approximation.

Lemma 1. The optimal makespan L* ≥ max_j tj.
Pf. Some machine must process the most time-consuming job. ▪

Lemma 2. The optimal makespan L* ≥ (1/m) Σ_j tj.
Pf. The total processing time is Σ_j tj; one of the m machines must do at least a 1/m fraction of the total work. ▪
Load Balancing: List Scheduling Analysis

[Figure: the schedule on bottleneck machine i; its last job j starts at time Li − tj and finishes at time L = Li.]
Load Balancing: List Scheduling Analysis

Pf. of Theorem. Consider the load Li of the bottleneck machine i, and let j be the last job scheduled on machine i. When job j was assigned, machine i had the smallest load, so its load before the assignment satisfied Li − tj ≤ Lk for all 1 ≤ k ≤ m. Averaging over all m machines:

   Li − tj ≤ (1/m) Σ_{k=1}^{m} Lk = (1/m) Σ_{k=1}^{n} tk ≤ L*        (Lemma 2)

By Lemma 1, tj ≤ L*. Now Li = (Li − tj) + tj ≤ L* + L* = 2L*. ▪
Q. Is our analysis tight?
Load Balancing: List Scheduling Analysis

A. Essentially yes.
Ex: m = 10 machines; the first m(m − 1) = 90 jobs have length 1, and the final job has length m = 10.

[Figure: list scheduling puts 9 unit jobs on each of the 10 machines, then the length-10 job on machine 1; machines 2 through 10 are idle while machine 1 finishes it. List scheduling makespan = 19.]
Load Balancing: List Scheduling Analysis

m = 10
[Figure: the optimal schedule for the same instance puts the length-10 job on its own machine and 10 unit jobs on each of the remaining machines. Optimal makespan = 10.]
Load Balancing: LPT Rule
Longest processing time (LPT). Sort the n jobs in descending order of processing time, then run the list scheduling algorithm.

LPT-List-Scheduling(m, n, t1, t2, …, tn) {
   Sort jobs so that t1 ≥ t2 ≥ … ≥ tn
   for i = 1 to m {
      Li ← 0                        load on machine i
      J(i) ← ∅                      jobs assigned to machine i
   }
   for j = 1 to n {
      i = argmin_k Lk               machine i has smallest load
      J(i) ← J(i) ∪ {j}             assign job j to machine i
      Li ← Li + tj                  update load of machine i
   }
   return J(1), …, J(m)
}
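A sketch of the LPT rule in Python (names are mine). On the hard instance from the list-scheduling analysis (m = 10 machines, 90 unit jobs plus one job of length 10), LPT achieves the optimal makespan:

```python
import heapq

def lpt_scheduling(m, times):
    """LPT rule: sort jobs in descending order, then greedy list scheduling."""
    heap = [(0, i) for i in range(m)]      # (load, machine index) pairs
    load = [0] * m
    for t in sorted(times, reverse=True):  # longest processing time first
        L, i = heapq.heappop(heap)         # machine i has smallest load
        load[i] = L + t                    # update load of machine i
        heapq.heappush(heap, (load[i], i))
    return max(load)                       # makespan

print(lpt_scheduling(10, [1] * 90 + [10]))  # 10: the length-10 job is placed first
```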
Load Balancing: LPT Rule

Observation. If there are at most m jobs, list scheduling is optimal.
Pf. Each job is put on its own machine. ▪

Lemma 3. If there are more than m jobs, L* ≥ 2 t_{m+1}.
Pf. Consider the first m+1 jobs t1, …, t_{m+1}. Since the ti's are in descending order, each takes at least t_{m+1} time. There are m+1 jobs and only m machines, so some machine receives two of them. ▪

Theorem. LPT rule is a 3/2-approximation algorithm.
Pf. Same basic approach as for list scheduling, but now tj ≤ (1/2) L* for the last job j on the bottleneck machine: by the observation we can assume the number of jobs > m, so j ≥ m+1 and tj ≤ t_{m+1} ≤ (1/2) L* by Lemma 3. Hence

   Li = (Li − tj) + tj ≤ L* + (1/2) L* = (3/2) L*. ▪
Load Balancing: LPT Rule

Q. Is our 3/2 analysis tight? A. No.
Theorem. [Graham, 1969] LPT rule is a 4/3-approximation.
Ex: m machines, n = 2m+1 jobs: 2 jobs each of length m, m+1, …, 2m−1, and one more job of length m.
11.2 Center Selection
Center Selection Problem

Input. Set of n sites s1, …, sn and an integer k > 0.
Center selection problem. Select k centers C so that the maximum distance from a site to its nearest center is minimized.

[Figure: k = 4 centers chosen for a set of sites; every site lies within the covering radius r(C) of some center.]
Center Selection Problem
Notation.
dist(x, y) = distance between x and y.
dist(si, C) = min_{c ∈ C} dist(si, c) = distance from si to closest center.
r(C) = max_i dist(si, C) = smallest covering radius.
Center Selection Example
Ex: each site is a point in the plane, a center can be any point in the
plane, dist(x, y) = Euclidean distance.
[Figure: sites in the plane with one center; r(C) is the covering radius.]
Greedy Algorithm: A False Start
Greedy algorithm. Put the first center at the best possible location
for a single center, and then keep adding centers so as to reduce the
covering radius each time by as much as possible.
[Figure: k = 2; the sites form two widely separated clusters, but this greedy rule puts the first center (greedy center 1) in between them.]
Remark: arbitrarily bad!
Center Selection: Greedy Algorithm
Greedy-Center-Selection(k, n, s1, s2, …, sn) {
   C ← ∅
   repeat k times {
      Select a site si with maximum dist(si, C)       site farthest from any center
      Add si to C
   }
   return C
}

Property. Upon termination, all centers in C are pairwise at distance ≥ r(C).
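A sketch of the greedy rule for sites in the plane under Euclidean distance (all names are mine). Since dist(si, ∅) = ∞, the first pick is an arbitrary site:

```python
import math

def greedy_center_selection(k, sites):
    """Repeatedly add the site farthest from the current centers."""
    centers = [sites[0]]                   # first center: arbitrary site
    dist = [math.dist(s, centers[0]) for s in sites]  # dist(si, C) for each site
    for _ in range(k - 1):
        i = max(range(len(sites)), key=lambda j: dist[j])  # farthest site
        centers.append(sites[i])
        # update each site's distance to its closest center
        dist = [min(d, math.dist(s, sites[i])) for s, d in zip(sites, dist)]
    return centers, max(dist)              # centers and covering radius r(C)

# Two clusters of sites: one center per cluster covers everything within distance 1.
sites = [(0, 0), (0, 1), (10, 0), (10, 1)]
centers, r = greedy_center_selection(2, sites)
print(r)  # 1.0
```

Maintaining the per-site distances incrementally gives O(kn) distance evaluations overall.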
Center Selection: Analysis of Greedy Algorithm

Theorem. Let C* be an optimal set of centers. Then r(C) ≤ 2 r(C*).
Pf. (by contradiction) Assume r(C*) < (1/2) r(C).
Each greedy center ci ∈ C is itself a site, so its closest optimal center ci* satisfies dist(ci, ci*) ≤ r(C*) < (1/2) r(C).
The greedy centers are pairwise ≥ r(C) apart, so distinct ci's map to distinct ci*'s; since |C*| = k, every optimal center is ci* for some ci.
Now consider any site s, and let ci* be its closest center in C*:

   dist(s, C) ≤ dist(s, ci) ≤ dist(s, ci*) + dist(ci*, ci) ≤ r(C*) + r(C*) = 2 r(C*) < r(C).

Taking the maximum over all sites s gives r(C) < r(C), a contradiction. ▪
Center Selection

Theorem. The greedy algorithm is a 2-approximation for the center selection problem.
Remark. The greedy algorithm always places centers at sites, yet it is still within a factor of 2 of the best solution that may place centers anywhere.
Theorem. Unless P = NP, there is no ρ-approximation for center selection for any ρ < 2.
11.4 The Pricing Method: Vertex Cover
Weighted Vertex Cover

Weighted vertex cover. Given a graph G with vertex weights, find a vertex cover of minimum weight.

[Figure: two vertex covers of the same weighted graph, one of weight 2 + 2 + 4 = 8 and one of weight 2 + 9 = 11.]
Pricing Method

Pricing method. Each edge must be covered by some vertex. Edge e = (i, j) pays price pe ≥ 0 to use both vertex i and vertex j.
Fairness. Edges incident to vertex i should pay ≤ wi in total: for every vertex i, Σ_{e=(i,j)} pe ≤ wi.
Def. Vertex i is tight if Σ_{e=(i,j)} pe = wi.

Weighted-Vertex-Cover-Approx(G, w) {
   foreach e in E
      pe ← 0
   while (∃ edge i–j such that neither i nor j is tight)
      select such an edge e
      increase pe as much as possible until i or j becomes tight
   S ← set of all tight nodes
   return S
}
Pricing Method

[Figure: a weighted graph with vertex weights 2, 4, 2, 9 and fair edge prices.]

Lemma. For any vertex cover S and any fair prices pe: Σ_e pe ≤ w(S).
Pf. Each edge e is covered by at least one node in S; summing the fairness inequalities over the nodes in S:

   Σ_{e ∈ E} pe ≤ Σ_{i ∈ S} Σ_{e=(i,j)} pe ≤ Σ_{i ∈ S} wi = w(S). ▪
Pricing Method

Figure 11.8. Example showing that the pricing method does not provide the optimal weighted vertex cover solution.
Pricing Method: Analysis

Theorem. The pricing method is a 2-approximation.
Pf. The algorithm terminates, since at least one new node becomes tight in each iteration of the while loop. Let S be the set of all tight nodes upon termination; S is a vertex cover, since the price of any edge with two non-tight endpoints could still be increased. Let S* be an optimal vertex cover. Then

   w(S) = Σ_{i∈S} wi = Σ_{i∈S} Σ_{e=(i,j)} pe ≤ Σ_{i∈V} Σ_{e=(i,j)} pe = 2 Σ_{e∈E} pe ≤ 2 w(S*)

(all nodes in S are tight; S ⊆ V and prices ≥ 0; each edge counted twice; fairness lemma). ▪
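The pricing method can be sketched in a few lines of Python (graph representation and names are mine). A single pass over the edges suffices: once a vertex becomes tight it stays tight, so every edge is either skipped because an endpoint is already tight or processed to make one tight:

```python
def pricing_vertex_cover(weights, edges):
    """2-approximate weighted vertex cover via the pricing method."""
    slack = dict(weights)                    # wi minus total price of incident edges
    for (i, j) in edges:
        if slack[i] > 0 and slack[j] > 0:    # neither endpoint tight yet
            delta = min(slack[i], slack[j])  # raise pe until i or j becomes tight
            slack[i] -= delta
            slack[j] -= delta
    return {v for v in weights if slack[v] == 0}  # tight nodes form a vertex cover

# Star graph: center a (weight 4), three leaves of weight 3 each.
weights = {'a': 4, 'b': 3, 'c': 3, 'd': 3}
edges = [('a', 'b'), ('a', 'c'), ('a', 'd')]
print(sorted(pricing_vertex_cover(weights, edges)))  # ['a', 'b']: weight 7 <= 2 * 4
```

Here the optimum is {a} with weight 4, and the pricing method returns weight 7, within the guaranteed factor of 2.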
11.6 LP Rounding: Vertex Cover
Weighted Vertex Cover
10 A F
6 9
16 B G
7 10
6 C
3 H 9
23 D I 33
7 E J
10 32
total weight = 55
29
Weighted Vertex Cover: IP Formulation

Integer programming formulation. Model inclusion of each vertex i using a 0/1 variable xi: xi = 0 if vertex i is not in the vertex cover, and xi = 1 if it is. Vertex covers are in 1–1 correspondence with 0/1 assignments satisfying xi + xj ≥ 1 for each edge (i, j).
Weighted Vertex Cover: IP Formulation

   (ILP)  min  Σ_{i ∈ V} wi xi
          s.t. xi + xj ≥ 1      for all (i, j) ∈ E
               xi ∈ {0, 1}      for all i ∈ V

Observation. If x* is an optimal solution to (ILP), then S = {i ∈ V : x*i = 1} is a min-weight vertex cover.
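For a small graph, the observation can be checked by brute force over all 0/1 assignments. This is a sketch for illustration only (exponential in |V|; the names are mine):

```python
from itertools import product

def min_weight_vertex_cover(weights, edges):
    """Solve (ILP) by enumerating all 0/1 assignments x."""
    n = len(weights)
    best, best_x = float('inf'), None
    for x in product((0, 1), repeat=n):
        if all(x[i] + x[j] >= 1 for (i, j) in edges):  # every edge covered
            cost = sum(w * xi for w, xi in zip(weights, x))
            if cost < best:
                best, best_x = cost, x
    return best, {i for i in range(n) if best_x[i] == 1}

# Triangle with weights 1, 2, 3: the best cover is the two cheapest vertices.
print(min_weight_vertex_cover([1, 2, 3], [(0, 1), (1, 2), (0, 2)]))  # (3, {0, 1})
```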
Integer Programming

INTEGER-PROGRAMMING. Given integers aij, bi, cj, find integers xj that satisfy:

   max  Σ_{j=1}^{n} cj xj            (in matrix form: max c^T x  s.t.  Ax ≥ b, x ≥ 0, x integral)
   s.t. Σ_{j=1}^{n} aij xj ≥ bi      for 1 ≤ i ≤ m
        xj ≥ 0                       for 1 ≤ j ≤ n
        xj integral                  for 1 ≤ j ≤ n

Observation. The vertex cover formulation proves that integer programming is an NP-hard search problem.
Linear Programming

Linear programming. Max/min a linear objective function subject to linear inequalities.

   (P)  max  Σ_{j=1}^{n} cj xj
        s.t. Σ_{j=1}^{n} aij xj ≥ bi     for 1 ≤ i ≤ m
             xj ≥ 0                      for 1 ≤ j ≤ n

Linear. No x², xy, arccos(x), x(1 − x), etc.
LP Feasible Region

LP geometry in 2D.

[Figure: feasible region in the (x1, x2) plane bounded by x1 = 0, x2 = 0, x1 + 2x2 = 6, and 2x1 + x2 = 6.]
LP Feasible Region
LP geometry in 3D.
Weighted Vertex Cover: LP Relaxation

   (LP)  min  Σ_{i ∈ V} wi xi
         s.t. xi + xj ≥ 1      for all (i, j) ∈ E
              xi ≥ 0           for all i ∈ V

Observation. The optimal value of (LP) is ≤ the optimal value of (ILP).
Pf. LP has fewer constraints. ▪
Weighted Vertex Cover

Theorem. If x* is an optimal solution to (LP), then S = {i ∈ V : x*i ≥ 1/2} is a vertex cover within a factor of 2 of the best possible.
Pf.
[S is a vertex cover] Consider an edge (i, j) ∈ E. Since x*i + x*j ≥ 1, either x*i ≥ 1/2 or x*j ≥ 1/2, so edge (i, j) is covered.
[S has desired cost] Let S* be an optimal vertex cover. Then

   Σ_{i ∈ S*} wi ≥ Σ_{i ∈ S} wi x*i ≥ (1/2) Σ_{i ∈ S} wi

(first inequality: LP is a relaxation; second inequality: x*i ≥ 1/2 for i ∈ S). ▪
Weighted Vertex Cover

Theorem. The rounding algorithm is a 2-approximation algorithm for weighted vertex cover.
Theorem. [Dinur–Safra, 2001] If P ≠ NP, then no ρ-approximation exists for ρ < 10√5 − 21 ≈ 1.3607, even with unit weights.
11.8 Knapsack Problem
Polynomial Time Approximation Scheme

PTAS. A (1 + ε)-approximation algorithm for any constant ε > 0.
This section. PTAS for knapsack problem via rounding and scaling.
Knapsack Problem

Knapsack problem.
Given n objects and a "knapsack."
Item i has value vi > 0 and weighs wi > 0.       (we'll assume wi ≤ W)
Knapsack has weight limit W.
Goal: fill knapsack so as to maximize total value.
Knapsack is NP-Complete

KNAPSACK. Given a finite set X, nonnegative weights wi, nonnegative values vi, a weight limit W, and a target value V, is there a subset S ⊆ X such that Σ_{i∈S} wi ≤ W and Σ_{i∈S} vi ≥ V?
Knapsack Problem: Dynamic Programming 1
Def. OPT(i, w) = max value subset of items 1,..., i with weight limit w.
Case 1: OPT does not select item i.
   OPT selects best of 1, …, i−1 using up to weight limit w.
Case 2: OPT selects item i.
   New weight limit = w − wi; OPT selects best of 1, …, i−1 using up to weight limit w − wi.

   OPT(i, w) = 0                                              if i = 0
             = OPT(i−1, w)                                    if wi > w
             = max{ OPT(i−1, w), vi + OPT(i−1, w − wi) }      otherwise

Running time. O(n W), where W = weight limit. Not polynomial in input size!
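The recurrence fills an (n+1) × (W+1) table; a minimal Python sketch (names are mine, the example instance has weight limit W = 11):

```python
def knapsack(values, weights, W):
    """O(n W) dynamic program: OPT[i][w] = max value of items 1..i within weight w."""
    n = len(values)
    OPT = [[0] * (W + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        vi, wi = values[i - 1], weights[i - 1]
        for w in range(W + 1):
            if wi > w:
                OPT[i][w] = OPT[i - 1][w]                 # item i does not fit
            else:
                OPT[i][w] = max(OPT[i - 1][w],            # Case 1: skip item i
                                vi + OPT[i - 1][w - wi])  # Case 2: take item i
    return OPT[n][W]

print(knapsack([1, 6, 18, 22, 28], [1, 2, 5, 6, 7], 11))  # 40 (take values 18 and 22)
```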
Knapsack Problem: Dynamic Programming II

Def. OPT(i, v) = min weight of a subset of items 1, …, i that yields value exactly v.

   OPT(i, v) = 0                                              if v = 0
             = ∞                                              if i = 0, v > 0
             = OPT(i−1, v)                                    if vi > v
             = min{ OPT(i−1, v), wi + OPT(i−1, v − vi) }      otherwise

Running time. O(n V*) = O(n² vmax), where V* = optimal value ≤ n vmax. Not polynomial in input size!

[Figure: example instance with weight limit W = 11.]
Knapsack: FPTAS

Intuition for approximation algorithm.
Round all values up to lie in a smaller range.
Run the dynamic programming algorithm II on the rounded instance.
Return the optimal items of the rounded instance.
Knapsack: FPTAS

Round up all values: v̄i = ⌈vi / θ⌉ · θ, where θ = ε vmax / n.

Theorem. If S is the solution found by our algorithm and S* is any other feasible solution, then (1 + ε) Σ_{i∈S} vi ≥ Σ_{i∈S*} vi.
Pf. Let S* be any feasible solution satisfying the weight constraint:

   Σ_{i∈S*} vi ≤ Σ_{i∈S*} v̄i          (always round up)
              ≤ Σ_{i∈S} v̄i            (solve rounded instance optimally)
              ≤ Σ_{i∈S} (vi + θ)       (never round up by more than θ)
              ≤ Σ_{i∈S} vi + nθ        (|S| ≤ n)
              ≤ (1 + ε) Σ_{i∈S} vi     (nθ = ε vmax, and vmax ≤ Σ_{i∈S} vi since the DP alg can take vmax alone) ▪

Note. The original problem assumes no individual item has weight wi exceeding the weight limit W all by itself, so the single most valuable item is always feasible.
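Putting the pieces together: round the values, run a value-indexed min-weight DP on the rounded instance, and return its optimal items. This is a sketch (all names are mine; it reuses the W = 11 example instance):

```python
import math

def knapsack_fptas(values, weights, W, eps):
    """(1+eps)-approximate knapsack: round values, then min-weight DP over values."""
    n = len(values)
    theta = eps * max(values) / n
    vhat = [math.ceil(v / theta) for v in values]   # scaled, rounded-up values
    V = sum(vhat)
    INF = float('inf')
    # opt[v] = min weight achieving scaled value exactly v; pick[v] = chosen items.
    opt = [0] + [INF] * V
    pick = [set()] + [None] * V
    for i in range(n):
        for v in range(V, vhat[i] - 1, -1):         # descend: each item used once
            if opt[v - vhat[i]] + weights[i] < opt[v]:
                opt[v] = opt[v - vhat[i]] + weights[i]
                pick[v] = pick[v - vhat[i]] | {i}
    best = max(v for v in range(V + 1) if opt[v] <= W)  # largest feasible value
    return pick[best]

values, weights = [1, 6, 18, 22, 28], [1, 2, 5, 6, 7]
items = knapsack_fptas(values, weights, 11, eps=0.5)
print(sum(values[i] for i in items))  # guaranteed >= OPT / (1 + eps); here 40 = OPT
```

With eps = 0.5 the guarantee is only 40 / 1.5 ≈ 26.7, but on this instance the rounded optimum happens to coincide with the true optimum.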