
13. RANDOMIZED ALGORITHMS
contention resolution
global min cut
linearity of expectation
max 3-satisfiability
universal hashing
Chernoff bounds
load balancing
Lecture slides by Kevin Wayne
Copyright 2005 Pearson-Addison Wesley
http://www.cs.princeton.edu/~wayne/kleinberg-tardos

Last updated on 2/3/16 9:47 AM

Randomization
Algorithmic design patterns.

Greedy.
Divide-and-conquer.
Dynamic programming.
Network flow.
Randomization.
Randomization. Allow a fair coin flip in unit time (in practice, access to a pseudo-random number generator).

Why randomize? Can lead to the simplest, fastest, or only known algorithm for a particular problem.

Ex. Symmetry-breaking protocols, graph algorithms, quicksort, hashing, load balancing, Monte Carlo integration, cryptography.

13. RANDOMIZED ALGORITHMS


contention resolution
global min cut
linearity of expectation
max 3-satisfiability
universal hashing
Chernoff bounds
load balancing

Contention resolution in a distributed system


Contention resolution. Given n processes P1, …, Pn, each competing for access to a shared database. If two or more processes access the database simultaneously, all processes are locked out. Devise a protocol to ensure all processes get through on a regular basis.

Restriction. Processes can't communicate.

Challenge. Need a symmetry-breaking paradigm.

[figure: processes P1, P2, …, Pn each connected to the shared database]

Contention resolution: randomized protocol


Protocol. Each process requests access to the database at time t with probability p = 1/n.

Claim. Let S[i, t] = event that process i succeeds in accessing the database at time t. Then 1/(en) ≤ Pr[S(i, t)] ≤ 1/(2n).

Pf. By independence, Pr[S(i, t)] = p (1 − p)^(n−1).
(process i requests access; none of the remaining n − 1 processes requests access)

Setting p = 1/n (the value that maximizes Pr[S(i, t)]), we have Pr[S(i, t)] = (1/n) (1 − 1/n)^(n−1), which lies between 1/(en) and 1/(2n).

Useful facts from calculus. As n increases from 2:
(1 − 1/n)^n converges monotonically from 1/4 up to 1/e.
(1 − 1/n)^(n−1) converges monotonically from 1/2 down to 1/e.
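To make the protocol concrete, here is a minimal Java simulation; the process count, variable names, and the comparison against 2en ln n are illustrative choices, not from the slides. Each process independently requests access with probability 1/n in every round, and a round is a success for process i exactly when i is the only requester.

import java.util.Random;

// Minimal simulation of the randomized contention-resolution protocol:
// each process requests access with probability p = 1/n in every round.
public class ContentionResolution {
    public static void main(String[] args) {
        int n = 50;                                // number of processes (illustrative)
        Random rng = new Random();
        boolean[] succeeded = new boolean[n];
        int remaining = n, rounds = 0;

        while (remaining > 0) {
            rounds++;
            int requester = -1, requests = 0;
            for (int i = 0; i < n; i++) {
                if (rng.nextDouble() < 1.0 / n) {  // process i flips its coin
                    requests++;
                    requester = i;
                }
            }
            // exactly one requester => that process gets through
            if (requests == 1 && !succeeded[requester]) {
                succeeded[requester] = true;
                remaining--;
            }
        }
        System.out.printf("all %d processes succeeded after %d rounds (2en ln n = %.0f)%n",
                          n, rounds, 2 * Math.E * n * Math.log(n));
    }
}

Typical runs finish well within the 2en ln n bound proved on the following slides.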

Contention Resolution: randomized protocol


Claim. The probability that process i fails to access the database in en rounds is at most 1/e. After en (c ln n) rounds, the probability is at most n^(−c).

Pf. Let F[i, t] = event that process i fails to access the database in rounds 1 through t. By independence and the previous claim, Pr[F[i, t]] ≤ (1 − 1/(en))^t.

Choose t = ⌈en⌉: Pr[F[i, t]] ≤ (1 − 1/(en))^⌈en⌉ ≤ (1 − 1/(en))^(en) ≤ 1/e.

Choose t = ⌈en⌉ ⌈c ln n⌉: Pr[F[i, t]] ≤ (1/e)^(c ln n) = n^(−c).

Contention Resolution: randomized protocol


Claim. The probability that all processes succeed within 2en ln n rounds is at least 1 − 1/n.

Pf. Let F[t] = event that at least one of the n processes fails to access the database in any of rounds 1 through t. By the union bound and the previous slide,

Pr[F[t]] = Pr[∪_{i=1..n} F[i, t]] ≤ Σ_{i=1..n} Pr[F[i, t]] ≤ n (1 − 1/(en))^t.

Choosing t = 2 ⌈en⌉ ⌈ln n⌉ yields Pr[F[t]] ≤ n · n^(−2) = 1/n.

Union bound. Given events E1, …, En, Pr[∪_{i=1..n} Ei] ≤ Σ_{i=1..n} Pr[Ei].

13. RANDOMIZED ALGORITHMS


contention resolution
global min cut
linearity of expectation
max 3-satisfiability
universal hashing
Chernoff bounds
load balancing

Global minimum cut


Global min cut. Given a connected, undirected graph G = (V, E),
find a cut (A, B) of minimum cardinality.
Applications. Partitioning items in a database, identifying clusters of related documents, network reliability, network design, circuit design, TSP solvers.

Network flow solution.
Replace every edge (u, v) with two antiparallel edges (u, v) and (v, u).
Pick some vertex s and compute a min s–v cut separating s from each other vertex v ∈ V.

False intuition. Global min cut is harder than min s–t cut.

Contraction algorithm
Contraction algorithm. [Karger 1995]
Pick an edge e = (u, v) uniformly at random.
Contract edge e.
- replace u and v by a single new super-node w
- preserve edges, updating endpoints of u and v to w
- keep parallel edges, but delete self-loops
Repeat until the graph has just two nodes v1 and v2.
Return the cut (all nodes that were contracted to form v1).

[figure: contracting edge u–v merges u and v into a super-node; parallel edges are kept, self-loops deleted]
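A minimal Java sketch of the contraction algorithm, together with the n² ln n repetition (amplification) analyzed on the following slides. The edge-list representation, the union-find bookkeeping for super-nodes, and the small example graph are illustrative assumptions, not from the slides.

import java.util.*;

// Sketch of Karger's contraction algorithm on an undirected multigraph
// given as a list of edges {u, v} over vertices 0..n-1.
public class Contraction {
    // union-find with path halving: tracks which super-node a vertex belongs to
    static int find(int[] parent, int x) {
        while (parent[x] != x) { parent[x] = parent[parent[x]]; x = parent[x]; }
        return x;
    }

    // one run: contract random edges until two super-nodes remain,
    // then return the number of edges crossing the resulting cut
    static int contract(int n, int[][] edges, Random rng) {
        int[] parent = new int[n];
        for (int i = 0; i < n; i++) parent[i] = i;
        int supernodes = n;
        while (supernodes > 2) {
            int[] e = edges[rng.nextInt(edges.length)];    // pick an edge uniformly at random
            int ru = find(parent, e[0]), rv = find(parent, e[1]);
            if (ru == rv) continue;                        // self-loop: skip (deleted)
            parent[ru] = rv;                               // contract: merge the two super-nodes
            supernodes--;
        }
        int cut = 0;
        for (int[] e : edges)                              // count edges between the two super-nodes
            if (find(parent, e[0]) != find(parent, e[1])) cut++;
        return cut;
    }

    public static void main(String[] args) {
        // small example graph: a 4-cycle plus one chord (global min cut = 2)
        int n = 4;
        int[][] edges = { {0,1}, {1,2}, {2,3}, {3,0}, {0,2} };
        Random rng = new Random();
        int best = Integer.MAX_VALUE;
        int trials = (int) Math.ceil(n * n * Math.log(n)); // amplification: n^2 ln n independent runs
        for (int t = 0; t < trials; t++)
            best = Math.min(best, contract(n, edges, rng));
        System.out.println("min cut found: " + best);
    }
}

Sampling from the original edge list and skipping self-loops is equivalent to sampling uniformly among the edges still present, since parallel edges stay in the list and self-loops are rejected.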


Contraction algorithm
Claim. The contraction algorithm returns a min cut with probability ≥ 2/n².

Pf. Consider a global min-cut (A*, B*) of G.
Let F* = edges with one endpoint in A* and the other in B*.
Let k = |F*| = size of the min cut.
In the first step, the algorithm contracts an edge in F* with probability k / |E|.
Every node has degree ≥ k, since otherwise (A*, B*) would not be a min-cut. Hence |E| ≥ ½ k n.
Thus, the algorithm contracts an edge in F* with probability ≤ 2/n.

[figure: cut (A*, B*) with crossing edges F*]

Contraction algorithm
Claim. The contraction algorithm returns a min cut with probability ≥ 2/n².

Pf. Consider a global min-cut (A*, B*) of G.
Let F* = edges with one endpoint in A* and the other in B*.
Let k = |F*| = size of the min cut.
Let G' be the graph after j iterations. There are n' = n − j supernodes.
Suppose no edge in F* has been contracted. The min cut in G' is still k.
Since the value of the min cut is k, |E'| ≥ ½ k n'.
Thus, the algorithm contracts an edge in F* with probability ≤ 2/n'.
Let Ej = event that an edge in F* is not contracted in iteration j. Then

Pr[E1 ∩ E2 ∩ … ∩ E_(n−2)] = Pr[E1] × Pr[E2 | E1] × … × Pr[E_(n−2) | E1 ∩ … ∩ E_(n−3)]
  ≥ (1 − 2/n) (1 − 2/(n−1)) … (1 − 2/4) (1 − 2/3)
  = ((n−2)/n) ((n−3)/(n−1)) … (2/4) (1/3)
  = 2 / (n (n−1))
  ≥ 2/n².

Contraction algorithm
Amplification. To amplify the probability of success, run the contraction algorithm many times, with independent random choices.

Claim. If we repeat the contraction algorithm n² ln n times, then the probability of failing to find the global min cut is at most 1/n².

Pf. By independence, the probability of failure is at most

(1 − 2/n²)^(n² ln n) = [ (1 − 2/n²)^(½ n²) ]^(2 ln n) ≤ (e^(−1))^(2 ln n) = 1/n²,

using (1 − 1/x)^x ≤ 1/e.

Contraction algorithm: example execution

[figure: six independent trials of the contraction algorithm; trial 5 finds the min cut]
Reference: Thore Husfeldt

Global min cut: context


Remark. The overall running time is slow, since we perform Θ(n² log n) iterations and each takes Θ(m) time.

Improvement. [Karger–Stein 1996] O(n² log³ n).
Early iterations are less risky than later ones: the probability of contracting an edge in the min cut hits 50% when n / √2 nodes remain.
Run the contraction algorithm until n / √2 nodes remain.
Run the contraction algorithm twice on the resulting graph and return the best of the two cuts.

Extensions. Naturally generalizes to handle positive weights.

Best known. [Karger 2000] O(m log³ n).
(faster than the best known max-flow algorithm or deterministic global min-cut algorithm)

13. RANDOMIZED ALGORITHMS


contention resolution
global min cut
linearity of expectation
max 3-satisfiability
universal hashing
Chernoff bounds
load balancing

Expectation
Expectation. Given a discrete random variable X, its expectation E[X] is defined by

E[X] = Σ_{j=0}^{∞} j · Pr[X = j].

Waiting for a first success. Coin is heads with probability p and tails with probability 1 − p. How many independent flips X until the first heads?

E[X] = Σ_{j=0}^{∞} j · Pr[X = j] = Σ_{j=0}^{∞} j (1 − p)^(j−1) p        (j − 1 tails, then 1 head)
     = (p / (1 − p)) Σ_{j=0}^{∞} j (1 − p)^j
     = (p / (1 − p)) · ((1 − p) / p²)
     = 1/p.

Expectation: two properties


Useful property. If X is a 0/1 random variable, then E[X] = Pr[X = 1].

Pf. E[X] = Σ_{j=0}^{∞} j · Pr[X = j] = Σ_{j=0}^{1} j · Pr[X = j] = Pr[X = 1].

Linearity of expectation. Given two random variables X and Y defined over the same probability space (not necessarily independent), E[X + Y] = E[X] + E[Y].

Benefit. Decouples a complex calculation into simpler pieces.

Guessing cards
Game. Shuffle a deck of n cards; turn them over one at a time;
try to guess each card.
Memoryless guessing. No psychic abilities; can't even remember what's
been turned over already. Guess a card from full deck uniformly at random.
Claim. The expected number of correct guesses is 1.
Pf. [ surprisingly effortless using linearity of expectation ]

Let Xi = 1 if the i-th prediction is correct and 0 otherwise.
Let X = number of correct guesses = X1 + … + Xn.
E[Xi] = Pr[Xi = 1] = 1/n.
E[X] = E[X1] + … + E[Xn] = 1/n + … + 1/n = 1.  (linearity of expectation)

Guessing cards
Game. Shuffle a deck of n cards; turn them over one at a time;
try to guess each card.
Guessing with memory. Guess a card uniformly at random from cards
not yet seen.
Claim. The expected number of correct guesses is Θ(log n).

Pf.
Let Xi = 1 if the i-th prediction is correct and 0 otherwise.
Let X = number of correct guesses = X1 + … + Xn.
E[Xi] = Pr[Xi = 1] = 1 / (n − i + 1).
E[X] = E[X1] + … + E[Xn] = 1/n + … + 1/2 + 1/1 = H(n).  (linearity of expectation; ln(n+1) < H(n) < 1 + ln n)
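Both guessing strategies are easy to simulate. The sketch below is a minimal Java experiment (deck size, trial count, and variable names are illustrative): it plays both strategies on the same shuffled decks and compares the averages with the expectations 1 and H(n) derived above.

import java.util.*;

// Simulation of the two card-guessing strategies on a shuffled deck:
// memoryless guessing (uniform over the full deck every time) versus
// guessing with memory (uniform over the cards not yet seen).
public class GuessingCards {
    public static void main(String[] args) {
        int n = 52, trials = 10_000;
        Random rng = new Random();
        double memoryless = 0, withMemory = 0;
        for (int t = 0; t < trials; t++) {
            List<Integer> deck = new ArrayList<>();
            for (int c = 0; c < n; c++) deck.add(c);
            Collections.shuffle(deck, rng);
            List<Integer> unseen = new ArrayList<>(deck);   // candidates for the memory strategy
            for (int i = 0; i < n; i++) {
                int card = deck.get(i);
                if (rng.nextInt(n) == card) memoryless++;                           // guess from full deck
                if (unseen.get(rng.nextInt(unseen.size())) == card) withMemory++;   // guess among unseen cards
                unseen.remove(Integer.valueOf(card));        // the turned-over card is now seen
            }
        }
        System.out.printf("memoryless: %.2f (expected 1), with memory: %.2f (expected H(n) = %.2f)%n",
                          memoryless / trials, withMemory / trials, harmonic(n));
    }
    static double harmonic(int n) { double h = 0; for (int i = 1; i <= n; i++) h += 1.0 / i; return h; }
}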

Coupon collector
Coupon collector. Each box of cereal contains a coupon. There are n
different types of coupons. Assuming all boxes are equally likely to contain
each coupon, how many boxes before you have 1 coupon of each type?
Claim. The expected number of steps is Θ(n log n).

Pf.
Phase j = time between having j and j + 1 distinct coupons.
Let Xj = number of steps you spend in phase j.
Let X = number of steps in total = X0 + X1 + … + X_(n−1).

E[X] = Σ_{j=0}^{n−1} E[Xj] = Σ_{j=0}^{n−1} n / (n − j) = n Σ_{i=1}^{n} 1/i = n H(n).

(in phase j, the probability of success per box is (n − j)/n, so the expected waiting time is n / (n − j))
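A minimal Java simulation of the coupon collector process (the number of coupon types and the names are illustrative): it draws uniformly random coupons until all n types have appeared and compares the count with n·H(n).

import java.util.Random;

// Draw uniformly random coupon types until all n types have been seen,
// then compare the number of boxes with the expectation n * H(n).
public class CouponCollector {
    public static void main(String[] args) {
        int n = 100;                               // number of coupon types (illustrative)
        Random rng = new Random();
        boolean[] seen = new boolean[n];
        int distinct = 0, boxes = 0;
        while (distinct < n) {
            boxes++;
            int coupon = rng.nextInt(n);           // each box equally likely to hold any type
            if (!seen[coupon]) { seen[coupon] = true; distinct++; }
        }
        double harmonic = 0;                       // H(n) = 1 + 1/2 + ... + 1/n
        for (int i = 1; i <= n; i++) harmonic += 1.0 / i;
        System.out.printf("boxes needed: %d, expected n*H(n) = %.1f%n", boxes, n * harmonic);
    }
}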

13. RANDOMIZED ALGORITHMS


contention resolution
global min cut
linearity of expectation
max 3-satisfiability
universal hashing
Chernoff bounds
load balancing

Maximum 3-satisfiability

Maximum 3-satisfiability. Given a 3-SAT formula (exactly 3 distinct literals per clause), find a truth assignment that satisfies as many clauses as possible.

[example formula with clauses C1, …, C5 over the variables x1, …, x4]

Remark. NP-hard search problem.

Simple idea. Flip a coin, and set each variable true with probability ½, independently for each variable.

Maximum 3-satisfiability: analysis


Claim. Given a 3-SAT formula with k clauses, the expected number of clauses satisfied by a random assignment is 7k/8.

Pf. Consider the random variables Zj = 1 if clause Cj is satisfied, and Zj = 0 otherwise. Let Z = number of clauses satisfied by the random assignment. By linearity of expectation,

E[Z] = Σ_{j=1}^{k} E[Zj] = Σ_{j=1}^{k} Pr[clause Cj is satisfied] = 7k/8,

since a clause with 3 distinct literals is left unsatisfied with probability (1/2)³ = 1/8.

The Probabilistic Method


Corollary. For any instance of 3-SAT, there exists a truth assignment that satisfies at least a 7/8 fraction of all clauses.

Pf. A random variable is at least its expectation some of the time.

Probabilistic method. [Paul Erdős] Prove the existence of a non-obvious property by showing that a random construction produces it with positive probability!

Maximum 3-satisfiability: analysis


Q. Can we turn this idea into a 7/8-approximation algorithm?
A. Yes (but a random variable can almost always be below its mean).

Lemma. The probability that a random assignment satisfies ≥ 7k/8 clauses is at least 1/(8k).

Pf. Let pj be the probability that exactly j clauses are satisfied; let p be the probability that ≥ 7k/8 clauses are satisfied. Then

7k/8 = E[Z] = Σ_{j ≥ 0} j pj
            = Σ_{j < 7k/8} j pj + Σ_{j ≥ 7k/8} j pj
            ≤ (7k/8 − 1/8) Σ_{j < 7k/8} pj + k Σ_{j ≥ 7k/8} pj
            ≤ (7k/8 − 1/8) · 1 + k p.

(the largest integer j that is strictly less than 7k/8 is at most 7k/8 − 1/8, since 8j and 7k are integers)

Rearranging terms yields p ≥ 1/(8k).

Maximum 3-satisfiability: analysis


Johnson's algorithm. Repeatedly generate random truth assignments until one of them satisfies ≥ 7k/8 clauses.

Theorem. Johnson's algorithm is a 7/8-approximation algorithm.

Pf. By the previous lemma, each iteration succeeds with probability at least 1/(8k). By the waiting-time bound, the expected number of trials to find the satisfying assignment is at most 8k.
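A minimal Java sketch of Johnson's algorithm. The clause encoding (positive literal +v, negated literal −v, variables numbered 1..nVars) and the small example formula are illustrative assumptions, not from the slides.

import java.util.Random;

// Johnson's algorithm: repeatedly draw a uniformly random truth assignment
// until it satisfies at least 7k/8 of the k clauses.
public class Johnson {
    static int satisfied(int[][] clauses, boolean[] truth) {
        int count = 0;
        for (int[] clause : clauses) {
            for (int lit : clause) {
                boolean value = truth[Math.abs(lit)];
                if (lit > 0 ? value : !value) { count++; break; }   // clause satisfied by this literal
            }
        }
        return count;
    }

    public static void main(String[] args) {
        int nVars = 4;
        int[][] clauses = { {1, -2, 3}, {-1, 2, 4}, {2, -3, -4}, {-1, -2, 3}, {1, 3, -4} };
        int k = clauses.length;
        Random rng = new Random();
        boolean[] truth;
        int sat;
        do {
            truth = new boolean[nVars + 1];
            for (int v = 1; v <= nVars; v++) truth[v] = rng.nextBoolean();  // fair coin per variable
            sat = satisfied(clauses, truth);
        } while (8 * sat < 7 * k);               // stop once >= 7k/8 clauses are satisfied
        System.out.println("satisfied " + sat + " of " + k + " clauses");
    }
}

By the waiting-time bound from the analysis above, the expected number of iterations of the do-while loop is at most 8k.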

Maximum satisfiability
Extensions.
Allow one, two, or more literals per clause.
Find a max weighted set of satisfied clauses.

Theorem. [Asano–Williamson 2000] There exists a 0.784-approximation algorithm for MAX-SAT.

Theorem. [Karloff–Zwick 1997, Zwick+computer 2002] There exists a 7/8-approximation algorithm for the version of MAX-3-SAT where each clause has at most 3 literals.

Theorem. [Håstad 1997] Unless P = NP, there is no ρ-approximation algorithm for MAX-3-SAT (and hence MAX-SAT) for any ρ > 7/8.
(very unlikely to improve over the simple randomized algorithm for MAX-3-SAT)

Monte Carlo vs. Las Vegas algorithms


Monte Carlo. Guaranteed to run in poly-time, likely to find the correct answer.
Ex: contraction algorithm for global min cut.

Las Vegas. Guaranteed to find the correct answer, likely to run in poly-time.
Ex: randomized quicksort, Johnson's MAX-3-SAT algorithm.

Remark. Can always convert a Las Vegas algorithm into a Monte Carlo algorithm (stop the algorithm after a certain point), but there is no known method (in general) to convert the other way.

RP and ZPP
RP. [Monte Carlo] Decision problems solvable with one-sided error in poly-time.

One-sided error.
If the correct answer is no, always return no.
If the correct answer is yes, return yes with probability ≥ ½.
(can decrease the probability of a false negative to 2^(−100) by 100 independent repetitions)

ZPP. [Las Vegas] Decision problems solvable in expected poly-time.
(running time can be unbounded, but fast on average)

Theorem. P ⊆ ZPP ⊆ RP ⊆ NP.

Fundamental open questions. To what extent does randomization help? Does P = ZPP? Does ZPP = RP? Does RP = NP?

13. RANDOMIZED ALGORITHMS


contention resolution
global min cut
linearity of expectation
max 3-satisfiability
universal hashing
Chernoff bounds
load balancing

Dictionary data type


Dictionary. Given a universe U of possible elements, maintain a subset S ⊆ U so that inserting, deleting, and searching in S is efficient.

Dictionary interface.
create():  initialize a dictionary with S = ∅.
insert(u): add element u ∈ U to S.
delete(u): delete u from S (if u is currently in S).
lookup(u): is u in S?

Challenge. Universe U can be extremely large, so defining an array of size |U| is infeasible.

Applications. File systems, databases, Google, compilers, checksums, P2P networks, associative arrays, cryptography, web caching, etc.
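A minimal Java rendering of this interface (the generic element type U and the interface name are illustrative choices, not from the slides); create() corresponds to constructing a concrete implementation, such as the chained hash table discussed next.

// Dictionary interface: maintain a subset S of a universe U.
public interface Dictionary<U> {
    void insert(U u);      // add element u to S
    void delete(U u);      // remove u from S (if u is currently in S)
    boolean lookup(U u);   // is u in S?
}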

Hashing
Hash function. h : U → { 0, 1, …, n − 1 }.

Hashing. Create an array H of size n. When processing element u, access array element H[h(u)].

Collision. When h(u) = h(v) but u ≠ v.
A collision is expected after Θ(√n) random insertions (birthday paradox).
Separate chaining: H[i] stores a linked list of the elements u with h(u) = i.

[figure: separate-chaining table whose chains store the strings "jocularly", "seriously", "suburban", "untravelled", "considerating", "browsing"; one slot is null]

Ad-hoc hash function

Ad hoc hash function. (hash function à la the Java String library)

int hash(String s, int n) {
   int hash = 0;
   for (int i = 0; i < s.length(); i++)       // Horner-style accumulation over the characters of s
      hash = (31 * hash) + s.charAt(i);       // multiply by 31 and add the next character
   return hash % n;                           // map into slots 0..n-1 (note: may be negative if hash overflows)
}

Deterministic hashing. If |U| ≥ n², then for any fixed hash function h, there is a subset S ⊆ U of n elements that all hash to the same slot. Thus, Θ(n) time per lookup in the worst case.

Q. But isn't an ad-hoc hash function good enough in practice?

Algorithmic complexity attacks


When can't we live with an ad hoc hash function?
Obvious situations: aircraft control, nuclear reactors.
Surprising situations: denial-of-service attacks.
(a malicious adversary learns your ad hoc hash function, e.g., by reading the Java API, and causes a big pile-up in a single slot that grinds performance to a halt)

Real world exploits. [Crosby–Wallach 2003]
Bro server: send carefully chosen packets to DOS the server, using less bandwidth than a dial-up modem.
Perl 5.8.0: insert carefully chosen strings into an associative array.
Linux 2.4.20 kernel: save files with carefully chosen names.

Hashing performance

Ideal hash function. Maps m elements uniformly at random to n hash slots.
Running time depends on the length of the chains.
Average length of a chain = α = m / n.
Choose n ≈ m ⟹ on average O(1) per insert, lookup, or delete.

Challenge. Achieve idealized randomized guarantees, but with a hash function where you can easily find items where you put them.

Approach. Use randomization in the choice of h.
(adversary knows the randomized algorithm you're using, but doesn't know the random choices that the algorithm makes)

Universal hashing

Universal family of hash functions. [Carter–Wegman 1980s]
For any pair of elements u ≠ v in U: Pr_{h ∈ H} [h(u) = h(v)] ≤ 1/n, where h is chosen uniformly at random from H.
Can select a random h efficiently.
Can compute h(u) efficiently.

Ex. U = { a, b, c, d, e, f }, n = 2.

H = { h1, h2 }:
Pr_{h ∈ H} [h(a) = h(b)] = 1/2
Pr_{h ∈ H} [h(a) = h(c)] = 1
Pr_{h ∈ H} [h(a) = h(d)] = 0
...
⟹ not universal

H = { h1, h2, h3, h4 }:
Pr_{h ∈ H} [h(a) = h(b)] = 1/2
Pr_{h ∈ H} [h(a) = h(c)] = 1/2
Pr_{h ∈ H} [h(a) = h(d)] = 1/2
Pr_{h ∈ H} [h(a) = h(e)] = 1/2
Pr_{h ∈ H} [h(a) = h(f)] = 0
...
⟹ universal

[the value tables h1(x), …, h4(x) on a, …, f appear in the slide figure]

Universal hashing: analysis


Proposition. Let H be a universal family of hash functions; let h ∈ H be chosen uniformly at random from H; and let u ∈ U. For any subset S ⊆ U of size at most n, the expected number of items in S that collide with u is at most 1.

Pf. For any element s ∈ S, define the indicator random variable Xs = 1 if h(s) = h(u) and 0 otherwise. Let X be a random variable counting the total number of collisions with u. Then

E_{h ∈ H}[X] = E[ Σ_{s ∈ S} Xs ] = Σ_{s ∈ S} E[Xs] = Σ_{s ∈ S} Pr[Xs = 1] ≤ Σ_{s ∈ S} 1/n = |S| · (1/n) ≤ 1.

(linearity of expectation; Xs is a 0-1 random variable; universality; assumes u ∉ S)

Q. OK, but how do we design a universal class of hash functions?

Designing a universal family of hash functions


Theorem. [Chebyshev 1850] There exists a prime between n and 2n.

Modulus. Choose a prime number p ≈ n. (no need for randomness here)

Integer encoding. Identify each element u ∈ U with a base-p integer of r digits: x = (x1, x2, …, xr).

Hash function. Let A = set of all r-digit, base-p integers. For each a = (a1, a2, …, ar) where 0 ≤ ai < p, define

h_a(x) = ( Σ_{i=1}^{r} ai xi ) mod p.

Hash function family. H = { h_a : a ∈ A }.
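A minimal Java sketch of this family (the way the prime p is found and the fixed digit width r are illustrative assumptions; a real implementation would derive them from |U| and the table size n):

import java.math.BigInteger;
import java.util.Random;

// Universal family h_a(x) = (sum_i a_i * x_i) mod p, with random coefficients a_i.
public class UniversalHash {
    final int p, r;
    final int[] a;                                 // random coefficients, 0 <= a_i < p

    UniversalHash(int n, int r, Random rng) {
        this.p = BigInteger.valueOf(n).nextProbablePrime().intValue(); // smallest (probable) prime > n
        this.r = r;
        this.a = new int[r];
        for (int i = 0; i < r; i++) a[i] = rng.nextInt(p);             // select h_a uniformly at random
    }

    // x is the base-p encoding (x_1, ..., x_r) of an element of U, each digit < p
    int hash(int[] x) {
        long h = 0;
        for (int i = 0; i < r; i++) h = (h + (long) a[i] * x[i]) % p;  // accumulate modulo p
        return (int) h;
    }

    public static void main(String[] args) {
        UniversalHash h = new UniversalHash(97, 4, new Random());
        System.out.println(h.hash(new int[] { 3, 14, 15, 92 }));
    }
}

Here the hash table would have p ≈ n slots; by the theorem proved next, any two distinct elements collide with probability 1/p ≤ 1/n.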

Designing a universal family of hash functions


Theorem. H = { h_a : a ∈ A } is a universal family of hash functions.

Pf. Let x = (x1, x2, …, xr) and y = (y1, y2, …, yr) be two distinct elements of U. We need to show that Pr[h_a(x) = h_a(y)] ≤ 1/n.

Since x ≠ y, there exists an index j such that xj ≠ yj.

We have h_a(x) = h_a(y) iff  aj (yj − xj) ≡ Σ_{i ≠ j} ai (xi − yi)  (mod p); call the factor (yj − xj) z and the right-hand side m.

Can assume a was chosen uniformly at random by first selecting all coordinates ai with i ≠ j, then selecting aj at random. Thus, we can assume ai is fixed for all coordinates i ≠ j.

Since p is prime, aj z ≡ m (mod p) has at most one solution aj among the p possibilities (see the lemma on the next slide).

Thus Pr[h_a(x) = h_a(y)] = 1/p ≤ 1/n.

Number theory fact


Fact. Let p be prime, and let z ≢ 0 (mod p). Then α z ≡ m (mod p) has at most one solution 0 ≤ α < p.

Pf.
Suppose α and β are two different solutions.
Then (α − β) z ≡ 0 (mod p); hence (α − β) z is divisible by p.
Since z ≢ 0 (mod p), we know that z is not divisible by p; it follows that (α − β) is divisible by p.
This implies α = β.

Bonus fact. Can replace "at most one" with "exactly one" in the fact above.
Pf idea. Euclid's algorithm.

13. RANDOMIZED ALGORITHMS


contention resolution
global min cut
linearity of expectation
max 3-satisfiability
universal hashing
Chernoff bounds
load balancing

Chernoff Bounds (above mean)


Theorem. Suppose X1, …, Xn are independent 0-1 random variables. Let X = X1 + … + Xn. Then for any μ ≥ E[X] and for any δ > 0, we have

Pr[X > (1 + δ) μ] < [ e^δ / (1 + δ)^(1+δ) ]^μ.

(the sum of independent 0-1 random variables is tightly centered on the mean)

Pf. We apply a number of simple transformations. For any t > 0,

Pr[X > (1 + δ) μ] = Pr[ e^(tX) > e^(t(1+δ)μ) ] ≤ e^(−t(1+δ)μ) · E[e^(tX)],

since f(x) = e^(tx) is monotone in x, and by Markov's inequality Pr[Y > a] ≤ E[Y] / a.

Now, by the definition of X and by independence,

E[e^(tX)] = E[ e^(t Σ_i Xi) ] = Π_i E[ e^(t Xi) ].

Chernoff Bounds (above mean)


Pf. [continued]

Let pi = Pr[Xi = 1]. Then

E[e^(t Xi)] = pi e^t + (1 − pi) · 1 = 1 + pi (e^t − 1) ≤ e^(pi (e^t − 1)),

using the inequality 1 + α ≤ e^α for any α ≥ 0.

Combining everything (the first step is the previous slide, the second the inequality above, the third Σ_i pi = E[X] ≤ μ):

Pr[X > (1 + δ) μ] ≤ e^(−t(1+δ)μ) Π_i E[e^(t Xi)] ≤ e^(−t(1+δ)μ) Π_i e^(pi (e^t − 1)) ≤ e^(−t(1+δ)μ) e^(μ (e^t − 1)).

Finally, choosing t = ln(1 + δ) yields the bound in the theorem.
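For completeness, the final substitution written out (a worked step consistent with the proof above, not a separate result from the slides): with t = ln(1 + δ) we have e^t = 1 + δ, so

\[
e^{-t(1+\delta)\mu}\, e^{\mu(e^{t}-1)}
= \frac{e^{\delta\mu}}{(1+\delta)^{(1+\delta)\mu}}
= \left[ \frac{e^{\delta}}{(1+\delta)^{1+\delta}} \right]^{\mu},
\]

which is exactly the right-hand side of the theorem.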

Chernoff Bounds (below mean)


Theorem. Suppose X1, …, Xn are independent 0-1 random variables. Let X = X1 + … + Xn. Then for any μ ≤ E[X] and for any 0 < δ < 1, we have

Pr[X < (1 − δ) μ] < e^(−δ² μ / 2).

Pf idea. Similar.

Remark. Not quite symmetric, since it only makes sense to consider δ < 1.

13. RANDOMIZED ALGORITHMS


contention resolution
global min cut
linearity of expectation
max 3-satisfiability
universal hashing
Chernoff bounds
load balancing

Load Balancing
Load balancing. System in which m jobs arrive in a stream and need to be processed immediately on n identical processors. Find an assignment that balances the workload across processors.

Centralized controller. Assign jobs in round-robin manner. Each processor receives at most ⌈m/n⌉ jobs.

Decentralized controller. Assign jobs to processors uniformly at random. How likely is it that some processor is assigned "too many" jobs?

Load balancing
Analysis. [ m = n jobs ]
Let Xi = number of jobs assigned to processor i.
Let Yij = 1 if job j is assigned to processor i, and 0 otherwise.
We have E[Yij] = 1/n.
Thus Xi = Σ_j Yij, and μ = E[Xi] = 1.
Applying the Chernoff bound with δ = c − 1 yields Pr[Xi > c] < e^(c−1) / c^c.
Let γ(n) be the number x such that x^x = n, and choose c = e γ(n).
Union bound ⟹ with probability ≥ 1 − 1/n, no processor receives more than e γ(n) = Θ(log n / log log n) jobs.

Bonus fact: with high probability, some processor receives Θ(log n / log log n) jobs.
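A minimal Java "balls into bins" simulation of the decentralized controller with m = n jobs (the value of n and the printed comparison are illustrative): each job goes to a uniformly random processor, and the maximum load is compared against log n / log log n.

import java.util.Random;

// Decentralized load balancing with m = n jobs: assign each job to a
// uniformly random processor and report the maximum load.
public class BallsIntoBins {
    public static void main(String[] args) {
        int n = 100_000;                           // number of jobs = number of processors (illustrative)
        Random rng = new Random();
        int[] load = new int[n];
        for (int j = 0; j < n; j++) load[rng.nextInt(n)]++;   // job j goes to a random processor
        int max = 0;
        for (int x : load) max = Math.max(max, x);
        // Chernoff + union bound predict max load Theta(log n / log log n)
        double prediction = Math.log(n) / Math.log(Math.log(n));
        System.out.printf("max load = %d, log n / log log n = %.1f%n", max, prediction);
    }
}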

Load balancing: many jobs


Theorem. Suppose the number of jobs m = 16 n ln n. Then on average, each of the n processors handles μ = 16 ln n jobs. With high probability, every processor will have between half and twice the average load.

Pf.
Let Xi, Yij be as before.
Applying the Chernoff bounds with δ = 1 yields

Pr[Xi > 2μ] < (e/4)^(16 ln n) < 1/n²    and    Pr[Xi < ½ μ] < e^(−½ (½)² (16 ln n)) = 1/n².

Union bound ⟹ every processor has load between half and twice the average with probability ≥ 1 − 2/n.
