Lower Bound On Deterministic Evaluation Algorithms For NOR Circuits - Yao's Principle For Proving Lower Bounds


• Lower bound on deterministic evaluation algorithms for NOR circuits
• Yao's Principle for proving lower bounds
Part 0

Deterministic algorithms for evaluating NOR circuits

A three-level complete NOR circuit

[Figure: a complete binary tree of NOR gates with inputs x_1, ..., x_8 at the leaves and output y at the root.]
Evaluating NOR circuits

Let F_k(x_1, x_2, ..., x_{2^k}) denote the value of the level-k complete NOR circuit on input x_1, x_2, ..., x_{2^k}.

The NOR circuit evaluation problem:
• Input: x_1, x_2, ..., x_{2^k}.
• Output: F_k(x_1, x_2, ..., x_{2^k}).
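For concreteness, here is a minimal C sketch of the problem statement, reading every leaf; the function name Fk and the array-based interface are illustrative assumptions, not part of the original slides.

    /* Evaluate the complete NOR circuit over x[0..n-1], where n = 2^k,
       by reading every leaf. */
    int Fk(const int *x, int n) {
        if (n == 1) return x[0];            /* a leaf evaluates to its own value */
        int left  = Fk(x, n / 2);           /* value of the first leg */
        int right = Fk(x + n / 2, n / 2);   /* value of the second leg */
        return !(left || right);            /* NOR: 1 iff both legs are 0 */
    }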

Take a closer look at NOR

[Figure: a single NOR gate with inputs x_1, x_2 and output y = F_1(x_1, x_2).]

To determine y = F_1(x_1, x_2), there are two cases:

Case 1: y = 0. At least one of x_1 and x_2 is 1, so we only have to read one of x_1 and x_2 if we are lucky.

Case 2: y = 1. Both x_1 and x_2 are 0, so we have to read both x_1 and x_2 to ensure y = 1.
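The lucky case is just short-circuiting, as in this minimal sketch (the helper name nor_lazy is a hypothetical choice):

    /* Reads x2 only when x1 alone does not determine the output. */
    int nor_lazy(int x1, int x2) {
        if (x1 == 1) return 0;       /* Case 1 with luck: x2 is never read */
        return (x2 == 0) ? 1 : 0;    /* x1 = 0: the output now hinges on x2 */
    }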
Question: can a correct evaluation algorithm avoid reading all inputs?

[Figure: the three-level complete NOR circuit with inputs x_1, ..., x_8 and output y.]

The answer
• It is impossible for any deterministic algorithm to correctly evaluate F_k in sublinear (i.e., o(2^k)) time on every input x_1, x_2, ..., x_{2^k}.
• It is possible for a randomized algorithm to correctly evaluate F_k in expected sublinear (i.e., o(2^k)) time for any input x_1, x_2, ..., x_{2^k}.

Theorem

For any deterministic algorithm A that always correctly evaluates F_k for any k ≥ 1, there exists an input instance x_1, x_2, ..., x_{2^k} such that A has to read all 2^k numbers to determine F_k(x_1, x_2, ..., x_{2^k}).

Proof strategy
• Without loss of generality, we may assume
that a deterministic algorithm works in a
depth-first manner.
• Prove by induction on k that any
deterministic depth-first algorithm has two
“trouble” instances, one for output 0 and
the other for output 1.

Depth-first evaluation
• Definition: a depth-first evaluation does not read any leaf beneath the second leg until the value of the first leg is determined.
• For example, RandEval is a depth-first evaluation algorithm.
• Observation: without loss of generality, we can focus only on depth-first evaluation algorithms. Why?
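For concreteness, a minimal C sketch of one deterministic depth-first evaluator, namely the one that always evaluates the first leg first (the name dfs_eval is an illustrative assumption):

    /* Depth-first: no leaf beneath the second leg is read until the first
       leg's value is determined; the second leg is skipped when that value is 1. */
    int dfs_eval(const int *x, int n) {
        if (n == 1) return x[0];
        if (dfs_eval(x, n / 2) == 1) return 0;  /* first leg alone forces NOR = 0 */
        return !dfs_eval(x + n / 2, n / 2);     /* otherwise the second leg decides */
    }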

The reason
• Lemma. Let D' be a deterministic algorithm that correctly evaluates any NOR circuit. Then there is a deterministic depth-first evaluation algorithm D such that
  – the outputs of D and D' coincide on any input;
  – the number of leaves read by D is always no more than that read by D'.
• Proof: a simple exercise.
• So, it suffices to focus on depth-first algorithms.

Proof by induction on k
• CLAIM: The following statement holds for any positive integer k and any deterministic depth-first evaluation algorithm A.
  – There is an input X_{k,1} (respectively, X_{k,0}) such that A has to read all numbers in the input to correctly compute F_k(X_{k,1}) = 1 (respectively, F_k(X_{k,0}) = 0).

Basis: k = 1.

• X_{1,1} = (0, 0), whose output is 1: A must read both legs to certify that both are 0.

• X_{1,0} = (0, 1), whose output is 0, if A reads the first leg first.

• X_{1,0} = (1, 0), whose output is 0, if A reads the second leg first.

Either way, the leg A reads first has value 0, so A must read the other leg as well.

Induction step: k → k+1.

X_{k+1,1}: make both legs of the new root copies of X_{k,0}. Each leg evaluates to 0, so the root outputs 1, and by the induction hypothesis A must read all numbers beneath both legs.

[Figure: root NOR with legs X_{k,0} (value 0) and X_{k,0} (value 0), output 1.]
Induction step: k → k+1.

X_{k+1,0}, when the first leg evaluated by A is the upper one: make that leg X_{k,0} (its value 0 leaves the root undetermined) and the other leg X_{k,1} (its value 1 makes the root output 0).

[Figure: root NOR whose first-evaluated leg is X_{k,0} (value 0) and whose other leg is X_{k,1} (value 1), output 0.]

Induction step: k → k+1.

X_{k+1,0}, when the first leg evaluated by A is the lower one: symmetrically, make that leg X_{k,0} (value 0) and the other leg X_{k,1} (value 1), so the root again outputs 0. In both cases A must fully read X_{k,0} before the root is determined, and then fully read X_{k,1} to certify the output.

[Figure: root NOR whose upper leg is X_{k,1} (value 1) and whose first-evaluated lower leg is X_{k,0} (value 0), output 0.]
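A minimal C sketch of the adversary's construction, specialized (as an illustrative assumption) to the deterministic depth-first algorithm that always evaluates the first leg first; the function name build is hypothetical:

    /* Fill buf[0..2^k - 1] with the trouble instance X_{k,target}:
       X_{k,1} = X_{k-1,0} ++ X_{k-1,0} and X_{k,0} = X_{k-1,0} ++ X_{k-1,1}. */
    void build(int k, int target, int *buf) {
        if (k == 1) {                      /* basis: (0,0) -> 1 and (0,1) -> 0 */
            buf[0] = 0;
            buf[1] = (target == 1) ? 0 : 1;
            return;
        }
        int half = 1 << (k - 1);
        build(k - 1, 0, buf);              /* the first-read leg is always X_{k-1,0} */
        build(k - 1, (target == 1) ? 0 : 1, buf + half);
    }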

Theorem (restated)

For any deterministic algorithm A that always correctly evaluates F_k for any k ≥ 1, there exists an input instance x_1, x_2, ..., x_{2^k} such that A has to read all 2^k numbers to determine F_k(x_1, x_2, ..., x_{2^k}).

Therefore, any deterministic algorithm takes Ω(2^k) time to correctly evaluate F_k in the worst case!

Randomization is indeed very
helpful for this problem!

Comments
• Throwing coins does the magic:
  – For any input, including X_{k,0} and X_{k,1}, RandEval runs in expected O(n^0.793) time.
  – By Snir [TCS'85].
• Snir's algorithm RandEval is an optimal Las Vegas algorithm for evaluating NOR circuits.
  – That is, any Las Vegas algorithm requires Ω(n^0.793) time to evaluate NOR circuits.
  – Proved by Saks and Wigderson [FOCS'86].

Part 1

Yao's Principle for Proving Lower Bounds [FOCS'77]

Yao's Inequality

Let Π be a problem. Let D consist of all deterministic algorithms for solving Π. Let I consist of all possible inputs for Π. Let I_p denote an instance of Π drawn according to probability distribution p. Then the following inequality holds for any probability distribution p and any Las Vegas randomized algorithm R:

min_{A ∈ D} (expected time of A running on I_p) ≤ max_{I ∈ I} (expected time of R running on I).

Outline
• An alternative way to view a randomized algorithm.
• A little bit of game theory:
  – Two-person games and their Nash equilibria
  – Von Neumann's Minimax Theorem
  – Loomis' Theorem
• Yao's inequality
• A first test drive:
  – It takes Ω(n^0.694) time for any Las Vegas algorithm to correctly evaluate a NOR circuit.

An alternative viewpoint

Let I be an input for the problem Π. Let D consist of ALL deterministic algorithms that correctly solve Π on I. Then each Las Vegas algorithm for Π is actually a probability distribution over D.

For example,
• a two-level instance, where gate v takes x_1 and x_2, gate w takes x_3 and x_4, and gate u takes the outputs of v and w to produce y, has 8 deterministic depth-first evaluation algorithms: each of the three gates u, v, w independently chooses which leg to evaluate first (2^3 = 8 combinations).

[Figure: the two-level circuit, with a table listing the eight leg-order choices for u, v, w.]

Algorithm RandEval

boolean function RandEval(x_1, ..., x_{2t}) {
    if (2t == 1) return x_1;   // base case: the input is a single number
    throw a fair coin;
    if (the head appears) {
        // first leg first; the second leg is skipped when the first leg is 1
        return (RandEval(x_1, ..., x_t) == 1) ? 0 : !RandEval(x_{t+1}, ..., x_{2t});
    } else {
        // second leg first; the first leg is skipped when the second leg is 1
        return (RandEval(x_{t+1}, ..., x_{2t}) == 1) ? 0 : !RandEval(x_1, ..., x_t);
    }
}
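A runnable C rendition of the same algorithm, offered as a minimal sketch (the array-based signature and the name rand_eval are assumptions for illustration):

    #include <stdlib.h>

    /* Evaluate the complete NOR circuit over x[0..n-1] (n a power of two),
       recursing into a uniformly chosen leg first. */
    int rand_eval(const int *x, int n) {
        if (n == 1) return x[0];
        int t = n / 2;
        const int *first = x, *second = x + t;
        if (rand() % 2) {                        /* the fair coin: swap leg order */
            first = x + t;
            second = x;
        }
        if (rand_eval(first, t) == 1) return 0;  /* NOR is 0; other leg skipped */
        return !rand_eval(second, t);
    }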
The alternative view
• RandEval is the uniform distribution over the eight deterministic depth-first evaluation algorithms.

Strategic Interactions
• Players: Reynolds and Philip Morris
• Strategies: { Advertise, Do Not Advertise }
• Payoffs: companies' profits
  – Each firm earns $5 million from its customers
  – Advertising costs a firm $2 million
  – Advertising captures $3 million from the competitor
• How to represent this game?

Strategic Normal Form

PLAYERS: Reynolds (rows) vs. Philip Morris (columns).
STRATEGIES: No Ad / Ad. PAYOFFS: (Reynolds, Philip Morris) in each cell.

                         Philip Morris
                      No Ad        Ad
    Reynolds  No Ad   5, 5         2, 6
              Ad      6, 2         3, 3
Nash Equilibrium

                         Philip Morris
                      No Ad        Ad
    Reynolds  No Ad   5, 5         2, 6
              Ad      6, 2         3, 3

• Best reply for Reynolds:
  – If Philip Morris advertises: advertise.
  – If Philip Morris does not advertise: advertise.
• Regardless of what you think Philip Morris will do: Advertise!

Nash Equilibrium

A cell (i, j) of the payoff table is a Nash equilibrium when:
• When the row player uses the i-th strategy, the best strategy for the column player is strategy j.
• When the column player uses the j-th strategy, the best strategy for the row player is strategy i.
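A minimal C sketch of this definition for pure strategies (the name is_nash and the flattened row-major layout are illustrative assumptions):

    /* Arow[r*C + c] and Acol[r*C + c] are the row and column players' payoffs.
       Returns 1 iff neither player gains by deviating unilaterally from (i, j). */
    int is_nash(int R, int C, const double *Arow, const double *Acol, int i, int j) {
        for (int r = 0; r < R; r++)
            if (Arow[r * C + j] > Arow[i * C + j]) return 0;  /* row player deviates */
        for (int c = 0; c < C; c++)
            if (Acol[i * C + c] > Acol[i * C + j]) return 0;  /* column player deviates */
        return 1;
    }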

Another example:
Prisoner’s Dilemma

The scenario
• In the Prisoner’s Dilemma, A and B are
picked up by the police and interrogated
in separate cells without the chance to
communicate with each other.

Both are told:
• If you both confess, you will both get four years in prison.
• If neither of you confesses, the police will be able to pin part of the crime on you, and you'll both get two years.
• If one of you confesses but the other doesn't, the confessor will make a deal with the police and go free while the other goes to jail for five years.

Payoff Table (years in prison)

                          B does not confess    B confesses
    A does not confess    2, 2                  5, 0
    A confesses           0, 5                  4, 4

Question:

Does each game have a unique equilibrium?

Matching Pennies
No Nash equilibrium (in pure strategies)

              Head      Tail
    Head      1, -1     -1, 1
    Tail      -1, 1     1, -1

Go to a movie together, but action or romance?

                Action    Romance
    Action      2, 1      0, 0
    Romance     0, 0      1, 2

Focus

2-person 0-sum games: the sum of the two players' payoffs is zero in each cell of the table.

2-person 0-sum games
• It suffices to list the payoffs for the row player: entry M(r, c) is the row player's payoff (equivalently, the column player's loss) when the row player plays r and the column player plays c.

An observation

max_r min_c M(r, c) ≤ min_c max_r M(r, c).

Try to prove it yourself as a simple exercise.

LHS = the maximum payoff that the row player can guarantee for himself. RHS = the minimum loss that the column player can guarantee for himself.
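One way to do the exercise: for every r and every fixed c',

min_c M(r, c) ≤ M(r, c') ≤ max_{r'} M(r', c').

The left side depends only on r and the right side only on c', so taking the maximum over r and then the minimum over c' preserves the inequality, which yields max_r min_c M(r, c) ≤ min_c max_r M(r, c).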

For example,

max_r min_c M(r, c) = -1 < 1 = min_c max_r M(r, c).

(Rock-paper-scissors; payoffs for the row player)

                Scissors    Rock    Paper
    Scissors    0           -1      1
    Rock        1           0       -1
    Paper       -1          1       0

Another example

max_r min_c M(r, c) = 0 = min_c max_r M(r, c).

(A modified game with "steamed bun" in place of rock; the saddle point is the (Scissors, Scissors) cell.)

                   Scissors    Steamed bun    Paper
    Scissors       0           1              2
    Steamed bun    -1          0              1
    Paper          -2          -1             0

Not every 2-person 0-sum game
has a saddle point
• However, with respect to “mixed
strategies”, von Neumann showed that
each 2-person 0-sum game has a saddle
point.

Pure strategies versus mixed strategies

A mixed strategy is a probability distribution over the set of all pure strategies. Let p be a mixed strategy for the row player and q be a mixed strategy for the column player. That is, p_r is the probability for the row player to use his r-th strategy, and q_c is the probability for the column player to use his c-th strategy. Then the expected payoff with respect to p and q is

p^T M q = Σ_r Σ_c p_r M(r, c) q_c.
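A minimal C sketch of this formula (the name expected_payoff and the flattened row-major layout are illustrative assumptions):

    /* Computes p^T M q, where M is stored row-major as M[r*C + c]. */
    double expected_payoff(int R, int C, const double *p, const double *M, const double *q) {
        double v = 0.0;
        for (int r = 0; r < R; r++)
            for (int c = 0; c < C; c++)
                v += p[r] * M[r * C + c] * q[c];  /* p_r * M(r, c) * q_c */
        return v;
    }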
von Neumann's Minimax Theorem

For any 2-person 0-sum game, we have

max_p min_q p^T M q = min_q max_p p^T M q.

That is, each 2-person zero-sum game has a saddle point with respect to mixed strategies.

Loomis' Theorem

For any 2-person 0-sum game, we have

max_p min_c p^T M e_c = min_q max_r e_r^T M q,

where e_i means running the i-th strategy with probability 1.

To see the theorem, just observe that when p is known, the column player has an optimal strategy that is a pure strategy. A similar observation holds for the row player, too.
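The observation can be spelled out in one line: for any fixed p and any mixed strategy q,

p^T M q = Σ_c q_c (p^T M e_c) ≥ min_c p^T M e_c,

since a convex combination is at least its smallest term; hence the inner minimum over mixed strategies is always attained by a pure column strategy.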

Yao’s interpretation
• The row player = the maximizer = the
adversary responsible for designing
malicious inputs.
• The column player = the minimizer = the
algorithm designer responsible for
designing efficient algorithms.

Pure strategies
• For the column player (minimizer)
– Each pure strategy corresponds to a
deterministic algorithm.
• For the row player (maximizer)
– Each pure strategy corresponds to a
particular input instance.

Mixed strategies
• For the column player (minimizer)
– Each mixed strategy corresponds to a
randomized algorithm.
• For the row player (maximizer)
– Each mixed strategy corresponds to a
probability distribution over all the input
instances.

Yao's interpretation for Loomis' Theorem

Let T(I, A) denote the time required for algorithm A to run on input I. Then, by Loomis' Theorem, we have

max_p min_{deterministic algorithm A} E[T(I_p, A)] = min_q max_{input I} E[T(I, A_q)].

Therefore, the following inequality holds for any probability distributions p and q:

min_{deterministic algorithm A} E[T(I_p, A)] ≤ max_{input I} E[T(I, A_q)].

Yao's Inequality (restated)

Let Π be a problem. Let D consist of all deterministic algorithms for solving Π. Let I consist of all possible inputs for Π. Let I_p denote an instance of Π drawn according to probability distribution p. Then the following inequality holds for any probability distribution p and any Las Vegas randomized algorithm R:

min_{A ∈ D} (expected time of A running on I_p) ≤ max_{I ∈ I} (expected time of R running on I).

A comment
• The two different topics, "probabilistic analysis of deterministic algorithms" and "randomized algorithms", are connected by Yao's Principle.

How to use Yao's Inequality?
• Task 1:
  – Design a probability distribution p for the input instance.
• Task 2:
  – Obtain a lower bound on the expected running time of any deterministic algorithm running on I_p.

A first test drive of Yao's Principle

A lower bound Ω(n^0.694) on the expected running time of any Las Vegas algorithm for evaluating NOR circuits.

Task 1: designing I_p

[Figure: the three-level complete NOR circuit with inputs x_1, ..., x_8 and output y.]

An I_p

Let each leaf of the NOR circuit be independently assigned value 1 with probability

p = (3 - √5)/2

and value 0 with probability 1 - p.
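A minimal C sketch of sampling from this I_p (the function name sample_Ip is an illustrative assumption):

    #include <math.h>
    #include <stdlib.h>

    /* Each leaf is independently 1 with probability p = (3 - sqrt(5))/2, about 0.382. */
    void sample_Ip(int *x, int n) {
        const double p = (3.0 - sqrt(5.0)) / 2.0;
        for (int i = 0; i < n; i++)
            x[i] = ((double)rand() / RAND_MAX) < p;
    }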

Interestingly,

the output of each NOR gate also has value 1 with probability

(1 - p)^2 = p.

(As a matter of fact, the above equation is how we obtained the value of p in the first place.)
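To spell out the choice of p: a NOR gate outputs 1 exactly when both of its independent inputs are 0, which happens with probability (1 - p)^2. Requiring the gate's output to follow the same distribution as its inputs gives

(1 - p)^2 = p  ⟹  p^2 - 3p + 1 = 0  ⟹  p = (3 - √5)/2 ≈ 0.382,

taking the root that lies in [0, 1].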
Depth-first evaluation
• Recall that we can focus on deterministic depth-first evaluation algorithms.
• Let A be an arbitrary algorithm of this kind. Let W(k) be the expected time required for A to evaluate the circuit on I_p with 2^k numbers.
• A first evaluates one leg fully, costing W(k - 1); with probability 1 - p that leg evaluates to 0, so the other leg must be evaluated as well. So W(k) = W(k - 1) + (1 - p)·W(k - 1), implying W(k) = Ω((2 - p)^k) = Ω(n^0.694).
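Solving the recurrence makes the exponent explicit:

W(k) = (2 - p)·W(k - 1)  ⟹  W(k) = Θ((2 - p)^k),

and since 2 - p = 2 - (3 - √5)/2 = (1 + √5)/2 ≈ 1.618 (the golden ratio) and n = 2^k,

(2 - p)^k = n^{log_2(2 - p)} ≈ n^0.694.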

A comment
• Saks and Wigderson proved that RandEval is optimal by designing a much more complicated I_p.

Turn Left, Turn Right
• The two different topics, "probabilistic analysis of deterministic algorithms" and "randomized algorithms", meet at Yao's Principle.

