
Chapter 15

Michelle Bodnar, Andrew Lohr


April 12, 2016

Exercise 15.1-1

Proceed by induction. The base case is $T(0) = 2^0 = 1$. For $n \geq 1$, we apply the inductive hypothesis and recall equation (A.5) to get

$$T(n) = 1 + \sum_{j=0}^{n-1} T(j) = 1 + \sum_{j=0}^{n-1} 2^j = 1 + \frac{2^n - 1}{2 - 1} = 1 + 2^n - 1 = 2^n.$$

Exercise 15.1-2

Let p1 = 0, p2 = 4, p3 = 7 and n = 4. The greedy strategy would first cut off a piece of length 3, since it has the highest density (7/3 versus 4/2). The remaining rod has length 1 and price p1 = 0, so the total price would be 7. On the other hand, two rods of length 2 yield a price of 8.
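The dynamic-programming solution finds the better answer on this instance; a minimal sketch following the textbook BOTTOM-UP-CUT-ROD:

```python
def cut_rod(p, n):
    # bottom-up rod cutting; p[i] is the price of a piece of length i, p[0] = 0
    r = [0] * (n + 1)
    for j in range(1, n + 1):
        r[j] = max(p[i] + r[j - i] for i in range(1, j + 1))
    return r[n]

# prices for lengths 0..4 from the counterexample above;
# greedy by density picks length 3 for 7, but two length-2 pieces give 8
p = [0, 0, 4, 7, 0]
```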

Exercise 15.1-3

Now, instead of equation (15.1), we have that

rn = max{pn , r1 + rn−1 − c, r2 + rn−2 − c, . . . , rn−1 + r1 − c}

And so, to change the top-down solution to this problem, we would change MEMOIZED-CUT-ROD-AUX(p,n,r) as follows. The upper bound for i on line 6 should be n − 1 instead of n. Also, after the for loop, but before line 8, set q = max{q − c, p[n]}.
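A bottom-up version of the same modification can be sketched in Python (cut_rod_with_cost is our name; each cut is charged c once, at the level where it is made):

```python
def cut_rod_with_cost(p, n, c):
    # r[j] = max(p[j], max over i of p[i] + r[j-i] - c): selling the piece
    # whole costs nothing, while every cut incurs the fixed cost c
    r = [0] * (n + 1)
    for j in range(1, n + 1):
        best = p[j]                      # no cut at all
        for i in range(1, j):
            best = max(best, p[i] + r[j - i] - c)
        r[j] = best
    return r[n]
```

With c = 0 this reduces to the ordinary rod-cutting recurrence.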

Exercise 15.1-4

Create a new array called s. Initialize it to all zeros in MEMOIZED-CUT-


ROD(p,n) and pass it as an additional argument to MEMOIZED-CUT-ROD-
AUX(p,n,r,s). Replace line 7 in MEMOIZED-CUT-ROD-AUX by the following:
t = p[i] + MEMOIZED-CUT-ROD-AUX(p, n − i, r, s). Following this, if t > q, set q = t and s[n] = i. Upon termination, s[i] will contain the size of the first cut for a rod of size i.
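The described change can be sketched in Python (memoized_cut_rod is our name for the combined routine):

```python
def memoized_cut_rod(p, n):
    # returns (r, s): r[i] = max revenue for length i,
    # s[i] = size of the first cut in an optimal solution for length i
    r = [float('-inf')] * (n + 1)
    s = [0] * (n + 1)
    r[0] = 0
    def aux(j):
        if r[j] >= 0:                    # already computed
            return r[j]
        q = float('-inf')
        for i in range(1, j + 1):
            t = p[i] + aux(j - i)
            if t > q:
                q, s[j] = t, i           # record the first cut, as in the text
        r[j] = q
        return q
    aux(n)
    return r, s
```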

Exercise 15.1-5

The subproblem graph for n = 4 looks like: [recursion tree: the root 4 has children 2 and 3; the node 2 has children 0 and 1; the node 3 has children 1 and 2; that node 2 in turn has children 0 and 1.]
The number of vertices in the tree to compute the nth Fibonacci number will follow the recurrence
$$V(n) = 1 + V(n-2) + V(n-1)$$
with initial conditions V(1) = V(0) = 1. This has solution V(n) = 2·Fib(n) − 1, which we will check by direct substitution. For the base cases, this is simple to check. Now, by induction, we have
$$V(n) = 1 + (2 \cdot Fib(n-2) - 1) + (2 \cdot Fib(n-1) - 1) = 2 \cdot Fib(n) - 1$$

The number of edges will satisfy the recurrence
$$E(n) = 2 + E(n-1) + E(n-2)$$
with base cases E(1) = E(0) = 0. So, we show by induction that E(n) = 2·Fib(n) − 2. For the base cases it clearly holds, and by induction, we have
$$E(n) = 2 + (2 \cdot Fib(n-1) - 2) + (2 \cdot Fib(n-2) - 2) = 2 \cdot Fib(n) - 2$$

We will present an O(n) bottom-up solution that only keeps track of the two most recently computed subproblems, since a Fibonacci subproblem depends only on the solutions to subproblems at most two indices smaller.
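The bottom-up idea, keeping only two values, can be sketched in Python (dyn_fib mirrors the pseudocode DYN-FIB):

```python
def dyn_fib(n):
    # keep only the two most recent Fibonacci values; O(n) time, O(1) space
    prev, prevprev = 1, 1
    for _ in range(2, n + 1):
        prev, prevprev = prev + prevprev, prev
    return prev
```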

Exercise 15.2-1

An optimal parenthesization of that sequence would be (A1 A2 )((A3 A4 )(A5 A6 ))


which will require 5 ∗ 50 ∗ 6 + 3 ∗ 12 ∗ 5 + 5 ∗ 10 ∗ 3 + 3 ∗ 5 ∗ 6 + 5 ∗ 3 ∗ 6 =
1500 + 180 + 150 + 90 + 90 = 2010.
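The cost 2010 can be confirmed with a sketch of the textbook MATRIX-CHAIN-ORDER recurrence (matrix_chain_order is our name; the dimension list is p = ⟨5, 10, 3, 12, 5, 50, 6⟩ for this exercise):

```python
def matrix_chain_order(p):
    # m[i][j] = min scalar multiplications to compute A_i..A_j (1-indexed),
    # where A_i has dimensions p[i-1] x p[i]
    n = len(p) - 1
    m = [[0] * (n + 1) for _ in range(n + 1)]
    for l in range(2, n + 1):                 # chain length
        for i in range(1, n - l + 2):
            j = i + l - 1
            m[i][j] = min(m[i][k] + m[k + 1][j] + p[i - 1] * p[k] * p[j]
                          for k in range(i, j))
    return m[1][n]
```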

Exercise 15.2-2

Algorithm 1 DYN-FIB(n)
prev = 1
prevprev = 1
if n ≤ 1 then
return 1
end if
for i = 2 to n do
tmp = prev + prevprev
prevprev = prev
prev = tmp
end for
return prev

The following algorithm actually performs the optimal multiplication, and is recursive in nature:

Algorithm 2 MATRIX-CHAIN-MULTIPLY(A,s,i,j)
if i == j then
return Ai
end if
return MATRIX-CHAIN-MULTIPLY(A,s,i,s[i,j]) · MATRIX-CHAIN-MULTIPLY(A,s,s[i,j]+1,j)

Exercise 15.2-3

By induction we will show that P(n) from eq. (15.6) is at least $2^{n-1} \in \Omega(2^n)$. The base case of n = 1 is trivial. Then, for n ≥ 2, by induction and eq. (15.6), we have
$$P(n) = \sum_{k=1}^{n-1} P(k)P(n-k) \geq \sum_{k=1}^{n-1} 2^{k-1} 2^{n-k-1} = (n-1)2^{n-2} \geq 2^{n-1}$$

So, the conclusion holds.

Exercise 15.2-4

The subproblem graph for matrix chain multiplication has a vertex for each
pair (i, j) such that 1 ≤ i ≤ j ≤ n, corresponding to the subproblem of finding
the optimal way to multiply Ai Ai+1 · · · Aj . There are n(n − 1)/2 + n vertices.
Vertex (i, j) is connected by an edge directed to vertex (k, l) if k = i and i ≤ l < j, or l = j and i < k ≤ j. A vertex (i, j) has outdegree 2(j − i). There are n − k vertices such that j − i = k, so the total number of edges is
$$\sum_{k=0}^{n-1} 2k(n-k).$$

Exercise 15.2-5

We count the number of times that we reference a different entry of m than the one we are computing, that is, twice the number of times that line 10 runs.
$$\sum_{l=2}^{n} \sum_{i=1}^{n-l+1} \sum_{k=i}^{i+l-2} 2 = \sum_{l=2}^{n} \sum_{i=1}^{n-l+1} 2(l-1) = \sum_{l=2}^{n} 2(l-1)(n-l+1)$$
$$= \sum_{l=1}^{n-1} 2l(n-l) = 2n \sum_{l=1}^{n-1} l - 2\sum_{l=1}^{n-1} l^2 = n^2(n-1) - \frac{(n-1)(n)(2n-1)}{3}$$
$$= n^3 - n^2 - \frac{2n^3 - 3n^2 + n}{3} = \frac{n^3 - n}{3}$$
Exercise 15.2-6

We proceed by induction on the number of matrices. A single matrix has no


pairs of parentheses. Assume that a full parenthesization of an n-element ex-
pression has exactly n − 1 pairs of parentheses. Given a full parenthesization of
an n + 1-element expression, there must exist some k such that we first multiply
B = A1 · · · Ak in some way, then multiply C = Ak+1 · · · An+1 in some way, then
multiply B and C. By our induction hypothesis, we have k − 1 pairs of paren-
theses for the full parenthesization of B and n + 1 − k − 1 pairs of parentheses
for the full parenthesization of C. Adding these together, plus the pair of outer
parentheses for the entire expression, yields k − 1 + n + 1 − k − 1 + 1 = (n + 1) − 1
parentheses, as desired.

Exercise 15.3-1

The runtime of enumerating is just n · P(n), while if we were running RECURSIVE-MATRIX-CHAIN, it would also have to run on all of the internal nodes of the subproblem tree. Also, the enumeration approach wouldn't have as much overhead.

Exercise 15.3-2

Let [i..j] denote the call to Merge Sort to sort the elements in positions i through j of the original array. The recursion tree will have [1..n] as its root, and any node [i..j] with i < j will have [i..⌊(i + j)/2⌋] and [⌊(i + j)/2⌋ + 1..j] as its left and right children, respectively. If i = j, there will be no children. The memoization approach fails to speed up Merge Sort because the subproblems aren't overlapping. Sorting one list of size n isn't the same as sorting another list of size n, so there is no savings in storing solutions to subproblems since each solution is used at most once.

Exercise 15.3-3

This modification of the matrix-chain-multiplication problem still exhibits the optimal substructure property. Suppose we split a maximal-cost multiplication of A1, . . . , An between Ak and Ak+1. Then we must have a maximal-cost multiplication on either side of the split; otherwise, we could substitute a more expensive multiplication for that side, yielding a more expensive multiplication of A1, . . . , An, a contradiction.

Exercise 15.3-4

Suppose that we are given matrices A1 , A2 , A3 , and A4 with dimensions


such that p0 , p1 , p2 , p3 , p4 = 1000, 100, 20, 10, 1000. Then p0 pk p4 is minimized
when k = 3, so we need to solve the subproblem of multiplying A1 A2 A3 ,
and also A4 which is solved automatically. By her algorithm, this is solved
by splitting at k = 2. Thus, the full parenthesization is (((A1 A2 )A3 )A4 ).
This requires 1000 · 100 · 20 + 1000 · 20 · 10 + 1000 · 10 · 1000 = 12, 200, 000
scalar multiplications. On the other hand, suppose we had fully parenthe-
sized the matrices to multiply as ((A1 (A2 A3 ))A4 ). Then we would only require
100 · 20 · 10 + 1000 · 100 · 10 + 1000 · 10 · 1000 = 11, 020, 000 scalar multiplications,
which is fewer than Professor Capulet’s method. Therefore her greedy approach
yields a suboptimal solution.

Exercise 15.3-5

The optimal substructure property doesn't hold because the number of pieces of length i used on one side of the cut affects the number allowed on the other. That is, there is information about the particular solution on one side of the cut that changes what is allowed on the other.
To make this more concrete, suppose the rod was length 4, the limits were l1 = 2, l2 = l3 = l4 = 1, and each piece has the same worth regardless of length. Then, if we make our first cut in the middle, the optimal solution for each of the two leftover rods is to cut it in the middle, which isn't allowed because doing so in both halves yields more pieces of length 1 than l1 permits.

Exercise 15.3-6

First we assume that the commission is always zero. Let k denote a currency which appears in an optimal sequence s of trades to go from currency 1 to currency n. Let pk denote the first part of this sequence, which changes currencies from 1 to k, and let qk denote the rest of the sequence. Then pk and qk are both optimal sequences for changing from 1 to k and from k to n, respectively. To see this, suppose that pk wasn't optimal but that p′k was. Then by changing currencies according to the sequence p′k qk we would have a sequence of changes which is better than s, a contradiction since s was optimal. The same argument applies to qk.
Now suppose that the commissions can take on arbitrary values. Suppose
we have currencies 1 through 6, and r12 = r23 = r34 = r45 = 2, r13 = r35 = 6,
and all other exchanges are such that rij = 100. Let c1 = 0, c2 = 1, and ck = 10
for k ≥ 3. The optimal solution in this setup is to change 1 to 3, then 3 to 5,
for a total cost of 13. An optimal solution for changing 1 to 3 involves changing
1 to 2 then 2 to 3, for a cost of 5, and an optimal solution for changing 3 to 5
is to change 3 to 4 then 4 to 5, for a total cost of 5. However, combining these
optimal solutions to subproblems means making more exchanges overall, and
the total cost of combining them is 18, which is not optimal.

Exercise 15.4-1

An LCS is ⟨1, 0, 1, 0, 1, 0⟩. A concise way of seeing this is by noticing that the first list contains a “00” while the second contains none, and the second list contains two copies of “11” while the first contains none. In order to reconcile this, any LCS will have to skip at least three elements. Since we managed to do this, we know that our common subsequence is maximal.
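The length can be confirmed with a sketch of the classic O(mn) LCS table (lcs_length is our name):

```python
def lcs_length(x, y):
    # c[i][j] = length of an LCS of x[0..i-1] and y[0..j-1]
    m, n = len(x), len(y)
    c = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if x[i - 1] == y[j - 1]:
                c[i][j] = c[i - 1][j - 1] + 1
            else:
                c[i][j] = max(c[i - 1][j], c[i][j - 1])
    return c[m][n]
```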

Exercise 15.4-2

The algorithm PRINT-LCS(c,X,Y) prints the LCS of X and Y from the completed c table, computing the necessary directional information on the fly rather than storing a b table. It runs in O(m + n) time because each iteration of the while loop decrements either i or j or both by 1, and the loop halts when either reaches 0. The final for loop iterates at most min(m, n) times.
Exercise 15.4-3

See the memoized algorithms MEMO-LCS-LENGTH and MEMO-LCS-LENGTH-AUX given below.

Exercise 15.4-4

Since we only use the previous row of the c table to compute the current
row, we compute as normal, but when we go to compute row k, we free row k −2
since we will never need it again to compute the length. To use even less space,
observe that to compute c[i, j], all we need are the entries c[i−1, j], c[i−1, j −1],
and c[i, j − 1]. Thus, we can free up entry-by-entry those from the previous row
which we will never need again, reducing the space requirement to min(m, n).
Computing the next entry from the three that it depends on takes O(1) time
and space.
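The entry-by-entry space reduction can be sketched in Python with one row plus one scalar holding the diagonal entry (lcs_length_1row is our name):

```python
def lcs_length_1row(x, y):
    # LCS length using a single row of the c table plus a scalar carrying
    # c[i-1][j-1]; O(min(m, n)) extra space
    if len(y) > len(x):
        x, y = y, x                    # make y the shorter sequence
    prev_row = [0] * (len(y) + 1)
    for xi in x:
        diag = 0                       # c[i-1][0]
        for j in range(1, len(y) + 1):
            tmp = prev_row[j]          # c[i-1][j], saved before overwriting
            if xi == y[j - 1]:
                prev_row[j] = diag + 1
            else:
                prev_row[j] = max(prev_row[j], prev_row[j - 1])
            diag = tmp
    return prev_row[len(y)]
```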

Exercise 15.4-5

Given a list of numbers L, make a copy of L called L0 and then sort L0 .

Algorithm 3 PRINT-LCS(c,X,Y)
n = c[X.length, Y.length]
Initialize an array s of length n
i = X.length and j = Y.length
while i > 0 and j > 0 do
if xi == yj then
s[n] = xi
n=n−1
i=i−1
j =j−1
else if c[i − 1, j] ≥ c[i, j − 1] then
i=i−1
else
j =j−1
end if
end while
for k = 1 to s.length do
Print s[k]
end for

Algorithm 4 MEMO-LCS-LENGTH-AUX(X,Y,c,b)
m = |X|
n = |Y |
if c[m, n] != 0 or m == 0 or n == 0 then
return c[m, n]
end if
if xm == yn then
b[m, n] = ↖
c[m, n] = MEMO-LCS-LENGTH-AUX(X[1, . . . , m − 1], Y[1, . . . , n − 1], c, b) + 1
else if MEMO-LCS-LENGTH-AUX(X[1, . . . , m − 1], Y, c, b) ≥ MEMO-LCS-LENGTH-AUX(X, Y[1, . . . , n − 1], c, b) then
b[m, n] = ↑
c[m, n] = MEMO-LCS-LENGTH-AUX(X[1, . . . , m − 1], Y, c, b)
else
b[m, n] = ←
c[m, n] = MEMO-LCS-LENGTH-AUX(X, Y[1, . . . , n − 1], c, b)
end if
return c[m, n]

Algorithm 5 MEMO-LCS-LENGTH(X,Y)
let c be a (passed by reference) |X| by |Y | array initialized to 0
let b be a (passed by reference) |X| by |Y | array
MEMO-LCS-LENGTH-AUX(X,Y,c,b)
return c and b

Then, just run the LCS algorithm on these two lists. The longest common sub-
sequence must be monotone increasing because it is a subsequence of L0 which
is sorted. It is also the longest monotone increasing subsequence because being
a subsequence of L0 only adds the restriction that the subsequence must be
monotone increasing. Since |L| = |L0 | = n, and sorting L can be done in o(n2 )
time, the final running time will be O(|L||L0 |) = O(n2 ).

Exercise 15.4-6

The algorithm LONG-MONOTONIC(S) returns the longest monotonically


increasing subsequence of S, where S has length n. The algorithm works as
follows: a new array B will be created such that B[i] contains the last value of
a longest monotonically increasing subsequence of length i. A new array C will
be such that C[i] contains the monotonically increasing subsequence of length
i with smallest last element seen so far. To analyze the runtime, observe that
the entries of B are in sorted order, so we can execute line 9 in O(log(n)) time.
Since every other line in the for-loop takes constant time, the total run-time is
O(n log n).

Algorithm 6 LONG-MONOTONIC(S)
1: Initialize an array B of length n, where every value is set equal to ∞.
2: Initialize an array C of empty lists length n.
3: L = 1
4: for i = 1 to n do
5: if S[i] < B[1] then
6: B[1] = S[i]
7: C[1].head.key = S[i]
8: else
9: Let j be the largest index of B such that B[j] < S[i]
10: B[j + 1] = S[i]
11: C[j + 1] = C[j]
12: C[j + 1].insert(S[i])
13: if j + 1 > L then
14: L=L+1
15: end if
16: end if
17: end for
18: Print C[L]
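The length-only part of this idea has a compact Python sketch using binary search over the array of smallest tails (longest_increasing is our name; recovering the subsequence itself needs the C lists as in the pseudocode above):

```python
import bisect

def longest_increasing(seq):
    # tails[i] = smallest possible last element of a strictly increasing
    # subsequence of length i + 1 seen so far
    tails = []
    for v in seq:
        j = bisect.bisect_left(tails, v)   # binary search, O(log n) per element
        if j == len(tails):
            tails.append(v)                # extend the longest subsequence
        else:
            tails[j] = v                   # found a smaller tail for length j+1
    return len(tails)
```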

Exercise 15.5-1

Run the given algorithm CONSTRUCT-OPTIMAL-BST(root, 1, n), where n is the number of keys.
Exercise 15.5-2

Algorithm 7 CONSTRUCT-OPTIMAL-BST(root,i,j)
if i > j then
return nil
end if
if i == j then
return a node with key ki and whose children are nil
end if
let n be a node with key kroot[i,j]
n.left = CONSTRUCT-OPTIMAL-BST(root,i,root[i,j]-1)
n.right = CONSTRUCT-OPTIMAL-BST(root,root[i,j]+1,j)
return n

After painstakingly working through the algorithm and building up the ta-
bles, we find that the cost of the optimal binary search tree is 3.12. The tree
takes the following structure:
[Optimal tree: the root is k5; k5's left child is k2 and right child is k7; k2 has children k1 and k3; k3 has right child k4; k7 has left child k6.]
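The cost 3.12 can be reproduced with a sketch of the textbook OPTIMAL-BST recurrence (optimal_bst is our name; the p and q values below are the instance from the exercise statement):

```python
def optimal_bst(p, q, n):
    # e[i][j]: expected search cost of an optimal BST on keys k_i..k_j,
    # 1-indexed; p[0] is unused, q[i] is the probability of dummy key d_i
    e = [[0.0] * (n + 1) for _ in range(n + 2)]
    w = [[0.0] * (n + 1) for _ in range(n + 2)]
    for i in range(1, n + 2):
        e[i][i - 1] = q[i - 1]           # empty subtree: just the dummy key
        w[i][i - 1] = q[i - 1]
    for l in range(1, n + 1):            # subtree size
        for i in range(1, n - l + 2):
            j = i + l - 1
            w[i][j] = w[i][j - 1] + p[j] + q[j]
            e[i][j] = min(e[i][r - 1] + e[r + 1][j] + w[i][j]
                          for r in range(i, j + 1))
    return e[1][n]

p = [0, 0.04, 0.06, 0.08, 0.02, 0.10, 0.12, 0.14]
q = [0.06, 0.06, 0.06, 0.06, 0.05, 0.05, 0.05, 0.05]
```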

Exercise 15.5-3

Each of the Θ(n2 ) values of w[i, j] would require computing those two sums,
both of which can be of size O(n), so, the asymptotic runtime would increase
to O(n3 ).

Exercise 15.5-4

Change the for loop of line 10 in OPTIMAL-BST to “for r = r[i, j − 1] to


r[i + 1, j]”. Knuth’s result implies that it is sufficient to only check these values because the optimal root found in this range is in fact the optimal root of some binary search tree. The time spent within the for loop of line 6 is now Θ(n). This is because the bounds on r in the new for loop of line 10 are nonoverlapping. To see this, suppose we have fixed l and i. On one iteration of the for loop of line 6, the upper bound on r is r[i + 1, j] = r[i + 1, i + l − 1]. When we increment i by 1 we increase j by 1. However, the lower bound on r for the next iteration subtracts this, so the lower bound on the next iteration is r[i + 1, j + 1 − 1] = r[i + 1, j]. Thus, the total time spent in the for loop of line 6 is Θ(n). Since we iterate the outer for loop of line 5 n times, the total runtime is Θ(n²).

Problem 15-1

Since any longest simple path must start by going through some edge out of s, and thereafter cannot pass through s again because it must be simple, we have

$$LONGEST(G, s, t) = 1 + \max_{s' \sim s}\{LONGEST(G|_{V \setminus \{s\}}, s', t)\}$$

where s′ ∼ s denotes that s′ is adjacent to s, with the base case that if s = t then we have a length of 0.


A naive bound would be to say that since the graph we are considering is a subset of the vertices, and the other two arguments to the substructure are distinguished vertices, the runtime will be O(|V |² 2^{|V |}). We can see that we may actually have to consider this many subproblems by taking G to be the complete graph on |V | vertices.
Problem 15-2

Let A[1..n] denote the array which contains the given word. First note that
for a palindrome to be a subsequence we must be able to divide the input word
at some position i, and then solve the longest common subsequence problem on
A[1..i] and A[i + 1..n], possibly adding in an extra letter to account for palin-
dromes with a central letter. Since there are n places at which we could split the
input word and the LCS problem takes time O(n2 ), we can solve the palindrome
problem in time O(n3 ).
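A common alternative formulation, different from the split-based approach above, computes the longest palindromic subsequence as the LCS of the word with its own reversal; a minimal sketch (longest_palindrome_subseq is our name):

```python
def longest_palindrome_subseq(s):
    # LPS(s) equals LCS(s, reverse(s)): any common subsequence of s and its
    # reversal can be turned into a palindromic subsequence of the same length
    t = s[::-1]
    n = len(s)
    c = [[0] * (n + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(1, n + 1):
            if s[i - 1] == t[j - 1]:
                c[i][j] = c[i - 1][j - 1] + 1
            else:
                c[i][j] = max(c[i - 1][j], c[i][j - 1])
    return c[n][n]
```

This runs in O(n²), improving on the O(n³) bound above.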

Problem 15-3

First sort all the points based on their x coordinate. To index our subproblems, we will give the rightmost point on each of the two paths, the outgoing one and the returning one. The desired result will then be the subproblem indexed by (v, v), where v is the rightmost point. Suppose by symmetry that we are further along on the first path, say it has reached the ith point while the other has reached the jth one. If i > j + 1, then the cost must be the distance from the (i − 1)st point to the ith plus the solution to the subproblem obtained where we replace i with i − 1. There can be at most O(n²) of these subproblems, and solving each requires considering only a constant number of cases. The other possibility is that j ≤ i ≤ j + 1. In this case, we consider, for every k from 1 to j, the subproblem where we replace i with k, plus the cost from the kth point to the ith point, and take the minimum over all of them. Each such subproblem requires considering O(n) options, but there are only O(n) such subproblems. So, the final runtime is O(n²).

Problem 15-4

First observe that the problem exhibits optimal substructure in the following way: Suppose we know that an optimal solution has k words on the first line. Then we must solve the subproblem of printing neatly words l_{k+1}, . . . , l_n. We build a table of solutions to subproblems to solve the problem using dynamic programming. If $n - 1 + \sum_{k=1}^{n} l_k \leq M$ then put all words on a single line for an optimal solution. In the following algorithm Printing-Neatly(n), C[k] contains the cost of printing neatly words l_k through l_n. We can determine the cost of an optimal solution upon termination by examining C[1]. The entry P[k] contains the position of the last word which should appear on the first line of the optimal solution of words l_k, l_{k+1}, . . . , l_n. Thus, to obtain the optimal way to place the words, we make l_{P[1]} the last word on the first line, l_{P[P[1]+1]} the last word on the second line, and so on.

Algorithm 8 Printing-Neatly(n)
1: Let P[1..n] and C[1..n + 1] be new tables, with C[n + 1] = 0.
2: for k = n downto 1 do
3: if $\sum_{i=k}^{n} l_i + n - k \leq M$ then
4: C[k] = 0, P[k] = n
5: else
6: q = ∞
7: for j = 0 to n − k − 1 do
8: if $\sum_{m=0}^{j} l_{k+m} + j \leq M$ and $(M - j - \sum_{m=0}^{j} l_{k+m})^3 + C[k + j + 1] < q$ then
9: q = $(M - j - \sum_{m=0}^{j} l_{k+m})^3 + C[k + j + 1]$
10: P[k] = k + j
11: end if
12: end for
13: C[k] = q
14: end if
15: end for
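The same bottom-up computation can be sketched in Python (printing_neatly is our name; it assumes, per the problem statement, that every line except the last costs the cube of its trailing spaces):

```python
def printing_neatly(l, M):
    # l: list of word lengths; M: line width; returns minimum total cost
    n = len(l)
    INF = float('inf')
    C = [INF] * (n + 1)
    C[n] = 0                       # no words left: nothing to pay
    for k in range(n - 1, -1, -1):
        width = -1                 # becomes sum of word lengths plus spaces
        for j in range(k, n):
            width += l[j] + 1      # add word j and one separating space
            if width > M:
                break              # words k..j no longer fit on one line
            # the last line is free; other lines cost cube of trailing spaces
            cost = 0 if j == n - 1 else (M - width) ** 3 + C[j + 1]
            C[k] = min(C[k], cost)
    return C[0]
```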

Problem 15-5

a. We will index our subproblems by two integers, 1 ≤ i ≤ m and 1 ≤ j ≤ n. We will let i indicate the leftmost element of x we have not yet processed and j indicate the leftmost element of y we have not yet found a match for. For a solution, we call EDIT(x, y, 1, 1).
b. We will set cost(delete) = cost(insert) = 2, cost(copy) = −1, cost(replace) = 1, and cost(twiddle) = cost(kill) = ∞. Then a minimum cost translation of the first string into the second corresponds to an alignment, where we view a copy or a replace as incrementing a pointer for both strings, and an insert as putting a space at the current position of the pointer in the first string. A
Algorithm 9 EDIT(x,y,i,j)
let m = x.length and n = y.length
if i = m then
return (n-j)cost(insert)
end if
if j = n then
return min{(m − i)cost(delete), cost(kill)}
end if
o1 , . . . , o5 initialized to ∞
if x[i] = y[j] then
o1 = cost(copy) + EDIT (x, y, i + 1, j + 1)
end if
o2 = cost(replace) + EDIT (x, y, i + 1, j + 1)
o3 = cost(delete) + EDIT (x, y, i + 1, j)
o4 = cost(insert) + EDIT (x, y, i, j + 1)
if i < m − 1 and j < n − 1 then
if x[i] = y[j + 1] and x[i + 1] = y[j] then
o5 = cost(twiddle) + EDIT (x, y, i + 2, j + 2)
end if
end if
return $\min_{i \in [5]}\{o_i\}$

delete operation means putting a space in the current position in the second
string. Since twiddles and kills have infinite costs, we will have neither of
them in a minimal cost solution. The final value for the alignment will be
the negative of the minimum cost sequence of edits.
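A bottom-up sketch of the same edit-distance recurrence in Python (edit_cost is our name; twiddle and kill are omitted since infinite-cost operations never appear in a minimum-cost solution):

```python
def edit_cost(x, y, cost):
    # d[i][j]: min cost of transforming x[0..i-1] into y[0..j-1];
    # cost is a dict with keys 'copy', 'replace', 'delete', 'insert'
    m, n = len(x), len(y)
    INF = float('inf')
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        for j in range(n + 1):
            if i == 0 and j == 0:
                continue
            best = INF
            if i > 0 and j > 0 and x[i - 1] == y[j - 1]:
                best = min(best, d[i - 1][j - 1] + cost['copy'])
            if i > 0 and j > 0:
                best = min(best, d[i - 1][j - 1] + cost['replace'])
            if i > 0:
                best = min(best, d[i - 1][j] + cost['delete'])
            if j > 0:
                best = min(best, d[i][j - 1] + cost['insert'])
            d[i][j] = best
    return d[m][n]

# the alignment-style costs from part (b)
costs = {'copy': -1, 'replace': 1, 'delete': 2, 'insert': 2}
```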

Problem 15-6

The problem exhibits optimal substructure in the following way: If the root r
is included in an optimal solution, then we must solve the optimal subproblems
rooted at the grandchildren of r. If r is not included, then we must solve
the optimal subproblems on trees rooted at the children of r. The dynamic
programming algorithm to solve this problem works as follows: We make a
table C indexed by vertices which tells us the optimal conviviality ranking of a
guest list obtained from the subtree with root at that vertex. We also make a
table G such that G[i] tells us the guest list we would use when vertex i is at
the root. Let T be the tree of guests. To solve the problem, we need to examine
the guest list stored at G[T.root]. First solve the problem at each leaf L. If
the conviviality ranking at L is positive, G[L] = {L} and C[L] = L.conviv.
Otherwise G[L] = ∅ and C[L] = 0. Iteratively solve the subproblems located
at parents of nodes at which the subproblem has been solved. In general for a

node x,
$$C[x] = \max\left(\sum_{y \text{ a child of } x} C[y],\; x.conviv + \sum_{y \text{ a grandchild of } x} C[y]\right).$$

The runtime of the algorithm is O(n2 ) where n is the number of vertices,


because we solve n subproblems, each in constant time, but the tree traversals
required to find the appropriate next node to solve could take linear time.
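The tree recurrence can be sketched in Python (plan_party is our name; it returns, for each vertex, the best totals with the vertex excluded and included, which is equivalent to the C table above):

```python
def plan_party(children, conviv, root):
    # children: dict mapping a vertex to its list of children;
    # conviv: dict of conviviality ratings; returns best total conviviality
    def solve(v):
        excl, incl = 0, conviv[v]
        for c in children.get(v, []):
            c_excl, c_incl = solve(c)
            excl += max(c_excl, c_incl)   # v absent: child free to attend
            incl += c_excl                # v present: children must stay home
        return excl, incl
    return max(solve(root))
```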

Problem 15-7

a. Our substructure will consist of trying to find suffixes of s of length one less along all the edges leaving ν0 with label σ1. If any of them has a solution, then there is a solution; if none does, then there is none. See the algorithm VITERBI for details.

Algorithm 10 V IT ERBI(G, s, ν0 )
if s.length = 0 then
return ν0
end if
for edges (ν0 , ν1 ) ∈ E for some ν1 do
if σ(ν0 , ν1 ) = σ1 then
res = V IT ERBI(G, (σ2 , . . . , σk ), ν1 )
if res != NO-SUCH-PATH then
return ν0 , res
end if
end if
end for
return NO-SUCH-PATH

Since the subproblems are indexed by a suffix of s (of which there are only k) and a vertex in the graph, there are at most O(k|V |) different possible arguments. Since each run may require testing an edge going to every other vertex, and each iteration of the for loop takes at most a constant amount of time other than the recursive call to VITERBI, the final runtime is O(k|V |²).
b. For this modification, we will need to try all the possible edges leaving from
ν0 instead of stopping as soon as we find one that works. The substructure is
very similar. We’ll make it so that instead of just returning the sequence, we’ll
have the algorithm also return the probability of that maximum probability
sequence, calling the fields seq and prob respectively. See the algorithm
PROB-VITERBI
Since the runtime is indexed by the same things, we have that we will call
it with at most O(k|V |) different possible arguments. Since each run may

Algorithm 11 PROB-VITERBI(G, s, ν0)
if s.length = 0 then
return sols with sols.seq = ν0 and sols.prob = 1
end if
let sols.seq = NO-SUCH-PATH, and sols.prob = 0
for edges (ν0 , ν1 ) ∈ E for some ν1 do
if σ(ν0 , ν1 ) = σ1 then
res = PROB-VITERBI(G, (σ2 , . . . , σk ), ν1)
if p(ν0 , ν1 ) · res.prob ≥ sols.prob then
sols.prob = p(ν0 , ν1 ) · res.prob and sols.seq = ν0 , res.seq
end if
end if
end for
return sols

require testing an edge going to every other vertex, and each iteration of the for loop takes at most a constant amount of time other than the call to PROB-VITERBI, the final runtime is O(k|V |²).
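Part (b) can be sketched in Python (prob_viterbi is our name; memoization on the (suffix, vertex) pairs is omitted from the sketch for brevity):

```python
def prob_viterbi(adj, s, v0):
    # adj[u] = list of (v, label, prob) edges; returns (path, prob) of a
    # maximum-probability path from v0 whose edge labels spell out s
    if not s:
        return [v0], 1.0
    best_path, best_prob = None, 0.0
    for v1, label, p in adj.get(v0, []):
        if label == s[0]:
            rest, rest_prob = prob_viterbi(adj, s[1:], v1)
            if rest is not None and p * rest_prob > best_prob:
                best_path, best_prob = [v0] + rest, p * rest_prob
    return best_path, best_prob
```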

Problem 15-8

a. If n > 1 then for every choice of pixel at a given row, we have at least 2
choices of pixel in the next row to add to the seam (3 if we’re not in column
1 or n). Thus the total number of possibilities is bounded below by 2m .

b. We create a table D[1..m, 1..n] such that D[i, j] stores the disruption of an
optimal seam ending at position [i, j], which started in row 1. We also create
a table S[i, j] which stores the list of ordered pairs indicating which pixels
were used to create the optimal seam ending at position (i, j). To find the
solution to the problem, we look for the minimum k entry in row m of table
D, and use the list of pixels stored at S[m, k] to determine the optimal seam.
To simplify the algorithm Seam(A), let M IN (a, b, c) be the function which
returns −1 if a is the minimum, 0 if b is the minimum, and 1 if c is the mini-
mum value from among a, b, and c. The time complexity of the algorithm is
O(mn).

Problem 15-9

The subproblems will be indexed by contiguous subarrays of the array of cuts needed to be made. We try making each possible cut, and take the one with cheapest cost. Since there are at most m cuts to try, and there are at most m² possible subarrays to index the subproblems with, the m dependence of the solution is O(m³). Also, since each of the additions is of a number that
Algorithm 12 Seam(A)
Initialize tables D[1..m, 1..n] of zeros and S[1..m, 1..n] of empty lists
for i = 1 to n do
S[1, i] = (1, i)
D[1, i] = d1i
end for
for i = 2 to m do
for j = 1 to n do
if j == 1 then //Handles the left-edge case
if D[i − 1, j] < D[i − 1, j + 1] then
D[i, j] = D[i − 1, j] + dij
S[i, j] = S[i − 1, j].insert(i, j)
else
D[i, j] = D[i − 1, j + 1] + dij
S[i, j] = S[i − 1, j + 1].insert(i, j)
end if
else if j == n then //Handles the right-edge case
if D[i − 1, j − 1] < D[i − 1, j] then
D[i, j] = D[i − 1, j − 1] + dij
S[i, j] = S[i − 1, j − 1].insert(i, j)
else
D[i, j] = D[i − 1, j] + dij
S[i, j] = S[i − 1, j].insert(i, j)
end if
else //Handles the interior case
x = M IN (D[i − 1, j − 1], D[i − 1, j], D[i − 1, j + 1])
D[i, j] = D[i − 1, j + x] + dij
S[i, j] = S[i − 1, j + x].insert(i, j)
end if
end for
end for
q=1
for j = 1 to n do
if D[m, j] < D[m, q] then q = j
end if
end for
Print the list stored at S[m, q].
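The D table computation can be sketched in Python keeping only one row at a time (min_seam_disruption is our name; it returns the minimal total disruption, without the S lists that recover the seam):

```python
def min_seam_disruption(d):
    # d[i][j]: disruption measure of pixel (i, j); a seam picks one pixel
    # per row, moving at most one column left or right between rows
    m, n = len(d), len(d[0])
    prev = list(d[0])
    for i in range(1, m):
        cur = []
        for j in range(n):
            lo, hi = max(0, j - 1), min(n - 1, j + 1)   # edge cases clamp
            cur.append(d[i][j] + min(prev[lo:hi + 1]))
        prev = cur
    return min(prev)
```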

is O(n), each of the iterations of the for loop may take time O(lg(n) + lg(m)), so the final runtime is O(m³ lg(n)). The given algorithm will return (cost, seq), where cost is the cost of the cheapest sequence, and seq is the sequence of cuts to make.

Algorithm 13 CUT-STRING(L,i,j,l,r)
if i > j then
return (0, [])
end if
mincost = ∞
for k from i to j do
cost = (r − l) + CUT-STRING(L, i, k − 1, l, L[k]).cost + CUT-STRING(L, k + 1, j, L[k], r).cost
if cost < mincost then
mincost = cost
minseq = L[k] concatenated with the sequences returned from CUT-STRING(L, i, k − 1, l, L[k]) and from CUT-STRING(L, k + 1, j, L[k], r)
end if
end for
return (mincost, minseq)
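A memoized Python sketch of the same recurrence (cut_string is our name; it returns only the minimum cost, charging each cut the length of the piece being cut):

```python
from functools import lru_cache

def cut_string(n, cuts):
    # n: string length; cuts: cut positions strictly between 0 and n;
    # cutting a piece spanning (l, r) costs r - l
    @lru_cache(maxsize=None)
    def best(l, r):
        inner = [c for c in cuts if l < c < r]
        if not inner:
            return 0
        return min((r - l) + best(l, c) + best(c, r) for c in inner)
    return best(0, n)
```

On the instance from the problem statement (n = 20, cuts at 2, 8, and 10), cutting at 10 first, then 2, then 8 is optimal.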

Problem 15-10

a. Without loss of generality, suppose that there exists an optimal solution S which involves investing d1 dollars into investment k and d2 dollars into investment m in year 1. Further, suppose in this optimal solution, you don't move your money for the first j years. If r_{k1} + r_{k2} + . . . + r_{kj} ≥ r_{m1} + r_{m2} + . . . + r_{mj} then we can perform the usual cut-and-paste maneuver and instead invest d1 + d2 dollars into investment k for j years. Keeping all other investments the same, this results in a strategy which is at least as profitable as S, but has reduced the number of different investments in a given span of years by 1. Continuing in this way, we can reduce the optimal strategy to consist of only a single investment each year.

b. If a particular investment strategy is the year-one plan for an optimal investment strategy, then we must solve two kinds of optimal subproblem: either we maintain the strategy for an additional year, not incurring the money-moving fee, or we move the money, which amounts to solving the problem where we ignore all information from year 1. Thus, the problem exhibits optimal substructure.

c. The algorithm works as follows: We build tables I and R of size 10 such that
I[i] tells which investment should be made (with all money) in year i, and

R[i] gives the total return on the investment strategy in years i through 10.

Algorithm 14 Invest(d,n)
Initialize tables I and R of size 11, all filled with zeros
for k = 10 downto 1 do
q=1
for i = 1 to n do
if rik > rqk then // i now holds the investment which looks best for
a given year
q=i
end if
end for
if R[k + 1] + drI[k+1]k − f1 > R[k + 1] + drqk − f2 then //If revenue is
greater when money is not moved
R[k] = R[k + 1] + drI[k+1]k − f1
I[k] = I[k + 1]
else
R[k] = R[k + 1] + drqk − f2
I[k] = q
end if
end for
Return I as an optimal strategy with return R[1].

d. The previous investment strategy was independent of the amount of money


you started with. When there is a cap on the amount you can invest, the
amount you have to invest in the next year becomes relevant. If we know
the year-one-strategy of an optimal investment, and we know that we need
to move money after the first year, we’re left with the problem of investing a
different initial amount of money, so we’d have to solve a subproblem for every
possible initial amount of money. Since there is no bound on the returns,
there’s also no bound on the number of subproblems we need to solve.
Problem 15-11

Our subproblems will be indexed by an integer i ∈ [n] and another integer j ∈ [D]. i will indicate how many months have passed, that is, we will restrict ourselves to only caring about (d_i , . . . , d_n ). j will indicate how many machines we have in stock initially. Then, the recurrence will try producing all possible numbers of machines from 1 to D. Since the index space has size O(nD) and we are only taking the minimum cost over D many options when computing a particular subproblem, the total runtime will be O(nD²).

Problem 15-12

We will make an (N + 1) by (X + 1) table B, together with an auxiliary array recording the choices made. The runtime of the algorithm is O(N XP ).

Algorithm 15 Baseball(N,X,P)
Initialize an N + 1 by X + 1 table B
Initialize an array P of length N
for i = 0 to N do
B[i, 0] = 0
end for
for j = 1 to X do
B[0, j] = 0
end for
for i = 1 to N do
for j = 1 to X do
if j < i.cost then
B[i, j] = B[i − 1, j]
else
q = B[i − 1, j]
p=0
for k = 1 to P do
if B[i − 1, j − i.cost] + i.value > q then
q = B[i − 1, j − i.cost] + i.value
p=k
end if
end for
B[i, j] = q
P [i] = p
end if
end for
end for
Print: The total VORP is B[N, X] and the players are:
i=N
j=X
C=0
for k = 1 to N do //Prints the players from the table
if B[i, j] ≠ B[i − 1, j] then
Print P [i]
j = j − i.cost
C = C + i.cost
end if
i=i−1
end for
Print: The total cost is C
