Dynamic programming – Probability, gambling and a die

Let’s say we have a fair 6-sided die. Player A can earn money based on what shows up when the die is rolled. The rules of the game are as follows: Player A will roll the die up to 3 times. After each roll, Player A has two choices: (1) He can take money in dollars equal to the number showing on the upturned face of the die (i.e. if 5 shows on the die, he can take $5). In this case, the die is not rolled any more, and the game ends. (2) He can discard this roll of the die and roll again. Note that the die can be rolled at most 3 times.

Question — What is the expected payoff for Player A?

——–

Let’s say that instead of 3 rolls of the die, the game allowed for only 1 roll. What is the expected payoff for Player A in this case? It is clearly equal to the expected value of the number that shows up when the die is rolled. And since the die is fair, this expectation is equal to $(1 + 2 + 3 + 4 + 5 + 6)/6 = 3.5$. Player A has no choice but to accept whatever shows up on the die, because only one roll is possible. Hence, in this game, the expected payoff for Player A is $3.5.

Now, consider what happens if instead of 1 roll, we allow for up to 2 rolls.

Now, if the 1st roll is a 6, obviously Player A would pocket $6 and end the game there itself. Also, if the 1st roll is a 1, Player A should always roll again [because irrespective of what shows up on the 2nd roll, it cannot be less than $1, and hence Player A has nothing to lose by discarding the 1st roll and rolling again].

Now consider what happens if the 1st roll is a 3. Now, Player A is faced with a predicament. What should he do? Should he stop there and go home with $3, or should he discard this 3 in the hope of higher returns on the 2nd and final roll? This is where having learnt probability well in school helps. We know that the expected value of the number that shows up when the die is rolled once is 3.5. Hence, if on the 1st roll Player A gets a number greater than 3.5, i.e. either 4, 5 or 6, he should stop there and take the corresponding amount of money. However, if he gets either 1, 2 or 3 on his 1st roll, he should discard it and roll again, because the expected value of the 2nd roll is 3.5.

Hence, the expected payoff for Player A from a game which allows for at most 2 rolls is: $\frac{3}{6} \cdot 3.5 + \frac{1}{6} \cdot 4 + \frac{1}{6} \cdot 5 + \frac{1}{6} \cdot 6 = 4.25$. (Here, $3/6$ is the probability of getting a 1, 2 or 3 on the 1st roll, in which case the expected payoff becomes 3.5.)

——-

Now, coming back to our original game, we can decide how to proceed after the die is rolled once. We know that the expected payoff from 2 rolls of the die is $4.25.

Hence the strategy of Player A should be the following:

– If the 1st roll of the die yields either a 5 or a 6, take the money, and end the game.

– Else, discard the 1st roll, and continue the game.

The expected payoff can be computed easily based on the preceding observations:

Expected payoff for Player A = $\frac{4}{6} \cdot 4.25 + \frac{1}{6} \cdot 5 + \frac{1}{6} \cdot 6 = \frac{14}{3} \approx 4.67$.

———-

This is a classic question illustrating the principle of dynamic programming. Let’s say the game allows for the die to be rolled up to $n$ times.

Let $E[k]$ denote the expected payoff for Player A in a game that allows up to $n-k$ rolls of the die.

The strategy for Player A after roll $k$ should be the following:

— If the number showing on the die is greater than $E[k]$, take the money and stop the game.

— Else, discard that roll, and decide to continue on to the next roll.

——-

The recurrence relation for computing the $E[k]$ values can be derived in terms of $E[k+1]$, as was done in the case of the 3-roll example that we saw earlier: $E[k] = \frac{1}{6} \sum_{f=1}^{6} \max \{f, E[k+1]\}$, with the base case $E[n-1] = 3.5$. Once we have done that, we can calculate the values in the order $E[n-1]$, $E[n-2]$, and so on, until $E[0]$, which is the answer.

——-

Dynamic Programming – Maximum sum contiguous subsequence

We are given an array A having n integers. What we want to find is a contiguous subsequence (A[i], A[i+1], …, A[j]) such that the sum of the elements in that subsequence is maximized. (Note that, in general, there may be more than one such optimal subsequence.)

Note that if all elements in A are non-negative, we could directly return the entire array as the required optimal subsequence.

APPROACH 1

A simple brute-force method could be to compute the sums of all possible subsequences in A. How many subsequences are there? There are $n + (n-1) + (n-2) + \ldots + 1 = n(n+1)/2 = O(n^2)$ possible subsequences. Computing the sum of a subsequence takes O(n) in the worst case. Hence, the running time of the algorithm would be $O(n^3)$.

In C++, we could write the following function to do what was explained above:

// start and end are the starting and ending indices of an optimal subsequence.
void f (int* A, int n, int& start, int& end)
{
    int sum, max = A[0];
    for (int i = 0; i < n; i++)
        for (int j = i; j < n; j++)
        {
            sum = 0;
            for (int k = i; k <= j; k++)
                sum += A[k];
            if (sum >= max)
            {
                start = i;
                end = j;
                max = sum;
            }
        }
}

————

APPROACH 2:

We can improve the running time to $O(n^2)$ by being a bit more clever in computing the sums of different subsequences. We observe that the sum of the subsequence A[i … j+1] = A[j+1] + sum of the subsequence A[i … j].

In C++, we could write as follows:

void f (int* A, int n, int& start, int& end)
{
    int sum, max = A[0];
    for (int i = 0; i < n; i++)
    {
        sum = 0;
        for (int j = i; j < n; j++)
        {
            sum += A[j];
            if (sum >= max)
            {
                start = i;
                end = j;
                max = sum;
            }
        }
    }
}

———–

APPROACH 3:

Using dynamic programming, we can solve the problem in linear time.

We consider a linear number of subproblems, each of which can be solved using previously solved subproblems in constant time, thus giving a running time of $O(n)$.

Let $S[k]$ denote the sum of a maximum sum contiguous subsequence ending exactly at index $k$.

Thus, we have that:

$S[k+1] = \max \{S[k] + A[k+1], A[k+1] \}$ (for all $0 \leq k \leq n-2$)

Also, S[0] = A[0].

——–

Using the above recurrence relation, we can compute the sum of the optimal subsequence for array A, which would just be the maximum over $S[i]$ for $0 \leq i \leq n-1$.

Since we are required to output the starting and ending indices of an optimal subsequence, we would use another array $T$ where $T[i]$ would store the starting index for a maximum sum contiguous subsequence ending at index i.

In pseudocode form, the algorithm would look thus:

Create arrays S and T, each of size n.

S[0] = A[0];
T[0] = 0;
max = S[0];
max_start = 0; max_end = 0;

For i going from 1 to n-1:
    // We know that S[i] = max { S[i-1] + A[i], A[i] }.
    If (S[i-1] > 0)
        S[i] = S[i-1] + A[i];
        T[i] = T[i-1];
    Else
        S[i] = A[i];
        T[i] = i;
    If (S[i] > max)
        max_start = T[i];
        max_end = i;
        max = S[i];
EndFor.

Output max_start and max_end.

———–

The above algorithm takes $O(n)$ time and $O(n)$ space.

——–

We can however improve upon the space requirements, reducing it to $O(1)$. The key idea is this: for computing S[i] and T[i], all we need are the values of S[i-1] and T[i-1] apart from the given array A. Hence, there is no need to store all $n$ values of the S and T arrays.

We could proceed as follows:

max = A[0];
max_start = 0; max_end = 0;
S = A[0];
T = 0;

For i going from 1 to n-1:
    // S = max { S + A[i], A[i] }
    If (S > 0)
        S = S + A[i];
    Else
        S = A[i];
        T = i;
    If (S > max)
        max_start = T;
        max_end = i;
        max = S;
EndFor.

Output max_start and max_end.

———–

The above algorithm takes $O(n)$ time and $O(1)$ space.

———-

Kite-cutting : More on dynamic programming – 4

Question. We are given a sheet of rectangular paper of length L and width W (where L and W are integral). We need to cut it in order to make rectangular kites. We have a set of n types of kites, each of integral dimensions, which we can make. Kite i has length $l_i$, width $w_i$ and sells for a profit of $p_i$. As is obvious, we would like to maximize the profit we can make from our rectangular sheet of paper.

Now, we are given a cutting device to cut the sheet. The cutting device can cut a sheet of paper only either horizontally or vertically. What this means is that we cannot cut a sheet into arbitrary shapes. Given a rectangular sheet to start with, the above condition implies that we will always obtain 2 smaller rectangles from it. The smaller rectangles can be obtained by either cutting along the width or along the length.

Also, we are told that there is no restriction on the number of kites we are allowed to make of any particular type.

Solution.

We can solve this in a fairly straightforward fashion using dynamic programming.

Let’s denote by $OPT(x,y)$ the maximum profit we can derive out of a sheet of paper of length x and width y.

Thus, our aim is to compute $OPT(L,W)$.

Now, we know that we can cut the $x \times y$ dimension sheet only either along its length or along its width (or we can also choose not to cut the sheet at all).

If we cut along the length, the 2 smaller rectangles obtained are of sizes $x \times w$ and $x \times (y-w)$, where $w$ lies in the range $\{1, \ldots, y-1\}$.

Similarly, one can cut width-wise.

Also, let us denote by $P(x,y)$ the profit we would get by directly selling (i.e. without cutting) our $x \times y$ sheet of paper. (This is nonzero only if there exists some kite type which has exactly these dimensions; otherwise $P(x,y) = 0$.)

——

We obtain the following recurrence based on the above observations:

$OPT (x,y) = \max \{ Term1, Term2, P(x,y) \}$, where

$Term1 = \max \{ OPT(x, w) + OPT(x, y-w) : w \in \{1, \ldots, y-1\} \}$,

$Term2 = \max \{ OPT(l, y) + OPT(x-l, y) : l \in \{1, \ldots, x-1\} \}$.

——-

Further, we have the following obvious results:

$OPT(0, y) = 0$ for all $y$.

$OPT (x, 0) = 0$ for all $x$.

——–

Using the above recurrence, we can compute the OPT table as follows:

——

// in C++
// we assume we have already computed P[x][y]
// (P[x][y] = 0 if no kite type measures exactly x by y).

int OPT[L+1][W+1];

for (int j = 0; j <= W; j++)
    OPT[0][j] = 0;
for (int i = 0; i <= L; i++)
    OPT[i][0] = 0;

for (int i = 1; i <= L; i++)
    for (int j = 1; j <= W; j++)
    {
        int temp1 = 0, temp2 = 0;
        // temp1 stores the best profit over all cuts that split the length
        for (int k = 1; k < i; k++)
            if (OPT[k][j] + OPT[i-k][j] > temp1)
                temp1 = OPT[k][j] + OPT[i-k][j];
        // temp2 stores the best profit over all cuts that split the width
        for (int k = 1; k < j; k++)
            if (OPT[i][k] + OPT[i][j-k] > temp2)
                temp2 = OPT[i][k] + OPT[i][j-k];
        // now take the max of temp1, temp2 and P[i][j]
        int max = (temp1 > temp2) ? temp1 : temp2;
        OPT[i][j] = (max > P[i][j]) ? max : P[i][j];
    }

———

The answer that we want is OPT[L][W].

The above code runs in $O(LW(L+W))$ time: for each of the $LW$ cells, we try at most $L + W$ cuts.

(Note that we assume P[x][y] values to be part of the input.)

——-

Frog crossing – More on dynamic programming – 3

Question. There is a river n meters wide. A frog wants to cross the river. There were originally n-1 stones across the river at a distance of 1 meter from one another. However, some of these stones were removed recently.

The frog has a peculiar jumping rule. If it jumps x meters on one jump, the next jump has to lie in the range {x-1, x, x+1}. Further, the 1st jump that the frog makes is of exactly 1 meter. (We assume that the stone at a distance of 1 meter from the frog’s end was not removed.)

Given which of the stones have been removed, how can we tell whether or not it is possible for the frog to reach the other end?

Solution.

Let us define an array, Stone [1 .. n] as:

Stone[i] = true iff there is a stone at a distance of i meters from the frog’s end.

Note that Stone[1] and Stone[n] are already given to be true.

——

We define another quantity, can_reach[d,s] as:

can_reach [d,s] = true,

if all of the following conditions hold true:

1. Stone[d] = true.

2. Stone[s] = true.

3. Given the previous jumps of the frog, the frog can reach the stone at distance d from the bank, by directly jumping from the stone at distance s from the bank.

——

Our aim is to figure out whether can_reach[n,s] is true for any value of s, i.e. for any s in {1, …, n-1}.

—–

We make some observations about can_reach[d,s]:

1. can_reach [1,0] = true.

2. can_reach [d,0] = false, if d > 1.

3. can_reach [d,s] = false, if s >= d.

4. We also have the following recurrence:

If d > 1, can_reach[d,s] =

Stone[d] AND Stone[s] AND [ can_reach[s, s-(d-s)+1] OR can_reach[s, s-(d-s)] OR can_reach[s, s-(d-s)-1] ].

(Here d-s is the length of the jump from s to d, so the jump into s must have had length (d-s)-1, (d-s) or (d-s)+1, i.e. it must have come from one of the stones at distances s-(d-s)+1, s-(d-s) or s-(d-s)-1. Any term whose second index is negative is taken to be false.)

——-

Using the above equations, we can compute a table of can_reach values as follows:

——-

can_reach[1,0] = true;

For (d going from 2 to n)
    can_reach[d,0] = false;

For (d going from 1 to n)
    For (s going from d to n)
        can_reach[d,s] = false;

For (d going from 2 to n)
    For (s going from 1 to d-1)
        can_reach[d,s] = the value given by equation (4) above;

For (s going from 1 to n-1)
    If (can_reach[n,s] == true)
        Output: “Can reach the other bank”; and stop.

Output: “Cannot reach the other bank”;

———–

As can be seen, the algorithm runs in O(n^2) time.

Maximum Sum Subarray in C++ : More on dynamic programming – 2

Question. We are given an array A of n integers. We wish to find a contiguous subarray of A having the greatest sum.

Solution.

We note first that the required subarray can end at any of the n indices, 0 through n-1. Hence, let us define $V[k]$ as the value (i.e. sum) of the optimal subarray ending at index k.

Let OPT denote the value of the required optimal subarray.

Therefore, we have that:

$OPT = \max_{0 \leq k \leq n-1} V[k]$.

We can compute OPT using the V[k] values in O(n) time.

——-

We now look at computing the V[k] values.

We have the following recurrence:

$V[k] = \max(A[k] + V[k-1], A[k])$

(Since the subarray has to end at index $k$, $A[k]$ is always part of it.)

——–

Further, in order to compute the optimal subarray, we need to keep track of the starting index of the subarray. We store the starting index of the optimal subarrays for our subproblems in an array of length n called Index.

———

We can compute the $V$ array and the Index array using the above recurrence in a bottom up manner.

Pseudocode for computing V and Index:

V[0] = A[0];
Index[0] = 0;

For (i going from 1 to n-1)
    // Use the above recurrence to compute V[i].
    If (V[i-1] > 0)
        V[i] = V[i-1] + A[i];
        Index[i] = Index[i-1];
    Else
        V[i] = A[i];
        Index[i] = i;

End of algorithm.

——————————

We can write the above pseudocode in C/ C++ as follows:

// 1. Computing the V and Index arrays. (Array A of length n is given to us.)

int V[n], Index[n], OPT;

V[0] = A[0];
Index[0] = 0;

for (int i = 1; i < n; i++)
{
    // use the recurrence to find V[i] and Index[i]
    if (V[i-1] > 0)
    {
        V[i] = A[i] + V[i-1];
        // starting index of opt. subarray ending at i is Index[i-1]
        Index[i] = Index[i-1];
    }
    else
    {
        V[i] = A[i];
        // starting index of opt. subarray ending at i is i itself
        Index[i] = i;
    }
}

—————-

// 2. Now, we can compute OPT using V in O(n) time.

OPT = V[0];
int final_index = 0; // final_index will store the ending index of the opt subarray in A.

for (int i = 1; i < n; i++)
{
    if (V[i] > OPT)
    {
        OPT = V[i];
        final_index = i;
    }
}

————–

// 3. We can compute the optimal subarray (the starting and ending indices) from Index.

//ending index is final_index (which we have already computed).

int begin_index = Index[final_index];

———-

That completes our algorithm.

Analysis of the algorithm:

1. Computing V and Index arrays takes O(n) time.

2. Computing OPT and final_index from V takes O(n) time.

3. Computing begin_index from Index takes O(1) time.

————

Longest non-decreasing subsequence : More on dynamic programming – 1

Question. We are given an array A of n integers. We wish to find the longest subsequence such that, if the indices in the subsequence are $i_1, i_2, \ldots, i_k$ (where $i_1 < i_2 < \ldots < i_k$), then $A[i_1] \leq A[i_2] \leq \ldots \leq A[i_k]$.

Solution.

Let us define $L(k)$ to be the length of the longest non-decreasing subsequence ending at index $k$.

Now if $OPT$ denotes the length of the longest non-decreasing subsequence in A, then we have that:

$OPT = \max_{0 \leq k \leq n-1} L(k)$.

We can do the above step in $O(n)$ time.

——

Let us now look at computing $L(k)$ values.

We have the following recurrence:

$L(k) = 1 + \max_i L(i)$.

In the above equation, the maximum is taken over all indices $0 \leq i \leq k-1$ for which $A[i] \leq A[k]$. (i.e. for which we can safely add A[k] to the already existing longest non-decreasing subsequence ending at i.)

In case there exists no such index $i$, $L(k) = 1$.

——-

Now that we have the recurrence, we can compute the $L(k)$ values in a bottom up fashion iteratively. We will also keep a record of the index $i$ (as defined above) for each $L(k)$ that we compute. This will help us when we want to reconstruct the optimal solution.

—–

Pseudo-code for computing L(k) values:

L[0] = 1;
Index[0] = 0;

For (k going from 1 to n-1)
    Compute L[k] using the above recurrence.
    Index[k] = i; (If no such i exists, Index[k] = k.)
End.

——

Coding in C/ C++ would give us:

int L[n], Index[n], OPT = 0, final_index = -1;
// final_index is the index of the last term in the optimal subsequence.
// (Note: there can be more than one optimal subsequence. We consider one of them.)

L[0] = 1;
Index[0] = 0;

for (int k = 1; k < n; k++)
{
    int temp_max = 0, temp_index = k; // default values of L[k]-1 and Index[k]
    for (int i = 0; i < k; i++)
    {
        if (A[i] <= A[k]) // it is safe to append A[k] to a subsequence ending at i
        {
            if (L[i] > temp_max)
            {
                temp_max = L[i];
                temp_index = i;
            }
        }
    }
    L[k] = 1 + temp_max;
    Index[k] = temp_index;
}

// now, to compute OPT
for (int k = 0; k < n; k++)
{
    if (L[k] > OPT)
    {
        OPT = L[k];
        final_index = k;
    }
}

// Now, we construct the optimal subsequence using our Index array.
// We do so recursively using a function Print_Solution.
// (A is assumed to be accessible, e.g. as a global array.)

void Print_Solution (int Index[ ], int n, int final_index)
{
    int prev_index = Index[final_index];
    if (prev_index != final_index) // otherwise the subsequence begins here; nothing to print before it
        Print_Solution (Index, n, prev_index);
    cout << A[final_index] << endl;
}

———–

Analysis of the algorithm:

– Computing the L and Index arrays takes $O(n^2)$ time.

– Computing OPT from L values takes $O(n)$ time.

– Constructing the optimal subsequence from Index values takes O(n) time.

———–

Dynamic Programming – 6 : Optimum sum of tree nodes


We are given a rooted tree $T$ with root $r$. Each node $x$ has a real value $v(x)$ associated with it. We are to find a subset of nodes of the tree, such that the sum of the values of the selected nodes is maximized, subject to the condition that if we choose a node $x$, we cannot choose any of its children or its parent.

Solution.

Notation. Let $OPT (x)$ denote the optimum value on a sub-tree rooted at node $x$. Let $OPT (1,x)$ denote the optimum value on a sub-tree rooted at node $x$ subject to the condition that $x$ is selected as part of the optimal subset. Let $OPT(0,x)$ denote the optimum value on a sub-tree rooted at $x$ subject to the condition that $x$ is not selected as part of the optimal subset.

Then, we have that:

$OPT(x) = \max \{OPT(1,x), OPT(0,x)\}$.

$OPT(1,x) = v(x) + \sum_y OPT (0,y)$ (where the summation is over all children $y$ of $x$.)

$OPT(0,x) = \sum_y OPT (y)$ (where the summation is over all children $y$ of $x$.)

Also, if $x$ is a leaf, then $OPT(1,x) = v(x)$, and $OPT(0,x) = 0$.

Pseudocode for computing OPT(root)

We can compute OPT(root) in a bottom up fashion starting from the leaves. The recurrence equations for a node $x$ involve the OPT values for its children. Thus, before computing the $OPT$ value for node $x$, we compute the $OPT$ values for its children.

Let the nodes of the tree be arranged in levels from Level 0 till Level L such that the root is at Level 0.

For i going from L downto 0:
    For all nodes x in Level i:
        Compute OPT[1,x], OPT[0,x] and OPT[x] using the above recurrences.
        Set Soln[x] = 1 if OPT[1,x] >= OPT[0,x]; else, set Soln[x] = 0.
End.

—-

The above code has a running time of $O(n)$, where $n$ is the number of nodes in the tree. [We compute OPT values for n nodes. For each node, we compute the OPT values by summing the OPT values over all of its children. Aggregating over all nodes, and observing that each node has at most one parent, the summing operations cost O(n) in total. Therefore, the total cost is O(n + n), which is O(n). Another way to look at it is that the aggregate cost of the summing operations over all nodes is O(m), where m is the number of edges in the tree, which for a tree is O(n).]

—-

Computing the optimal subset:

We do so using the Soln[x] values.

Function: Print_Solution (x)
    If (Soln[x] == 1)
        Print x.
        For all grand-children y of x:
            Print_Solution (y);
    Else
        For all children y of x:
            Print_Solution (y);
End.

—-

Print_Solution(root) has a running time of $O(n)$.

—-