Minimizing waiting time – Greedy Algorithms – 2

Question. We are given n requests that need to be serviced. Assume that all requests arrive at time t = 0, and that servicing request i takes time t[i].

The waiting time of request i, denoted by W[i], is defined as:

W[i] = t[i] + \sum_{k} t[k] (where the summation is over all requests serviced prior to request i).

We want to service the requests such that the total waiting time, i.e. the sum of the waiting times of the n requests, is minimized.

Solution.

This is a simple application of the greedy paradigm.

As is intuitively clear, we would like to service shorter requests first, before moving on to requests with larger t[i] values.

We make the intuition concrete as follows:

——–

Sort the n requests in non-decreasing order of t[i] values.

Service the requests in this order.

——–

Intuitive proof of correctness of algorithm:

Consider the moment when you must decide which request to service next, and more than one request remains. Consider any two such requests. If you service the one with the larger t[i] first, the other request has to wait that larger amount of time before being serviced, adding that large quantity to the total waiting time. If, instead, you service the smaller request first, the addition to the total waiting time is only the smaller t[i] value. Hence, whenever you have a choice, go for the request with the smaller t[i] value. (Equivalently: in any schedule, swapping two adjacent requests that are out of order never increases the total waiting time, so the sorted order is optimal.)

This directly translates to the above algorithm.
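The sorted-order rule translates into a few lines of C. The following is a minimal sketch, assuming service times fit in an int; the helper names total_waiting and cmp_int are ours, not from the text. The function sorts t in place and returns the minimized total waiting time.

```c
#include <stdlib.h>

/* qsort comparator: non-decreasing order of ints. */
static int cmp_int(const void *a, const void *b) {
    int x = *(const int *)a, y = *(const int *)b;
    return (x > y) - (x < y);
}

/* Sorts the n service times in non-decreasing order and returns the
   total waiting time W[0] + ... + W[n-1] under that (optimal) order. */
long total_waiting(int *t, int n) {
    long total = 0, elapsed = 0;
    qsort(t, n, sizeof(int), cmp_int);
    for (int i = 0; i < n; i++) {
        elapsed += t[i];   /* W[i] = t[i] + sum of earlier service times */
        total += elapsed;
    }
    return total;
}
```

For example, with service times {3, 1, 2}, the sorted order 1, 2, 3 gives waiting times 1, 3 and 6, for a total of 10.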

——–


Scheduling Tutors – Greedy Algorithms – 1

Question. We are a tutor-scheduling organization. We have at our disposal an unlimited number of tutors, using whom we service the tutoring requests that come our way.

Consider any particular day. We get a certain number of tutoring requests. Each request has a specified starting time, and the duration of the tutoring session is exactly 30 minutes. Each tutor can work for any continuous period of 2 hours in a day. (The tutor need not actually be tutoring for the whole of the 2 hours; it is just that he/she will certainly not tutor outside of that 2 hour period.) We are the ones who decide the starting time of the 2 hour working period for each tutor.

We also need to remember that a tutor cannot service more than one request simultaneously.

Our aim is to service the tutoring requests using a minimum number of tutors. How can we achieve that?

Solution.

We can tackle this problem greedily.

We order the requests in non-decreasing order of their starting times. Take the first request, and schedule a tutor to start his/her 2 hour period at exactly the starting time of that request. Now look at the 2nd request and ask: can the 1st tutor (i.e. the one already scheduled to work) also service this request? If yes, let him/her do so. If not, schedule a 2nd tutor to start his/her 2 hour period at exactly the starting time of this new request. Continue in the same manner for the succeeding requests. That is: for each new request, check whether it can be serviced by any of the tutors already scheduled to work. If yes, schedule the tutor who has the shortest time remaining in his/her 2 hour period to service it. If not, schedule a new tutor to start his/her 2 hour period at exactly the starting time of the new request.

——-

In pseudo code form, we would have:

Sort the requests in non-decreasing order of starting times.

count = 0; // the number of tutors assigned thus far.

While (one or more requests remain to be serviced)

    Remove the 1st request from our sorted list.

    Set: flag = UNSERVICED. // indicates whether the current request has been serviced.

    For (each of the tutors scheduled thus far)
    // Note: we iterate through the already assigned tutors in the order in which they
    // were assigned, so as to schedule the tutor with the least remaining time in
    // his/her 2 hr period.

        If (that tutor can service this request)
            Schedule him/her to service this request.
            Set: flag = SERVICED.
            Come out of the For loop.

    If (flag == UNSERVICED)
        // No already scheduled tutor was able to service the new request;
        // we need a new tutor.
        Assign a new tutor to start his/her 2 hr period at the starting time of the new request.
        count++;

End of While loop.

End.

———-

Analysis of running time:

– Say there are n requests. Sorting them will take O(n log n) time.

– In the worst case, we might need to schedule n tutors to service the n requests. In such a scenario, the time required for assigning tutors will be O(n^2), because before we assign a new tutor we check with each of the already assigned tutors whether they can service the new request.

– Hence, the algorithm will run in O(n^2) time.
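The pseudocode above can be sketched in C under some assumptions of ours: request start times are given in minutes, a tutor is "busy" until the end of his/her last assigned 30-minute session (requests are handled in start-time order, so one end time per tutor suffices), and the names min_tutors, win and busy are our own. Because tutors are stored in assignment order, i.e. by window start, the first feasible tutor found is the one with the least time remaining.

```c
#include <stdlib.h>

#define SESSION 30   /* each tutoring session lasts 30 minutes */
#define SHIFT   120  /* each tutor works a continuous 2-hour window */

static int cmp_start(const void *a, const void *b) {
    int x = *(const int *)a, y = *(const int *)b;
    return (x > y) - (x < y);
}

/* Greedy scheduling: req[0..n-1] are request start times in minutes.
   Returns the minimum number of tutors needed.  win[j] / busy[j] hold
   tutor j's window start and the end of his/her last assigned session. */
int min_tutors(int *req, int n) {
    int *win  = malloc(n * sizeof(int));
    int *busy = malloc(n * sizeof(int));
    int count = 0;
    qsort(req, n, sizeof(int), cmp_start);
    for (int i = 0; i < n; i++) {
        int s = req[i], placed = 0;
        /* Scan tutors in assignment order: the first feasible one has
           the least remaining time in his/her 2-hour window. */
        for (int j = 0; j < count && !placed; j++) {
            if (s >= busy[j] && s + SESSION <= win[j] + SHIFT) {
                busy[j] = s + SESSION;
                placed = 1;
            }
        }
        if (!placed) {   /* no existing tutor fits: open a new window at s */
            win[count]  = s;
            busy[count] = s + SESSION;
            count++;
        }
    }
    free(win);
    free(busy);
    return count;
}
```

For instance, requests at minutes 0, 30, 60 and 90 all fit one tutor's window [0, 120], while a request at minute 100 would force a second tutor.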

———-

Intuitive proof of correctness of the algorithm:

– Consider the 1st request (in sorted order). Since no tutors have been assigned thus far, we need to schedule a new tutor for this request. Now consider the choice we have with respect to when this new tutor should start. Certainly, we cannot schedule him/her to start the 2 hr period after the starting time of the 1st request. If we schedule him/her to start working before the starting time of the 1st request, the time that elapses between the start of the tutor's 2 hr period and the starting time of the 1st request is effectively wasted. Hence, the best choice is to schedule the 1st tutor to start his/her 2 hr period at exactly the starting time of the 1st request.

– Similarly, one can argue that for each newly assigned tutor, the algorithm chooses exactly the right time for him/her to start their working period.

– Also, we observe that the algorithm does not assign any new tutors unless there is a need to do so.

– And finally, when selecting among already assigned tutors, the algorithm always schedules the tutor with the least remaining time in his/her 2 hr period to service a new request. This results in the maximum number of requests being serviced by already assigned tutors, without the need to assign a new one.

————–

Dynamic Programming – 6 : Optimum sum of tree nodes

Optimum sum of tree nodes

We are given a rooted tree T with root r. Each node x has a real value v(x) associated with it. We are to find a subset of nodes of the tree such that the sum of the values of the selected nodes is maximized, subject to the condition that if we choose a node x, we cannot choose any of its children or its parent.

Solution.

Notation. Let OPT (x) denote the optimum value on a sub-tree rooted at node x. Let OPT (1,x) denote the optimum value on a sub-tree rooted at node x subject to the condition that x is selected as part of the optimal subset. Let OPT(0,x) denote the optimum value on a sub-tree rooted at x subject to the condition that x is not selected as part of the optimal subset.

Then, we have that:

OPT(x) = max \{OPT(1,x), OPT(0,x)\}.

OPT(1,x) = v(x) + \sum_y OPT (0,y) (where the summation is over all children y of x.)

OPT(0,x) = \sum_y OPT (y) (where the summation is over all children y of x.)

Also, if x is a leaf, then OPT(1,x) = v(x), and OPT(0,x) = 0.

Pseudocode for computing OPT(root)

We can compute OPT(root) in a bottom up fashion starting from the leaves. The recurrence equations for a node x involve the OPT values for its children. Thus, before computing the OPT value for node x, we compute the OPT values for its children.

Let the nodes of the tree be arranged in levels from Level 0 till Level L such that the root is at Level 0.

For i going from L downto 0:

For all nodes x in Level i:

Compute OPT[1,x], OPT[0,x] and OPT[x] using the above recurrence.

Set Soln[x] = 1 if OPT[1,x] >= OPT [0,x]. Else, set Soln[x] = 0.

End.

—-

The above code has a running time of O(n), where n is the number of nodes in the tree. [ We compute OPT values for n nodes. For each node, we compute the OPT values by summing the OPT values over all of its children. Aggregating over all nodes, and observing that each node has at most one parent, we find that the summing operations cost O(n) in total. Therefore, the total cost is O(n + n), which is O(n). Another way to see it: the aggregate cost of the summing operations over all nodes is O(m), where m is the number of edges in the tree, which for a tree is O(n). ]
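The recurrence can be sketched in C. This is a minimal sketch under assumptions of ours: the tree is given as child lists with at most 8 children per node, and the names solve, opt1 and opt0 are our own. A post-order recursion solves children before their parent, which is exactly the level-by-level order of the pseudocode.

```c
#define MAXN 64

int nchild[MAXN];        /* number of children of node x */
int child[MAXN][8];      /* child[x][0..nchild[x]-1], for the sketch */
double v[MAXN];          /* v(x): the value at node x */
double opt1[MAXN];       /* OPT(1,x): best sum on x's subtree with x taken    */
double opt0[MAXN];       /* OPT(0,x): best sum on x's subtree with x excluded */

static double max2(double a, double b) { return a > b ? a : b; }

/* Returns OPT(x) = max{OPT(1,x), OPT(0,x)}, filling opt1[] and opt0[]. */
double solve(int x) {
    opt1[x] = v[x];   /* take x: its children must be excluded */
    opt0[x] = 0.0;    /* skip x: its children are free to choose */
    for (int i = 0; i < nchild[x]; i++) {
        int y = child[x][i];
        double best_y = solve(y);   /* OPT(y), computed before x */
        opt1[x] += opt0[y];
        opt0[x] += best_y;
    }
    return max2(opt1[x], opt0[x]);
}
```

For a leaf x the loop body never runs, so opt1[x] = v(x) and opt0[x] = 0, matching the base case above.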

—-

Computing the optimal subset:

We do so using the Soln[x] values.

Function: Print_Solution (x)

    If (Soln[x] == 1)
        Print x.
        For all grand-children y of x:
            Print_Solution (y);
    Else
        For all children y of x:
            Print_Solution (y);

End.

—-

Print_Solution(root) has a running time of O(n).

—-

Dynamic Programming – 5 : Matrix chain multiplication

Matrix chain multiplication

We are given a sequence of n matrices <M_1, \ldots, M_n >, such that matrix M_i has dimensions a_i \times a_{i+1}. Our task is to compute an optimal parenthesization of this sequence of matrices such that the sequence is fully parenthesized.

Terminology: A sequence of matrices is fully parenthesized if the sequence is either a single matrix or a product of 2 fully parenthesized sequences.

Optimality here refers to minimization of the number of scalar multiplications involved in computing the product of the matrix sequence.

Note 1:

// C code for multiplying A (m x n) with B (n x p). Product is C (m x p).

for (i = 0; i < m; i++)
    for (j = 0; j < p; j++)
    {
        C[i][j] = 0;
        for (k = 0; k < n; k++)
            C[i][j] += A[i][k] * B[k][j];
    }

// End of code.

Note 2: From the above, we can see that multiplying an (m x n) matrix with an (n x p) matrix requires m x n x p scalar multiplications.

Note 3: Matrix multiplication is associative. So, we can parenthesize the given sequence of n matrices whichever way we want as long as we do not disturb the sequence itself.

—-

Solution.

If n > 1, we know that the required parenthesization divides the n matrix sequence into 2 smaller fully parenthesized sequences.

Let this division occur after matrix M_k, i.e. the number of scalar multiplications in <M_1, \ldots, M_n> equals the sum of the number of scalar multiplications in <M_1, \ldots, M_k> and <M_{k+1}, \ldots, M_n>, plus the value a_1 \times a_{k+1} \times a_{n+1}.

Notation: Let OPT(i,j) denote the number of scalar multiplications in the optimal parenthesization of the sequence <M_i, \ldots, M_j>.

From the above, we have that:

OPT(1,n) = \min_{1 \leq k \leq n-1}\{ OPT (1, k) + OPT(k+1, n) + a_1 \times a_{k+1} \times a_{n+1} \}.

Generalizing this, we can say that, for 1 \leq i < j \leq n:

OPT(i, j) = \min_{i \leq k \leq j-1}\{ OPT (i, k) + OPT(k+1, j) + a_i \times a_{k+1} \times a_{j+1} \}

Further, OPT(i,i) = 0 for all possible i.

—-

Pseudocode for computing optimum number of scalar multiplications:

Initialize OPT[i,i] = 0 for all i.

For len from 2 to n: // len is the length of the matrix sub-sequence.

    For i from 1 to (n - len + 1):

        Set j = i + len - 1.

        Compute OPT[i,j] according to the above recurrence.

        Set Soln[i,j] = k; // k is the index that optimizes OPT[i,j] in the recurrence.

End of algorithm.

The above code takes O(n^3) time to find OPT[1,n] (in the process, computing all the OPT[i,j] and Soln[i,j] table values).
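The length-by-length computation can be sketched in C as follows. The capacity bound MAXM and the name matrix_chain are our assumptions; a[0..n] holds the dimensions, so matrix i (1-based) is a[i-1] x a[i], matching the a_i \times a_{i+1} convention above.

```c
#include <limits.h>

#define MAXM 32

long opt[MAXM][MAXM];   /* opt[i][j] = min scalar mults for M_i..M_j */
int  soln[MAXM][MAXM];  /* soln[i][j] = split index k achieving it   */

/* Returns the minimum number of scalar multiplications needed to
   compute the product M_1 ... M_n, where M_i is a[i-1] x a[i]. */
long matrix_chain(const int *a, int n) {
    for (int i = 1; i <= n; i++) opt[i][i] = 0;   /* OPT(i,i) = 0 */
    for (int len = 2; len <= n; len++) {          /* chain length */
        for (int i = 1; i + len - 1 <= n; i++) {
            int j = i + len - 1;
            opt[i][j] = LONG_MAX;
            for (int k = i; k < j; k++) {
                long cost = opt[i][k] + opt[k+1][j]
                          + (long)a[i-1] * a[k] * a[j];
                if (cost < opt[i][j]) {
                    opt[i][j] = cost;
                    soln[i][j] = k;
                }
            }
        }
    }
    return opt[1][n];
}
```

For example, with dimensions a = {10, 30, 5, 60} (i.e. M_1 is 10x30, M_2 is 30x5, M_3 is 5x60), the optimum is (M_1 M_2) M_3 at a cost of 10*30*5 + 10*5*60 = 4500.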

Computing the optimal solution i.e. parenthesization

We can do so using the Soln [i,j] table.

Function: Print_Solution (i, j)

    If (i == j)
        Print M_i
    Else
        Print "("
        Print_Solution (i, Soln[i,j]);
        Print_Solution (Soln[i,j] + 1, j);
        Print ")"

End.

—-

This function takes O(n) time for the call Print_Solution (1, n).

Dynamic Programming – 4 : Sequence Alignment

Sequence Alignment

We are given 2 sequences X = <x_1, \ldots, x_m> and Y = < y_1, \ldots, y_n>. Our task is to produce a valid matching of the elements of X and Y such that the cost of the matching is minimized.

Terminology:

A valid matching denotes a one-to-one correspondence between (some subset of) the elements of X and (some subset of) the elements of Y, with the added condition that there should be no crossing in the matching: i.e. if x_i is matched to y_j, and x_p is matched to y_q, then i < p implies j < q.

e.g.

X = < e l e p h a n t >

Y = < m o n k e y >

The matching M = \{ (1,2), (3,3), (6, 5) \} refers to the valid matching where we match x_1, y_2 and x_3, y_3 and x_6, y_5. Note that there are no crossings in M.

Cost of a matching M is the sum of:

(1) matching cost m_{x_i,y_j} for each (i,j) \in M.

(2) gap cost d for each element of X or Y which is not part of M.

Solution.

Given X and Y, we have 2 choices: either include (m,n) in the optimum matching, or exclude it from the optimum matching.

Case 1:

If (m,n) is part of the optimum matching, then the optimum cost would be m_{x_m, y_n} plus the optimum cost of matching the sequences <x_1, \ldots, x_{m-1}> and <y_1, \ldots, y_{n-1}>.

Notation: Let OPT(i,j) denote the optimum cost of matching <x_1, \ldots, x_i> and <y_1, \ldots, y_j>.

Therefore, we have:

OPT (m,n) = m_{x_m, y_n} + OPT(m-1, n-1)

Case 2:

If (m,n) is not part of the optimum matching, then one of the following should hold true (because of the no crossing condition):

(1) x_m is not part of the optimum matching.

or, (2) y_n is not part of the optimum matching.

Therefore, we have:

OPT(m,n) = min \{ OPT(m-1, n) + d, OPT (m,n-1) + d\}

—-

Generalizing the above recurrence, we have, for 1 \leq i \leq m and 1 \leq j \leq n:

OPT (i, j) = min \{ m_{x_i, y_j} + OPT (i-1, j-1), OPT (i-1, j) + d, OPT (i, j-1) + d\}

Also, we have the base conditions that:

OPT (i, 0) = i \times d for all 0 \leq i \leq m; and analogously for OPT (0, j).

—-

Pseudocode for computing optimum value:

Initialize OPT [i, 0] and OPT [0,j] as given above.

For i from 1 to m:

For j from 1 to n:

Compute OPT [i,j] using the given recurrence.

End of algorithm.

—-

The above algorithm computes the optimum value OPT (m,n) in O(mn) time.
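The table computation can be sketched in C with concrete cost values. The constants GAP = 2 (the gap cost d) and MISMATCH = 3 (the matching cost for unequal symbols) are illustrative assumptions, as are the names align_cost and MAXL; matching equal symbols is taken to cost 0.

```c
#include <string.h>

#define GAP      2   /* gap cost d (assumed value for the sketch) */
#define MISMATCH 3   /* matching cost for unequal symbols (assumed) */
#define MAXL     64

static int min3(int a, int b, int c) {
    int m = a < b ? a : b;
    return m < c ? m : c;
}

/* Fills the (m+1) x (n+1) table of the recurrence and returns OPT(m,n),
   the minimum cost of a valid matching of x and y. */
int align_cost(const char *x, const char *y) {
    int m = (int)strlen(x), n = (int)strlen(y);
    int opt[MAXL][MAXL];
    for (int i = 0; i <= m; i++) opt[i][0] = i * GAP;   /* base cases */
    for (int j = 0; j <= n; j++) opt[0][j] = j * GAP;
    for (int i = 1; i <= m; i++)
        for (int j = 1; j <= n; j++) {
            int match = (x[i-1] == y[j-1]) ? 0 : MISMATCH;
            opt[i][j] = min3(match + opt[i-1][j-1],   /* match x_i with y_j */
                             GAP + opt[i-1][j],       /* x_i is unmatched   */
                             GAP + opt[i][j-1]);      /* y_j is unmatched   */
        }
    return opt[m][n];
}
```

With these costs, aligning "a" against "b" costs 3 (one mismatch beats two gaps, which would cost 4).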

—-

Computing the optimum solution using OPT [i,j] values:

Let us denote :

(1) m_{x_i, y_j} + OPT (i-1, j-1) by Expr1.

(2) d + OPT (i, j-1) by Expr2.

(3) d + OPT (i-1, j) by Expr3.

Function: OPT_Solution (i, j)

    If (i == 0) OR (j == 0)
        Return;

    If Expr1 <= both Expr2 and Expr3
        OPT_Solution (i-1, j-1);
        Print: Index i of X is matched to Index j of Y.
    Else if Expr2 <= both Expr1 and Expr3
        OPT_Solution (i, j-1);
    Else
        OPT_Solution (i-1, j);

End of function.

—-

What we want to compute is OPT_Solution (m, n). The above function computes that in time O(m+n).

—-

Dynamic Programming – 3 : Subset Sum

Subset Sum

We are given n items \{I_1, \ldots, I_n \}, each having non-negative integral weight w_i. We are also given a non-negative integral weight bound W. Our task is to find a valid subset of items which maximizes the total weight of the items in the subset.

Terminology: A valid subset T is one such that the total weight of items in T is at most W.

Solution.

For item I_n, we have 2 choices: either include it in the optimum solution, or do not include it. (Note that if w_n > W, then we have just one choice: exclude I_n from the optimum solution.)

Case 1:

If I_n is included in the optimum solution, the optimum solution would be w_n plus the optimum solution obtained from the set of items \{I_1, \ldots, I_{n-1}\} with the weight bound as W-w_n.

Notation: Let OPT (i, w) denote the value of the optimum solution obtained on the set \{I_1, \ldots, I_i\} with weight bound w. The original task, then, is to compute OPT(n, W).

We have that, if I_n is included in the optimum solution,

OPT (n, W) = w_n + OPT (n-1, W-w_n).

Case 2:

If I_n is not included in the optimum solution, then we have that:

OPT (n, W) = OPT (n-1, W).

—-

Generalizing the above, we have that, for 1 \leq i \leq n and 0 \leq w \leq W:

OPT ( i, w) = OPT (i-1, w) [if w_i > w]

Else,

OPT (i, w) = max \{ w_i + OPT (i-1, w- w_i), OPT (i-1, w) \}

—-

Also, OPT (0, w) = 0 for all w.

—-

Pseudocode for computing the optimum value:

Based on the above recurrence, we can construct an algorithm.

For i from 1 to n:

For w from 0 to W:

Compute OPT[i,w] using the above recurrence.

End.

—-

The above code runs in O(nW) time [since O(1) time is spent for computing each OPT[i,w] value].
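The recurrence can be sketched in C. The capacity bounds MAXITEMS and MAXW and the name subset_sum are our assumptions; wts[0..n-1] holds the weights w_1, \ldots, w_n.

```c
#define MAXITEMS 64
#define MAXW     1024

/* opt[i][w] = OPT(i, w): the best achievable total weight using
   items I_1..I_i under weight bound w.  Returns OPT(n, W). */
int subset_sum(const int *wts, int n, int W) {
    static int opt[MAXITEMS + 1][MAXW + 1];
    for (int w = 0; w <= W; w++) opt[0][w] = 0;   /* OPT(0, w) = 0 */
    for (int i = 1; i <= n; i++)
        for (int w = 0; w <= W; w++) {
            opt[i][w] = opt[i-1][w];              /* exclude item I_i */
            if (wts[i-1] <= w) {                  /* include I_i if it fits */
                int with = wts[i-1] + opt[i-1][w - wts[i-1]];
                if (with > opt[i][w]) opt[i][w] = with;
            }
        }
    return opt[n][W];
}
```

For example, with weights {3, 4, 5} and bound W = 8, the optimum is 8 (items of weight 3 and 5).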

—-

Computing the Optimum solution:

We can compute the optimum solution recursively given the table of OPT[i,w] values.

Function: Print_OPT (k, w)

    If (k == 0)
        Return; // Breaking the recursion. We could also use the condition (k == 0) OR (w == 0).

    If w_k > w
        Print_OPT (k-1, w);
    Else if w_k + OPT (k-1, w-w_k) > OPT (k-1, w)
        Print_OPT (k-1, w-w_k);
        Print I_k
    Else
        Print_OPT (k-1, w);

End of function.

The optimum solution is printed on calling Print_OPT (n, W). The time taken is O(n).

—-


Dynamic Programming – 2 : Longest Common Subsequence

Longest Common Subsequence (LCS)

We are given 2 sequences X = < x_1, \ldots, x_m > and Y = <y_1, \ldots, y_n >. Our task is to find the LCS of X and Y.

Terminology: A subsequence of X is a sequence of elements that can be obtained from the elements of X such that no two elements of the subsequence are out of order with respect to the ordering defined by X. e.g. if X = < E L E P H A N T >, then < E P H T >, < E E A >, etc. are subsequences of X.

A common subsequence of X and Y is a sequence Z that is a subsequence of both X and Y. The longest such Z is an LCS.

Solution.

Let OPT (m, n) denote the length of an LCS of X and Y. (Note that there may be more than one LCS, each having the same optimum length.)

Exactly one of these two cases holds: either x_m = y_n, or x_m \neq y_n.

Case 1:

If x_m = y_n, then we can certainly include that element in the LCS and proceed to finding the LCS of <x_1, \ldots, x_{m-1}> and <y_1, \ldots, y_{n-1}>.

Notation: Let X_k denote <x_1, \ldots, x_k>. Y_k is defined analogously.

Notation: Let OPT (i, j) denote the length of the LCS of X_i and Y_j.

Therefore, we have that:

OPT (m,n) = 1 + OPT (m-1, n-1) (if x_m = y_n)

Case 2:

If x_m \neq y_n, then we have that:

OPT (m, n) = max \{ OPT (m-1, n), OPT (m, n-1) \}.

—–

Generalizing the above, we have that, for i > 0 and j> 0:

OPT (i, j) = 1 + OPT (i-1, j-1) (if x_i = y_j)

and

OPT (i, j) = max \{ OPT (i, j-1), OPT (i-1, j) \} (if x_i \neq y_j)

—–

We can use the above recurrence to compute the optimum length in a bottom up fashion.

Pseudocode for computing the optimum length:

Initialize OPT (i, 0) and OPT (0, j) to be 0 for all 0 <= i <= m and 0 <= j <= n.

For i from 1 to m:

For j from 1 to n:

Compute OPT (i,j) using the above recurrence.

End of algorithm.

—-

What we want is OPT (m,n). The above algorithm computes it in O(mn) time.
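The bottom-up table computation can be sketched in C as follows; the capacity bound MAXL and the name lcs_length are our assumptions, and strings stand in for the sequences X and Y.

```c
#include <string.h>

#define MAXL 64

/* Returns the length of the longest common subsequence of x and y,
   filling the (m+1) x (n+1) table of OPT values bottom-up. */
int lcs_length(const char *x, const char *y) {
    int m = (int)strlen(x), n = (int)strlen(y);
    int opt[MAXL][MAXL];
    for (int i = 0; i <= m; i++) opt[i][0] = 0;   /* LCS with empty Y_j */
    for (int j = 0; j <= n; j++) opt[0][j] = 0;   /* LCS with empty X_i */
    for (int i = 1; i <= m; i++)
        for (int j = 1; j <= n; j++) {
            if (x[i-1] == y[j-1])                 /* case x_i = y_j */
                opt[i][j] = 1 + opt[i-1][j-1];
            else                                  /* case x_i != y_j */
                opt[i][j] = opt[i-1][j] > opt[i][j-1]
                          ? opt[i-1][j] : opt[i][j-1];
        }
    return opt[m][n];
}
```

For example, "ABCBDAB" and "BDCABA" have an LCS of length 4 (one such LCS is "BCBA").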

—-

Using the table of OPT (i,j) values, we can compute an LCS recursively.

Function: OPT_Solution (i, j)

    If (i == 0) OR (j == 0)
        Return; // Break the recursion.

    If x_i == y_j
        OPT_Solution (i-1, j-1);
        Print x_i
    Else if OPT (i-1, j) \geq OPT (i, j-1)
        OPT_Solution (i-1, j);
    Else
        OPT_Solution (i, j-1);

End of function.
—-
The above code for printing the LCS runs in O(m+n) time.

—–

