Dynamic Programming – 1 : Weighted Interval Scheduling

In general, problems on dynamic programming can be solved by first expressing the required optimum solution in a recursive fashion, and then solving the recurrence in a bottom-up fashion.

Weighted Interval Scheduling

We are given n intervals, with each interval having start time S_i, finish time F_i and non-negative weight W_i ( for i from 1 to n).

Our task is to find a compatible subset T of these intervals so as to maximize \sum_{i \in T} W_i.

(A compatible subset is one where no two intervals ever overlap. )

—-

Solution.

Let us assume that the n intervals are sorted in the order of their finish times. Let this ordered list of intervals be I_1, \ldots, I_n.

Let OPT (n) denote the value of an optimum solution (i.e. the sum of weights of intervals in an optimum solution).

Consider interval I_n.

The optimum solution may either include I_n or not include I_n.

If the optimum solution does not include I_n, then we have:

OPT (n) = OPT (n-1) .

Notation: Here, OPT ( k ) denotes the optimum value computed from the set of intervals \{I_1, \ldots, I_k\}.

If the optimum solution does include I_n, then, apart from I_n, it can only include intervals that do not overlap with I_n, i.e. only those intervals i for which F_i \leq S_n.

Notation: Let p ( k ) denote the largest index i for which F_i \leq S_k (equivalently, the index i for which (1) F_i \leq S_k, and (2) F_{i+1} > S_k). Also, if all intervals have finish times > S_k, then p ( k ) = 0.

Thus, we have that, if the optimum solution includes I_n, then:

OPT (n) = W_n + OPT (p(n)).

Therefore,

OPT (n) = max \{ OPT (n-1), W_n + OPT (p(n)) \}

Generalizing, we have that, for 1 \leq i \leq n :

OPT ( i ) = max \{ OPT (i-1), W_i + OPT (p(i)) \}

and that, OPT (0) = 0.

—–

Using the above recurrence, we can write an algorithm to solve it in a bottom-up fashion:

/* Assuming that the intervals are sorted according to F_i values and that p(i) values have been computed for all intervals. */

OPT (0) = 0;

For i from 1 to n:

Compute OPT(i) using the above recurrence.

Return OPT (n).
End.
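The bottom-up computation above can be sketched in Python (an illustrative sketch; the function name and the tuple representation of an interval are my own choices, not from the text). It sorts the intervals by finish time, computes the p(i) values with binary search, and then fills in the OPT table:

```python
import bisect

def weighted_interval_max(intervals):
    """intervals: list of (start, finish, weight) tuples. Returns OPT(n)."""
    # Sort intervals by finish time.
    intervals = sorted(intervals, key=lambda t: t[1])
    n = len(intervals)
    finishes = [f for (_, f, _) in intervals]
    # p[i] = largest 1-based index j with F_j <= S_i, or 0 if none exists.
    # bisect_right returns exactly the count of finish times <= s.
    p = [0] * (n + 1)
    for i in range(1, n + 1):
        s = intervals[i - 1][0]
        p[i] = bisect.bisect_right(finishes, s)
    # Bottom-up computation of the OPT values.
    opt = [0] * (n + 1)
    for i in range(1, n + 1):
        w = intervals[i - 1][2]
        opt[i] = max(opt[i - 1], w + opt[p[i]])
    return opt[n]
```

For example, on the intervals (1, 3, 2), (2, 5, 4) and (4, 6, 4), the first and third are compatible, giving the optimum value 6.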

—–

The above loop takes O(n) time.

Sorting the intervals by F_i values takes O(n \log n) (using merge sort or quick sort), following which we can compute p_i values for all intervals in O(n \log n) time (by calling a slightly modified binary search routine n times).

Therefore, the algorithm will run in O(n \log n) time.

——

Computing the optimal solution from the optimal value:

Once we have the OPT values, we can compute the optimal solution (i.e. the set of intervals, not just the value) recursively as follows:

Function: OPT_Solution ( k ) :=

If ( k == 0 )

// Do nothing. Break recursion.

Return;

If [ OPT ( p(k) ) + W_k >= OPT (k-1) ]

// Include interval k in the optimum solution.

OPT_Solution ( p(k) );

Print k.

Else

OPT_Solution (k-1);

End of function.

—-

The function OPT_Solution (n) has a worst-case running time of O(n) [because there can be at most n recursive calls (the argument passed to successive calls strictly decreases), and the time taken by the non-recursive part of each call is O(1)].
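OPT_Solution translates almost line for line into Python (a sketch; here opt, p and w are assumed to be 1-indexed lists as produced by the bottom-up pass, with index 0 of w unused, and the function returns the chosen indices rather than printing them):

```python
def opt_solution(k, opt, p, w):
    """Return the (1-based) indices of intervals in an optimum solution for I_1..I_k."""
    if k == 0:
        return []                      # break recursion
    if opt[p[k]] + w[k] >= opt[k - 1]:
        # Include interval k in the optimum solution.
        return opt_solution(p[k], opt, p, w) + [k]
    else:
        return opt_solution(k - 1, opt, p, w)
```

On the three-interval example used earlier (opt = [0, 2, 4, 6], p = [0, 0, 0, 1], weights 2, 4, 4), the call opt_solution(3, ...) picks intervals 1 and 3.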

—-

 

Matching parentheses

We will look at an interesting problem of evaluating whether a sequence of left and right parentheses is balanced.

Firstly, we need to know what is meant by the sequence being balanced. What we basically mean is that every left parenthesis has a corresponding right parenthesis, and likewise, every right parenthesis has a matching left parenthesis.

Let’s consider a few examples:

( )  : This simple sequence is balanced. i.e. There is only one left parenthesis in this sequence, and it has a matching right parenthesis. Also, there is only one right parenthesis in this sequence, and that has a matching left parenthesis.

( ( ) ) : This sequence is also balanced.

For clarity, let us index the elements of this sequence using the indices 1, 2, 3, 4 as we would do in the case of an array. (i.e. in “(” ( ) ), the highlighted “(” is Seq [1]. Similarly, in ( ( “)” ), the highlighted “)” is Seq [3].)

Now, in the sequence ( ( ) ), we need to observe 2 things in order to ascertain whether or not it is balanced:

1. Every left parenthesis has a matching right parenthesis.

This can be seen to be true. Seq [1] has Seq [4] as its match, and Seq [2] has Seq [3] as its match.

2. Every right parenthesis has a matching left parenthesis.

Again, this is easily seen to be true. Seq [3] has Seq [2] as its match, and Seq [4] has Seq [1] as its match.

Thus,  ( ( ) ) is balanced.

Now consider this example:

(  (  (  )  )  (  ) )

This sequence is also balanced because:

1. Every left parenthesis has a matching right parenthesis.

Seq [1] has Seq [8]; Seq [2] has Seq [5]; Seq [3] has Seq [4]; Seq [6] has Seq [7].

2. Every right parenthesis has a corresponding left one. This can be easily verified.

To see why both of the above requirements need to be satisfied in order for a sequence to be balanced, consider the following two examples:

( ) )

Here,

(1) Every left parenthesis has a matching right parenthesis. (Seq [1], the only left parenthesis in our sequence, can be matched to Seq [2].)

(2) Not every right parenthesis has a matching left parenthesis. Seq [2] has Seq [1] as its match, but Seq [3] does not have any match.

Therefore, ( ) ) is not balanced, which also seems correct from what one would intuitively understand from the word “balanced”.

Similarly, it can be easily verified that ( ( ) is not balanced.

To develop a method for determining whether or not a sequence is balanced, let’s first look a bit more at what is meant by a matching parenthesis.

Consider :

(      (      )      (      (      )      )      )

[1]    [2]    [3]    [4]    [5]    [6]    [7]    [8]

As can be seen, the matching left parenthesis for :

Seq [3] is Seq [2],

Seq [6] is Seq [5],

Seq [7] is Seq [4],

Seq [8] is Seq [1].

Intuitively, one can understand the method by which the brain works out which left parenthesis is the match for a given right parenthesis.

Read the input from left to right. As soon as you encounter the first right parenthesis, we know that its matching left parenthesis must have occurred earlier, and our intuitive definition of matching parenthesis tells us that it is the left parenthesis nearest to it, i.e. the one exactly one index before it.

[In our example, this would translate to reading the input left to right from Seq [1] to Seq [3]. At Seq [3], we encounter a right parenthesis, and decide that its matching left parenthesis is Seq [2].]

Now, since we’ve matched a left and right parenthesis, we don’t care about them anymore. So we assume that they have been removed from the sequence.

[ In our example, this would mean removing Seq [2] and Seq [3] from our sequence.]

We continue reading the input from left to right, and as soon as we encounter a right parenthesis, we determine its matching left parenthesis based on the facts that the left parenthesis we are looking for (1) occurs before it, and (2) is the one closest to the right parenthesis under consideration (from among all the left parentheses that we have not yet removed from consideration).

[In our example, this would translate to reading : Seq [4] , Seq [5] and Seq [6]. We encounter a right parenthesis at Seq [6], and search for its matching left parenthesis. We see that Seq [5] is a left parenthesis that has not yet been matched (i.e. is still part of the sequence), occurs before Seq [6], and is the closest such left parenthesis to Seq [6].

Once, we’ve done that, we remove Seq [5] and Seq [6] from our sequence. ]

We continue in the same vein — reading the input from left to right from the point where we left off, stopping at a right parenthesis, searching for its matching left parenthesis, and, on finding such a match, removing both of the matched parentheses from the sequence, and continuing the same process again.

[In our example, this becomes:

Read Seq [7]. Now, since, Seq [5] has already been matched (i.e. no longer part of our sequence), the correct match for Seq [7] becomes Seq [4].

Remove both Seq [7] and Seq [4] from the sequence.

Read Seq [8]. Stop, and search for its match.

We see that Seq [1] is the match (all other left parentheses have been removed).

We match Seq [1] and Seq [8], and remove both from the sequence.

We now find that the sequence is empty, and the input is also over, which means that we have matched all parentheses and that the sequence is balanced. Of course, if the sequence weren’t balanced, this wouldn’t be the case.]

—-

Now that we have understood the idea, it should be easy enough to understand the method used to determine whether or not a sequence is balanced.

1. Scan the input from left to right, and while doing so, perform the following actions:

(i)  if you encounter a left parenthesis, push it into a stack. (This stack is initially empty.)

(ii) If you encounter a right parenthesis, pop the top-most left parenthesis (if it exists) from the stack.

2. If there are no more symbols to be read from the input :

(i) and the stack is empty, the sequence is balanced; end the algorithm.

(ii) and the stack is not empty (i.e. contains some left parentheses), the sequence is not balanced; end the algorithm.

3. If, at any point during the scan, you encounter a right parenthesis and the stack is empty, then the sequence is not balanced; end the algorithm.

The above algorithm using stacks is based exactly on the approach we saw earlier, and helps us to decide whether or not a sequence is balanced.
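The stack-based algorithm translates directly into a short Python routine (a sketch; the function name is my own). Since every stack entry here is a left parenthesis, a simple counter would also suffice, but an explicit stack matches the description above:

```python
def is_balanced(seq):
    """Return True iff seq, a string of '(' and ')', is balanced."""
    stack = []
    for ch in seq:
        if ch == '(':
            stack.append(ch)           # step 1(i): push left parentheses
        elif ch == ')':
            if not stack:              # step 3: right parenthesis with no match
                return False
            stack.pop()                # step 1(ii): pop its matching left parenthesis
    return not stack                   # step 2: leftover left parentheses mean unbalanced
```

For instance, it accepts ( ( ( ) ) ( ) ) and rejects both ( ) ) and ( ( ).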

Quick Sort

As the name suggests, quick sort is indeed a quick sorting technique.

Suppose that you have to sort an array of n integers. The method used in quick sort is to choose a pivot element, traverse the array and rearrange the elements such that all elements less than (or equal to) the pivot element are placed to the left of the pivot element, and those that are greater than the pivot element are placed to its right.

What we can observe here is that the pivot element has been placed in its correct final position (i.e. the position where it would be after the array is completely sorted).

So, now we need to only concentrate on rearranging the elements to the left and to the right of the pivot element. And to do so, we perform quick sort separately on the left-elements and on the right-elements.

Let’s look at an example.

Suppose the array we want to sort is: 23   12   3   43    26     18     95     6    34     11.

1. We observe first of all that if an array has just one element, or has no elements, the array is trivially sorted.

2. First, we select a pivot element. Let’s say it is 23.

3. We traverse the array once from left to right, and divide the elements into two parts (those less than, and those greater than, the pivot).

We compare 12 to 23, and find that 12 < 23 and hence, 12 would occur to the left of 23 at the end of this stage. We then compare 3 to 23, and again finding that 3 < 23, place 3 to the left of 23. Next, we compare 43 to 23. We find that 43 > 23, and hence place 43 to the right of 23. We similarly go through each of the remaining elements of the array, and place them either to the left or to the right of the pivot element.

At the end of this step, the array that we have is as follows:

12   3   18   6   11   23   43   26   95   34.

—-

Before proceeding to the next step, we observe an important point: at the end of Step 3, we find that the pivot element 23 is placed in position # 6, which is also its rightful position in the completely sorted array.

—-

4. We now look to sorting two smaller arrays — the ones to the left and to the right of the pivot element.

We resort to recursion in order to sort the arrays (12, 3, 18, 6, 11) and (43, 26, 95, 34).

[In order to sort the array on the left, we select a pivot (let’s say 12) and repeat Step 3 and Step 4 for this array. Similarly, we sort the array on the right of the pivot element. While doing so, we keep in mind the observation in Step 1.]
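Steps 1 through 4 can be sketched in Python (illustrative only; for clarity this version builds new lists rather than rearranging elements in place, and it always uses the first element as the pivot, as in the walkthrough):

```python
def quick_sort(a):
    """Return a sorted copy of list a using quick sort with the first element as pivot."""
    if len(a) <= 1:                               # Step 1: trivially sorted
        return a
    pivot = a[0]                                  # Step 2: select the pivot
    left = [x for x in a[1:] if x <= pivot]       # Step 3: elements <= pivot
    right = [x for x in a[1:] if x > pivot]       #         elements > pivot
    # Step 4: recursively sort both sides; the pivot lands in its final position.
    return quick_sort(left) + [pivot] + quick_sort(right)
```

Running it on the example array 23, 12, 3, 43, 26, 18, 95, 6, 34, 11 produces the fully sorted array.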

—–

Running time of Quick Sort algorithm (best-case scenario) :

1. Let’s say we always select the first element of the array as the pivot element. So, it takes O(1) time to select the pivot element.

2. Now, we use the pivot element to divide the array into elements less than (or equal to) and greater than the pivot element. This division of the array takes O(n) time (where n refers to the size of the array).

3. In our best-case scenario, we assume that the array gets divided into 2 (almost) equal halves. (i.e. the pivot element always divides the array as equally (in size) as possible).

Thus, we get the following recurrence relation:

T(n) <= 2 T(n/2) + kn.

Here, T(n) refers to the running time of the algorithm on an array of size n, and k is a constant.

We also have, T(1) = T(0) = O(1)  [this restates the earlier observation that arrays of size 0 or 1 are trivially sorted.]

Thus, we get T(n) is O(n log n).
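One way to see why the recurrence gives O(n \log n) is to unroll it (assuming, for simplicity, that n is a power of 2):

```latex
T(n) \le 2\,T(n/2) + kn
     \le 4\,T(n/4) + kn + kn
     \le \cdots
     \le 2^j\,T(n/2^j) + j\,kn.
```

Setting j = \log_2 n gives T(n) \le n\,T(1) + kn \log_2 n, which is O(n \log n).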

—-

Now, let’s look at the worst-case running time of Quick Sort.

To look for the worst-case scenario, we’ll assume that the pivot element always divides the array as unequally as possible.

Consider the following array:

2    5     9     14     45      56     89.

If we always use the first element as the pivot element, we find that the array gets divided in a totally lop-sided manner.

The recurrence that we get is as follows:

T(n) <= T(n-1) + kn.

This means that T(n) is O(n^2).

—-

A final note: Always selecting the first element as pivot can lead us into trouble (as we saw above in the worst-case scenario). A way of somewhat circumventing this problem is to pick the pivot element randomly from among the elements of the array. This is what is known as randomized quick sort.

Sorting things out with Selection

Not  long after you’ve been introduced to writing programs would a typical programming instructor teach you how to sort a list of elements. Despite our stated (or, is it unstated) aversion to the instructional approach generally employed, let us go ahead and expound a bit on sorting.

You are given an array of n elements. We assume that we know how to compare any two elements of the array with one another (in other words, you wouldn’t be asked to compare apples and oranges). Let us assume that the array is one of n integers. We wish to sort the elements such that the smallest element occurs first, followed by successively larger elements.

1. Selection Sort

As the name suggests, we sort by performing some “selection”.

Take an arbitrary array of 5 elements – 23, 11, 54, 34, 27.

Now, we know that the sorted order is: 11, 23, 27, 34, 54.

As is clearly seen, 11 is the minimum element in this array, and occurs first in the sorted list. If we remove this 11 from the array, we find that 23 is the minimum element from among the ones remaining. Hence, 23 occurs in the sorted list just after 11. Similarly, on removing 23 also from the array, we find that 27 is the minimum from among the remaining elements, and unsurprisingly, occurs right after 23 in the sorted order.

So, we can see our modus operandi — Select (and remove) the minimum element in the array, and add it to the sorted list.

We’ll digress here for a bit to figure out how exactly we find the minimum element in an array.

Let’s take the following array — 23, 54, 15, 11, 46. We know that the 4th element of the array – 11 is the minimum.

Let’s assume that we read the elements of the array one by one from left to right.

We begin by reading 23. Since it is the only element that we have read so far, and we have no other element to compare it with, 23 is the least element we have encountered thus far. So, our current minimum is 23.

We next read 54. We compare it with the current minimum, which is 23. Now, 54 >23, and hence, the minimum encountered thus far remains 23.

Now, we come across 15. We compare it with the current minimum (23) and find that, voila, 15 is smaller than 23, and hence, 15 is smaller than any element we have encountered so far. So, we naturally set our current minimum to 15, and proceed.

We next read 11, and compare it with the current minimum (which is 15). Now, 11 < 15, and so, we set the current minimum to 11.

We proceed to read the next element, which is 46, and compare it with our current minimum (11). Since, 46 > 11, we let 11 remain as the minimum element encountered thus far.

We now realize that we have run out of elements in the array. Hence, the minimum element encountered thus far, 11, is also the minimum element of the array.

—–

Let us look at pseudo-code for computing the minimum in an array of n elements. This should be simple enough to understand.

Input: Array A having n elements: A[1], A[2], …., A[n].

Output: The index of the minimum element in A.

Pseudo-code:

min_index \leftarrow 1. (Comment: min_index (the index of the least element encountered so far) is a variable, initialized to 1.)

For index i going from 2 till n:

{

If  ( A[i] < A[min_index])

then, min_index \leftarrow i.

}

Return min_index.

—–

For those familiar with the big-O notation, it should be clear that this procedure for finding the minimum in an n-element array takes O(n) steps.

—–

Let’s say that our function for finding the minimum index, Find_Min_Index takes 3 arguments – an array, a starting index, and an ending index.

eg. The function call Find_Min_Index(Z, 4, 9) will find the index of the minimum element from among Z[4], Z[5], …, Z[9].
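In Python (which uses 0-based indexing, unlike the 1-based pseudo-code above; both bounds are taken as inclusive), Find_Min_Index might be sketched as:

```python
def find_min_index(a, lo, hi):
    """Return the index of the minimum element among a[lo], ..., a[hi] (inclusive)."""
    min_index = lo                     # index of the least element encountered so far
    for i in range(lo + 1, hi + 1):
        if a[i] < a[min_index]:
            min_index = i
    return min_index
```

On the earlier array 23, 54, 15, 11, 46 (indices 0 through 4), scanning the whole array returns index 3, where 11 sits.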

Now, back to sorting by successive selection of minimum elements.

From what we’ve seen so far, we can write a procedure for sorting an n-element array.

Input: Array A of n elements: A[1], A[2], …, A[n].

Output: A re-ordering of the elements of A such that the elements are sorted in increasing (or, to be more precise, non-decreasing) order.

—–

For i going from 1 to n

{

index \leftarrow Find_Min_Index (A, i, n)

Swap A[i] and A[index].

}

—-

That wasn’t very difficult. Again, for those familiar with asymptotic notation, the selection sort algorithm takes O(n^2) time. (It is easy to see why: the first element of the sorted array is obtained by finding the minimum from among n elements, which takes kn time (where k is some constant). Similarly, the 2nd element of the sorted array is obtained by finding the minimum from among n-1 elements, which takes k(n-1) time. Proceeding in the same manner, we find that the total time needed to sort the array is kn + k(n-1) + \cdots + k \cdot 1, which is O(n^2).)
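Putting the two pieces together, selection sort can be sketched in Python (0-based indexing; an illustrative sketch with the minimum-finding loop inlined for self-containment):

```python
def selection_sort(a):
    """Sort list a in place by repeatedly selecting the minimum of the unsorted suffix."""
    n = len(a)
    for i in range(n):
        # Find the index of the minimum element among a[i], ..., a[n-1].
        min_index = i
        for j in range(i + 1, n):
            if a[j] < a[min_index]:
                min_index = j
        # Swap it into position i, extending the sorted prefix by one element.
        a[i], a[min_index] = a[min_index], a[i]
    return a
```

Applied to the example array 23, 11, 54, 34, 27, this yields 11, 23, 27, 34, 54.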
