# Weighted Interval Scheduling (Dynamic Programming)

August 31, 2009

Dynamic Programming is a very convenient methodology for computing, in polynomial time, a solution that seemingly requires exponential time. For an optimization problem, it searches through the space of possible solutions, which may be exponential in size. How, then, can it consider all such solutions in polynomial time? The answer is that it builds the solution stage by stage, and for computing the values at each stage, it reuses the pre-computed values of the previous stages.

Doesn't the divide-and-conquer paradigm do the same thing? After all, both solve smaller sub-problems in order to find the solution to the given problem. As the examples show, however, there is a marked difference between the problems that lend themselves to dynamic programming and those that fall under divide-and-conquer. While the former reduces a possibly exponential time complexity to a polynomial one, the latter improves upon the running time of an algorithm that is already of polynomial time complexity.

Let us look at a very interesting example.

We are organizing a series of Theory lectures by speakers from all over the globe. Depending on the topic being presented, and the speaker presenting it, each talk is given an importance rating, which is a positive number (with a larger value denoting higher importance). We have already decided on the exact times at which each talk would begin, and also the exact time at which it would end. Now comes the twist: we have only one lecture hall. Quite obviously, we can not have talks that have any overlap among them with respect to timing. (It’s not fun listening to two people speaking simultaneously from a single podium). The solution we opt for is to schedule the talks such that the sum total of the importance ratings of the scheduled talks is maximized.

This is known as the Weighted Interval Scheduling problem. We are given a set S of n events (each with a fixed start and end time) and an associated weight w(k) for each event k. We need to schedule the events (i.e. find a subset T of S) such that:

(1) there is no overlap whatsoever among the scheduled events;

and (2) the sum of the weights of the scheduled events is maximized.

Let us look at a solution to this problem.

To make our lives easier, we shall order the events by their finishing times (from the earliest finishing time to the last one). That is, we number the events 1, 2, 3, …, n such that f(1) < f(2) < f(3) < … < f(n). [Here, f(k) denotes the finish time of event k.]
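In code, this preprocessing step is just a sort keyed on finish time. A minimal Python sketch (the talks list here is made-up example data):

```python
# Hypothetical talks as (start, finish, importance) tuples -- example data only.
talks = [(4, 6, 5), (1, 3, 5), (6, 7, 4), (2, 5, 6), (7, 9, 2), (5, 8, 11)]

# Order the events by finish time, so that f(1) < f(2) < ... < f(n).
talks.sort(key=lambda t: t[1])
```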

Now, assume that we schedule an event x. Let us try to find out which events we can schedule before x. Clearly, we cannot schedule any event that overlaps with x. Hence, the finish time of any such event y must be before the start time of x, i.e. f(y) < s(x). [Here, s(k) denotes the start time of event k.] Let z be the event with the highest finish time among all such events y that can be scheduled prior to x. Let us denote the event z by the symbol P(x).

From our ordering of events on the basis of finish times, we can observe that every event with index at most P(k) (i.e. events 1, 2, …, P(k)) finishes before event k starts, and hence can be scheduled alongside event k.
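Because the finish times are sorted, P(k) can be computed with a binary search: count how many events finish strictly before event k starts. A sketch in Python using `bisect_left` (the event data is invented for illustration):

```python
from bisect import bisect_left

# Events as (start, finish, weight), already sorted by finish time -- example data.
events = [(1, 3, 5), (2, 5, 6), (4, 6, 5), (6, 7, 4), (5, 8, 11), (7, 9, 2)]
finishes = [f for _, f, _ in events]

def compute_p(events, finishes):
    """p[k] is the highest-numbered event y with f(y) < s(k), or 0 if none.

    bisect_left returns the number of finish times strictly less than
    s(k), which -- with 1-indexed events -- is exactly that index."""
    return [0] + [bisect_left(finishes, events[k - 1][0])
                  for k in range(1, len(events) + 1)]

p = compute_p(events, finishes)  # p[0] is a dummy entry
```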

Let us consider an optimal solution O to the problem. Now, event n (the last one) either belongs to the set O or it does not.

If n belongs to O :- We can additionally schedule only those events having index <= P(n). Also, since O is optimal, the resulting subset chosen from among events 1, 2, …, P(n) must itself be optimal for those events. (Else, we could replace that subset with a better one to produce a higher-valued solution.)

If n does not belong to O :- We can schedule any compatible set of events having index <= n-1. Again, since O is optimal, it must contain a subset that is optimal for the set of events 1, 2, …, n-1.

The optimal value of the solution (for a set of n events), denoted OPT(n), is then given by:

OPT(n) = max { w(n) + OPT(P(n)), OPT(n-1) }

Now, OPT(1) = w(1), and we take OPT(0) = 0 (scheduling no events). We can quite easily compute P(k) for each event. Using the value of P(2) and OPT(1), we get the value of OPT(2). Similarly, we get the value of OPT(3) using the pre-computed OPT values for events numbered < 3 and the P(k) values, and so on.
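The whole bottom-up computation can be sketched in a few lines of Python (the event data is invented for illustration, and P(k) is found by binary search over the sorted finish times):

```python
from bisect import bisect_left

# Events as (start, finish, weight), sorted by finish time -- example data only.
events = [(1, 3, 5), (2, 5, 6), (4, 6, 5), (6, 7, 4), (5, 8, 11), (7, 9, 2)]

def max_total_weight(events):
    n = len(events)
    finishes = [f for _, f, _ in events]
    # p[k]: last event that finishes strictly before event k starts (0 if none)
    p = [0] + [bisect_left(finishes, events[k - 1][0]) for k in range(1, n + 1)]
    opt = [0] * (n + 1)  # opt[0] = 0: scheduling no events
    for k in range(1, n + 1):
        w_k = events[k - 1][2]
        # Either schedule event k (plus the best among events 1..p[k]) or skip it.
        opt[k] = max(w_k + opt[p[k]], opt[k - 1])
    return opt[n]

print(max_total_weight(events))  # -> 16
```

For the sample data above this yields 16, achieved by scheduling the events (1, 3) and (5, 8). The actual schedule can be recovered by walking back from k = n, taking event k whenever w(k) + OPT(P(k)) >= OPT(k-1).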

Hence, we see that computing OPT(k) for any given k takes constant time, given P(k) and the OPT values of the previously computed indices < k.

Hence, the dynamic-programming phase runs in time linear in n: given the P(k) values, computing OPT(k) for all 1 <= k <= n is O(n). [Sorting the events by finish time, and then computing P(k) for all 1 <= k <= n by binary search over the sorted finish times, take O(n log n) as a one-time preprocessing step.]

Thus we have, seemingly miraculously, achieved an O(n log n) time algorithm — linear beyond the sorting and P(k) preprocessing — for a problem that does not seem very amenable at first sight. That is the power of dynamic programming.

The above post is based on an example from Ch. 6 of the excellent textbook "Algorithm Design" by Kleinberg and Tardos. Enjoy the book.