## Greedy algorithms – Coin changing using minimum number of coins

March 9, 2011

We have coins of denominations 1 cent, 5 cents and 25 cents with us. What we would like to do is represent a total of n cents using these coins, such that the minimum number of coins is used.

For example, to represent 31 cents we could use one 25-cent coin, one 5-cent coin and a single 1-cent coin, for a total of 3 coins. And this indeed is the best we can do. Why is that so? Let's look at it this way: the current solution uses one 25-cent coin, one 5-cent coin and one 1-cent coin. One way we could try to modify this solution is to use no 25-cent coin at all (instead of the one coin that we are currently using). That would mean covering the shortfall of 25 cents using only coins of denominations 1 and 5 cents, and we can see that we would need at least five 5-cent coins to do so. Hence, using a 25-cent coin was actually a wise choice that saved us coins.

Now, what if we had to represent 57 cents. The best solution would be to use two coins of 25-cents, one coin of 5-cents and two coins of 1-cent.

Again, we can intuitively infer its optimality by observing that:

1. If we use fewer 25-cent coins than we are currently using, we actually end up using more coins overall. (If we use just one 25-cent coin, then we would require 5 coins of 5 cents each to cover the deficit of 25 cents. Similarly, if we use no 25-cent coins at all, we would need 10 coins of 5 cents each to cover the deficit.) Hence, the idea is to use as many 25-cent coins as we possibly can.

2. Similarly, once we have used the maximum number of 25-cent coins, we are left with k cents (where k < 25) to represent using 1- and 5-cent coins. Analogous to what we observed above, we can infer that we must use as many 5-cent coins as we can in order to cover these k cents. Whatever is left can be represented using 1-cent coins.

———

Hence, the algorithm would look something like this.

//Represent n cents using the least number of total coins where coins are of denominations – 1, 5 and 25-cents.

1. Divide n by 25 to get quotient q1 and remainder k1.

2. Use q1 coins of 25-cents each.

3. If k1 == 0, we are done. Else, divide k1 by 5 to get quotient q2 and remainder k2.

4. Use q2 coins of 5-cents, and k2 coins of 1-cent.
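The four steps above might be sketched in Python like so (a minimal illustration; the function name is mine):

```python
def min_coins_1_5_25(n):
    """Greedily represent n cents using 1-, 5- and 25-cent coins."""
    q1, k1 = divmod(n, 25)  # step 1: as many 25-cent coins as possible
    q2, k2 = divmod(k1, 5)  # step 3: as many 5-cent coins as possible
    # step 2 and 4: use q1 quarters, q2 nickels and k2 pennies
    return {25: q1, 5: q2, 1: k2}

# 31 cents -> one 25, one 5, one 1 (3 coins total)
print(min_coins_1_5_25(31))  # {25: 1, 5: 1, 1: 1}
# 57 cents -> two 25s, one 5, two 1s (5 coins total)
print(min_coins_1_5_25(57))  # {25: 2, 5: 1, 1: 2}
```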

———-

Now, if we have coins at our disposal of denominations k^0, k^1, k^2 and k^3 cents, and we had to represent n cents with the minimum number of coins, we could do the following:-

———–

i = 3;
num = n;
while (i >= 0)
{
    Divide num by k^i to get quotient q and remainder r.
    Use q coins of denomination k^i cents.
    if (r == 0)
        break;
    num = r;
    i--;
}

———
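The loop above can be sketched in Python as follows (the function name, and the default of exponents 0 through 3, are my assumptions):

```python
def min_coins_powers(n, k, max_exp=3):
    """Greedily represent n cents using coins of k^0, k^1, ..., k^max_exp cents."""
    coins = {}
    num = n
    for i in range(max_exp, -1, -1):
        q, r = divmod(num, k ** i)
        coins[k ** i] = q  # use q coins of denomination k^i cents
        if r == 0:
            break
        num = r
    return coins

# With k = 5 this is the 1/5/25/125-cent system discussed above.
print(min_coins_powers(57, 5))  # {125: 0, 25: 2, 5: 1, 1: 2}
```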

Now, take the following example: represent 31 cents using coins of denominations 1, 10 and 25 cents, such that the least number of coins is used.

Here, the idea behind the greedy algorithm of using the maximum possible number of coins of the highest denomination does not work. That approach gives us a solution that uses 7 coins: one 25-cent coin and six 1-cent coins.

In contrast, we can get a better solution using just 4 coins: three 10-cent coins and one 1-cent coin.

In the problems presented at the beginning of this post, the greedy approach was applicable since each denomination was a perfect divisor of the denomination just above it (i.e. 25 is a factor of 125; 5 is a factor of 25; 1 is a factor of 5). That is not true in this case (10 is not a factor of 25), which leads to the failure of the greedy approach.
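To see the failure concretely, here is a small sketch (function names are mine) that compares the greedy count against the true optimum, which can be computed with dynamic programming:

```python
def greedy_count(n, denoms):
    """Number of coins used by the greedy strategy: largest denomination first."""
    count = 0
    for d in sorted(denoms, reverse=True):
        q, n = divmod(n, d)
        count += q
    return count

def optimal_count(n, denoms):
    """Minimum number of coins to make n cents, by dynamic programming."""
    best = [0] + [float('inf')] * n
    for amount in range(1, n + 1):
        for d in denoms:
            if d <= amount:
                best[amount] = min(best[amount], best[amount - d] + 1)
    return best[n]

denoms = [1, 10, 25]
print(greedy_count(31, denoms))   # 7  (one 25-cent coin and six 1-cent coins)
print(optimal_count(31, denoms))  # 4  (three 10-cent coins and one 1-cent coin)
```

For the 1/5/25-cent system, where each denomination divides the next, the two counts agree; for 1/10/25 they diverge, confirming that the greedy choice is not always safe.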