Assignment 2


1. Using the master theorem, determine the complexity (in big-theta notation) of each of the following recurrence relations. Additionally, state whether the cost is dominated by the conquer operation, the combine operation, or if the two are roughly coequal.

a

\(T(n) = 4T(n/4) + n = 4T(n/4) + \Theta(n)\)

\(a = 4, b = 4, d = 1 \implies (d = 1) \boxed{=} 1 = \log_{(b = 4)} (a = 4)\)

\(\Theta(n^d \log n) = \Theta(n^{(1)} \log n) = \boxed{\Theta(n \log n)}\)

The costs of the conquer and combine operations are roughly coequal.

b

\(T(n) = 2T(n/2) + n \log n = 2T(n/2) + \Theta(n \log n)\)

\(a = 2, b = 2, d = 1, s = 1 \implies (d = 1) \boxed{=} 1 = \log_{(b = 2)} (a = 2)\)

\(\Theta(n^d \log^{s + 1} n) = \Theta(n^{(1)} \log^{(1) + 1} n) = \boxed{\Theta(n \log^2 n)}\)

The costs of the conquer and combine operations are roughly coequal.

c

\(T(n) = 6T(n/3) + n^2 = 6T(n/3) + \Theta(n^2)\)

\(a = 6, b = 3, d = 2 \implies (d = 2) = \log_3 9 \boxed{\gt} \log_{(b = 3)} (a = 6)\)

\(\Theta(n^d) = \Theta(n^{(2)}) = \boxed{\Theta(n^2)}\)

The cost of the combine operation dominates the cost of the conquer operation.

d

\(T(n) = 10T(n/2) + n^2 \log^{1.5} n = 10T(n/2) + \Theta(n^2 \log^{1.5} n)\)

\(a = 10, b = 2, d = 2, s = 1.5 \implies (d = 2) \boxed{\lt} \log_{(b = 2)} (a = 10)\)

\(\Theta(n^{\log_b a}) = \Theta(n^{\log_{(2)} (10)}) = \boxed{\Theta(n^{\log_2 10})}\)

The cost of the conquer operation dominates the cost of the combine operation.

e

\(T(n) = 4T(n/4) + 1000n = 4T(n/4) + \Theta(n)\)

\(a = 4, b = 4, d = 1 \implies (d = 1) \boxed{=} \log_{(b = 4)} (a = 4)\)

\(\Theta(n^d \log n) = \Theta(n^{(1)} \log n) = \boxed{\Theta(n \log n)}\)

The costs of the conquer and combine operations are roughly coequal.
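
As a quick sanity check (not part of the assignment), the recurrence from part a can be unrolled numerically and compared against \(n \log n\). The sketch below is Python with an assumed base case \(T(1) = 1\); the ratio settling toward a constant is consistent with the \(\Theta(n \log n)\) answer, and the same check adapts to the other parts.

import math

# Unroll T(n) = 4T(n/4) + n with an assumed base case T(1) = 1 (n a power of 4).
def T(n):
    if n == 1:
        return 1
    return 4 * T(n // 4) + n

for n in (4 ** e for e in range(1, 9)):
    # If T(n) = Theta(n log n), this ratio should approach a constant.
    print(n, T(n) / (n * math.log(n, 4)))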


2. In the introductory lecture on divide and conquer, an example algorithm using divide and conquer for integer multiplication was given, along with an optimization under the assumption that the second parameter \(n\) is even.

 INPUT    two integers a and n, with n even
OUTPUT    the product an

MULT_OPT(a, n)
1    if n == 1
2        return a
3    s = MULT_OPT(a, n/2) // sub problem
4    return s + s         // combine

\( \begin{aligned} T(n) &= \Theta(1) + T(n/2) \\ &= \Theta(1) + \Theta(1) + T(n/2^2) \\ &= \Theta(1) + \Theta(1) + \Theta(1) + T(n/2^3) \\ &= \dots \\ &= \underbrace{\Theta(1) + \Theta(1) + \Theta(1) + \dotsb + \Theta(1)}_{\text{how many times?}} + T(1) \end{aligned} \)

\(n = 2^k \iff k = \lg n\)

\( \begin{aligned} T(2^k) &= \Theta(1) + T(2^{k - 1}) \\ &= \Theta(1) + \Theta(1) + T(2^{k - 2}) \\ &= \Theta(1) + \Theta(1) + \Theta(1) + T(2^{k - 3}) \\ &= \dots \\ &= \underbrace{\Theta(1) + \Theta(1) + \Theta(1) + \dotsb + \Theta(1)}_{k \,\,\text{times}} + T(1) \\ &= \Theta(k) \\ &= \Theta(\lg n) \end{aligned} \)
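
A direct Python transcription of MULT_OPT makes this \(\Theta(\lg n)\) behaviour easy to see. The call counter below is added purely for illustration and is not part of the lecture's algorithm; as above, \(n\) is assumed to be a power of \(2\).

# MULT_OPT transcribed to Python (a sketch; assumes n is a power of 2).
def mult_opt(a, n, calls):
    calls[0] += 1                      # count invocations to expose the lg n depth
    if n == 1:                         # base case
        return a
    s = mult_opt(a, n // 2, calls)     # sub problem
    return s + s                       # combine

calls = [0]
print(mult_opt(7, 64, calls), calls[0])   # 448 7 -- lg(64) + 1 = 7 calls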

a. Considering the optimized version, for what input values will the algorithm work? Will it work for any even \(n\), or is the restriction more significant than that? Why is this the case?

b. Adjust the optimized algorithm so that it will work for arbitrary integer values of \(n\). Prove, or at least argue, the correctness of your approach. The algorithm should retain \(\Theta(\log n)\) worst case performance.

a

The algorithm will not work for every even \(n\): the value must remain even through every recursive call. For example, \(n = 6\) breaks on the first recursive call because \(6/2 = 3\) is odd. The restriction is therefore more significant than evenness: \(n\) must be a power of \(2\).

Each recursive call requires \(n = 2m\) for some integer \(m\). Say \(n = 2^i\) for \(1 \le i \le k\). Then \(2^i = 2m \iff m = 2^{i - 1}\). Powers of \(2\) are exactly the values with the property that repeated halving yields an even number at every step until unity is reached.

b

If \(n\) is even then the existing algorithm is used. If \(n\) is odd then \(n - 1\) is even, so the recursion proceeds on \((n - 1)/2\) and an extra \(a\) is added when combining. This approach adds only constant extra work per call (the parity check and one extra addition) and no additional recursive calls, so the \(\Theta(\log n)\) worst case performance is retained.

MULT_OPT(a, n)
1    if n == 1                      // base case
2        return a
3    if odd(n)
4        s = MULT_OPT(a, (n - 1)/2) // sub problem
5        return s + s + a           // combine
6    else
7        s = MULT_OPT(a, n/2)       // sub problem
8        return s + s               // combine

The correctness of this approach follows from the associativity of addition, by induction on \(n\); below, \(\texttt{MULT}\) abbreviates \(\texttt{MULT\_OPT}\).

If \(n\) is odd then \(n = 2k + 1\) for some integer \(k\).

\(an = \underbrace{a + a + \dotsb + a}_{2k + 1 \,\,\text{terms}} = \underbrace{a + a + \dotsb + a}_{2k \,\,\text{terms}} + a\)

Basis

\(n = 1 \implies \texttt{MULT}(a, 1) = a = a \cdot 1\), as required.

Inductive Hypothesis

Assume \(n \ge 2\) and that every recursive call on an argument smaller than \(n\) returns the correct product. In particular:

If \(n/2\) is odd then \(\texttt{MULT}(a, n/2) = a + 2 \times \texttt{MULT}(a, (n/2 - 1)/2)\) returns the correct result.

If \(n/2\) is even then \(\texttt{MULT}(a, n/2) = 2 \times \texttt{MULT}(a, (n/2)/2)\) returns the correct result.

Inductive Step

\(n\) is even and \(n/2\) is odd

\( \begin{aligned} \texttt{MULT}(a, n) &= 2 \times \texttt{MULT}(a, n/2)\\ &= 2 \times (a + 2 \times \texttt{MULT}(a, (n/2 - 1)/2)) \\ &= 2 \times (a + 2 \times (an/4 - a/2)) \\ &= 2 \times (a + (an/2 - a)) \\ &= an \\ \end {aligned} \)

\(n\) is even and \(n/2\) is even

\( \begin{aligned} \texttt{MULT}(a, n) &= 2 \times \texttt{MULT}(a, n/2)\\ &= 2 \times (2 \times \texttt{MULT}(a, (n/2)/2)) \\ &= 4 \times a \times n/4 \\ &= an \\ \end{aligned} \)

\(n\) is odd

\( \begin{aligned} \texttt{MULT}(a, n) &= a + 2 \times \texttt{MULT}(a, (n - 1)/2)\\ &= a + 2 \times a(n - 1)/2 \\ &= a + a(n - 1) \\ &= an \\ \end{aligned} \)
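
Since the argument shrinks by at least half on every call, the adjusted algorithm can also be checked mechanically. A minimal Python sketch, verified against the built-in product for small inputs:

# The generalized MULT_OPT in Python, checked against a * n for small n >= 1.
def mult_opt(a, n):
    if n == 1:                         # base case
        return a
    if n % 2 == 1:                     # odd n: recurse on (n - 1)/2 ...
        s = mult_opt(a, (n - 1) // 2)  # sub problem
        return s + s + a               # ... and add back the extra a
    s = mult_opt(a, n // 2)            # sub problem
    return s + s                       # combine

assert all(mult_opt(a, n) == a * n for a in range(-5, 6) for n in range(1, 200))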


3. A more general version of the sorted array merge problem we saw in the last unit is called a \(k\)-way merge. In this problem you are given \(k\) sorted arrays and must merge them all together into a single sorted output array. For simplicity, assume that all \(k\) arrays have the same size \(n\).

a. The most straightforward solution to this problem is to merge the arrays one at a time: merge the first into the second, then merge the third into that result, and so on. Prove the big-theta bound on the time complexity of this algorithm in terms of both \(k\) and \(n\) (i.e., do not treat \(k\) as a constant in the analysis, but rather use a cost function of the form \(T(k, n)\)).

b. Another option is to merge all \(k\) arrays together into the output array by examining each of them on each iteration to determine which one currently has the smallest element at its head. State an algorithm based on this idea and prove its time complexity in big-theta notation in terms of \(k\) and \(n\). Is it any better than the algorithm from part a?

c. This problem can be solved more efficiently by using a divide and conquer approach. State a divide and conquer based algorithm for solving this problem, and prove its time complexity in big-theta notation. Compare it to the previous algorithms: why do you think it is more efficient?

 INPUT    k arrays A_1, A_2, ..., A_k each of size n sorted in ascending order
OUTPUT    an array of size kn sorted in ascending order

K_WAY_MERGE_0 (A_1, A_2, ..., A_k, n)
1    S = SORTED_ARRAY_MERGE(A_1, A_2, n, n)
2
3    i = 2
4    while i < k
5        S = SORTED_ARRAY_MERGE(A_{i+1}, S, n, i*n)
6        i++
7
8    return S

The successive merges and their costs are:

\( \begin{aligned} S_1 &= \texttt{SORTED\_ARRAY\_MERGE}(A_1, A_2) && \text{cost}\,\, 2n \\ S_2 &= \texttt{SORTED\_ARRAY\_MERGE}(S_1, A_3) && \text{cost}\,\, 3n \\ S_3 &= \texttt{SORTED\_ARRAY\_MERGE}(S_2, A_4) && \text{cost}\,\, 4n \\ &\,\,\,\,\vdots \\ S_{k - 1} &= \texttt{SORTED\_ARRAY\_MERGE}(S_{k - 2}, A_k) && \text{cost}\,\, kn \end{aligned} \)

\( \begin{aligned} T(n, k) &= \Theta(kn) + T(n, k - 1) \\ &= \Theta(kn) + \Theta((k - 1)n) + T(n, k - 2) \\ &= \Theta(kn) + \Theta((k - 1)n) + \Theta((k - 2)n) + T(n, k - 3) \\ &= \dots \\ &= \underbrace{\Theta(kn) + \Theta((k - 1)n) + \dotsb + \Theta(3n) + \Theta(2n)}_{k - 1 \,\,\text{terms}} \\ &= \Theta\!\left(n \sum_{i = 2}^{k} i\right) \\ &= \Theta\!\left(n\left(\frac{k(k + 1)}{2} - 1\right)\right) \\ &= \Theta(k^2 n) \end{aligned} \)
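
For concreteness, here is a runnable Python rendering of K_WAY_MERGE_0. The two-way merge helper is only a stand-in for SORTED_ARRAY_MERGE from the earlier unit.

# Sequential k-way merge: fold each array into the running result S.
def sorted_array_merge(xs, ys):
    out, i, j = [], 0, 0
    while i < len(xs) and j < len(ys):
        if xs[i] <= ys[j]:
            out.append(xs[i]); i += 1
        else:
            out.append(ys[j]); j += 1
    return out + xs[i:] + ys[j:]       # append whichever tail remains

def k_way_merge_0(arrays):
    s = arrays[0]
    for a in arrays[1:]:               # k - 1 merges; the i-th costs ~(i + 1)n
        s = sorted_array_merge(s, a)
    return s

print(k_way_merge_0([[1, 4], [2, 5], [0, 3]]))   # [0, 1, 2, 3, 4, 5]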


4. We saw in lecture a multiplication algorithm based on the divide and conquer technique, where we assumed that addition could be performed in \(\Theta(1)\) time. Let’s use a similar approach to examine exponentiation. Much like multiplication can be thought of as repeated addition, we can think of exponentiation as repeated multiplication. Consider the calculation of \(a^n\).

a. Provide a divide and conquer algorithm for calculating \(a^n\) assuming a logarithmic time multiplication operation is used. State the recurrence relation. Can it be solved using the master theorem as is?

b. Solve the recurrence relation, making simplifying assumptions if necessary. Ensure that you explicitly state any assumptions that you make.

c. For some strange reason, you find yourself in a situation where you only have a quadratic multiplication algorithm available to you. Repeat the analysis of your above algorithm in this case, stating the overall time complexity (and any assumptions you made to get there).

d. Much like in the multiplication case, the exponentiation algorithm can be optimized by only solving half of the sub problems when \(n\) is even (as the result of both sub problems will be the same). Consider again the two algorithms above: will this optimization provide any benefit for either of them? Why or why not?
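
One plausible shape for the algorithm asked for in part a, mirroring MULT_OPT with multiplication in place of addition (and already including the even-\(n\) optimization discussed in part d), is the following sketch; the recurrence and cost-model analysis are left to the written answers.

# Sketch: divide and conquer exponentiation, a^n for n >= 1.
def power(a, n):
    if n == 1:                         # base case
        return a
    s = power(a, n // 2)               # one sub problem (both halves are equal)
    if n % 2 == 1:                     # odd n: one extra factor of a
        return s * s * a               # combine via multiplication
    return s * s                       # combine

assert all(power(3, n) == 3 ** n for n in range(1, 20))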