The purpose of this article is to analyze the complexity of algorithms written using a top-down approach.

Many problems can be described naturally using top-down recursion. The resulting algorithm, however, is often far from usable in practice: a straightforward recursive solution can have exponential complexity and may need a specialized technique to bring it down from exponential to polynomial time. To illustrate one such case, let's take a problem of moderate difficulty, a variation of the well-known LCS (longest common subsequence) problem. The point of choosing a moderately hard problem is to give the reader a real challenge: to reason through the solution and work out one that is efficient enough for practical use. The problem is also different from what most readers have seen elsewhere. It is stated below.

A subsequence is palindromic if it is the same whether read left to right or right to left. For instance, the sequence A, C, G, T, G, T, C, A, A, A, A, T, C, G has many palindromic subsequences, including A, C, G, C, A and A, A, A, A (on the other hand, the subsequence A, C, T is not palindromic). Devise an algorithm that takes a sequence x[1...n] and returns the (length of the) longest palindromic subsequence. Its running time should be O(n^2).

At first glance the problem looks inherently recursive, so a recursive scheme is the natural first attempt. Let's call the longest palindromic subsequence LPS. Given a string A and a range of indices i..j, the solution is below.

LPS(i, j) =
    0                                  if i > j (empty range)
    1                                  if i == j (single character)
    2 + LPS(i+1, j-1)                  if A[i] == A[j]
    Max(LPS(i, j-1), LPS(i+1, j))      if A[i] != A[j]

The algorithm can easily be reproduced in any language that supports recursion. Our naïve algorithm looks like this:

int LPS(int i, int j) {
    if (i > j) return 0;      /* empty range */
    if (i == j) return 1;     /* single character */
    if (A[i] == A[j]) {
        return 2 + LPS(i + 1, j - 1);
    } else {
        return Max(LPS(i, j - 1), LPS(i + 1, j));
    }
}

The solution looks correct so far. But can it be put to any practical use? The exponential complexity of the above algorithm makes it impractical even for strings of moderate length. Looking closely at the solution, every mismatch splits the problem into two new subproblems, each only one character shorter. So for a string of length k the algorithm makes on the order of 2 ^ k recursive calls in the worst case, which occurs when no two characters of the string match. Worse, the same subproblems keep repeating themselves.
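To see the blow-up concretely, here is a C++ sketch of the naïve recursion above, instrumented with a call counter (the counter, the function name lpsNaive, and the explicit base cases are our additions for illustration). For a string of n distinct characters every call branches into exactly two subcalls, so the count comes out to 2^n - 1.

```cpp
#include <algorithm>
#include <string>

static long calls = 0;  // instrumentation: counts recursive invocations

// Naive top-down LPS on a[i..j], with the base cases made explicit.
int lpsNaive(const std::string& a, int i, int j) {
    ++calls;
    if (i > j)  return 0;   // empty range
    if (i == j) return 1;   // single character
    if (a[i] == a[j]) return 2 + lpsNaive(a, i + 1, j - 1);
    return std::max(lpsNaive(a, i, j - 1), lpsNaive(a, i + 1, j));
}
```

On "ABCDEFGHIJ" (10 distinct characters) this makes 2^10 - 1 = 1023 calls; every two extra characters quadruple that number.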

To make this clearer, consider the substring from i to j. The subproblems (i+1, j) and (i, j-1) share the common subproblem (i+1, j-1). Is there a way to avoid processing the same subproblems twice? Can we make use of computations we have already done? A minute's reflection tells us we can.

The code below shows how we could do this. Read M as an n x n matrix whose entries are all initialized to -1, meaning "not yet computed".

int LPS(int i, int j) {
    if (i > j) return 0;
    if (i == j) return 1;
    if (M[i][j] == -1) {
        int lps;
        if (A[i] == A[j]) {
            lps = 2 + LPS(i + 1, j - 1);
        } else {
            lps = Max(LPS(i, j - 1), LPS(i + 1, j));
        }
        M[i][j] = lps;
    }
    return M[i][j];
}

The above technique is memoization: before computing a subproblem, we check whether it has already been solved and reuse the stored answer. Since there are only n ^ 2 distinct (i, j) subproblems and each is now computed at most once, the running time drops from exponential all the way down to n ^ 2.
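For completeness, here is a self-contained C++ sketch of the memoized version; the wrapper lps() that allocates the table, and the names, are ours. On the example sequence from the problem statement, ACGTGTCAAAATCG, it returns 8 (for instance GCAAAACG).

```cpp
#include <algorithm>
#include <string>
#include <vector>

// Memoized LPS on a[i..j]: each (i, j) pair is computed at most once
// and cached in M, so the running time is O(n^2).
int lpsMemo(const std::string& a, int i, int j,
            std::vector<std::vector<int>>& M) {
    if (i > j)  return 0;
    if (i == j) return 1;
    if (M[i][j] == -1) {  // not computed yet
        if (a[i] == a[j])
            M[i][j] = 2 + lpsMemo(a, i + 1, j - 1, M);
        else
            M[i][j] = std::max(lpsMemo(a, i, j - 1, M),
                               lpsMemo(a, i + 1, j, M));
    }
    return M[i][j];
}

// Wrapper: allocates the memo table, initialized to -1.
int lps(const std::string& a) {
    int n = static_cast<int>(a.size());
    if (n == 0) return 0;
    std::vector<std::vector<int>> M(n, std::vector<int>(n, -1));
    return lpsMemo(a, 0, n - 1, M);
}
```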

There is yet another way to reach polynomial time, one that avoids recursion altogether. As most of us know, it is the bottom-up approach: dynamic programming.

The solution is given below and is largely self-explanatory. M is the matrix where we store the results of previous computations.

for i = 0 ... n-1:                      (right endpoint, increasing)
    for k = i ... 0:                    (left endpoint, decreasing)
        M[k][i] = 1                               if k == i
        M[k][i] = 2 + M[k+1][i-1]                 if A[k] == A[i]
        M[k][i] = Max(M[k][i-1], M[k+1][i])       if A[k] != A[i]

(entries M[k][i] with k > i are read as 0)

The table is constructed bottom up: first the single characters A0..A0, A1..A1, and so on, then longer and longer substrings up to A0..An-1. So by the time an entry M[i][j] is filled in, the smaller entries M[i][j-1], M[i+1][j] and M[i+1][j-1] it depends on are already known. Each of the n ^ 2 entries takes constant time, so the complexity of this algorithm is n ^ 2. Do we need more explanation to code the above? I guess not.
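Here is a runnable C++ sketch of the bottom-up table fill (function name ours); one common way to order the loops is by increasing substring length, so every entry only reads entries that are already computed:

```cpp
#include <algorithm>
#include <string>
#include <vector>

// Bottom-up LPS: M[i][j] holds the LPS length of a[i..j].
// Entries below the diagonal (i > j) stay 0, representing empty ranges.
int lpsBottomUp(const std::string& a) {
    int n = static_cast<int>(a.size());
    if (n == 0) return 0;
    std::vector<std::vector<int>> M(n, std::vector<int>(n, 0));
    for (int i = 0; i < n; ++i)
        M[i][i] = 1;  // single characters
    for (int len = 2; len <= n; ++len) {          // substring length
        for (int i = 0; i + len - 1 < n; ++i) {   // left endpoint
            int j = i + len - 1;                  // right endpoint
            if (a[i] == a[j])
                M[i][j] = 2 + M[i + 1][j - 1];    // inner cell is 0 when len == 2
            else
                M[i][j] = std::max(M[i][j - 1], M[i + 1][j]);
        }
    }
    return M[0][n - 1];
}
```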

The required n ^ 2 bound can also be reached another way: reverse the string and run LCS over the string and its reverse. The longest palindromic subsequence of a string is exactly the longest common subsequence of the string and its reverse, and the complexity of LCS (longest common subsequence) is n ^ 2.
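A sketch of that approach in C++ (the classic two-dimensional LCS table; function names ours):

```cpp
#include <algorithm>
#include <string>
#include <vector>

// Classic O(n*m) LCS table: L[i][j] is the LCS length of
// the first i characters of x and the first j characters of y.
int lcs(const std::string& x, const std::string& y) {
    int n = static_cast<int>(x.size());
    int m = static_cast<int>(y.size());
    std::vector<std::vector<int>> L(n + 1, std::vector<int>(m + 1, 0));
    for (int i = 1; i <= n; ++i)
        for (int j = 1; j <= m; ++j)
            L[i][j] = (x[i - 1] == y[j - 1])
                          ? L[i - 1][j - 1] + 1
                          : std::max(L[i - 1][j], L[i][j - 1]);
    return L[n][m];
}

// LPS via LCS: compare the string with its own reverse.
int lpsViaLcs(const std::string& a) {
    std::string r(a.rbegin(), a.rend());
    return lcs(a, r);
}
```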