Finding the time complexity of the following program that uses recursion


I need to find the time complexity, in terms of Big-O notation, of the following program, which computes the factorial of a given number:

public int fact(int n) {
    if (n <= 1)
        return 1;
    else
        return n * fact(n - 1);
}

I know that the above program uses recursion, but how do I derive its time complexity?


This recursive solution can easily be transformed into a much simpler iterative one:

int res = 1;
for (int i = 1; i <= n; ++i) {
    res *= i; // res holds i! after each iteration
}

Assuming that multiplication is O(1), the result is O(n) for this function. (For arbitrary-precision numbers that assumption no longer holds: even Karatsuba multiplication costs O(m^1.585), where m is the length of a number.)
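
To make that concrete: in Java, n! already overflows a long at n = 21, so for larger inputs you would switch to BigInteger, and then each multiplication really does cost more than O(1). A minimal sketch, assuming java.math.BigInteger (the example values are my own):

import java.math.BigInteger;

public class Factorial {
    // Iterative factorial on arbitrary-precision integers. Each multiply()
    // costs more than O(1): its cost grows with the bit length of the
    // running product, which is where the Karatsuba caveat kicks in.
    static BigInteger fact(int n) {
        BigInteger res = BigInteger.ONE;
        for (int i = 2; i <= n; i++) {
            res = res.multiply(BigInteger.valueOf(i));
        }
        return res;
    }

    public static void main(String[] args) {
        System.out.println(fact(25)); // 15511210043330985984000000, far past long's range
    }
}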


Okay, let’s assume that multiplication takes non-constant time, as m0nhawk’s Karatsuba figure suggests.

We have to define a recursive time equation. Treating the length of the numbers involved as being of order n, one Karatsuba multiplication costs about n^1.585, so:

T(n) = T(n - 1) + n^1.585,    T(1) = 1

If you resolve this equation, you will get:

T(n) = T(n - k) + (n - k + 1)^1.585 + ... + (n - 1)^1.585 + n^1.585

This is basically:

T(n) = T(1) + (2^1.585 + 3^1.585 + ... + n^1.585)

where k is n-1 and therefore:

T(n) ∈ O(n^2.585)

But if you assume that multiplication takes constant time, O(n) is the correct runtime approximation.
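
If you want to convince yourself of the linear multiplication count empirically, here is a small instrumented sketch of the original function (the counter and the test values are just for illustration):

public class FactCount {
    static int multiplications = 0;

    static long fact(long n) {
        if (n <= 1)
            return 1;
        multiplications++; // exactly one multiplication per recursive step
        return n * fact(n - 1);
    }

    public static void main(String[] args) {
        for (int n : new int[] { 1, 5, 10, 20 }) {
            multiplications = 0;
            fact(n);
            System.out.println("n = " + n + " -> " + multiplications + " multiplications");
        }
    }
}

Each call performs n - 1 multiplications, i.e. the count grows linearly in n, which gives the O(n) approximation under constant-time multiplication.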


I’m sorry for reviving an old question, but I think a mistake is being made here that gets repeated over and over.

As far as I know, computational complexity is defined over the size of an efficient encoding of the input (n).

Given an input number m for the factorial, it is true that the algorithm requires m multiplications.
But this is not of linear order (i.e. the same order as the size of the encoded input), because an efficient encoding of a number m has size n := log m.

This means that the time complexity indeed IS exponential (m = 2^n multiplications) in the size of (an efficient encoding of) the input!

m multiplications are only linear in the input if you choose a unary encoding of the input, which is not an “efficient” choice.
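
To see the gap concretely, compare the multiplication count with the size of the binary encoding (a quick sketch; the test values are arbitrary):

public class EncodingGap {
    public static void main(String[] args) {
        // For input m the algorithm performs m - 1 multiplications,
        // but an efficient (binary) encoding of m needs only about log2(m) bits.
        for (long m : new long[] { 15, 255, 4095, 65535 }) {
            int n = Long.toBinaryString(m).length(); // n = size of the encoded input
            System.out.println("m = " + m + ": " + n + " input bits, "
                    + (m - 1) + " multiplications (roughly 2^" + n + ")");
        }
    }
}

Every few extra input bits multiply the work many times over, which is exactly the exponential behaviour described above.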

On “Efficient encodings”

Because user SSH asked, let me elaborate a bit on the notion of “efficient encodings”.

The key to understanding it is that the asymptotic complexity of an algorithm is defined in terms of “how fast the computational time grows, relative to how fast the input grows”.

Usually the size of the input is intuitive: a list of n items has size n. But sometimes it can trip you up, as in the example this thread focuses on. This is why the definition of computational complexity explicitly says “efficient encoding”: it prevents you from making two possible mistakes:

1: forgetting you need to encode your input using a finite alphabet

In the factorial example, you might make the mistake of thinking that your input size does not grow for bigger numbers, since you’re always dealing with “one number”. This is not valid, because it doesn’t make sense to feed ‘a number’ to a Turing machine (you would need an alphabet of infinite size). You need to encode the number using a finite alphabet, and this representation of the number will grow as the number m gets bigger.

2: choosing an inefficient encoding

Because computational complexity is defined relative to the encoded input size, we could manipulate the complexity by choosing a very inefficient encoding.

For example: say we encoded a list of n items very inefficiently, using n^2 tokens on the machine’s input tape; now suddenly all algorithms that use n^2 computational steps for lists of size n are linear in the encoded input size! That doesn’t make sense, of course. So our encoding has to be efficient in order for our analysis to succeed.

Another example is listening to your intuition and thinking that the number 4 is twice as big in size as the number 2. This is not the case, because we can efficiently encode the number 2 using 2 bits and the number 4 using only 3 bits.
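
You can check this directly (a throwaway sketch):

public class BitLength {
    public static void main(String[] args) {
        System.out.println(Integer.toBinaryString(2));    // "10"   -> 2 bits
        System.out.println(Integer.toBinaryString(4));    // "100"  -> 3 bits
        System.out.println(Integer.toBinaryString(1024)); // 11 bits, not 512 times the size of "10"
    }
}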

Potential pitfall: growth rate vs. absolute encoded input size

Just remember that we are not interested in the absolute size but in the growth rate of the encoded input: a list of 2n items is still twice the size of a list of n items.


Recurrence equation:

           | e                if n = 1
T(n) =     |
           | T(n - 1) + d     if n > 1

f(n) = d, which is a 0-degree polynomial, n^0

T(n) ∈ Θ(n^(0+1)) = Θ(n)

Method for Chip & Conquer

The problem of size n is chipped down into one subproblem of size n-c.

T(n) = T(n - c) + f(n)

If c > 0 (the chipping factor) and f(n) is the nonrecursive cost (to create the subproblem and/or combine it with solutions of other subproblems), then T(n) can be asymptotically bounded as follows:

  • If f(n) is a polynomial n^α, then T(n) ∈ Θ(n^(α+1))
  • If f(n) is lg n, then T(n) ∈ Θ(n lg n)
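
To see where the Θ(n^(α+1)) bound comes from in the factorial case, the recurrence can also be unrolled directly (a sketch, using the constants e and d from the definition above):

T(n) = T(n - 1) + d
     = T(n - 2) + 2d
     = ...
     = T(n - k) + k·d
     = T(1) + (n - 1)·d        (with k = n - 1)
     = e + (n - 1)·d ∈ Θ(n)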

