Broad Network


Time Complexity for app.codility.com Explained in C Plus Plus

Foreword: Category of Article - Technology, Computers and Software, Algorithm

By: Chrysanthus Date Published: 21 Jan 2024

Introduction

Time Complexity refers to how fast a piece of code (a function for example, and not the whole program) runs from beginning to completion. Since different computers have different microprocessor clock frequencies and different sizes of memory, the same code will take different durations to run on different computers. Because of this issue and others, time complexity is given as a relative measure of speed.

Time complexity is the maximum number of main operations the piece of code needs in order to solve a type of problem, in the worst-case situation. There is Constant Time Complexity, Linear Time Complexity, Logarithmic Time Complexity, Quadratic Time Complexity, Exponential Time Complexity and Factorial Time Complexity.

The aim of this article is to enable the reader to pass the tests at app.codility.com/programmers. The tutorials at app.codility.com/programmers are given in Python, but this tutorial is given in C++. In Britain, a tutorial is called a lesson.

A typical way to express time complexity is with the Big-O notation, of which examples are given below.

Constant Time - O(1)
Consider the following function:

    void solution() {
        int result = 5 + 5;
        cout << result << endl;
    }

There are two statements in the body of the function, solution(). After compilation, these statements are in binary code of more than two lines, but still not many lines. The two statements are executed in a short, fixed time. This is known as constant time, written in Big-O notation as O(1), where the 1 in parentheses means a short, constant duration. The exact time in milliseconds or seconds is not emphasized in the topic of time complexity. A constant-time operation is best considered as one main operation.

Linear Time - O(n)
The above two statements can be considered as the main operation for the above function code. Consider the following function:

    void solution() {
        int N = 8;
        for (int i=0; i<N; i++) {
            cout << i << ' ';
        }
        cout << endl;
    }

The for-loop displays 8 numbers, which are:

    0 1 2 3 4 5 6 7

Though the statement, "cout << i << ' ';", takes slightly longer to execute than the first statement, "int N = 8;", or the last statement, "cout << endl;", it is the statement of interest, and it is executed N = 8 times. The statement of interest, "cout << i << ' ';", is the main operation in this function.

Time complexity is the maximum possible number of main operations in some piece of code. In this situation, the first and last statements, "int N = 8;" and "cout << endl;" respectively, are ignored when quoting the time complexity, because they do not repeat. The repeating main operation is called the dominant operation. Using the Big-O notation, the time complexity for this function is given as

    O(n)

where n in this case is 8. This is linear time, which can be appreciated as one complete scan of a list (consecutive counting).
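
As a further illustration, here is a minimal sketch of one complete scan of a list: a function (hypothetical, for illustration only; the name largest is an assumption of this article) that finds the biggest element of an array of N integers. The comparison inside the loop is the main operation, and it executes about N times, so the time complexity is O(n):

    #include <iostream>
    using namespace std;

    int largest(const int arr[], int N) {
        int max = arr[0];           //start with the first element
        for (int i=1; i<N; i++) {
            if (arr[i] > max)       //main operation: about N comparisons
                max = arr[i];
        }
        return max;                 //one complete scan of the list: O(n)
    }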

Logarithmic Time - O(log n)
The mathematical symbol, =>, means "this implies that".

Now,

        8 = 2 x 2 x 2
=>    8 = 2³

In this second math statement, 2 is called the base, 3 is called the index and 8 is just called the number. So 8 equals 2 raised to the power (index) 3, because three 2s multiplied together give 8.

The second statement can be rewritten as:

        log₂ 8 = 3

read as: log to the base 2, of 8, equals 3. The word log comes from the math topic, logarithm. There are three mathematical statements above. The second and third are two different ways of writing the same thing, each with its own pros and cons.

Out of 8 possible repetitions of the main operation in a function, if the main operation repeats only 3 times, then the time complexity is written as:

   O(log₂ n)

using the Big-O notation, where n in this case is 8. Writing the subscript 2 is optional. Logarithmic time, O(log n), is usually less than half the linear time, O(n); e.g. 3 is less than half of 8, half of 8 being 4.

For a code example of logarithmic time, with the number 8, consider the following function:

    void solution() {
        int N = 8;
        while (N > 1) {
            N = N / 2;  //integer division
            cout << N << ", ";
        }
        cout << endl;
    }

The main operation consists of the two statements:

            N = N / 2;  //integer division
            cout << N << ", ";

With integer division (N / 2), the remainder is discarded after each division. The output is:

    4, 2, 1,

That is, out of 8 possible operations, there are only 3 operations, corresponding to the 3 numbers at the output. With time complexity, what matters is the number of main operations, and not necessarily the number of numbers at the output. In this situation, the number of numbers at the output happens to be the same as the number of main operations (the number of loop rounds). For each loop round, the main operation executes in constant time, O(1). For the 3 rounds, the function takes logarithmic time, O(log₂ 8), i.e. 3 operations, as opposed to linear time, O(8), i.e. 8 operations.
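
The classic practical example of logarithmic time is binary search in a sorted array: each comparison halves the remaining portion of the array, just as the while-loop above halves N. The following is a minimal sketch (the function name binarySearch is an assumption, and the array is assumed to be already sorted in ascending order):

    #include <iostream>
    using namespace std;

    int binarySearch(const int arr[], int N, int target) {
        int low = 0, high = N - 1;
        while (low <= high) {
            int mid = (low + high) / 2;  //middle of the remaining portion
            if (arr[mid] == target)
                return mid;              //found: return the index
            else if (arr[mid] < target)
                low = mid + 1;           //discard the lower half
            else
                high = mid - 1;          //discard the upper half
        }
        return -1;                       //not found
    }

Each round of the while-loop halves the range, so for N = 8 there are at most about log₂ 8 = 3 halvings.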

Quadratic Time - O(n²)
In the alphabet, A, B, C, D, E, F, G, H are characters, and each is less than 'I'. Consider the following function:

    void solution() {
        for (char i='A'; i<'I'; i++) {
            for (char j='A'; j<'I'; j++) {
                cout << j << ' ';
            }
            cout << endl;
        }
    }

There are two for-loops, with one nested in the other. The inner-loop prints a row of 8 characters. The outer-loop prints 8 of these rows. The output is:

    A B C D E F G H
    A B C D E F G H
    A B C D E F G H
    A B C D E F G H
    A B C D E F G H
    A B C D E F G H
    A B C D E F G H
    A B C D E F G H

This is an 8-by-8 table. The number of characters is 64 = 8 x 8. The main operation is "cout << j << ' ';", which executes 8 x 8 = 64 = 8² times, that is, 8 squared. Squaring means quadratic. So the time complexity for this function is written as:

    O(n²)

where n is 8 in this case; 8² = 8 x 8. This is quadratic time complexity.
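
Quadratic time typically arises when every element of a list has to be compared with every other element. The following minimal sketch (the function name hasDuplicate is an assumption, for illustration only) checks an array of N integers for a duplicate; the inner comparison is the main operation, executing on the order of N x N times in the worst case, so the time complexity is O(n²):

    bool hasDuplicate(const int arr[], int N) {
        for (int i=0; i<N; i++) {
            for (int j=i+1; j<N; j++) {
                if (arr[i] == arr[j])  //main operation: about N x N / 2 comparisons
                    return true;       //a duplicate exists
            }
        }
        return false;                  //no duplicate: worst case, O(n^2)
    }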

n.log₂ n Time
n.log₂ n means n x log₂ n.

From above,

        8 = 2 x 2 x 2
=>    8 = 2³
=>   log₂ 8 = 3

Now,

    if n = 8, then n² = 64 operations, i.e. O(8²).

    if n = 8, it means log₂ 8 = 3;
    therefore n x log₂ n = 8 x 3 = 24

There are 24 main operations for (8 x log₂ 8) time. 24 is less than half of 64, half of 64 being 32. 24 is greater than 3, which equals log₂ 8.

So, O(n.log₂ n) time complexity generally lies between O(log₂ n) and half of O(n²).

For a code example of O(n.log₂ n) time, with the number 8, consider the following function:

    void solution() {
        int N = 8, Q = 8;
        for (int i=0; i<Q; i++) {
            while (N > 1) {
                N = N / 2;  //integer division
                cout << N << ", ";
            }
            cout << endl;
            N = 8;
        }
    }

The main operation consists of the two statements:

            N = N / 2;  //integer division
            cout << N << ", ";

With integer division, the remainder is discarded after each division. The while-loop does the main operation 3 times. The for-loop repeats what is in the while-loop 8 times. This makes 8 x 3 = 24. The exact figure from O(n.log₂ n) here is O(8 x log₂ 8) = 24 operations. The output is:

    4, 2, 1,
    4, 2, 1,
    4, 2, 1,
    4, 2, 1,
    4, 2, 1,
    4, 2, 1,
    4, 2, 1,
    4, 2, 1,

There are 8 rows and 3 columns, giving 24 output entries.
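
In practice, O(n.log₂ n) time is most often met in efficient sorting. The C++ standard library function, std::sort, is specified (since C++11) to do O(n log n) comparisons. A minimal sketch (the function name sortDemo is an assumption of this article):

    #include <algorithm>
    #include <iostream>
    using namespace std;

    void sortDemo() {
        int arr[8] = {5, 2, 7, 1, 8, 4, 6, 3};
        sort(arr, arr + 8);         //O(n log n) comparisons, per the C++11 standard
        for (int i=0; i<8; i++)
            cout << arr[i] << ' ';  //prints 1 2 3 4 5 6 7 8
        cout << endl;
    }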

n + m Linear Time : O(n + m)
For a code example of n + m linear time, with n = 8, consider the following function:

    void solution() {
        int start = 1;
        for (int i=start; i<9; i++) {
            cout << i << ' ';
        }
        
        cout << endl;
        for (char i='A'; i<'I'; i++) {  //'I' is 9th letter in alphabet
            cout << i << ' ';
        }
        cout << endl;
    }

The main operation is "cout << i << ' ';" for both non-nested for-loops. The output is:

    1 2 3 4 5 6 7 8
    A B C D E F G H

Each for-loop in the code produces 8 main operations. The first set of main operations is referred to as n operations; and the second set of main operations is referred to as m operations. Here, n + m = 8 + 8 = 16.

Here, n + m = 16 operations lies between the particular count log₂ 8 = 3 operations and the particular count 8 x log₂ 8 = 24 operations. In this sense, when the two loops have the same size, n + m time lies between O(log₂ n) time and O(n.log₂ n) time.

Even if there were three non-nested for-loops, as in the following code, the time complexity would still be linear; a third data size would merely add another term, e.g. O(n + m + p), so it is still quoted in the form O(n + m):

    void solution() {
        int start = 1;
        for (int i=start; i<9; i++) {
            cout << i << ' ';
        }
        cout << endl;
        for (char i='A'; i<'I'; i++) {  //'I' is 9th letter in alphabet
            cout << i << ' ';
        }
        cout << endl;
        for (char i='a'; i<'i'; i++) {  //'i' is 9th small letter in alphabet
            cout << i << ' ';
        }
        cout << endl;
    }

The output is:

    1 2 3 4 5 6 7 8
    A B C D E F G H
    a b c d e f g h
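
The n + m notation earns its keep when the two loops run over data of different sizes, so that neither term can be dropped. A minimal sketch (the function printBoth and its parameters are illustrative assumptions):

    #include <iostream>
    using namespace std;

    void printBoth(const int a[], int n, const int b[], int m) {
        for (int i=0; i<n; i++)
            cout << a[i] << ' ';  //n main operations
        cout << endl;
        for (int i=0; i<m; i++)
            cout << b[i] << ' ';  //m main operations
        cout << endl;             //n + m main operations in total: O(n + m)
    }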

Exponential Time Complexity:

O(2ⁿ)

2¹ = 2
2² = 2 x 2
2³ = 2 x 2 x 2
2⁴ = 2 x 2 x 2 x 2
2⁵ = 2 x 2 x 2 x 2 x 2

This is generally written as:

2ⁿ = 2 x 2 x 2 x . . . x 2 (n factors of 2)

2ⁿ is called the nth power of 2. A time complexity of O(2⁵) means there are 2⁵ = 2 x 2 x 2 x 2 x 2 = 32 main operations. In general, O(2ⁿ) means there are 2ⁿ operations.

Multiplication is repeated addition; so 2⁵ = 2 x 2 x 2 x 2 x 2 = 32 is the same as:

    1 + 1 +
    1 + 1 +
    1 + 1 + 1 + 1 +
    1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 +
    1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1

The main operation is just addition. There are 31 addition operations here, since 32 terms are joined by 31 plus signs. However, as concerns time complexity, the number of additions is taken to be 32.

The following function performs the 2⁵ main operations, of time complexity O(2⁵), generalized as O(2ⁿ). The first two main operations are outside the nested for-loops.

    void solution() {
        int n = 5;
        int number = 1 + 1;  //first main operation
        int sum = 1 + 1;     //second main operation
        for (int i=2; i <= n; i++) {
            for (int j=1; j <= sum; j++) {
                number = number + 1;  //main operation: doubles number over the inner loop
            }
            sum = number;  //sum becomes the next power of 2
        }
        cout << number << endl;
    }

The output is 32, as expected. The statements, "int number = 1 + 1;" and "int sum = 1 + 1;", form the first two main operations in the code of interest. There are thirty more coded additions, each of which is just the addition of 1. The statements in the parentheses of the nested for-loops are ignored in determining the time complexity. The statements, "sum = number;" and "cout << number << endl;", are also ignored. The reader should read through the function to really understand what is going on.

The code does 30 coded additions in the nested for-loops, and two additions outside them, making 32; but the 5 rows of manual additions of 1's have 31 addition operations. That is how time complexity is, sometimes. The time complexity for this situation is given as O(2⁵), and generalized as O(2ⁿ).
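
A well-known piece of code whose number of main operations grows exponentially is the naive recursive Fibonacci function: each call makes two further calls, roughly doubling the work for each increase of n. This is a minimal sketch, for illustration only; the exact count of calls is not 2ⁿ, but it is bounded by O(2ⁿ):

    int fib(int n) {
        if (n <= 1)
            return n;                    //base cases: fib(0) = 0, fib(1) = 1
        return fib(n - 1) + fib(n - 2);  //two recursive calls per call: O(2^n)
    }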

Factorial Time Complexity : O(n!)

Now,

        5! = 5 x 4 x 3 x 2 x 1 = 120
=>    5! = 120

Notice that x1 (times 1) does not change the product already obtained at x2.

Multiplication is repeated addition; so the above is the same as:

    (1 + 1 + 1 + 1 + 1) + (1 + 1 + 1 + 1 + 1) + (1 + 1 + 1 + 1 + 1) + (1 + 1 + 1 + 1 + 1) = 20 +
    (1 + 1 + 1 + 1 + 1) + (1 + 1 + 1 + 1 + 1) + (1 + 1 + 1 + 1 + 1) + (1 + 1 + 1 + 1 + 1) = 40 +
    (1 + 1 + 1 + 1 + 1) + (1 + 1 + 1 + 1 + 1) + (1 + 1 + 1 + 1 + 1) + (1 + 1 + 1 + 1 + 1) = 60 +
    (1 + 1 + 1 + 1 + 1) + (1 + 1 + 1 + 1 + 1) + (1 + 1 + 1 + 1 + 1) + (1 + 1 + 1 + 1 + 1) = 80 +
    (1 + 1 + 1 + 1 + 1) + (1 + 1 + 1 + 1 + 1) + (1 + 1 + 1 + 1 + 1) + (1 + 1 + 1 + 1 + 1) = 100 +
    (1 + 1 + 1 + 1 + 1) + (1 + 1 + 1 + 1 + 1) + (1 + 1 + 1 + 1 + 1) + (1 + 1 + 1 + 1 + 1) = 120

All the 1's counted in this series give 120. However, there are only 119 addition operations in the series, since 120 terms are joined by 119 plus signs.

The 1's are in two groups of three rows each, for x2 (times 2). Within each group, the three rows stand for x3 (times 3). Each row is five 1's taken four times, for 5 x 4 = 20.

The following function performs the 5! = 120 main operations, of time complexity O(5!), generalized as O(n!). All the main operations take place inside the nested for-loops.

    void solution() {
        int n = 5;
        int factorial = 0;
        int newAddition = 5;  //how many 1's the next inner loop adds
        int oldAddition = 5;  //the running total already accounted for
        for (int i = n; i>1; i--) {
            for (int j=1; j <= newAddition; j++) {
                factorial = factorial + 1;  //main operation: executes 5 + 15 + 40 + 60 = 120 times
            }
            newAddition = factorial * (i - 1) - oldAddition;  //extra 1's needed to reach the next running product
            oldAddition = factorial * (i - 1);  //5 x 4 = 20, then 20 x 3 = 60, then 60 x 2 = 120; x1 (times 1) does not change the product
        }
        cout << factorial << endl;
    }

The output is 120, as expected. The main operation, "factorial = factorial + 1;", where factorial is just a variable, executes 5 + 15 + 40 + 60 = 120 times; the rest of the operation statements in the code are ignored in the quotation of time complexity. The time complexity is therefore given as O(120), generalized as O(n!). If n = 4, then all the values of 5 in the code have to be changed to 4. The reader should read through the function to really understand what is going on.
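
Factorial time appears in practice when all the orderings (permutations) of n items have to be produced, because n items have n! orderings. The C++ standard library provides the function std::next_permutation; the following minimal sketch (the function name countPermutations is an assumption of this article) counts the 5! = 120 permutations of 5 elements:

    #include <algorithm>
    #include <iostream>
    using namespace std;

    void countPermutations() {
        int arr[5] = {1, 2, 3, 4, 5};  //must begin in sorted order
        int count = 0;
        do {
            count++;                   //main operation: once per permutation, n! times
        } while (next_permutation(arr, arr + 5));
        cout << count << endl;         //prints 120, which is 5!
    }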

Time limit

Nowadays, an average computer can perform 10⁸ operations in less than a second. Sometimes
the programmer has the information he needs about the expected time complexity (for example, app.codility.com would specify the expected time complexity), but at other times he does not.

The time limit set for online tests at app.codility.com/programmers is usually from 1 to 10 seconds. The candidate can therefore estimate the expected time complexity. During contests, the candidate is often given a limit on the size of the data, and from that he can estimate the time complexity within which the task should be solved. This is usually a great convenience, because he can look for a solution that works in a specific time complexity, instead of worrying about a faster solution. For example, at app.codility.com/programmers, if the size of the data:

    •  n <= 1 000 000, the expected time complexity is O(n) or O(n log n),
    •  n <= 10 000, the expected time complexity is O(n²),
    •  n <= 500, the expected time complexity is O(n³).

Of course, these limits are not precise. They are just approximations, and will vary depending
on the specific task.
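
The reader can check the 10⁸-operations-per-second figure on his own computer with a rough timing sketch such as the following; the measured duration will of course vary from machine to machine:

    #include <chrono>
    #include <iostream>
    using namespace std;

    int main(int argc, char **argv)
    {
        volatile long long sum = 0;  //volatile stops the compiler optimizing the loop away
        auto start = chrono::steady_clock::now();
        for (long long i = 0; i < 100000000; i++)  //10^8 simple additions
            sum = sum + 1;
        auto stop = chrono::steady_clock::now();
        chrono::duration<double> elapsed = stop - start;
        cout << elapsed.count() << " seconds" << endl;

        return 0;
    }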

Approximation of Time Complexity
A time complexity given as O(10) may actually involve 8 or 9 operations, or even 11 or 12, though the maximum limit is what is aimed at. In many cases it would have 10 operations or fewer.

Exercise

Problem 1: You are given an integer n. Count the total of 1 + 2 + . . . + n, in O(n²) time.

Solution
The following program has one test case in the C++ main() function (read the comments in the whole program). In the function, 1 is 1; 2 is 1 + 1; 3 is 1 + 1 + 1; 4 is 1 + 1 + 1 + 1; etc.

    #include <iostream>
    using namespace std;

    int slow_solution (int N) {  //using zero based indexing
        int result = 0;
        for (int i=0; i<N; i++) {
            for (int j=0; j < i+1; j++) {  //i=0 or j=0 refers to the first element, 1
                result += 1;               //i=1 or j=1 refers to the second element, 2, etc.
            }
        }

        return result;
    }

    int main(int argc, char **argv)
    {
        int n = 4;
        int res = slow_solution(n);
        cout << res << endl;
        
        return 0;
    }

The output is 10, for input of N = 4.

Problem 2: Repeat the above problem for the time complexity of O(n).

Solution
The following program has one test case in the C++ main() function (read the comments in the whole program). The function does 1 + 2 + 3 + 4 = 10.

    #include <iostream>
    using namespace std;

    int fast_solution (int N) {  //using zero based indexing
        int result = 0;
        for (int i=0; i<N; i++) {
            result = result + (i + 1);  //1=0+1; 2=1+1; 3=2+1; etc.
        }

        return result;
    }

    int main(int argc, char **argv)
    {
        int n = 4;    //1 + 2 + 3 + 4 = 10
        int res = fast_solution(n);
        cout << res << endl;

        return 0;
    }
    
The output is 10; same as above, for input of N = 4.

Problem 3: Repeat the above problem for the constant time complexity, O(1).

Solution
This is a mathematical Arithmetic Progression with first term, a = 1, and common difference, d = 1. The sum, Sₙ, of the first n terms of an arithmetic progression is given by:

    Sₙ = (n/2)[2a + (n-1)d]

This formula is one main operation and should be coded as such. The following program has one test case in the C++ main() function (read the comments for the whole program):

    #include <iostream>
    using namespace std;

    int model_solution (int n) {  //formula: sum of arithmetic progression, with first element, 1 and difference, 1

        int result = n*(2*1 + (n-1)*1)/2;  //one operation, without repetition in any loop; multiplying before dividing avoids integer-division error when n is odd

        return result;
    }

    int main(int argc, char **argv)
    {
        int n = 4;  //sum = n*(2*1 + (n-1)*1)/2 = 1 + 2 + 3 + 4 = 10
        int res = model_solution(n);
        cout << res << endl;

        return 0;
    }

The output is 10; same as above, for input of N = 4.

Space Complexity

Memory limits provide information about the expected space complexity. The programmer can estimate the number of variables that he can declare in his programs. In short, if he has a constant number of variables, he also has constant space complexity: in Big-O notation this is O(1). If he needs to declare an array with n elements, he has linear space complexity - O(n).

More specifically, space complexity is the amount of memory needed to perform the computation. It includes all the variables, both global and local, dynamic pointer data-structures, and, in the case of recursion, the contents of the stack. Depending on the convention, input data may also be included. Space complexity is trickier to calculate than time complexity, because not all of these variables and data-structures may be needed at the same time. Global variables exist and occupy memory all the time; local variables (and additional information kept on the stack) exist only during the invocation of the function.
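
As a minimal sketch of the difference (the functions constantSpaceSum and linearSpaceSum are illustrative assumptions of this article), the first function below sums the numbers 1 to n using a fixed number of variables, O(1) space, while the second first stores all n numbers in a dynamically allocated array, O(n) space:

    #include <iostream>
    using namespace std;

    long constantSpaceSum(int n) {  //O(1) space: a fixed number of variables
        long result = 0;
        for (int i = 1; i <= n; i++)
            result = result + i;
        return result;
    }

    long linearSpaceSum(int n) {    //O(n) space: an array of n elements
        int *arr = new int[n];      //n integers occupy memory at once
        for (int i = 0; i < n; i++)
            arr[i] = i + 1;
        long result = 0;
        for (int i = 0; i < n; i++)
            result = result + arr[i];
        delete[] arr;               //free the O(n) memory
        return result;
    }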

Conclusion
The time complexity of a piece of code is the number of main operations in the code. This is a good indication of how fast the code will run on a particular system (computer). The main operation may consist of one, two or three simple statements.

If the main operation consists of many statements, then single statements sprinkled here and there in the code are ignored in the quotation of the time complexity.
