In the previous article, I taught you some basics of data structures and algorithms that help you store data efficiently and solve complex problems with easy and simple steps. Although it is good to know how to use and implement DSA to get optimal performance, learning how to analyze the efficiency of your code is equally important. The Time Complexity and Big O Notation are the two major factors that help you determine how well an algorithm will perform in a certain situation. So, in this article, we will learn how to calculate the Time Complexity and Big O notation for an algorithm.
Algorithm Analysis – A Foundation for Finding the Efficiency
The Algorithm Analysis is the process of analyzing the efficiency of an algorithm. When we say analysis, we refer to the variance in an algorithm's performance with increasing input size. The efficiency is measured in terms of both time (how much time the algorithm takes to run) and space (how much memory the algorithm consumes) complexities. Although both hold equal significance, we will focus on the Time Complexity and its Big O notation in this article. By learning these, you will be able to compare the algorithms that solve the same problems and choose the one that performs the best in certain situations.
Understanding Time Complexity
The amount of time an algorithm takes to execute relative to the size of its input is called Time Complexity. It measures how the number of operations an algorithm performs grows as the input size increases. Instead of measuring execution time in seconds, we express it as a function of the input size, which is conventionally denoted 'n.'
The time complexity of an algorithm depends upon the following three scenarios:
- Best-Case Scenario: In this case, the data is arranged in such a way that the algorithm needs to perform the minimum number of operations. For instance, consider an algorithm that finds an element in an array. The best-case scenario would be if the element is placed in the first index since the algorithm will have to perform only one operation. Hence, the best-case time complexity would be very low.
- Worst-Case Scenario: The worst-case scenario is the one in which the data is arranged so that the algorithm has to perform the maximum number of operations. For instance, consider an algorithm that finds an element in an array. The worst case occurs when the element is at the last index (or absent entirely), since the algorithm must examine every element to finish. It is the most important scenario because it gives you an upper bound on the algorithm's runtime.
- Average-Case Scenario: The average case uses probabilistic analysis to give the expected runtime of the algorithm over all possible inputs, which makes it the most complex scenario to work out. In practice, the worst case is considered most often because it guarantees an upper limit on the algorithm's execution time, and it is the scenario that Big O notation typically describes.
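To make these scenarios concrete, here is a minimal Python sketch of a linear search, the element-finding algorithm described above. The function name `linear_search` and the sample data are just illustrative:

```python
def linear_search(arr, target):
    """Return the index of target in arr, or -1 if it is absent."""
    for i, value in enumerate(arr):
        if value == target:  # one comparison per element examined
            return i
    return -1

data = [7, 3, 9, 1, 5]

# Best case: target sits at index 0, so only one comparison is needed.
print(linear_search(data, 7))    # 0

# Worst case: target is at the last index, so all n elements are examined.
print(linear_search(data, 5))    # 4

# Also worst case: target is absent, again requiring n comparisons.
print(linear_search(data, 42))   # -1
```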
How to Calculate Time Complexity of an Algorithm?
Calculating the time complexity of an algorithm can feel daunting at first. When I first learned it, I could not even understand the concept of time complexity, let alone calculate it. However, I'll try to make it as simple and easy as possible for you:
1. Assign a cost to all the basic operations (arithmetic, logical, comparison, assignment, and memory allocation). We usually assume that all the operations take a constant amount of time to execute. We denote this constant as 't' or 1 for simplicity.
2. After assigning the cost, we analyze the sequential steps, loops, conditionals, and recursions in the algorithm's structure and express the total number of basic operations performed as a mathematical function of the input size 'n,' written T(n).
3. According to the formal definition of Big O notation, T(n) can be represented as O(f(n)) if there exist positive constants t and n0 such that for all n >= n0, the following inequality holds:
T(n) <= t · f(n).
Hence, f(n) is an upper bound on the growth rate of T(n) for large values of n.
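As a worked example of these steps, consider a hypothetical function that sums an array. The per-line costs in the comments follow the constant-cost assumption from step 1:

```python
def sum_array(arr):
    total = 0        # 1 assignment
    for x in arr:    # the loop body runs n times
        total += x   # 1 addition + 1 assignment per iteration
    return total     # 1 return

# Step 2: T(n) = c1 + c2 * n for some constants c1 and c2.
# Step 3: T(n) <= (c1 + c2) * n for all n >= 1, so f(n) = n and the
# algorithm is O(n).
print(sum_array([1, 2, 3, 4]))  # 10
```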
Understanding Big O Notation
The Big O notation is the mathematical representation of how the execution time of an algorithm increases as the size of the input it processes increases. It describes the upper limit on the growth rate of an algorithm's run time as the size of the input approaches infinity. It provides a standardized way of comparing the efficiency of different algorithms regardless of the specific hardware or software implementation details.
Common Big O Notations
The following Big O notations are the ones you will encounter most often in DSA:
- Constant Time (O(1)): When an algorithm takes the same amount of time to execute regardless of the input size.
- Logarithmic Time (O(log n)): When the execution time of an algorithm increases logarithmically with the input size. It usually happens with the algorithms that divide the problem space in half in each step.
- Linear Time (O(n)): When the execution time of an algorithm increases in direct proportion to the input size.
- Linearithmic Time (O(n log n)): Used when the execution time of an algorithm depends on both linear and logarithmic factors.
- Quadratic Time (O(n²)): When the execution time of an algorithm is proportional to the square of the input size.
- Exponential Time (O(2ⁿ)): When the execution time of an algorithm doubles with each additional element in the input. Algorithms with this time complexity are usually inefficient and become impractical for large datasets.
- Factorial Time (O(n!)): When the execution time of an algorithm grows extremely rapidly with the input size. These algorithms are only usable for very small inputs.
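A few of these classes can be illustrated with small Python sketches. The function names here are hypothetical, and `binary_search` assumes its input is already sorted:

```python
def first_element(arr):
    # O(1): a single indexing operation, regardless of how long arr is.
    return arr[0]

def binary_search(arr, target):
    # O(log n): the search space is halved on every iteration.
    lo, hi = 0, len(arr) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if arr[mid] == target:
            return mid
        if arr[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1

def has_duplicates(arr):
    # O(n²): nested loops compare every pair of elements.
    for i in range(len(arr)):
        for j in range(i + 1, len(arr)):
            if arr[i] == arr[j]:
                return True
    return False

print(first_element([9, 4, 2]))          # 9
print(binary_search([1, 3, 5, 7], 5))    # 2
print(has_duplicates([1, 2, 1]))         # True
```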
Rules for Determining the Big O Notation
While determining the Big O notation for an algorithm, you should consider the following key rules:
• Ignoring Constant Factors: Ignore constant factors in the number of operations. For instance, an algorithm that performs 2n operations is written as O(n) rather than O(2n).
• Identifying Dominant Terms: For an algorithm with multiple steps that have different growth rates, always consider the term that grows the fastest as the input size increases. For instance, suppose an algorithm has a time complexity of T(n) = n² + n + 1. Its Big O notation is O(n²), since n² is the dominant term. As n becomes very large, the lower-order terms become insignificant.
• Considering the Worst-Case Scenario: The Big O notation usually describes the worst-case scenario for the algorithm.
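To see the dominant-term rule numerically, here is a small Python sketch for a hypothetical algorithm whose exact operation count is T(n) = n² + n + 1:

```python
def operation_count(n):
    # Exact operation count for a hypothetical algorithm: T(n) = n^2 + n + 1.
    return n * n + n + 1

# The ratio T(n) / n^2 approaches 1 as n grows, which is why the
# lower-order terms n and 1 can be dropped: T(n) is O(n^2).
for n in (10, 1_000, 100_000):
    print(n, operation_count(n) / (n * n))
```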
Now that you are familiar with the concepts of time complexity and Big O notation, try calculating both for the algorithms in future articles, where we will explore different data structures and algorithms.