
Big O Notation

A mathematical notation to describe the performance and complexity of algorithms.

Definition

Big O notation is used to describe an algorithm's performance or complexity, focusing on the most significant factors affecting its execution time or space.

Notations

Understanding the different notations is essential for analyzing and comparing the efficiency of algorithms. The following table gives an overview of the common Big O complexity classes you will encounter, ranked from fastest to slowest growth:

Complexity      Notation      Rank
Constant        O(1)          #1
Logarithmic     O(log n)      #2
Linear          O(n)          #3
Linearithmic    O(n log n)    #4
Quadratic       O(n^2)        #5
Cubic           O(n^3)        #6
Exponential     O(2^n)        #7
Factorial       O(n!)         #8

These notations help in predicting how the performance of an algorithm changes with the size of the input. By understanding and recognizing these patterns, developers can make informed decisions about which algorithms and data structures to use based on the specific requirements and constraints of their applications.
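
As a rough illustration (the choice of n = 1,000 is arbitrary), the following Python snippet prints approximate operation counts for several of these classes, ignoring constant factors:

    import math

    n = 1_000
    growth_rates = [
        ("O(1)",       lambda n: 1),
        ("O(log n)",   lambda n: math.log2(n)),
        ("O(n)",       lambda n: n),
        ("O(n log n)", lambda n: n * math.log2(n)),
        ("O(n^2)",     lambda n: n ** 2),
        ("O(n^3)",     lambda n: n ** 3),
    ]
    for name, f in growth_rates:
        # Rough operation counts; constant factors are ignored on purpose.
        print(f"{name:<12} ~{f(n):>16,.0f} operations")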

[Figure: graph of the different time complexities in comparison]

Analyzing Algorithms

To determine the complexity of an algorithm, follow these steps:


  1. Identify: Look for the basic operations in the algorithm that contribute to its running time. These are the fundamental steps the algorithm takes, such as comparisons, assignments, arithmetic operations, and loops.

  2. Scaling: Determine how the execution time of these operations scales with the input size. Analyze how the number of basic operations changes as the size of the input increases. A helpful trick is to ask yourself: "What about an input size of a billion?"

  3. Significance: Focus on the most significant term and ignore constant factors and less significant terms. In Big-O notation, we are concerned with the term that grows the fastest as the input size increases because it dominates the overall time complexity.

  4. Worst-case Scenario: Analyze the algorithm's performance in the worst-case scenario. Big-O notation typically describes the upper bound of an algorithm's running time, representing the longest time it could take to complete. This helps in understanding the maximum resources required.

Amortized Time

Amortized time analysis gives the average running time per operation over a sequence of operations. The classic example is a dynamic array: it is allocated with a fixed capacity, so when that capacity is exhausted the array must be resized, typically by doubling its capacity, which requires copying every existing element into a new, larger array.


While an individual resize is expensive (copying all current elements is an O(n) operation), amortized analysis shows that the average cost per append, including the occasional resize, is still O(1). Spreading those rare expensive copies over many cheap appends keeps the overall process efficient.
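
A simplified dynamic array in Python might handle appends like this (the class and method names are illustrative, not from any particular library):

    class DynamicArray:
        def __init__(self):
            self.capacity = 1
            self.size = 0
            self.data = [None] * self.capacity

        def append(self, value):
            if self.size == self.capacity:
                self._resize(2 * self.capacity)   # occasional O(n) copy
            self.data[self.size] = value          # usual O(1) write
            self.size += 1

        def _resize(self, new_capacity):
            # Copy every existing element into a larger backing array.
            new_data = [None] * new_capacity
            for i in range(self.size):
                new_data[i] = self.data[i]
            self.data = new_data
            self.capacity = new_capacity

Appending n elements triggers resizes at sizes 1, 2, 4, 8, and so on, so the total copying work is at most about 2n, which works out to O(1) amortized per append.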

Easy Example

Let's consider a simple example to understand how to analyze the time complexity of different algorithms. The code below shows a loop that iterates through an array of size n:
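
One way to write such a loop in Python (the function name is illustrative):

    def print_all(arr):
        # One pass over the input: the body runs n times for n elements.
        for element in arr:
            print(element)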


Let's apply the steps we discussed earlier and walk through the code step-by-step:


  1. Identify: The basic operation in this algorithm is the loop that iterates through the array and prints each element. The primary operations are the iteration (loop control) and the print statement.

  2. Scaling: The loop runs once for each element in the array, so if the array has n elements, the loop will execute n times. This means the execution time scales linearly with the input size. If the input size were a billion, the loop would still run a billion times.

  3. Significance: The most significant term here is n, representing the number of iterations. Constant factors, such as the time taken to print each element, are less significant and can be ignored in Big-O notation.

  4. Worst-case Scenario: The worst-case scenario is the same as the best-case scenario for this simple loop: the loop will always run n times. Therefore, the time complexity in the worst-case scenario is O(n).

Answer: The loop runs n times, so the time complexity is O(n).

Harder Example

This example demonstrates using binary search to find the minimum eating speed such that all piles are eaten within a given number of hours:
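
One way to implement this in Python (the function and variable names are illustrative):

    import math

    def min_eating_speed(piles, h):
        # Binary search over candidate speeds in the range 1..max(piles).
        low, high = 1, max(piles)
        while low < high:
            speed = (low + high) // 2
            # O(n) pass: total hours needed to finish every pile at this speed.
            hours = sum(math.ceil(pile / speed) for pile in piles)
            if hours <= h:
                high = speed       # fast enough; try a slower (smaller) speed
            else:
                low = speed + 1    # too slow; must eat faster
        return low

Here m = max(piles) bounds the search range, and each probe of the binary search costs one O(n) pass over the piles.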


To determine the time complexity of this algorithm, follow these steps:


  1. Identify: The basic operations in this algorithm include the binary search loop, the calculation of total hours for a given speed, and the comparison operations to adjust the search range.

  2. Scaling: We know that binary search runs in O(log m) time, where m is the maximum pile size (the candidate speeds range from 1 to m). For each iteration of the binary search, the algorithm calculates the total hours required, which involves iterating through all the piles and takes O(n) time, where n is the number of piles. Thus, the total time complexity is O(n log m).

  3. Significance: The most significant term here is the product of the binary search complexity (log m) and the iteration through the piles (n). Constant factors and less significant terms are ignored in Big-O notation.

  4. Worst-case Scenario: The worst-case scenario is when the binary search runs its maximum number of iterations, and each iteration requires scanning through all the piles. The time complexity in the worst-case scenario remains O(n log m).

Answer: The binary search runs in O(log m) time, and for each iteration, it scans all piles in O(n) time. This results in a total time complexity of O(n log m).

Challenging Example

This example demonstrates generating all permutations of a list of numbers using depth-first search:
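
One way to write this DFS in Python (the helper names are illustrative):

    def permutations(nums):
        result = []

        def dfs(path, used):
            # A full-length path is one complete permutation.
            if len(path) == len(nums):
                result.append(path[:])
                return
            for i in range(len(nums)):
                if used[i]:
                    continue
                used[i] = True
                path.append(nums[i])
                dfs(path, used)
                path.pop()        # backtrack and free this number again
                used[i] = False

        dfs([], [False] * len(nums))
        return result

Each of the n! complete permutations requires copying a path of length n, which is where the O(n * n!) bound in the analysis below comes from.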


To determine the time complexity of this algorithm, follow these steps:


  1. Identify: The basic operations in this algorithm include the depth-first search (DFS) recursion and the iteration over the numbers in the list to build permutations.

  2. Scaling: The DFS function explores all possible permutations of the input list. For each position in the list, it tries every number that hasn't been used yet, leading to n! (factorial of n) permutations. Thus, the DFS function is called O(n!) times, and within each call, it performs operations that take O(n) time.

  3. Significance: The most significant term here is the product of the number of permutations (n!) and the operations within each DFS call (n). Constant factors and less significant terms are ignored in Big-O notation. Therefore, the time complexity is O(n * n!).

  4. Worst-case Scenario: The worst-case scenario for this algorithm involves generating all permutations for the input list, which is also the typical scenario. The time complexity in the worst-case scenario is O(n * n!).

Answer: The DFS function runs in O(n * n!) time because it generates all possible permutations of the input list and performs operations for each permutation.

Best Practices

When working with algorithms, keep the following tips in mind to optimize their performance:


  • Understand the problem: Clearly define the problem and identify the requirements before choosing an algorithm. Break down the problem, consider edge cases, and ensure you understand the input and output requirements.

  • Choose the right data structures: Selecting the right data structures can significantly impact the efficiency of your algorithm. For example, using a hash map can reduce the time complexity of search operations from O(n) to O(1) (see the sketch after this list).

  • Base Cases: Ensure that your algorithm handles base cases first. Base cases are the simplest, smallest instances of the problem, and they help prevent infinite recursion or iteration.

  • Space-Time Tradeoff: Consider the trade-offs between time and space complexity, and choose the approach that best suits your needs. Sometimes, using more memory can reduce the time complexity, or vice versa.
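
As a quick illustration of the data-structure point above (a toy membership test), looking an item up in a Python set is O(1) on average, while the same lookup in a list is O(n):

    names_list = ["ada", "grace", "alan", "edsger"]
    names_set = set(names_list)

    # O(n): scans the list element by element in the worst case.
    print("alan" in names_list)

    # O(1) on average: a hash lookup, independent of how many names are stored.
    print("alan" in names_set)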
