
What is the Big O quadratic time complexity?

Published in Time Complexity · 5 min read

Big O quadratic time complexity is denoted as O(N²), indicating that an algorithm's execution time grows proportionally to the square of the input size (N).

Understanding Quadratic Time Complexity

Quadratic time complexity, represented as O(N²), classifies algorithms whose execution time or space requirements are directly proportional to the square of the input size, N. This means that as the number of elements (N) in the input grows, the number of operations an algorithm performs increases very rapidly, at a rate of N multiplied by N.

For example, if an algorithm takes 1 millisecond to process 10 elements, it might take roughly 100 milliseconds to process 100 elements (10× the input means 10² = 100× the work), and roughly 10,000 milliseconds to process 1,000 elements (100× the input means 100² = 10,000× the work). This rapid increase makes O(N²) algorithms less efficient for very large datasets.
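The scaling above can be sketched with a small helper (`quadratic_ops` is a hypothetical name, not from any library) that models the operation count of an O(N²) algorithm:

```python
def quadratic_ops(n):
    """Operation count for an idealized O(N^2) algorithm on input size n."""
    return n * n

# Doubling the input size quadruples the work:
ratio = quadratic_ops(200) / quadratic_ops(100)  # 40,000 / 10,000 = 4.0
```

This is why quadratic algorithms feel fine on small inputs but stall on large ones: each doubling of N multiplies the runtime by four.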

Characteristics of O(N²) Algorithms

  • Growth Rate: The performance degrades significantly as the input size increases.
  • Typical Scenarios: This complexity often arises from nested iterations, where an inner loop iterates over a collection for each element of an outer loop.
  • Efficiency: Generally considered inefficient for large-scale applications compared to algorithms with linear (O(N)), logarithmic (O(log N)), or constant (O(1)) time complexities.

Common Scenarios Leading to O(N²)

Quadratic time complexity frequently appears in algorithms that involve processing pairs of elements within a dataset, often implemented using nested loops.

  • Nested Loops: The most classic example of O(N²) is an algorithm with two nested for loops, where both loops iterate up to N times. For every iteration of the outer loop, the inner loop completes all its iterations.
  • Operations within Loops: It's crucial to analyze the complexity of operations inside loops. If an operation that is inherently O(N) (e.g., computing the minimum of a list, or removing an element from a list, which might require shifting subsequent elements) is performed within an outer loop that also iterates N times, the total complexity escalates to **O(N) × O(N) = O(N²)**.
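A minimal sketch of this hidden-cost pattern, using Python's `list.pop(0)` (which shifts every remaining element and is therefore O(N)) inside an O(N) loop; `drain_front` is an illustrative name, not a standard function:

```python
def drain_front(items):
    """Remove elements from the front of a list one at a time.

    Each items.pop(0) shifts all remaining elements left (O(N)),
    and the loop runs N times, so the whole function is O(N^2)
    even though it looks like a single loop.
    """
    results = []
    while items:
        results.append(items.pop(0))  # O(N) shift hidden inside an O(N) loop
    return results
```

Using `collections.deque` with `popleft()` (an O(1) operation) would bring the same logic down to O(N).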

Practical Implications and Examples

Understanding O(N²) is crucial for optimizing code performance. While acceptable for small datasets, it can become a significant bottleneck for larger ones.

Example 1: Nested Loop for Pairwise Comparison

Consider an algorithm designed to find if there are any duplicate numbers in an array by comparing each element with every other element.

```python
def contains_duplicates(arr):
    n = len(arr)
    for i in range(n):       # Outer loop: N iterations
        for j in range(n):   # Inner loop: N iterations
            if i != j and arr[i] == arr[j]:
                return True
    return False
```

In this example, the outer loop runs N times, and for each iteration, the inner loop also runs N times, leading to N * N = N² operations in the worst case.
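This count can be made concrete with an instrumented variant (a hypothetical helper written for this article, not part of the original function) that tallies element comparisons instead of returning early:

```python
def count_comparisons(arr):
    """Count the element comparisons the nested-loop duplicate
    check performs when it never finds a match."""
    n = len(arr)
    comparisons = 0
    for i in range(n):
        for j in range(n):
            if i != j:          # same pair-by-pair structure as above
                comparisons += 1
    return comparisons

# For N distinct elements, the nested loops compare N * (N - 1) pairs:
# count_comparisons(list(range(10))) == 90
```

The exact count is N(N − 1) rather than N² because an element is never compared with itself, but the leading term is still N², so the complexity is O(N²).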

Example 2: Bubble Sort

Bubble Sort is a well-known sorting algorithm with O(N²) complexity. It repeatedly steps through the list, compares adjacent elements, and swaps them if they are in the wrong order.

```python
def bubble_sort(arr):
    n = len(arr)
    for i in range(n - 1):        # Outer loop: N-1 passes
        for j in range(0, n - i - 1): # Inner loop: ~N comparisons
            if arr[j] > arr[j + 1]:
                arr[j], arr[j + 1] = arr[j + 1], arr[j]
    return arr
```

Here, the nested loops contribute to the quadratic time complexity. The inner loop's iteration count shrinks by one with each outer pass, giving roughly N²/2 comparisons in total, but the constant factor does not change the asymptotic class: it remains O(N²).
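A common refinement (not shown in the version above) adds an early-exit flag: if a full pass makes no swaps, the list is already sorted and the remaining passes can be skipped. This is a sketch of that variant; the worst case is still O(N²), but the best case (already-sorted input) improves to O(N):

```python
def bubble_sort_early_exit(arr):
    """Bubble Sort with an early-exit flag.

    If a complete pass performs no swaps, the list is sorted and
    we stop. Best case: O(N) for sorted input. Worst case: O(N^2).
    """
    n = len(arr)
    for i in range(n - 1):
        swapped = False
        for j in range(n - i - 1):
            if arr[j] > arr[j + 1]:
                arr[j], arr[j + 1] = arr[j + 1], arr[j]
                swapped = True
        if not swapped:  # no swaps in this pass -> already sorted
            break
    return arr
```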

Example 3: Selection Sort

Selection Sort also exhibits O(N²) complexity. It works by repeatedly finding the minimum element from the unsorted part and putting it at the beginning.

```python
def selection_sort(arr):
    n = len(arr)
    for i in range(n):
        min_idx = i
        # This inner loop finds the minimum element in the remaining unsorted array (an O(N) operation)
        for j in range(i + 1, n):
            if arr[j] < arr[min_idx]:
                min_idx = j
        # Swap the found minimum element with the first element of the unsorted part
        arr[i], arr[min_idx] = arr[min_idx], arr[i]
    return arr
```

The process of finding the minimum element (an O(N) operation) is repeated N times, leading to an overall O(N²) complexity.
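The arithmetic behind this can be made explicit: the inner loop runs N − 1 times on the first pass, N − 2 on the second, and so on, summing to N(N − 1)/2 comparisons. A tiny helper (an illustrative name, not part of Selection Sort itself) computes that count:

```python
def selection_sort_comparisons(n):
    """Total comparisons Selection Sort makes on n elements:
    (n-1) + (n-2) + ... + 1 = n * (n - 1) / 2, which is O(N^2)."""
    return n * (n - 1) // 2

# For n = 100 this is 100 * 99 / 2 = 4,950 comparisons.
```

Note that unlike Bubble Sort, Selection Sort performs this full count of comparisons even on already-sorted input, so its best and worst cases are both O(N²).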

Optimizing Beyond O(N²)

For large datasets, algorithms with quadratic time complexity can be prohibitively slow. Developers often seek more efficient algorithms, such as those with:

  • O(N log N) complexity: (e.g., Merge Sort, Quick Sort)
  • O(N) complexity: (e.g., counting sort for specific cases, or algorithms with a single pass through the data)
  • O(log N) complexity: (e.g., binary search)
  • O(1) complexity: (e.g., accessing an element by index in an array)
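As a concrete illustration of the O(N) bullet, the duplicate check from Example 1 can be rewritten as a single pass using a set, whose membership test is O(1) on average (the `_fast` suffix is this article's naming, not a standard convention):

```python
def contains_duplicates_fast(arr):
    """Single-pass duplicate check: O(N) average time, O(N) extra space.

    Each membership test and insertion on a set is O(1) on average,
    so one pass over the data replaces the O(N^2) pairwise comparison.
    """
    seen = set()
    for x in arr:
        if x in seen:
            return True
        seen.add(x)
    return False
```

This is a typical pattern for escaping O(N²): trade a little extra memory (the set) for a dramatically faster lookup inside the loop.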

The following table provides a quick comparison of common Big O notations:

| Big O Notation | Description | Growth Factor (N=100) | Example Algorithm(s) |
|---|---|---|---|
| O(1) | Constant Time | 1 | Array element access |
| O(log N) | Logarithmic Time | ~7 | Binary Search |
| O(N) | Linear Time | 100 | Simple loop iteration, searching an unsorted list |
| O(N log N) | Linearithmic Time | ~700 | Merge Sort, Quick Sort |
| O(N²) | Quadratic Time | 10,000 | Bubble Sort, Selection Sort, Nested Loops |
| O(2ᴺ) | Exponential Time | ~1.27 × 10³⁰ | Recursive Fibonacci (naive implementation) |
| O(N!) | Factorial Time | ~9.33 × 10¹⁵⁷ | Traveling Salesperson Problem (brute force) |