
Time complexities of common algorithms

| Algorithm | Category | Best Case | Average Case | Worst Case | Notes |
| --- | --- | --- | --- | --- | --- |
| Kruskal’s Algorithm | Graph (MST) | O(E log E) or O(E log V) | O(E log E) or O(E log V) | O(E log E) or O(E log V) | Dominated by sorting edges. E is the number of edges, V the number of vertices. Uses Disjoint Set Union. |
| Prim’s Algorithm (Adjacency Matrix) | Graph (MST) | O(V²) | O(V²) | O(V²) | Efficient for dense graphs. |
| Prim’s Algorithm (Min-Heap) | Graph (MST) | O((V + E) log V) | O((V + E) log V) | O((V + E) log V) | Efficient for sparse graphs. |
| Dijkstra’s Algorithm (Adjacency Matrix) | Graph (Shortest Path) | O(V²) | O(V²) | O(V²) | For non-negative edge weights. |
| Dijkstra’s Algorithm (Min-Heap) | Graph (Shortest Path) | O((V + E) log V) | O((V + E) log V) | O((V + E) log V) | For non-negative edge weights. |
| Bellman-Ford Algorithm | Graph (Shortest Path) | O(V · E) | O(V · E) | O(V · E) | Handles negative edge weights; can detect negative cycles. |
| Depth-First Search (DFS, Adj. List) | Graph Traversal | O(V + E) | O(V + E) | O(V + E) | Visits each vertex and edge once. |
| Depth-First Search (DFS, Adj. Matrix) | Graph Traversal | O(V²) | O(V²) | O(V²) | Visits each vertex and edge once. |
| Breadth-First Search (BFS, Adj. List) | Graph Traversal | O(V + E) | O(V + E) | O(V + E) | Finds shortest paths in unweighted graphs. |
| Breadth-First Search (BFS, Adj. Matrix) | Graph Traversal | O(V²) | O(V²) | O(V²) | Finds shortest paths in unweighted graphs. |
| Heap Sort | Sorting | O(n log n) | O(n log n) | O(n log n) | In-place sorting algorithm. |
| Bubble Sort | Sorting | O(n²) | O(n²) | O(n²) | Simple but generally inefficient. |
| Selection Sort | Sorting | O(n²) | O(n²) | O(n²) | Simple but generally inefficient. |
| Insertion Sort | Sorting | O(n) | O(n²) | O(n²) | Efficient for small or nearly sorted arrays. |
| Merge Sort | Sorting | O(n log n) | O(n log n) | O(n log n) | Stable sort; requires O(n) auxiliary space. |
| Quick Sort | Sorting | O(n log n) | O(n log n) | O(n²) | Generally fastest in practice for average cases; the worst case is rare but possible. |
| Binary Search | Searching | O(1) | O(log n) | O(log n) | Requires a sorted input array. |
Graph Algorithms

  1. Kruskal’s Algorithm (for Minimum Spanning Tree)

    • Goal: Find a subset of edges that connects all vertices in a graph with the minimum possible total edge weight.
    • Time Complexity: O(E log E) or O(E log V)
      • Explanation:
        • E: number of edges, V: number of vertices.
        • The dominant factor is sorting all edges by weight, which takes O(E log E) time.
        • The Disjoint Set Union (DSU) operations (union and find) take nearly constant time on average (O(α(V)), where α is the inverse Ackermann function, which is extremely slow-growing).
        • Since E can be at most V², log E is at most 2 log V, so O(E log E) is equivalent to O(E log V).
    • Typical Use: Finding the MST in sparse graphs (a short Python sketch follows below).
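
A minimal Python sketch of this approach, assuming the graph is given as a list of (weight, u, v) tuples with vertices labeled 0..V-1; the function and variable names are illustrative, not from any particular library:

```python
def kruskal_mst(num_vertices, edges):
    """Return (total_weight, mst_edges) for an undirected weighted graph.

    edges: list of (weight, u, v) tuples; vertices are 0..num_vertices-1.
    """
    parent = list(range(num_vertices))
    size = [1] * num_vertices

    def find(x):
        while parent[x] != x:          # path halving keeps the trees shallow
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    total, mst = 0, []
    # Sorting the edges dominates the running time: O(E log E).
    for w, u, v in sorted(edges):
        ru, rv = find(u), find(v)
        if ru != rv:                   # the edge joins two different components
            if size[ru] < size[rv]:    # union by size
                ru, rv = rv, ru
            parent[rv] = ru
            size[ru] += size[rv]
            total += w
            mst.append((u, v, w))
    return total, mst
```

For example, kruskal_mst(4, [(1, 0, 1), (2, 1, 2), (3, 0, 2), (4, 2, 3)]) returns a total weight of 7 with three tree edges.
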
  2. Prim’s Algorithm (for Minimum Spanning Tree)

    • Goal: Similar to Kruskal’s, finds a Minimum Spanning Tree.
    • Time Complexity:
      • Adjacency Matrix (Dense Graphs): O(V²)
        • Explanation: Iterates V times, and in each iteration it scans all V vertices to find the minimum edge.
      • Adjacency List + Binary Min-Heap (Sparse Graphs): O((V + E) log V)
        • Explanation:
          • Each vertex is extracted from the min-heap once (V extractions, each O(log V)).
          • Each edge might lead to a decrease-key operation in the heap (E operations, each O(log V)).
    • Typical Use: Finding the MST; particularly efficient for dense graphs with the matrix implementation, or sparse graphs with the heap implementation (see the sketch below).
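
A minimal Python sketch of the heap-based variant, assuming an adjacency list of the form {vertex: [(weight, neighbor), ...]}. Python’s heapq has no decrease-key operation, so this substitutes the common “lazy deletion” trick (stale heap entries are skipped); names are illustrative:

```python
import heapq

def prim_mst(adj):
    """Return the total MST weight of a connected undirected graph.

    adj: {vertex: [(weight, neighbor), ...]}.
    """
    start = next(iter(adj))
    visited = set()
    heap = [(0, start)]          # (weight of the edge into the tree, vertex)
    total = 0
    while heap and len(visited) < len(adj):
        w, u = heapq.heappop(heap)
        if u in visited:
            continue             # stale (lazily deleted) entry; skip it
        visited.add(u)
        total += w
        for weight, v in adj[u]:
            if v not in visited:
                heapq.heappush(heap, (weight, v))
    return total
```

With lazy deletion the heap can hold up to E entries, but log E = O(log V), so the O((V + E) log V) bound is unchanged.
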
  3. Dijkstra’s Algorithm (for Shortest Path)

    • Goal: Find the shortest paths from a single source vertex to all other vertices in a graph with non-negative edge weights.
    • Time Complexity:
      • Adjacency Matrix (Dense Graphs): O(V²)
        • Explanation: Similar to Prim’s with a matrix, it iterates V times, scanning all V vertices each time.
      • Adjacency List + Binary Min-Heap (Sparse Graphs): O((V + E) log V)
        • Explanation: Similar to Prim’s with a heap, it involves V extractions and up to E decrease-key operations.
    • Typical Use: GPS navigation, network routing, finding shortest paths in graphs with non-negative weights (sketch below).
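
A minimal Python sketch, assuming an adjacency list {vertex: [(weight, neighbor), ...]}; like the Prim sketch above, it substitutes lazy deletion for decrease-key, which preserves the O((V + E) log V) bound. Names are illustrative:

```python
import heapq

def dijkstra(adj, source):
    """Single-source shortest paths for non-negative edge weights.

    adj: {vertex: [(weight, neighbor), ...]}. Returns {vertex: distance}.
    """
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue                      # stale entry; a shorter path is known
        for w, v in adj[u]:
            nd = d + w                    # candidate distance to v via u
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist
```
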
  4. Bellman-Ford Algorithm (for Shortest Path)

    • Goal: Find the shortest paths from a single source vertex to all other vertices, even in graphs with negative edge weights. Can also detect negative cycles.
    • Time Complexity: O(V · E)
      • Explanation: It relaxes all E edges V - 1 times. An additional V-th pass over the edges can detect negative cycles: if any edge can still be relaxed, a negative cycle exists.
    • Typical Use: Routing protocols that might encounter negative costs (e.g., detecting arbitrage opportunities), detecting negative cycles. A sketch follows below.
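
A minimal Python sketch, assuming directed edges given as (u, v, weight) tuples and vertices labeled 0..V-1; names are illustrative:

```python
def bellman_ford(num_vertices, edges, source):
    """Return (distances, has_negative_cycle).

    edges: list of directed (u, v, weight) tuples.
    """
    INF = float("inf")
    dist = [INF] * num_vertices
    dist[source] = 0
    # Relax every edge V - 1 times: O(V * E) overall.
    for _ in range(num_vertices - 1):
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    # One extra pass: any further improvement implies a negative cycle.
    has_negative_cycle = any(dist[u] + w < dist[v] for u, v, w in edges)
    return dist, has_negative_cycle
```
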
  5. Depth-First Search (DFS)

    • Goal: Traverse or search a tree or graph in a depthward motion, exploring as far as possible along each branch before backtracking.
    • Time Complexity:
      • Adjacency List: O(V + E)
      • Adjacency Matrix: O(V²) (because to find the neighbors of a vertex, you iterate through all V columns of its row)
    • Explanation: Visits each vertex and each edge at most a constant number of times.
    • Typical Use: Topological sorting, finding connected components, cycle detection, maze solving (see the sketch below).
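
A minimal iterative Python sketch, assuming an adjacency list {vertex: [neighbor, ...]}; an explicit stack avoids Python’s recursion limit, and names are illustrative:

```python
def dfs(adj, start):
    """Depth-first traversal; returns vertices in visit order, O(V + E)."""
    visited, order = set(), []
    stack = [start]
    while stack:
        u = stack.pop()                  # LIFO stack drives the depthward motion
        if u in visited:
            continue
        visited.add(u)
        order.append(u)
        # Push neighbors in reverse so they are visited in listed order.
        stack.extend(reversed(adj[u]))
    return order
```
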
  6. Breadth-First Search (BFS)

    • Goal: Traverse or search a tree or graph level by level, exploring all neighbor nodes at the present depth before moving on to nodes at the next depth level.
    • Time Complexity:
      • Adjacency List: O(V + E)
      • Adjacency Matrix: O(V²)
    • Explanation: Similar to DFS, visits each vertex and each edge at most a constant number of times.
    • Typical Use: Finding the shortest path in unweighted graphs, finding connected components, network broadcasting. A sketch follows below.
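
A minimal Python sketch, assuming an adjacency list {vertex: [neighbor, ...]}; names are illustrative:

```python
from collections import deque

def bfs_distances(adj, source):
    """Level-by-level traversal; returns {vertex: hop count from source}.

    In an unweighted graph the hop count is the shortest-path length,
    because vertices are discovered in nondecreasing order of distance.
    """
    dist = {source: 0}
    queue = deque([source])
    while queue:
        u = queue.popleft()              # FIFO queue drives level order
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist
```
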
Sorting Algorithms

  1. Heap Sort

    • Goal: A comparison-based sorting algorithm that uses a binary heap data structure.
    • Time Complexity:
      • Worst Case: O(n log n)
      • Average Case: O(n log n)
      • Best Case: O(n log n)
    • Explanation:
      • Building the max-heap takes O(n) time.
      • Extracting the maximum element n times (each extraction taking O(log n)) and re-heapifying takes O(n log n) time.
    • Space Complexity: O(1) (in-place sort)
    • Typical Use: When guaranteed O(n log n) worst-case performance and O(1) space complexity are required (sketch below).
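
A minimal in-place Python sketch matching the build-heap-then-extract structure described above; the function name is illustrative:

```python
def heap_sort(a):
    """Sort list `a` in place: O(n log n) time, O(1) auxiliary space."""
    n = len(a)

    def sift_down(root, end):
        # Restore the max-heap property for the subtree rooted at `root`,
        # considering only indices < end.
        while True:
            child = 2 * root + 1
            if child >= end:
                return
            if child + 1 < end and a[child + 1] > a[child]:
                child += 1                   # pick the larger child
            if a[root] >= a[child]:
                return
            a[root], a[child] = a[child], a[root]
            root = child

    # Build the max-heap bottom-up in O(n).
    for i in range(n // 2 - 1, -1, -1):
        sift_down(i, n)
    # Repeatedly move the max to the end and re-heapify: O(n log n).
    for end in range(n - 1, 0, -1):
        a[0], a[end] = a[end], a[0]
        sift_down(0, end)
```
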
  2. Other Common Sorting Algorithms:

    • Bubble Sort: O(n²) (Worst, Average, Best) - Simple, but inefficient.
    • Selection Sort: O(n²) (Worst, Average, Best) - Simple, but inefficient.
    • Insertion Sort: O(n²) (Worst, Average), O(n) (Best) - Efficient for small arrays or nearly sorted arrays.
    • Merge Sort: O(n log n) (Worst, Average, Best) - Stable sort, but requires O(n) auxiliary space.
    • Quick Sort: O(n log n) (Average), O(n²) (Worst) - Generally faster in practice due to better constant factors, but the worst case is bad. In-place for typical implementations. A merge sort sketch follows below.
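
As one concrete instance from this list, a minimal top-down merge sort sketch in Python (name illustrative), showing the stable merge and the O(n) auxiliary space:

```python
def merge_sort(a):
    """Return a sorted copy of `a`: stable, O(n log n) time, O(n) space."""
    if len(a) <= 1:
        return a
    mid = len(a) // 2
    left, right = merge_sort(a[:mid]), merge_sort(a[mid:])
    # Merge the two sorted halves; `<=` keeps equal elements in their
    # original relative order, which is what makes the sort stable.
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged
```
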
Searching Algorithms

  1. Binary Search

    • Goal: Efficiently finds the position of a target value within a sorted array.
    • Time Complexity: O(log n)
    • Explanation: With each comparison, the search space is halved. For an array of size n, it takes about log₂ n steps to narrow down to a single element.
    • Space Complexity: O(1) (iterative), O(log n) (recursive, due to the call stack)
    • Prerequisite: The array must be sorted.
    • Typical Use: Searching in databases, finding elements in sorted lists, implementing lower_bound or upper_bound operations (see the sketch below).
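
A minimal iterative Python sketch, assuming a sorted list; the name is illustrative:

```python
def binary_search(a, target):
    """Return an index of `target` in sorted list `a`, or -1 if absent.

    Iterative, so O(1) extra space; O(log n) comparisons.
    """
    lo, hi = 0, len(a) - 1
    while lo <= hi:
        mid = (lo + hi) // 2        # midpoint of the remaining search space
        if a[mid] == target:
            return mid
        if a[mid] < target:
            lo = mid + 1            # target can only be in the right half
        else:
            hi = mid - 1            # target can only be in the left half
    return -1
```
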

Key Concepts for Understanding Time Complexity:

  • Big O Notation (O(·)): Describes the upper bound or worst-case growth rate of an algorithm’s running time as the input size (n) grows.
  • n (or V, E): Represents the input size. For arrays, it’s the number of elements. For graphs, it’s typically the number of vertices (V) and edges (E).
  • Logarithmic (O(log n)): Very efficient. The time grows very slowly as the input size increases (e.g., binary search).
  • Linear (O(n)): The time grows proportionally to the input size (e.g., iterating through an array once).
  • Linearithmic (O(n log n)): Often seen in efficient sorting algorithms.
  • Quadratic (O(n²)): The time grows as the square of the input size. Becomes inefficient for large inputs.
  • Polynomial (O(nᵏ)): Time grows as a polynomial of the input size.
  • Exponential (O(2ⁿ)): Very inefficient. Only practical for very small input sizes.