## Time Complexities of Common Algorithms
| Algorithm | Category | Best Case Time Complexity | Average Case Time Complexity | Worst Case Time Complexity | Notes |
|---|---|---|---|---|---|
| Kruskal’s Algorithm | Graph (MST) | O(E log E) | O(E log E) | O(E log E) | Dominated by sorting edges. |
| Prim’s Algorithm (Adjacency Matrix) | Graph (MST) | O(V²) | O(V²) | O(V²) | Efficient for dense graphs. |
| Prim’s Algorithm (Min-Heap) | Graph (MST) | O((V + E) log V) | O((V + E) log V) | O((V + E) log V) | Efficient for sparse graphs. |
| Dijkstra’s Algorithm (Adjacency Matrix) | Graph (Shortest Path) | O(V²) | O(V²) | O(V²) | For non-negative edge weights. |
| Dijkstra’s Algorithm (Min-Heap) | Graph (Shortest Path) | O((V + E) log V) | O((V + E) log V) | O((V + E) log V) | For non-negative edge weights. |
| Bellman-Ford Algorithm | Graph (Shortest Path) | O(V · E) | O(V · E) | O(V · E) | Handles negative edge weights; can detect negative cycles. |
| Depth-First Search (DFS, Adj. List) | Graph Traversal | O(V + E) | O(V + E) | O(V + E) | Visits each vertex and edge once. |
| Depth-First Search (DFS, Adj. Matrix) | Graph Traversal | O(V²) | O(V²) | O(V²) | |
| Breadth-First Search (BFS, Adj. List) | Graph Traversal | O(V + E) | O(V + E) | O(V + E) | Finds shortest path in unweighted graphs. |
| Breadth-First Search (BFS, Adj. Matrix) | Graph Traversal | O(V²) | O(V²) | O(V²) | |
| Heap Sort | Sorting | O(n log n) | O(n log n) | O(n log n) | In-place sorting algorithm. |
| Bubble Sort | Sorting | O(n²) | O(n²) | O(n²) | Simple but generally inefficient. |
| Selection Sort | Sorting | O(n²) | O(n²) | O(n²) | Simple but generally inefficient. |
| Insertion Sort | Sorting | O(n) | O(n²) | O(n²) | Efficient for small or nearly sorted arrays. |
| Merge Sort | Sorting | O(n log n) | O(n log n) | O(n log n) | Stable sort; requires O(n) auxiliary space. |
| Quick Sort | Sorting | O(n log n) | O(n log n) | O(n²) | Generally fastest in practice for average cases; worst case is rare but possible. |
| Binary Search | Searching | O(1) | O(log n) | O(log n) | Requires a sorted input array. |
## Detailed Explanations

### Graph Algorithms

- Kruskal’s Algorithm (for Minimum Spanning Tree)
  - Goal: Find a subset of edges that connects all vertices in a graph with the minimum possible total edge weight.
  - Time Complexity: O(E log E) or O(E log V)
    - Explanation:
      - E: the number of edges; V: the number of vertices.
      - The dominant factor is sorting all edges by weight, which takes O(E log E) time.
      - The Disjoint Set Union (DSU) operations (union and find) take nearly constant time on average (O(α(V)), where α is the inverse Ackermann function, which is extremely slow-growing).
      - Since E can be at most V², O(log E) is roughly equivalent to O(log V), so the two bounds are interchangeable.
  - Typical Use: Finding the MST in sparse graphs (a sketch follows).
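
A minimal Python sketch of the idea, assuming the graph arrives as a list of `(weight, u, v)` tuples over vertices `0..num_vertices-1`; the function name and representation are illustrative, and the DSU uses simple path compression:

```python
def kruskal(num_vertices, edges):
    """Return (total_weight, mst_edges) for an undirected, connected graph.

    edges: list of (weight, u, v) tuples; vertices are 0..num_vertices-1.
    """
    parent = list(range(num_vertices))      # DSU: each vertex starts alone

    def find(x):
        while parent[x] != x:               # Walk to the root,
            parent[x] = parent[parent[x]]   # compressing the path as we go
            x = parent[x]
        return x

    total, mst = 0, []
    for w, u, v in sorted(edges):           # O(E log E): the dominant cost
        ru, rv = find(u), find(v)
        if ru != rv:                        # Edge joins two components
            parent[ru] = rv                 # Union
            total += w
            mst.append((u, v, w))
    return total, mst
```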
- Prim’s Algorithm (for Minimum Spanning Tree)
  - Goal: Similar to Kruskal’s, finds a Minimum Spanning Tree.
  - Time Complexity:
    - Adjacency Matrix (Dense Graphs): O(V²)
      - Explanation: Iterates V times, and in each iteration, it scans all V vertices to find the minimum edge.
    - Adjacency List + Binary Min-Heap (Sparse Graphs): O((V + E) log V)
      - Explanation:
        - Each vertex is extracted from the min-heap once (V extractions, each O(log V)).
        - Each edge might lead to a decrease-key operation in the heap (E operations, each O(log V)).
  - Typical Use: Finding the MST, particularly efficient for dense graphs with the matrix implementation, or sparse graphs with the heap implementation (sketched below).
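
A sketch of the heap variant, assuming an adjacency-list dict `{u: [(weight, v), ...]}`. Python’s `heapq` has no decrease-key, so this uses the common lazy-deletion workaround (stale entries are popped and skipped), which preserves the O(E log V) bound since log E = O(log V):

```python
import heapq

def prim(adj, start=0):
    """Return the total MST weight of a connected, undirected graph.

    adj: {u: [(weight, v), ...]} adjacency list.
    """
    visited = set()
    heap = [(0, start)]                     # (weight of edge into vertex, vertex)
    total = 0
    while heap and len(visited) < len(adj):
        w, u = heapq.heappop(heap)          # O(log V) per pop
        if u in visited:
            continue                        # Stale entry: skip (lazy deletion)
        visited.add(u)
        total += w
        for weight, v in adj[u]:
            if v not in visited:
                heapq.heappush(heap, (weight, v))
    return total
```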
- Dijkstra’s Algorithm (for Shortest Path)
  - Goal: Find the shortest paths from a single source vertex to all other vertices in a graph with non-negative edge weights.
  - Time Complexity:
    - Adjacency Matrix (Dense Graphs): O(V²)
      - Explanation: Similar to Prim’s with a matrix, it iterates V times, scanning all V vertices each time.
    - Adjacency List + Binary Min-Heap (Sparse Graphs): O((V + E) log V)
      - Explanation: Similar to Prim’s with a heap, involves V extractions and up to E decrease-key operations, each costing O(log V).
  - Typical Use: GPS navigation, network routing, finding shortest paths in graphs with non-negative weights (a sketch follows).
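
A heap-based sketch under the same assumed adjacency-list representation as the Prim example; re-pushing a vertex with a smaller distance stands in for decrease-key:

```python
import heapq

def dijkstra(adj, source):
    """Return {vertex: shortest distance from source}.

    adj: {u: [(weight, v), ...]} with non-negative edge weights.
    """
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist[u]:
            continue                        # Stale entry: u already settled
        for w, v in adj[u]:
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd                # "Decrease-key" via re-push
                heapq.heappush(heap, (nd, v))
    return dist
```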
- Bellman-Ford Algorithm (for Shortest Path)
  - Goal: Find the shortest paths from a single source vertex to all other vertices, even in graphs with negative edge weights. Can also detect negative cycles.
  - Time Complexity: O(V · E)
    - Explanation: It relaxes all edges V − 1 times. In each pass, it iterates through all E edges. An additional V-th pass can detect negative cycles.
  - Typical Use: Routing protocols that might encounter negative costs (e.g., in arbitrage opportunities), detecting negative cycles (see the sketch below).
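
A straightforward sketch, assuming directed edges given as `(u, v, weight)` tuples; the extra pass at the end implements the negative-cycle check described above:

```python
def bellman_ford(num_vertices, edges, source):
    """Return (dist, has_negative_cycle).

    edges: list of (u, v, weight) directed edges over vertices 0..num_vertices-1.
    """
    INF = float("inf")
    dist = [INF] * num_vertices
    dist[source] = 0
    for _ in range(num_vertices - 1):       # V - 1 relaxation passes: O(V * E)
        for u, v, w in edges:
            if dist[u] != INF and dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    for u, v, w in edges:                   # V-th pass: any improvement means
        if dist[u] != INF and dist[u] + w < dist[v]:
            return dist, True               # a reachable negative cycle exists
    return dist, False
```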
- Depth-First Search (DFS)
  - Goal: Traverse or search a tree or graph in a depthward motion, exploring as far as possible along each branch before backtracking.
  - Time Complexity:
    - Adjacency List: O(V + E)
    - Adjacency Matrix: O(V²) (because to find the neighbors of a vertex, you iterate through all V columns of its row)
  - Explanation: Visits each vertex and each edge at most a constant number of times.
  - Typical Use: Topological sorting, finding connected components, cycle detection, maze solving (sketched below).
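
A recursive sketch over an assumed adjacency-list dict `{u: [v, ...]}`; very deep graphs would need an explicit stack instead, since Python limits recursion depth:

```python
def dfs(adj, start):
    """Return the vertices reachable from start, in depth-first visit order."""
    visited, order = set(), []

    def visit(u):
        visited.add(u)
        order.append(u)
        for v in adj[u]:                    # Each edge examined O(1) times
            if v not in visited:
                visit(v)                    # Go deep before trying siblings

    visit(start)
    return order
```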
- Breadth-First Search (BFS)
  - Goal: Traverse or search a tree or graph level by level, exploring all neighbor nodes at the present depth before moving on to nodes at the next depth level.
  - Time Complexity:
    - Adjacency List: O(V + E)
    - Adjacency Matrix: O(V²)
  - Explanation: Similar to DFS, visits each vertex and each edge at most a constant number of times.
  - Typical Use: Finding the shortest path in unweighted graphs, finding connected components, network broadcasting (a sketch follows).
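
A queue-based sketch over the same assumed adjacency-list dict; the returned hop counts are the unweighted shortest-path distances mentioned above:

```python
from collections import deque

def bfs(adj, start):
    """Return {vertex: hop distance from start} for an unweighted graph."""
    dist = {start: 0}
    queue = deque([start])
    while queue:
        u = queue.popleft()                 # O(1) dequeue
        for v in adj[u]:
            if v not in dist:               # First visit gives the shortest
                dist[v] = dist[u] + 1       # hop count, since vertices are
                queue.append(v)             # expanded in order of distance
    return dist
```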
### Sorting Algorithms

- Heap Sort
  - Goal: A comparison-based sorting algorithm that uses a binary heap data structure.
  - Time Complexity:
    - Worst Case: O(n log n)
    - Average Case: O(n log n)
    - Best Case: O(n log n)
  - Explanation:
    - Building the max-heap takes O(n) time.
    - Extracting the maximum element n − 1 times (each extraction taking O(log n)) and re-heapifying takes O(n log n) time.
  - Space Complexity: O(1) (in-place sort)
  - Typical Use: When O(n log n) worst-case performance and O(1) space complexity are required (a sketch follows).
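
An in-place sketch with an explicit sift-down, mirroring the two phases above (heap construction, then repeated extraction); `heap_sort` is an illustrative name:

```python
def heap_sort(a):
    """Sort the list a in place using a max-heap."""
    n = len(a)

    def sift_down(root, end):
        # Restore the max-heap property for a[root:end]: O(log n).
        while (child := 2 * root + 1) < end:
            if child + 1 < end and a[child + 1] > a[child]:
                child += 1                  # Pick the larger child
            if a[root] >= a[child]:
                return
            a[root], a[child] = a[child], a[root]
            root = child

    for i in range(n // 2 - 1, -1, -1):     # Build the max-heap: O(n)
        sift_down(i, n)
    for end in range(n - 1, 0, -1):         # n - 1 extractions: O(n log n)
        a[0], a[end] = a[end], a[0]         # Move the current max into place
        sift_down(0, end)
```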
- Other Common Sorting Algorithms:
  - Bubble Sort: O(n²) (Worst, Average, Best) - Simple, but inefficient.
  - Selection Sort: O(n²) (Worst, Average, Best) - Simple, but inefficient.
  - Insertion Sort: O(n²) (Worst, Average), O(n) (Best) - Efficient for small arrays or nearly sorted arrays (sketched below).
  - Merge Sort: O(n log n) (Worst, Average, Best) - Stable sort, but requires O(n) auxiliary space.
  - Quick Sort: O(n log n) (Average), O(n²) (Worst) - Generally faster in practice due to better constant factors, but the worst case is bad. In-place for typical implementations.
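
As one example from the list, a minimal insertion sort; the O(n) best case comes from the inner loop exiting immediately on already-sorted input:

```python
def insertion_sort(a):
    """Sort the list a in place; O(n) when a is already (nearly) sorted."""
    for i in range(1, len(a)):
        key = a[i]
        j = i - 1
        while j >= 0 and a[j] > key:        # Shift larger elements right;
            a[j + 1] = a[j]                 # exits at once if already sorted
            j -= 1
        a[j + 1] = key                      # Drop key into its sorted slot
```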
### Searching Algorithms

- Binary Search
  - Goal: Efficiently finds the position of a target value within a sorted array.
  - Time Complexity: O(log n)
    - Explanation: With each comparison, the search space is halved. For an array of size n, it takes about log₂ n steps to narrow down to a single element.
  - Space Complexity: O(1) (iterative), O(log n) (recursive, due to the call stack)
  - Prerequisite: The array must be sorted.
  - Typical Use: Searching in databases, finding elements in sorted lists, implementing `lower_bound` or `upper_bound` operations (a sketch follows).
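
An iterative sketch (hence the O(1) space noted above); `binary_search` is an illustrative name:

```python
def binary_search(a, target):
    """Return the index of target in the sorted list a, or -1 if absent."""
    lo, hi = 0, len(a) - 1
    while lo <= hi:                         # Each pass halves [lo, hi]
        mid = (lo + hi) // 2
        if a[mid] == target:
            return mid
        if a[mid] < target:
            lo = mid + 1                    # Discard the left half
        else:
            hi = mid - 1                    # Discard the right half
    return -1
```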
## Key Concepts for Understanding Time Complexity

- Big O Notation (O(…)): Describes the upper bound or worst-case growth rate of an algorithm’s running time as the input size (n) grows.
- n (or V, E): Represents the input size. For arrays, it’s the number of elements. For graphs, it’s typically the number of vertices (V) and edges (E).
- Logarithmic (O(log n)): Very efficient. The time grows very slowly as the input size increases (e.g., binary search).
- Linear (O(n)): The time grows proportionally to the input size (e.g., iterating through an array once).
- Linearithmic (O(n log n)): Often seen in efficient sorting algorithms.
- Quadratic (O(n²)): The time grows as the square of the input size. Becomes inefficient for large inputs.
- Polynomial (O(nᵏ)): Time grows as a polynomial of the input size.
- Exponential (O(2ⁿ)): Very inefficient. Only practical for very small input sizes (the short script below tabulates these growth rates).
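
To make the gap between these classes concrete, a tiny illustrative script that tabulates each growth function for a few input sizes:

```python
import math

# Compare how each complexity class scales as n grows.
growth = {
    "log n": lambda n: math.log2(n),
    "n": lambda n: n,
    "n log n": lambda n: n * math.log2(n),
    "n^2": lambda n: n ** 2,
    "2^n": lambda n: 2 ** n,
}
for n in (8, 16, 32):
    row = ", ".join(f"{name}={fn(n):,.0f}" for name, fn in growth.items())
    print(f"n={n}: {row}")
```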