Introduction
Suppose you are organizing a deck of cards, and you want to sort it from the smallest to the largest. Naturally, you might look through the cards, find the smallest one, and place it first. Then you would look for the next smallest, and so on until you are done. This intuitive approach mirrors one of computer science's fundamental sorting algorithms: selection sort. While it may not be the fastest sorting method, its beauty lies in its simplicity and the valuable lessons it teaches about sorting.
Prerequisites
To understand this topic, you need a basic understanding of arrays and linked lists. Your knowledge of these data structures doesn't have to be language-specific. I will be using arrays in this article.
How It Works
The key idea of selection sort is to find the minimum (or maximum) element in the unsorted portion of the array and put it in its final position. This process is repeated until every element in the array is correctly positioned, based on the chosen order (ascending or descending).
Think of selection sort as a meticulous way of organizing items, much like arranging books on a shelf by height. Suppose you need to arrange the books in descending order of height. You would first look through the books to find the tallest one and place it at the beginning. Then you would look for the next tallest book and repeat the process until all the books are sorted.
We can break this algorithm down into these steps:
1. Start with the entire array considered unsorted.
2. Look through (scan) the unsorted portion to find the smallest element. The first scan covers the entire array, and each subsequent scan covers a smaller portion.
3. Once found, swap this smallest element with whatever sits at the beginning of the unsorted section. This position now becomes part of the sorted portion.
4. Move the boundary between the sorted and unsorted sections one position to the right.
5. Repeat steps 2 to 4 until the entire array is sorted.
Each iteration through the array guarantees that one more element lands in its final position, and the array is gradually sorted from left to right. Think of it like a mason building a wall brick by brick, where each brick is carefully selected to be the next smallest one available, contributing to the steady formation of the complete structure.
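For example, here is how these steps play out (traced by hand) on the array [3, 9, 1, 0, 5], the same array used in the code example later in this article:

```
[3, 9, 1, 0, 5]   initial array, everything unsorted
[0, 9, 1, 3, 5]   pass 1: smallest is 0 (index 3), swap it with 3 (index 0)
[0, 1, 9, 3, 5]   pass 2: smallest of the rest is 1 (index 2), swap it with 9 (index 1)
[0, 1, 3, 9, 5]   pass 3: smallest of the rest is 3 (index 3), swap it with 9 (index 2)
[0, 1, 3, 5, 9]   pass 4: smallest of the rest is 5 (index 4), swap it with 9 (index 3)
[0, 1, 3, 5, 9]   pass 5: only one element left, already in place
```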
Pseudocode
```
FUNCTION selection_sort(array):
    FOR i FROM 0 TO length(array) - 1:
        smallest_index = i
        FOR j FROM i + 1 TO length(array) - 1:
            IF array[j] < array[smallest_index]:
                smallest_index = j
        IF smallest_index != i:
            SWAP array[i] WITH array[smallest_index]
    RETURN array
```
Explanation:

* **Declare the function**

```
FUNCTION selection_sort(array):
```

This line defines a function named selection_sort that takes an array as input.

* **Outer loop**

```
FOR i FROM 0 TO length(array) - 1:
```

This is the outer loop. It iterates through each position in the array, from the first element (index 0) to the last element (index length(array) - 1). P.S.: array indexing starts from 0, so the last element is at index length(array) - 1.

* **Initialize the smallest_index**

```
smallest_index = i
```

Assume that the smallest element of the unsorted portion is at the current index i.

* **Inner loop**

```
FOR j FROM i + 1 TO length(array) - 1:
```

This loop scans the remaining unsorted elements, starting from the element right after i (index i + 1) and running to the last element.

* **Find the smallest element**

```
IF array[j] < array[smallest_index]:
    smallest_index = j
```

Compare the current element (array[j]) with the assumed smallest element (array[smallest_index]). If the current element is smaller, update smallest_index to the current index j.

* **Swap the smallest element**

```
IF smallest_index != i:
    SWAP array[i] WITH array[smallest_index]
```

After the inner loop completes, check whether the smallest element found is already at position i. If it is not, swap it with the element at position i.

* **Return the sorted array**

```
RETURN array
```
Code implementation in JavaScript
```javascript
const selection_sort = (array) => {
  for (let i = 0; i < array.length; i++) {
    let smallest_index = i;
    for (let j = i + 1; j < array.length; j++) {
      if (array[j] < array[smallest_index]) {
        smallest_index = j;
      }
    }
    if (smallest_index !== i) {
      // swap using array destructuring, so no temporary variable is needed
      [array[i], array[smallest_index]] = [array[smallest_index], array[i]];
    }
  }
  return array;
};

console.log(selection_sort([3, 9, 1, 0, 5])); // logs [0, 1, 3, 5, 9]
```
Time Complexity And Space Complexity
Time Complexity:
The time complexity of selection sort is O(n²), known as quadratic time. Selection sort works by repeatedly selecting the smallest (or largest) element from the unsorted part of the array and placing it in its final position. The outer loop iterates n times, where n is the length of the array. For each iteration of the outer loop, the algorithm searches through the remaining unsorted portion of the array to find the smallest (or largest) element. This search happens in the inner loop, which covers n - 1 elements on the first iteration, then n - 2 elements on the second, and so on. With each iteration, the size of the unsorted portion decreases by one until only one element is left.

If we count the total number of comparisons made, we find that they follow an arithmetic sequence: n - 1 comparisons in the first iteration, then n - 2 in the second, n - 3 in the third, down to just one comparison in the final iteration. The sum of this sequence is n(n - 1)/2 = (n² - n)/2, and once we drop constant factors and lower-order terms, this grows on the order of n². Hence, O(n²).
By the way, the time complexity of selection sort stays the same in all cases: best, average, and worst. Unlike some other sorting algorithms that perform better in certain scenarios, selection sort maintains this quadratic time complexity regardless of the input array's initial order. Even if an array is already sorted, selection sort doesn't know this and still runs its core algorithm. For example, if you pass [3, 4, 5, 6] to a selection sort function to sort in ascending order, it still performs the same number of comparisons. In every case, selection sort must scan the entire unsorted portion to find the minimum element; it has no way to take advantage of any existing order in the array.
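If you want to verify this yourself, here is a minimal sketch (not part of the implementation above) that adds a comparison counter to the inner loop. For any 5-element array, sorted or not, the counter comes out to 5 × 4 / 2 = 10:

```javascript
// Counts the comparisons selection sort performs on a given array.
// Illustrative sketch only: the counter is not part of the sorting logic.
const countComparisons = (array) => {
  let comparisons = 0;
  for (let i = 0; i < array.length; i++) {
    let smallest_index = i;
    for (let j = i + 1; j < array.length; j++) {
      comparisons++; // one comparison per inner-loop step
      if (array[j] < array[smallest_index]) {
        smallest_index = j;
      }
    }
    if (smallest_index !== i) {
      [array[i], array[smallest_index]] = [array[smallest_index], array[i]];
    }
  }
  return comparisons;
};

console.log(countComparisons([3, 9, 1, 0, 5])); // 10
console.log(countComparisons([0, 1, 3, 5, 9])); // 10 - already sorted, same count
```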
While this makes it inefficient for large datasets compared to algorithms like quicksort or mergesort, selection sort has one advantage: it makes at most one swap per iteration of the outer loop. This is very beneficial in systems where writing to memory (performing swaps) is expensive compared to reading (comparing elements). Selection sort also sorts the array in place; it does not use extra memory for another array while sorting.
Space Complexity:
As I said earlier, selection sort sorts the array in place, which means it does not use extra memory for another array or data structure. The only additional memory used is for a few variables (for example, the loop counters and temporary storage for swapping), which is constant. Hence, the space complexity is O(1), known as constant space.
Application of This Algorithm
When to Use Selection Sort
Small Datasets: Selection sort works best for small datasets, typically arrays with fewer than 1,000 elements, where its quadratic time complexity is not a significant bottleneck. In cases like this, its simplicity can be an advantage: selection sort is easy to implement and debug compared to more complex sorting algorithms like quicksort or mergesort.
When memory is constrained: Selection sort is advantageous here because it does not require an auxiliary array; it sorts by swapping elements within the original array. That said, many other sorting algorithms, such as bubble sort and quicksort, are also in-place, so this isn't a unique advantage of selection sort.
A stronger reason to choose selection sort is in systems where writing to memory is significantly more expensive than reading, as discussed earlier, since it guarantees a minimal number of swaps (at most one per iteration of the outer loop).
However, for most general-purpose applications, especially with larger datasets as inputs, I suggest using more efficient algorithms like quicksort or mergesort.
Comparison to Other Simple Sort Algorithms
Selection sort vs Bubble sort: these two algorithms are often compared since both have the same average and worst-case time complexity of O(n²), but they differ in execution. Selection sort finds the minimum element and places it in its final position with a single swap per pass. Bubble sort, on the other hand, repeatedly compares adjacent elements and swaps them if they are out of order. This means selection sort typically performs far fewer swaps than bubble sort, especially on highly unordered data. For example, sorting the reversed array [15, 12, 9, 6, 3] requires 10 swaps with bubble sort, while selection sort performs at most 4 (one per pass); with the implementation above, which skips swaps when an element is already in place, it makes only 2.
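To see that difference concretely, here is a rough sketch that counts the swaps each algorithm performs. The bubble sort here is a straightforward textbook version written only for this comparison:

```javascript
// Counts swaps made by selection sort (same logic as the implementation above).
const countSelectionSortSwaps = (array) => {
  let swaps = 0;
  for (let i = 0; i < array.length; i++) {
    let smallest_index = i;
    for (let j = i + 1; j < array.length; j++) {
      if (array[j] < array[smallest_index]) smallest_index = j;
    }
    if (smallest_index !== i) {
      [array[i], array[smallest_index]] = [array[smallest_index], array[i]];
      swaps++;
    }
  }
  return swaps;
};

// Counts swaps made by a basic bubble sort (adjacent swaps only).
const countBubbleSortSwaps = (array) => {
  let swaps = 0;
  for (let i = 0; i < array.length; i++) {
    for (let j = 0; j < array.length - i - 1; j++) {
      if (array[j] > array[j + 1]) {
        [array[j], array[j + 1]] = [array[j + 1], array[j]];
        swaps++;
      }
    }
  }
  return swaps;
};

console.log(countSelectionSortSwaps([15, 12, 9, 6, 3])); // 2
console.log(countBubbleSortSwaps([15, 12, 9, 6, 3]));    // 10
```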
Selection sort vs Insertion sort: Insertion sort performs very well on nearly sorted data. When the input array is mostly in order, like [3, 9, 6, 12, 15], insertion sort can quickly place slightly misplaced elements without scanning the entire array; in the best case this approaches linear time, O(n), which is far faster than quadratic time, O(n²). Selection sort, however, still scans the entire unsorted portion regardless of how close the elements are to their final positions. Insertion sort is therefore most useful when you are dealing with mostly sorted data. On random or reversed arrays, though, insertion sort generally performs similarly to selection sort, with O(n²) time complexity. The key difference is that insertion sort can take advantage of existing order in the input data, while selection sort's performance remains the same regardless of the input's initial order.
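As an illustration, here is a minimal insertion sort sketch written only to count comparisons (it is not part of this article's implementation). Notice how the count adapts to the input order, unlike the constant 10 comparisons selection sort makes on any 5-element array:

```javascript
// Counts comparisons made by a basic insertion sort (illustrative sketch).
const countInsertionSortComparisons = (array) => {
  let comparisons = 0;
  for (let i = 1; i < array.length; i++) {
    const current = array[i];
    let j = i - 1;
    while (j >= 0) {
      comparisons++;
      if (array[j] > current) {
        array[j + 1] = array[j]; // shift the larger element one step to the right
        j--;
      } else {
        break; // current is already in place relative to the sorted part
      }
    }
    array[j + 1] = current;
  }
  return comparisons;
};

console.log(countInsertionSortComparisons([3, 9, 6, 12, 15])); // 5 - nearly sorted, close to n
console.log(countInsertionSortComparisons([15, 12, 9, 6, 3])); // 10 - reversed, n(n - 1)/2
```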
Conclusion
Selection sort transforms an unordered array into a sorted one through a meticulous process of finding and placing minimum (or maximum) elements. While its quadratic O(n²) complexity makes it inefficient for large datasets, it offers valuable insights into algorithm design and array manipulation. With at most n - 1 swaps regardless of input order, it can be an excellent fit for memory-constrained environments where writes are expensive.
Its applications span from teaching fundamentals to specific hardware scenarios where minimizing writes to memory is crucial. Consider how selection sort's predictable behaviour contrasts with bubble sort's variable swap count and insertion sort's adaptive performance on nearly sorted data; this contrast illustrates important principles of algorithm analysis and optimization.
To truly understand its mechanics, practice implementing selection sort in different contexts:
* Start with basic integer arrays
* Try sorting custom objects with multiple fields
* Implement it with different comparison strategies (see the sketch after this list)
* Use it as a building block for more complex algorithms
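For instance, here is a minimal sketch combining the second and third ideas: a selection sort that accepts a comparison function, so the same code can sort numbers, strings, or custom objects by any field. The compare parameter and the example person objects are hypothetical choices for illustration:

```javascript
// Selection sort with a pluggable comparison strategy (illustrative sketch).
// compare(a, b) should return a negative number when a should come before b.
const selection_sort_by = (array, compare) => {
  for (let i = 0; i < array.length; i++) {
    let selected_index = i;
    for (let j = i + 1; j < array.length; j++) {
      if (compare(array[j], array[selected_index]) < 0) {
        selected_index = j;
      }
    }
    if (selected_index !== i) {
      [array[i], array[selected_index]] = [array[selected_index], array[i]];
    }
  }
  return array;
};

// Hypothetical example data: sort people by age, then reuse the same function to sort by name.
const people = [
  { name: "Ada", age: 36 },
  { name: "Grace", age: 29 },
  { name: "Alan", age: 41 },
];

console.log(selection_sort_by([...people], (a, b) => a.age - b.age));
// Grace (29), Ada (36), Alan (41)

console.log(selection_sort_by([...people], (a, b) => a.name.localeCompare(b.name)));
// Ada, Alan, Grace
```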
The effort invested in mastering selection sort, while perhaps not directly applicable to production systems, will help in building a strong foundation in algorithmic thinking and developing intuition for more advanced sorting algorithms. Its simplicity makes it an ideal starting point for understanding how computers organize and manipulate data.