Sorting Algorithms in C++
Sorting algorithms are methods of reorganizing a large number of items into some specific order, such as highest to lowest, lowest to highest, or alphabetical.
These algorithms take an input list, process it (i.e., perform some operations on it), and produce the sorted list.
The most common everyday example is sorting items on an e-commerce website, either from lowest price to highest, by popularity, or in some other order.
Sort Implementation Details
In C++, a vector V can be sorted using the standard library function:
sort(V.begin(), V.end());
Bubble Sort
Bubble sort is a simple comparison-based sorting algorithm that repeatedly goes through the list, compares adjacent elements, and swaps them if they are in the wrong order. It is the simplest sorting algorithm, and an inefficient one at the same time. Yet it is very much worth learning, as it represents the basic foundations of sorting.
Application
Understand the working of Bubble sort
- Bubble sort is mainly used for educational purposes, helping students understand the foundations of sorting.
- It can be used to identify whether a list is already sorted. When the list is already sorted (which is the best-case scenario), the complexity of bubble sort is only O(n).
- In real life, bubble sort can be visualised as people in a queue who want to stand in height-wise sorted order: adjacent people swap positions among themselves until everyone is standing in increasing order of height.
Explanation
Algorithm: We compare adjacent elements and check whether their order is wrong (i.e., a[i] > a[i + 1] for 1 <= i <= (size of array) - 1, if the array is to be in ascending order, and vice-versa). If yes, then we swap them.
Explanation:
- Let us say we have an array of length n. To sort this array we perform the above step (comparing and swapping adjacent elements) for n - 1 passes.
- In simple terms, first the largest element moves to its place at the extreme right; then the second largest settles just before it, and so on. In the ith pass, the ith largest element reaches its correct place in the array through swaps.
- In mathematical terms, in the ith pass, at least one element from the first (n - i + 1) elements will move to its correct place. That element will be the ith (for 1 <= i <= n - 1) largest element of the array. This is because in the ith pass, in the jth iteration (for 1 <= j <= n - i), we check if a[j] > a[j + 1], and a[j] will always be greater than a[j + 1] when it is the largest element in the range [1, n - i + 1]. In that case we swap them. This continues until the ith largest element is at the (n - i + 1)th position of the array.
Bubble Sort Example:
Consider the following array: Arr = [14, 33, 27, 35, 10]. We need to sort this array using the bubble sort algorithm.
First Pass:
- We start with the first and second elements, i.e., Arr[0] and Arr[1]. Check if 14 > 33, which is false. So no swapping happens and the array remains the same.
- We proceed with the second and third elements, i.e., Arr[1] and Arr[2]. Check if 33 > 27, which is true. So we swap Arr[1] and Arr[2].
Thus the array becomes: 14, 27, 33, 35, 10.
- We proceed with the third and fourth elements, i.e., Arr[2] and Arr[3]. Check if 33 > 35, which is false. So no swapping happens and the array remains the same.
- We proceed with the fourth and fifth elements, i.e., Arr[3] and Arr[4]. Check if 35 > 10, which is true. So we swap Arr[3] and Arr[4].
Thus, after swapping, the array becomes: 14, 27, 33, 10, 35.
This marks the end of the first pass, where the largest element (35) reaches its final (last) position.
Second Pass:
- We proceed with the first and second elements, i.e., Arr[0] and Arr[1]. Check if 14 > 27, which is false. So no swapping happens and the array remains the same.
- We now proceed with the second and third elements, i.e., Arr[1] and Arr[2]. Check if 27 > 33, which is false. So no swapping happens and the array remains the same.
- We now proceed with the third and fourth elements, i.e., Arr[2] and Arr[3]. Check if 33 > 10, which is true. So we swap Arr[2] and Arr[3].
The array now becomes: 14, 27, 10, 33, 35.
This marks the end of the second pass, where the second largest element (33) has occupied its correct position.
Third Pass:
After the third pass, the third largest element will be at the third last position in the array.
i-th Pass:
After the ith pass, the ith largest element will be at the ith last position in the array.
n-th Pass:
After the nth pass, the nth largest element (i.e., the smallest element) will be at the nth-last position (the 1st position) in the array, where 'n' is the size of the array. In practice, only n - 1 passes are needed, since after them the smallest element is already in place.
After doing all the passes, we can easily see the array will be sorted.
Thus, the sorted array will look like this: 10, 14, 27, 33, 35.
Pseudocode of Bubble Sort Algorithm
bubbleSort(Arr[], total_elements)
for i = 0 to total_elements - 1 do:
swapped = false
for j = 0 to total_elements - i - 2 do:
/* compare the adjacent elements */
if Arr[j] > Arr[j+1] then
/* swap them */
swap(Arr[j], Arr[j+1])
swapped = true
end if
end for
/*if no number was swapped that means
array is sorted now, break the loop.*/
if(not swapped) then
break
end if
end for
end
Implementation Of Bubble Sort
Implementation of Bubble sort in C++
#include <iostream>
#include <vector>
using namespace std;
void BubbleSort (vector<int> &arr, int n)
{
for (int i = 0; i < n - 1; ++i)
{
bool swapped = false;
for (int j = 0; j < n - i - 1; ++j)
{
if (arr[j] > arr[j+1]) //check if adjacent element is
//not in order
{
swap(arr[j], arr[j + 1]);
swapped = true;
}
}
// Value at n-i-1 will be the maximum of all the values below this index.
if (!swapped)
break;
}
}
int main()
{
vector<int> arr = {5, 6, 2, 6, 9, 0, -1};
int n = 7;
BubbleSort(arr, n);
// Display the sorted data.
cout << "\nSorted Data: ";
for (int i = 0; i < n; i++)
cout << arr[i] << " ";
return 0;
}
Complexity Analysis
Time Complexity of Bubble sort
- Best case scenario: The best case scenario occurs when the array is already sorted. In this case, no swapping happens in the first pass (the swapped variable stays false), so we break out of the loop after the very first pass. Hence, the time complexity in the best case scenario is O(n), because the algorithm still has to traverse all the elements once.
- Worst case and average case scenario: In bubble sort, n-1 comparisons are done in the 1st pass, n-2 in the 2nd pass, n-3 in the 3rd pass, and so on. So the total number of comparisons will be:
Sum = (n-1) + (n-2) + (n-3) + ..... + 3 + 2 + 1 = n(n-1)/2
- Hence, the time complexity is of the order n², i.e., O(n²).
Space Complexity of Bubble sort
The space complexity for the algorithm is O(1), because only a single additional memory space is required i.e. for temporary variable used for swapping.
FAQs
- What is the best case time complexity of bubble sort?
The time complexity in the best case scenario is O(n), because the algorithm has to traverse all the elements once to recognize that the array is already sorted.
- What is the advantage of bubble sort over other sorting techniques?
- The built-in ability to efficiently detect whether the list is already sorted is the main advantage of bubble sort over other sorting techniques.
- When the list is already sorted (which is the best-case scenario), the complexity of bubble sort is only O(n).
- It is faster than other algorithms on an already-sorted array and takes less time to determine whether the input array is sorted.
- Why is bubble sort called "bubble" sort?
Bubble sort is called so because list elements with a greater value than their surrounding elements "bubble" towards the end of the list. For example, after the first pass, the largest element is bubbled to the rightmost position; after the second pass, the second largest element is bubbled to the second-last position, and so on.
- Is bubble sort a stable algorithm?
- Bubble sort is a stable algorithm.
- A sorting algorithm is said to be stable if two objects with equal keys appear in the same order in sorted output as they appear in the input array to be sorted.
- Is bubble sort an inplace algorithm?
- Bubble sort is an inplace algorithm.
- An algorithm is said to be inplace if it does not need extra space and produces the output in the same memory that contains the input data, transforming the input 'in place'. However, a small constant amount of extra space for variables is allowed.
- Is Bubble sort slow?
- Bubble sort is slower than the other O(n²) sorting algorithms.
- It is about four times as slow as insertion sort and twice as slow as selection sort.
- It has a good best-case behavior, but is impractically slow on almost all real world data sets and is not considered for implementation in real applications.
- Can bubble sort be implemented recursively?
- Yes.
- Recursive Bubble Sort has no additional performance/implementation advantages, but can be used to understand recursion and sorting concepts better.
- Base Case: If array size is 1, return.
- Do One Pass of normal Bubble Sort. This pass bubbles largest element of current subarray to correct position.
- Recur for all elements except last of current subarray.
void recursiveBubbleSort(vector<int> &arr, int n) {
    // Base case
    if (n == 1)
        return;
    // One pass of bubble sort. After this pass,
    // the largest element is moved (or bubbled) to the end.
    for (int i = 0; i < n - 1; i++) {
        if (arr[i] > arr[i + 1]) {
            swap(arr[i], arr[i + 1]);
        }
    }
    // recursion for the remaining elements of the array
    recursiveBubbleSort(arr, n - 1);
}
Selection Sort
Selection sort is a simple comparison-based sorting algorithm. It is in-place and needs no extra memory.
The idea behind this algorithm is pretty simple. We divide the array into two parts: sorted and unsorted. The left part is sorted subarray and the right part is unsorted subarray. Initially, sorted subarray is empty and unsorted array is the complete given array.
We perform the steps given below until the unsorted subarray becomes empty:
- Pick the minimum element from the unsorted subarray.
- Swap it with the leftmost element of the unsorted subarray.
- Now the leftmost element of unsorted subarray becomes a part (rightmost) of sorted subarray and will not be a part of unsorted subarray.
A selection sort works as follows:
This is our initial array A = [5, 2, 6, 7, 2, 1, 0, 3]
Leftmost element of unsorted part = A[0]
Minimum element of unsorted part = A[6]
We will swap A[0] and A[6] then, make A[0] part of sorted subarray.
Leftmost element of unsorted part = A[1]
Minimum element of unsorted part = A[5]
We will swap A[1] and A[5] then, make A[1] part of sorted subarray.
Leftmost element of unsorted part = A[2]
Minimum element of unsorted part = A[4]
We will swap A[2] and A[4] then, make A[2] part of sorted subarray.
Leftmost element of unsorted part = A[3]
Minimum element of unsorted part = A[5]
We will swap A[3] and A[5] then, make A[3] part of sorted subarray.
Leftmost element of unsorted part = A[4]
Minimum element of unsorted part = A[7]
We will swap A[4] and A[7] then, make A[4] part of sorted subarray.
Leftmost element of unsorted part = A[5]
Minimum element of unsorted part = A[6]
We will swap A[5] and A[6] then, make A[5] part of sorted subarray.
Leftmost element of unsorted part = A[6]
Minimum element of unsorted part = A[7]
We will swap A[6] and A[7] then, make A[6] part of sorted subarray.
This is the final sorted array.
Pseudocode
FindMinIndex:
FindMinIndex(Arr[], start, end)
min_index = start
FOR i from (start + 1) to end:
IF Arr[i] < Arr[min_index]:
min_index = i
END of IF
END of FOR
Return min_index
Suppose there are 'n' elements in the array. In the worst case, FindMinIndex() can take n iterations (for start = 1 and end = n). No auxiliary space is used.
Therefore,
Time complexity: O(n)
Space complexity: O(1)
Selection Sort:
SelectionSort(Arr[], arr_size):
FOR i from 1 to arr_size:
min_index = FindMinIndex(Arr, i, arr_size)
IF i != min_index:
swap(Arr[i], Arr[min_index])
END of IF
END of FOR
Suppose there are 'n' elements in the array. SelectionSort() calls FindMinIndex() once for each position i, scanning a range that shrinks by one element each iteration. No auxiliary space is used.
Total iterations = (n - 1) + (n - 2) + . . . + 1 = (n * (n - 1)) / 2 = (n² - n) / 2
Therefore,
Time complexity: O(n2)
Space complexity: O(1)
Implementation:
Selection sort program in C++:
#include <iostream>
#include <vector>
using namespace std;
int findMinIndex(vector<int> &A, int start) {
int min_index = start;
++start;
while(start < (int)A.size()) {
if(A[start] < A[min_index])
min_index = start;
++start;
}
return min_index;
}
void selectionSort(vector<int> &A) {
for(int i = 0; i < (int)A.size(); ++i) {
int min_index = findMinIndex(A, i);
if(i != min_index)
swap(A[i], A[min_index]);
}
}
int main() {
vector<int> A = {5, 2, 6, 7, 2, 1, 0, 3};
selectionSort(A);
for(int num : A)
cout << num << ' ';
return 0;
}
Insertion Sort Algorithm
Insertion sort is a sorting mechanism in which the sorted array is built one item at a time. The array elements are compared with each other sequentially and then arranged in a particular order. The analogy can be understood from the way we arrange a hand of playing cards. The algorithm works on the principle of inserting an element at a particular position, hence the name insertion sort.
Insertion Sort works as follows:
- The first step involves comparing the element in question with its adjacent element(s).
- If the comparison reveals that the element belongs at a particular position, space is created for it by shifting the other elements one position to the right, and the element is inserted at the suitable position.
- The above procedure is repeated until all the elements in the array are at their proper positions.
Let us now understand working with the following example:
Consider the following array: 25, 17, 31, 13, 2
First Iteration: Compare 25 with 17. The comparison shows 17 < 25. Hence, swap 17 and 25.
The array now looks like this:
17, 25, 31, 13, 2
First Iteration
Second Iteration: We begin with the second element (25), but it was already moved to its correct position in the previous step, so we move ahead to the next element.
Now hold on to the third element (31) and compare it with the ones preceding it.
Since 31 > 25, no swapping takes place.
Also, 31 > 17, so no swapping takes place and 31 remains at its position.
The array after the Second iteration looks like this:
17, 25, 31, 13, 2
Second Iteration
Third Iteration: Start the following iteration with the fourth element (13), and compare it with its preceding elements.
Since 13 < 31, we swap the two.
The array now becomes: 17, 25, 13, 31, 2.
But there still exist elements that we haven't yet compared with 13. Now the comparison takes place between 25 and 13. Since 13 < 25, we swap the two.
The array becomes: 17, 13, 25, 31, 2.
The last comparison of the iteration is between 17 and 13. Since 13 < 17, we swap the two.
The array now becomes: 13, 17, 25, 31, 2.
Third Iteration
Fourth Iteration: The last iteration calls for comparing the last element (2) with all the preceding elements and making the appropriate swaps.
Since 2 < 31, swap 2 and 31.
The array now becomes: 13, 17, 25, 2, 31.
Compare 2 with 25, 17, and 13.
Since 2 < 25, swap 25 and 2.
The array becomes: 13, 17, 2, 25, 31.
Compare 2 with 17 and 13.
Since 2 < 17, swap 2 and 17.
The array now becomes: 13, 2, 17, 25, 31.
The last comparison of the iteration is between 2 and 13. Since 2 < 13, swap 2 and 13.
The array now becomes: 2, 13, 17, 25, 31.
This is the final array after all the corresponding iterations and swapping of elements.
Fourth Iteration
Pseudocode
INSERTION-SORT(A)
for i = 1 to n - 1
key ← A[i]
j ← i - 1
while j >= 0 and A[j] > key
A[j+1] ← A[j]
j ← j - 1
End while
A[j+1] ← key
End for
Implementation:
Insertion sort Implementation in C++:
#include <stdlib.h>
#include <iostream>
using namespace std;
//member functions declaration
void insertionSort(int arr[], int length);
void printArray(int array[], int size);
// main function
int main()
{
int array[6] = {5, 1, 6, 2, 4, 3};
// calling insertion sort function to sort the array
insertionSort(array, 6);
return 0;
}
void insertionSort(int arr[], int length)
{
int i, j, key;
for (i = 1; i < length; i++)
{
key = arr[i];
j = i-1;
while (j >= 0 && arr[j] > key)
{
arr[j+1] = arr[j];
j--;
}
arr[j +1] = key;
}
cout << "Sorted Array: ";
// print the sorted array
printArray(arr, length);
}
// function to print the given array
void printArray(int array[], int size)
{
int j;
for (j = 0; j < size; j++)
{
cout <<" "<< array[j];
}
cout << endl;
}
Time Complexity Analysis:
Even when insertion sort is given an already sorted array, it still executes the outer for loop, requiring n steps to confirm that an array of n elements is sorted. This makes its best case time complexity a linear function of n.
For an unsorted array, each element may need to be compared with all the elements before it, which in the worst case means roughly n comparisons for each of the n elements, i.e., on the order of n × n = n² comparisons. One can also take a look at other sorting algorithms such as merge sort, quick sort, selection sort, etc., and compare their complexities.
Worst Case Time Complexity [Big-O]: O(n²)
Best Case Time Complexity [Big-omega]: O(n)
Average Time Complexity [Big-theta]: O(n²)
Merge Sort Algorithm
Merge sort is one of the most efficient sorting algorithms. It works on the principle of divide and conquer: it repeatedly breaks a list down into several sublists until each sublist consists of a single element, then merges those sublists in a manner that results in a sorted list.
A merge sort works as follows:
Top-down Merge Sort Implementation:
The top-down merge sort approach is the methodology that uses recursion. It starts at the top and proceeds downwards, with each recursive call asking the same question, "What is required to be done to sort the array?", and answering it with, "split the array into two, make a recursive call, and merge the results", until one gets to the bottom of the array-tree.
Example: Let us consider an example to understand the approach better.
- Divide the unsorted list into n sublists, each comprising 1 element (a list of 1 element is supposed sorted).
Top-down Implementation
2. Repeatedly merge sublists to produce newly sorted sublists until there is only 1 sublist remaining. This will be the sorted list.
Merging of two lists done as follows:
The first element of both lists is compared. If sorting in ascending order, the smaller element among two becomes a new element of the sorted list. This procedure is repeated until both the smaller sublists are empty and the newly combined sublist covers all the elements of both the sublists.
Merging of two lists
Implementation Of Merge Sort
// example of merge sort in C/C++
// the merge function takes two intervals,
// one from start to mid,
// the second from mid+1 to end,
// and merges them in sorted order
void merge(int *Arr, int start, int mid, int end) {
// create a temp array
int temp[end - start + 1];
// crawlers for both intervals and for temp
int i = start, j = mid + 1, k = 0;
// traverse both intervals; in each iteration add the smaller of the two elements to temp
while (i <= mid && j <= end) {
if (Arr[i] <= Arr[j]) {
temp[k] = Arr[i];
k += 1; i += 1;
}
else {
temp[k] = Arr[j];
k += 1; j += 1;
}
}
// add elements left in the first interval
while (i <= mid) {
temp[k] = Arr[i];
k += 1; i += 1;
}
// add elements left in the second interval
while (j <= end) {
temp[k] = Arr[j];
k += 1; j += 1;
}
// copy temp back to the original interval
for (i = start; i <= end; i += 1) {
Arr[i] = temp[i - start];
}
}

// Arr is an array of integer type;
// start and end are the starting and ending indices of the current interval of Arr
void mergeSort(int *Arr, int start, int end) {
if (start < end) {
int mid = (start + end) / 2;
mergeSort(Arr, start, mid);
mergeSort(Arr, mid + 1, end);
merge(Arr, start, mid, end);
}
}
Bottom-Up Merge Sort Implementation:
The bottom-up merge sort approach uses an iterative methodology. It starts with "single-element" arrays and combines adjacent pairs, sorting the two as they are merged. The combined, sorted arrays are merged with each other again until one single sorted array is obtained.
Example: Let us understand the concept with the following example.
Iteration (1)
Merge pairs of arrays of size 1
Iteration (2)
Merge pairs of arrays of size 2
Iteration (3)
Merge pairs of arrays of size 4
Thus the entire array has been sorted and merged.
Question 1: Wave Array
Given an array of integers, sort the array into a wave like array and return it,
In other words, arrange the elements into a sequence such that a1 >= a2 <= a3 >= a4 <= a5.....
Example
Given [1, 2, 3, 4]
One possible answer: [2, 1, 4, 3]
Another possible answer : [4, 1, 3, 2]
Solution:
array = {5, 1, 3, 4, 2}
Sort the above array.
array = {1, 2, 3, 4, 5}
Now swap adjacent elements in pairs.
swap(1, 2)
swap(3, 4)
Now, our array = {2, 1, 4, 3, 5}
and voila!, the array is in wave form.
vector<int> wave(vector<int> Vec) {
sort(Vec.begin(), Vec.end());
int N = Vec.size();
for(int i = 0; i < N - 1; i += 2) {
swap(Vec[i], Vec[i + 1]);
}
return Vec;
}
Quick Sort Algorithm
The algorithm was developed by British computer scientist Tony Hoare in 1959. The name "Quick Sort" comes from the fact that quick sort can sort a list of data elements significantly faster (often quoted as two or three times faster than merge sort or heapsort in practice) than comparable algorithms. It is one of the most efficient sorting algorithms and is based on splitting an array into smaller ones (partitioning) and swapping (exchanging) elements based on comparison with a selected 'pivot' element. Due to this, quick sort is also called "partition-exchange" sort. Like merge sort, quick sort falls into the divide and conquer category of problem-solving methodology.
Application
Before diving into any algorithm, it is very much necessary for us to understand its real-world applications. Quick sort provides a fast and methodical approach to sorting any list of things. The following are some of the applications where quick sort is used.
- Commercial computing: Used in various government and private organizations for sorting various data, such as sorting accounts/profiles by name or a given ID, sorting transactions by time or location, sorting files by name or date of creation, etc.
- Numerical computations: Many efficiently developed algorithms use priority queues and, in turn, sorting to achieve accuracy in their calculations.
- Information search: Sorting algorithms aid in better searching of information, and quick sort is one of the fastest ways to achieve that sorting.
Basically, quick sort is used everywhere for faster results and in the cases where there are space constraints.
Explanation
Taking the analogical view in perspective, consider a situation where one had to sort the papers bearing the names of the students, by name from A-Z. One might use the approach as follows:
- Select any splitting value, say L. The splitting value is also known as Pivot.
- Divide the stack of papers into two. A-L and M-Z. It is not necessary that the piles should be equal.
- Repeat the above two steps with the A-L pile, splitting it into its significant two halves. And M-Z pile, split into its halves. The process is repeated until the piles are small enough to be sorted easily.
- Ultimately, the smaller piles can be placed one on top of the other to produce a fully sorted and ordered set of papers.
- The approach used here is reduction at each split to get to the single-element array.
- At every split, the pile was divided and then the same approach was used for the smaller piles by using the method of recursion.
Technically, quick sort follows the below steps:
Step 1 − Make any element as pivot
Step 2 − Partition the array on the basis of pivot
Step 3 − Apply quick sort on left partition recursively
Step 4 − Apply quick sort on right partition recursively
Quick Sort Example:
Problem Statement
Consider the following array: 50, 23, 9, 18, 61, 32
. We need to sort this array in the most efficient manner without using extra place (inplace sorting).
Solution
Step 1:
- Make any element the pivot: Decide on any value from the list to be the pivot. For convenience of code, we often select the rightmost element as the pivot, or select one at random and swap it with the rightmost. Suppose the two indices "Low" and "High" correspond to the first index and last index respectively.
- In our case, low is 0 and high is 5.
- The values at low and high are 50 and 32, and the value at the pivot is 32.
- Partition the array on the basis of the pivot: Call the partition function, which rearranges the array in such a way that the pivot (32) comes to its actual position (of the sorted array), with all elements less than it to its left and all elements greater than it to its right.
- In the partition function, we start from the first element and compare it with the pivot. Since 50 is greater than 32, we don't make any change and move on to the next element, 23.
- Compare again with the pivot. Since 23 is less than 32, we swap 50 and 23. The array becomes 23, 50, 9, 18, 61, 32.
- We move on to the next element, 9, which is again less than the pivot (32); swapping it with 50 makes our array 23, 9, 50, 18, 61, 32.
- Similarly, for the next element, 18, which is less than 32, the array becomes 23, 9, 18, 50, 61, 32. Now 61 is greater than the pivot (32), hence no change.
- Lastly, we swap our pivot with 50 so that it comes to its correct position.
Thus the pivot (32) comes to its actual position, with all elements to its left smaller and all elements to its right greater than itself.
Step 2: The main array after the first step becomes
23, 9, 18, 32, 61, 50
Step 3: Now the list is divided into two parts:
- Sublist before pivot element
- Sublist after pivot element
Step 4: Repeat the steps for the left and right sublists recursively. The final array thus becomes 9, 18, 23, 32, 50, 61.
The following diagram depicts the workflow of the Quick Sort algorithm which was described above.
Pseudocode of Quick Sort Algorithm:
Quick Sort
/**
* The main function that implements quick sort.
* @Parameters: array, starting index and ending index
*/
quickSort(arr[], low, high)
{
if (low < high)
{
// pivot_index is partitioning index, arr[pivot_index] is now at correct place in sorted array
pivot_index = partition(arr, low, high);
quickSort(arr, low, pivot_index - 1); // Before pivot_index
quickSort(arr, pivot_index + 1, high); // After pivot_index
}
}
Partition Method
/**
* The function selects the last element as pivot element, places that pivot element correctly in the array in such a way
* that all the elements to the left of the pivot are lesser than the pivot and
* all the elements to the right of pivot are greater than it.
* @Parameters: array, starting index and ending index
* @Returns: index of pivot element after placing it correctly in sorted array
*/
partition (arr[], low, high)
{
// pivot - Element at right most position
pivot = arr[high];
i = (low - 1); // Index of smaller element
for (j = low; j <= high-1; j++)
{
// If the current element is smaller than the pivot, move it into the "smaller" region
if (arr[j] < pivot)
{
i++; // increment index of smaller element
swap(arr[i], arr[j]);
}
}
swap(arr[i + 1], arr[high]);
return (i + 1);
}
Implementation of Quick Sort
Implementation of Quick Sort Algorithm in C:
#include <stdio.h>

// to swap two numbers
void swap(int* a, int* b)
{
int t = *a;
*a = *b;
*b = t;
}

int partition(int arr[], int low, int high)
{
int pivot = arr[high]; // selecting last element as pivot
int i = (low - 1); // index of smaller element
for (int j = low; j <= high- 1; j++)
{
// If the current element is smaller than or equal to pivot
if (arr[j] <= pivot)
{
i++; // increment index of smaller element
swap(&arr[i], &arr[j]);
}
}
swap(&arr[i + 1], &arr[high]);
return (i + 1);
}
/*
a[] is the array, p is starting index, that is 0,
and r is the last index of array.
*/
void quicksort(int a[], int p, int r)
{
if(p < r)
{
int q;
q = partition(a, p, r);
quicksort(a, p, q-1);
quicksort(a, q+1, r);
}
}
// function to print the array
void printArray(int a[], int size)
{
int i;
for (i=0; i < size; i++)
{
printf("%d ", a[i]);
}
printf("\n");
}

int main()
{
int arr[] = {9, 7, 5, 11, 12, 2, 14, 3, 10, 6};
int n = sizeof(arr)/sizeof(arr[0]);
// call quickSort function
quicksort(arr, 0, n-1);
printf("Sorted array: \n");
printArray(arr, n);
return 0;
}
Complexity Analysis
Time Complexity of Quick sort
- Best case scenario: The best case scenario occurs when the partitions are as evenly balanced as possible, i.e., their sizes on either side of the pivot element are equal or differ by 1.
- Case 1: The sizes of the sublists on either side of the pivot become equal when the subarray has an odd number of elements and the pivot lands right in the middle after partitioning. Each partition will have (n-1)/2 elements.
- Case 2: The two sublists on either side of the pivot differ in size by 1 when the subarray has an even number of elements, n. One partition will have n/2 elements and the other (n/2)-1.
- In either of these cases, each partition will have at most n/2 elements, so the tree representation of the subproblem sizes has depth O(log n), with O(n) partitioning work done across each level.
The best case complexity of the quick sort algorithm is therefore O(n log n).
- Worst case scenario: This happens when we encounter the most unbalanced partitions possible: the original call takes n time, the recursive call on n-1 elements takes (n-1) time, the recursive call on (n-2) elements takes (n-2) time, and so on. The worst case time complexity of quick sort is therefore O(n²).
Space Complexity of Quick sort
The space complexity is calculated based on the space used by the recursion stack. The worst case space used is O(n); this happens when the algorithm hits its worst case and must make n nested recursive calls to produce a sorted list. The average case space used is of the order O(log n).
FAQs
- What is the average case run time complexity of Quick Sort?
The average case run time of quick sort is O(n log n). This case happens when we don't get exactly evenly balanced partitions; even a split as lopsided as 3-to-1 on either side of the pivot still yields O(n log n). The proof of this is mathematically rigorous and out of the scope of this discussion.
- Is Quick Sort a stable algorithm?
Quick sort is not a stable algorithm because the swapping of elements is done according to pivot’s position (without considering their original positions). A sorting algorithm is said to be stable if it maintains the relative order of records in the case of equality of keys. - Is Quick Sort an inplace algorithm?
Quick sort is an inplace algorithm which means the numbers are all sorted within the original array itself. - What is Randomised Quick Sort? Why is it used?
- Sometimes, it happens that by choosing the rightmost element at all times might result in the worst case scenario.
- In such cases, choosing a random element as your pivot at each step will reduce the probability of triggering the worst case behavior. We will be more likely choosing pivots closer to the center of the array, and when this happens, the recursion branches more evenly and thus the algorithm terminates a lot faster.
- The runtime complexity is expected to be O(n log n), as the randomly selected pivots are expected to avoid the worst case behavior.
- Why is Quick Sort better than Merge Sort?
- Auxiliary Space : Quick sort is an in-place sorting algorithm whereas Merge sort uses extra space. In-place sorting means no additional storage space is used to perform sorting (except recursion stack). Merge sort requires a new temporary array to merge the sorted arrays thereby making Quick sort the better option.
- Worst Cases: The worst case runtime of quick sort, O(n²), can be avoided by using randomized quick sort, as explained in the previous point. Choosing a random pivot element yields average case behavior, improving performance to be as efficient as merge sort.
- Cache Friendly: Quick Sort is also a cache friendly sorting algorithm as it has good locality of reference when used for arrays.
- Which is faster, quick sort or merge sort?
Quick sort is generally faster than merge sort in practice; please refer to the previous question.
- Where is quick sort used?
Quick sort is used to sort any list in a fast and efficient manner. Since the algorithm is in-place, quick sort is also chosen when there are restrictions on space availability. Please refer to the Application section for further details.
Author: https://www.linkedin.com/in/shivam-ross/ | https://twitter.com/BeastofBayArea | https://www.instagram.com/sup.its.shiv/