Time Complexity for Sorting Algorithms

I was looking into the time taken for each of the sorting functions to run, and I'm quite confused as to why divide and conquer (or any hand-written sorting algorithm) is used when the in-built one is miles faster.

In-Built sort


This was run on 10000 samples. Since I'm new I can only post a single image. However, if I can attach images to the comments, I will.

If this has already been answered here or in a related topic, please redirect me.

Well, the built-in solution probably uses not only a more efficient sorting technique, but may also be multi-threaded or written in C to speed things up.

The only reason to code them up yourself is to practice your coding skills :stuck_out_tongue:
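To see the gap for yourself, here is a minimal benchmark sketch (assuming Python, where the built-in `sorted()` is Timsort implemented in C) that times a hand-written merge sort against the built-in on the same random data:

```python
import random
import timeit

def merge_sort(arr):
    """Plain top-down merge sort: O(n log n) time, O(n) extra space."""
    if len(arr) <= 1:
        return arr
    mid = len(arr) // 2
    left = merge_sort(arr[:mid])
    right = merge_sort(arr[mid:])
    # Merge the two sorted halves into a new list.
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged

data = [random.randint(0, 10_000) for _ in range(10_000)]

t_custom = timeit.timeit(lambda: merge_sort(data), number=5)
t_builtin = timeit.timeit(lambda: sorted(data), number=5)
print(f"merge_sort: {t_custom:.4f}s   sorted(): {t_builtin:.4f}s")
```

Both are O(n log n), so the big difference you measured is mostly the constant factor of interpreted Python versus compiled C, not the algorithm itself.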


Hi guys, I'd like to point out that by default the time complexity of insertion sort is O(n^2). I have updated it by using binary search to find each insertion point, which reduces the number of comparisons to O(n log n); the overall complexity then sits between O(n log n) and O(n^2), because the element shifts are still linear.

The code for it is attached below.

There are also a few edge cases to handle.
To recap: binary search brings insertion sort's comparison count down from O(n^2) to O(n log n), though the swaps keep the worst-case running time at O(n^2).
See below for the final code.
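Since the attachment isn't visible in the thread, here is a minimal sketch of the idea (not necessarily the poster's exact code), using Python's `bisect` module for the binary search:

```python
from bisect import bisect_right

def binary_insertion_sort(arr):
    """Insertion sort that locates each insertion point with binary search.

    Comparisons drop to O(n log n) overall, but the element shifts are
    still O(n^2) in the worst case, so the total time remains O(n^2).
    """
    for i in range(1, len(arr)):
        key = arr[i]
        # Binary-search the sorted prefix arr[:i] for key's position.
        pos = bisect_right(arr, key, 0, i)
        # Shift arr[pos:i] right by one slot and drop key into place.
        arr[pos + 1:i + 1] = arr[pos:i]
        arr[pos] = key
    return arr
```

`bisect_right` keeps the sort stable, since equal keys are inserted after their duplicates in the sorted prefix.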


Hi everyone, the usual merge sort has a space complexity of O(n).
I have optimised merge sort to keep the same O(n log n) time complexity while reducing the space complexity to O(1), by doing the merge in place in the original list/arr.
This makes it more practical when working with huge lists.
GitHub link for the code:
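For readers who can't open the link, a minimal sketch of one common in-place merge approach (rotation-based; this is an illustration, not necessarily what the linked code does). Note the caveat: this simple in-place merge moves elements one at a time, so its worst-case time degrades toward O(n^2) even though the extra space is O(1); achieving true O(n log n) time with O(1) space needs a far more involved algorithm.

```python
def merge_sort_inplace(arr, lo=0, hi=None):
    """Merge sort that merges inside arr itself, using O(1) extra space.

    Caveat: the rotation in the merge step is O(n) per moved element,
    so the worst-case running time is worse than the O(n log n) of the
    standard merge sort that uses an auxiliary list.
    """
    if hi is None:
        hi = len(arr)
    if hi - lo <= 1:
        return arr
    mid = (lo + hi) // 2
    merge_sort_inplace(arr, lo, mid)
    merge_sort_inplace(arr, mid, hi)
    # In-place merge: scan the left half; whenever its head exceeds the
    # head of the right half, rotate the right element down into position.
    i, j = lo, mid
    while i < j and j < hi:
        if arr[i] <= arr[j]:
            i += 1
        else:
            arr.insert(i, arr.pop(j))  # rotate arr[j] down to index i
            i += 1
            j += 1
    return arr
```

`list.pop`/`list.insert` shift elements internally (O(n) time each) but allocate no extra list, which is where the O(1) space comes from.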


This is nice! I see you have mentioned Hoare partitioning. Could you share more on that? Thanks in advance!