I was looking into the time taken by each of the sorting functions, and I'm quite confused as to why divide and conquer, or any hand-written sorting algorithm, is used when the built-in one is miles faster.
This was run on 10,000 samples. Since I'm new, I can only post a single image; if I'm able to attach images in the comments, I will.
If this has already been answered, or a related topic covers it, please redirect me.
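Since the image can't be shown here, a minimal sketch of the kind of timing comparison being described (assuming Python, random integer data, and a plain hand-written merge sort; the sample size of 10,000 matches the post, everything else is illustrative):

```python
import random
import timeit

def merge_sort(arr):
    """A plain top-down merge sort, for comparison against the built-in sort."""
    if len(arr) <= 1:
        return arr
    mid = len(arr) // 2
    left = merge_sort(arr[:mid])
    right = merge_sort(arr[mid:])
    # Merge the two sorted halves into one sorted list.
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged

data = [random.randint(0, 10**6) for _ in range(10_000)]

t_builtin = timeit.timeit(lambda: sorted(data), number=10)
t_merge = timeit.timeit(lambda: merge_sort(data), number=10)
print(f"built-in sorted(): {t_builtin:.3f}s, hand-written merge sort: {t_merge:.3f}s")
```

Both are O(n log n), so the large constant-factor gap mostly comes from implementation: CPython's built-in sort is Timsort written in C, while the hand-written version runs as interpreted Python.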
Hi guys, I'd like to point out that by default insertion sort has a time complexity of O(n^2). I've modified it to use binary search to find each insertion point, which reduces the number of comparisons from O(n^2) to O(n log n). Note, though, that each insertion still has to shift up to O(n) elements, so the overall worst-case time stays O(n^2); only the comparison count improves.
The code for it is attached below — do take a look.
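Since the attachment isn't visible in this thread, here is a minimal sketch of what binary insertion sort typically looks like in Python, using the standard `bisect` module to locate each insertion point (the function name and structure are illustrative, not the poster's actual code):

```python
import bisect

def binary_insertion_sort(arr):
    """Insertion sort that finds each insertion point with binary search.

    Binary search cuts the comparisons to O(n log n) overall, but the
    slice assignment below still shifts O(n) elements per insertion,
    so the worst-case running time remains O(n^2).
    """
    for i in range(1, len(arr)):
        key = arr[i]
        # bisect_right on the sorted prefix arr[:i] finds where key belongs.
        pos = bisect.bisect_right(arr, key, 0, i)
        # Shift the block arr[pos:i] right by one slot, then drop key in.
        arr[pos + 1:i + 1] = arr[pos:i]
        arr[pos] = key
    return arr
```

In practice this helps when comparisons are expensive (e.g. comparing long strings or objects with custom `__lt__`), since those drop to O(n log n) even though element moves do not.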
Hi everyone, the usual merge sort implementation takes O(n) auxiliary space. I've rewritten it to do the merging in place within the original list, bringing the extra space down to O(1). One caveat: a straightforward in-place merge shifts elements around, which pushes the worst-case time from O(n log n) towards O(n^2), so the space saving comes at a time cost on huge lists; true O(n log n)-time, O(1)-space merge sorts (block merge algorithms) are considerably more involved.
I'm providing the GitHub link for the code.
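The linked repository isn't reproduced here, but a simple way to merge within the original list is to rotate out-of-order elements into place (a sketch under that assumption — the shifting makes each merge quadratic in the worst case, which is the time trade-off mentioned above):

```python
def merge_in_place(arr, lo, mid, hi):
    """Merge the sorted runs arr[lo:mid] and arr[mid:hi] using O(1) extra space.

    Each out-of-order right-run element is rotated into position by shifting
    a block left over by one, so this merge is O(n^2) worst case, not O(n).
    """
    i, j = lo, mid
    while i < j and j < hi:
        if arr[i] <= arr[j]:
            i += 1
        else:
            # Rotate arr[j] into slot i, shifting arr[i:j] right by one.
            val = arr[j]
            arr[i + 1:j + 1] = arr[i:j]
            arr[i] = val
            i += 1
            j += 1

def merge_sort_in_place(arr, lo=0, hi=None):
    """Top-down merge sort that sorts arr[lo:hi] in the original list."""
    if hi is None:
        hi = len(arr)
    if hi - lo <= 1:
        return arr
    mid = (lo + hi) // 2
    merge_sort_in_place(arr, lo, mid)
    merge_sort_in_place(arr, mid, hi)
    merge_in_place(arr, lo, mid, hi)
    return arr
```

Usage: `merge_sort_in_place(my_list)` sorts `my_list` without allocating a second list, at the cost of the slower merge step described above.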