Published Mar 6, 2020 · 5 min read
I recently wondered about use cases for algorithms with poor time complexity. When might it be appropriate, or even advantageous, to use an algorithm with quadratic time, O(n^2); polynomial time, O(n^c); exponential time, O(2^n); or factorial time, O(n!)? I need to do more research, but from what I can tell so far, these time complexities simply are not advantageous. They are best avoided, especially when developing software or writing programs meant to scale, and are used out of necessity or naivety.
In the process of my research on this topic, I did gain an insight: I clarified my understanding of, and the difference between, logarithmic time, O(log n), and linearithmic time, O(n log n).
Both time complexities concern logarithms. In mathematics, a logarithm is a quantity representing the power to which a fixed number (the base) must be raised to produce a given number, i.e. a logarithm is the number of times the base must be multiplied by itself to produce the result.
log2(8) = 3   -> "log base 2 of 8 is 3"

which is the inverse of:

2^3 = 8       -> "2 to the 3rd power is 8"
A logarithm essentially determines the exponent that would be applied to the base if multiplying exponentially, and computing it amounts to performing the inverse operation: repeated division. Algorithms with logarithmic and linearithmic time complexity both rely on division to process their input and produce an output, but with a key difference. Let’s peek at a common implementation of each time complexity, then highlight the difference.
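To make the division connection concrete, here is a small sketch (my own, not part of the original examples) that computes a logarithm two ways: with the built-in Math.log2, and by counting how many times the input can be halved before reaching 1.

```javascript
// Math.log2 computes log base 2 directly
console.log(Math.log2(8))
//=> 3

// Equivalently, count how many divisions by the base reduce n to 1
function countHalvings(n) {
  let count = 0
  while (n > 1) {
    n = n / 2
    count++
  }
  return count
}

console.log(countHalvings(8))
//=> 3
```

For powers of 2 the two agree exactly; for other inputs the counting version effectively rounds up to the next whole number of halvings.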
A common algorithm of logarithmic time complexity is binary search. Imagine a program that receives a sorted array of integers and a target integer. The program must search the array to determine if the target integer is included, and output true or false accordingly.
(JavaScript)

const arr = [1, 3, 4, 6, 12, 32, 43, 45, 78, 98]

function binarySearch(arr, num) {
  let begin = 0
  let end = arr.length - 1

  while (begin <= end) {
    let mid = Math.floor((begin + end) / 2)
    let aNum = arr[mid]

    if (num === aNum) return true
    if (num > aNum) begin = mid + 1
    if (num < aNum) end = mid - 1
  }
  return false
}

binarySearch(arr, 100)
//=> false

binarySearch(arr, 12)
//=> true
As opposed to a linear time (O(n)) approach, which would evaluate each element in succession, this logarithmic time (O(log n)) approach determines more efficiently whether the target integer is included. It establishes beginning, middle, and end indices of the array, then divides the array in half at the middle index. If the element at the middle index equals the target integer, it returns true; otherwise, it determines whether the target integer is less than or greater than that element. Depending on that evaluation, the half of the array that could still contain the target integer is retained, and the other half is discarded (put a pin in that!). The while loop repeats this pattern, reassigning the beginning, middle, and end indices within the retained half. The loop continues until the target integer is found, or else terminates and falls through to the function’s default return value, false, meaning the target integer is not included in the array.
A common algorithm of linearithmic time complexity is merge sort. Imagine a program that receives an array of integers and returns that array sorted. Of course, there are built-ins like Array.prototype.sort(), but different algorithms can be used by different browsers or programs in different situations and for different reasons. An algorithm like merge sort is efficient and remains performant at scale.
(JavaScript)

const arr = [9, 56, 3, 47, 8, 23, 1, 5, 20]

// auxiliary function
function merge(arr1, arr2) {
  const mergedArr = []

  while (arr1.length > 0 && arr2.length > 0) {
    arr1[0] < arr2[0] ?
      mergedArr.push(arr1.shift()) :
      mergedArr.push(arr2.shift())
  }
  return [...mergedArr, ...arr1, ...arr2]
}

// primary function
function mergeSort(arr) {
  if (arr.length <= 1) return arr

  const midIdx = Math.floor(arr.length / 2)
  const spanA = arr.slice(0, midIdx)
  const spanB = arr.slice(midIdx)

  const sortedArr = merge(mergeSort(spanA), mergeSort(spanB))
  return sortedArr
}

mergeSort(arr)
//=> [1, 3, 5, 8, 9, 20, 23, 47, 56]
*Note 8/11/20: It has been brought to my attention that using Array.prototype.shift() within the while loop of the auxiliary merge() function introduces a quadratic operation, since shift() itself does linear work re-indexing the array, and suggests that the actual time complexity of this specific merge sort example is O(n² log n). I’m not certain of the full impact of using the shift method (there are some interesting details to consider), but I’ve added an alternative code example as an addendum to this article, which uses index counters in place of the shift method, to provide a definite example with a linearithmic time complexity.
This linearithmic time (O(n log n)) approach is a combination of linear time (O(n)) and logarithmic time (O(log n)). The mergeSort function utilizes the division indicative of logarithmic time complexity. Notice the similarity to the binary search algorithm, wherein the input is repeatedly divided in half, e.g. spanA and spanB. Recursion is used to further divide each half, and at each step the auxiliary function merge is used to reassemble the resulting sub-arrays in sorted order. The work done by the merge function across each level of the recursion is linear: O(n). Paired with the logarithmic number of divisions within the mergeSort function, the overall complexity of the merge sort algorithm is linearithmic: O(n log n). A simplified way I’ve seen this described is O(n * log(n)), i.e. “doing log(n) work n times”.
Both logarithmic and linearithmic time complexity utilize logarithms and divide data, but note the key difference between the two algorithms explained above. While the binary search algorithm continued to divide its input, half of that input was discarded at each step, repeatedly halving the amount of data to be evaluated. The merge sort algorithm also repeatedly halved its input, but retained both halves and merged the sorted sub-arrays. Retaining and returning the full length of the inputted data was necessary.
How does the performance of these time complexities compare?
Logarithmic time (O(log n)) is the Powdered Toast Man of time complexities. It is highly performant and highly praised. It strays not far from constant time (O(1)). It is faster than linearithmic time.
Linearithmic time (O(n log n)) is the Muddy Mudskipper of time complexities: the worst of the best (although less grizzled and duplicitous). It is a moderate complexity that floats around linear time (O(n)) until input reaches an advanced size. It is slower than logarithmic time, but faster than the less favorable, less performant time complexities.
This article about log-laden time complexities was brought to you by BLAMMO!
github.com/dangrammer
linked.com/in/danieljromans
danromans.com
Addendum:
Alternative merge sort example without use of the shift method:
// auxiliary function
function merge(arr1, arr2) {
  const mergedArr = []
  let i = 0
  let j = 0

  while (i < arr1.length && j < arr2.length) {
    if (arr1[i] < arr2[j]) {
      mergedArr.push(arr1[i])
      i++
    } else {
      mergedArr.push(arr2[j])
      j++
    }
  }
  return i < arr1.length ?
    mergedArr.concat(arr1.slice(i)) :
    mergedArr.concat(arr2.slice(j))
}

// primary function
function mergeSort(arr) {
  if (arr.length <= 1) return arr

  const midIdx = Math.floor(arr.length / 2)
  const spanA = arr.slice(0, midIdx)
  const spanB = arr.slice(midIdx)

  const sortedArr = merge(mergeSort(spanA), mergeSort(spanB))
  return sortedArr
}