dynamic programming
A method of solving problems exhibiting the properties of overlapping subproblems and optimal substructure.[1] A real-world application: the magnet selection tool in Photoshop uses dynamic programming.[2]
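As a toy illustration of overlapping subproblems (not the Photoshop tool), here is a memoized Fibonacci sketch in Python; the function name is just for illustration:

    from functools import lru_cache

    @lru_cache(maxsize=None)
    def fib(n):
        # Each fib(k) is an overlapping subproblem; the cache ensures it is
        # computed only once, turning exponential recursion into linear time.
        if n < 2:
            return n
        return fib(n - 1) + fib(n - 2)

    print(fib(40))  # 102334155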
exponentiation by squaring
Also called left-to-right binary exponentiation or square-and-multiply[3].
- Write the exponent in binary. Example: 2^25. The exponent is 25; in binary it is 11001.
- Remove the first binary digit, leaving 1001, and then replace each remaining 1 with the pair of letters “sx” and each 0 with the letter “s” to get: sx s s sx (remember, using 1001).
- “s” means square, “x” means multiply. So, we have square, multiply by x, square, square, square, multiply by x, where “x” is the base. (A code sketch follows below.)
In general, binary exponentiation (or square-and-multiply as learned in class) takes fewer than 2log(n)/log(2) = 2log₂(n) multiplications.
There is also a right-to-left binary exponentiation. It is easier to program but takes more storage.[3]
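A minimal Python sketch of the left-to-right method described above, assuming a positive integer exponent (the function name is ours):

    def power_left_to_right(base, exponent):
        # Scan the exponent's binary digits after the leading 1.
        # For each digit: square ("s"); if the digit is 1, also multiply by the base ("x").
        result = base
        for bit in bin(exponent)[3:]:  # bin(25) == '0b11001'; skip '0b' and the leading 1
            result *= result           # s
            if bit == '1':
                result *= base         # x
        return result

    print(power_left_to_right(2, 25) == 2 ** 25)  # True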
Primality Testing
Fermat, Pierre de
Fermat’s Little Theorem (non-deterministic) – if p is prime, then p divides 2^p - 2 (equivalently, 2^p ≡ 2 mod p). For example, 2^7 - 2 = 126 = 7 × 18.
Other Algorithms
Sieve of Eratosthenes (deterministic)
Best when you need all the primes in a fixed range, e.g., every prime up to some bound n.
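A short Python sketch of the sieve, returning all primes up to a bound n:

    def sieve_of_eratosthenes(n):
        # is_prime[k] stays True until k is found to be a multiple of a smaller prime.
        is_prime = [True] * (n + 1)
        is_prime[0:2] = [False, False]
        for p in range(2, int(n ** 0.5) + 1):
            if is_prime[p]:
                # Cross off multiples of p, starting at p*p.
                for multiple in range(p * p, n + 1, p):
                    is_prime[multiple] = False
        return [k for k in range(2, n + 1) if is_prime[k]]

    print(sieve_of_eratosthenes(30))  # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]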
Trial division: incrementally testing divisors up to sqrt(n) (deterministic)
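A trial-division sketch of that idea, testing divisors incrementally up to sqrt(n):

    def is_prime_trial_division(n):
        # Deterministic: n is composite iff it has a divisor d with 2 <= d <= sqrt(n).
        if n < 2:
            return False
        d = 2
        while d * d <= n:
            if n % d == 0:
                return False
            d += 1
        return True

    print([k for k in range(2, 20) if is_prime_trial_division(k)])  # [2, 3, 5, 7, 11, 13, 17, 19]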
Fermat primality test and Miller-Rabin primality test (non-deterministic)
Miller-Rabin has a deterministic variant, but in general you balance running time against accuracy (more random rounds means a lower error probability).
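A sketch of the Fermat primality test built on the theorem above; it is probabilistic (Carmichael numbers can fool it), and the number of rounds is an assumed parameter:

    import random

    def fermat_test(n, rounds=20):
        # Returns False only for composites; True means "probably prime".
        if n < 4:
            return n in (2, 3)
        for _ in range(rounds):
            a = random.randrange(2, n - 1)
            # Fermat's little theorem: a^(n-1) ≡ 1 (mod n) when n is prime.
            if pow(a, n - 1, n) != 1:
                return False
        return True

    print(fermat_test(97), fermat_test(100))  # True False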
Hashing
A mapping of keys to values in an efficient manner. The data structure is referred to as a hash map, hash table, or dictionary. Choosing a proper hash function depends on the scenario.
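In Python, the built-in dict is a hash table; a tiny usage example with made-up data:

    # Python's dict hashes each key to find its slot, giving average O(1)
    # insertion and lookup.
    phone_book = {"alice": "555-0100", "bob": "555-0101"}
    phone_book["carol"] = "555-0102"     # insert
    print(phone_book.get("bob"))         # lookup -> 555-0101
    print("dave" in phone_book)          # membership test -> False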
search algorithms
Binary Search: Efficient search on sorted data with time complexity O(log₂ N). Repeatedly halve the portion of the list that could contain the item until it is narrowed down to one possible location.
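A Python sketch of binary search over a sorted list; returning -1 when the item is absent is our convention, not part of the definition above:

    def binary_search(sorted_items, target):
        low, high = 0, len(sorted_items) - 1
        while low <= high:
            mid = (low + high) // 2
            if sorted_items[mid] == target:
                return mid              # found
            elif sorted_items[mid] < target:
                low = mid + 1           # target can only be in the right half
            else:
                high = mid - 1          # target can only be in the left half
        return -1                       # not present

    print(binary_search([1, 3, 5, 7, 9, 11], 7))   # 3
    print(binary_search([1, 3, 5, 7, 9, 11], 4))   # -1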
Depth-First / Breadth-First Search: traversing and searching trees and graphs.
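As one illustration, a breadth-first search sketch on a graph stored as an adjacency dictionary (the example graph is made up); depth-first search looks the same with a stack or recursion in place of the queue:

    from collections import deque

    def bfs_order(graph, start):
        # Visit vertices level by level using a FIFO queue.
        visited = {start}
        order = []
        queue = deque([start])
        while queue:
            vertex = queue.popleft()
            order.append(vertex)
            for neighbour in graph[vertex]:
                if neighbour not in visited:
                    visited.add(neighbour)
                    queue.append(neighbour)
        return order

    g = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
    print(bfs_order(g, "A"))  # ['A', 'B', 'C', 'D']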
Sorting Algorithms
Noteworthy:
Merge sort: When you need a stable, O(N log N) sort, this is about your only option. The only downsides to it are that it uses O(N) auxiliary space and has a slightly larger constant than a quick sort. There are some in-place merge sorts, but AFAIK they are all either not stable or worse than O(N log N). Even the O(N log N) in place sorts have so much larger a constant than the plain old merge sort that they’re more theoretical curiosities than useful algorithms.
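A straightforward (not in-place) merge sort sketch; taking ties from the left half first is what keeps it stable, and the slicing accounts for the O(N) auxiliary space:

    def merge_sort(items):
        if len(items) <= 1:
            return items
        mid = len(items) // 2
        left = merge_sort(items[:mid])
        right = merge_sort(items[mid:])
        # Merge: on ties take from the left half first, which keeps the sort stable.
        merged, i, j = [], 0, 0
        while i < len(left) and j < len(right):
            if left[i] <= right[j]:
                merged.append(left[i]); i += 1
            else:
                merged.append(right[j]); j += 1
        merged.extend(left[i:])
        merged.extend(right[j:])
        return merged

    print(merge_sort([5, 2, 9, 1, 5, 6]))  # [1, 2, 5, 5, 6, 9]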
Quick sort: When you don’t need a stable sort and average case performance matters more than worst case performance. A quick sort is O(N log N) on average, O(N^2) in the worst case. A good implementation uses O(log N) auxiliary storage in the form of stack space for recursion.
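An in-place quicksort sketch using Lomuto partitioning with the last element as pivot; production implementations choose pivots more carefully to dodge the O(N^2) worst case:

    def quicksort(items, low=0, high=None):
        if high is None:
            high = len(items) - 1
        if low < high:
            # Lomuto partition around the last element as pivot.
            pivot = items[high]
            i = low
            for j in range(low, high):
                if items[j] <= pivot:
                    items[i], items[j] = items[j], items[i]
                    i += 1
            items[i], items[high] = items[high], items[i]
            quicksort(items, low, i - 1)
            quicksort(items, i + 1, high)
        return items

    print(quicksort([5, 2, 9, 1, 5, 6]))  # [1, 2, 5, 5, 6, 9]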
Bucket sort: When you can guarantee that your input is approximately uniformly distributed.
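A bucket sort sketch assuming values uniformly distributed in [0, 1) (the range and bucket count are assumptions for illustration):

    def bucket_sort(values, num_buckets=10):
        # Assumes values lie in [0, 1) and are roughly uniformly distributed.
        buckets = [[] for _ in range(num_buckets)]
        for v in values:
            buckets[int(v * num_buckets)].append(v)
        result = []
        for bucket in buckets:
            result.extend(sorted(bucket))   # each bucket is small on average
        return result

    print(bucket_sort([0.42, 0.32, 0.23, 0.52, 0.25, 0.47, 0.51]))
    # [0.23, 0.25, 0.32, 0.42, 0.47, 0.51, 0.52]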
Heap sort: When you don’t need a stable sort and you care more about worst case performance than average case performance. It’s guaranteed to be O(N log N), and uses O(1) auxiliary space, meaning that you won’t unexpectedly run out of heap or stack space on very large inputs.
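An in-place heapsort sketch: build a max-heap, then repeatedly move the root to the end of the shrinking unsorted region:

    def heapsort(items):
        n = len(items)

        def sift_down(root, end):
            # Push items[root] down until the max-heap property holds within items[:end].
            while True:
                child = 2 * root + 1
                if child >= end:
                    return
                if child + 1 < end and items[child] < items[child + 1]:
                    child += 1
                if items[root] < items[child]:
                    items[root], items[child] = items[child], items[root]
                    root = child
                else:
                    return

        for root in range(n // 2 - 1, -1, -1):   # build the max-heap
            sift_down(root, n)
        for end in range(n - 1, 0, -1):          # move the max to the end, shrink the heap
            items[0], items[end] = items[end], items[0]
            sift_down(0, end)
        return items

    print(heapsort([5, 2, 9, 1, 5, 6]))  # [1, 2, 5, 5, 6, 9]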
Counting sort: When you are sorting integers with a limited range.
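A counting sort sketch assuming small non-negative integers (that assumption is ours):

    def counting_sort(integers):
        # Runs in O(N + K), where K is the maximum value.
        if not integers:
            return []
        counts = [0] * (max(integers) + 1)
        for x in integers:
            counts[x] += 1
        result = []
        for value, count in enumerate(counts):
            result.extend([value] * count)
        return result

    print(counting_sort([4, 2, 2, 8, 3, 3, 1]))  # [1, 2, 2, 3, 3, 4, 8]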
Others:
Bubble sort, selection sort: When you’re doing something quick and dirty and for some reason you can’t just use the standard library’s sorting algorithm. The only advantage these have over insertion sort is being slightly easier to implement.
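For completeness, a bubble sort sketch, repeatedly swapping adjacent out-of-order pairs:

    def bubble_sort(items):
        # Sweep the list, swapping adjacent out-of-order pairs,
        # until a full sweep makes no swaps.
        swapped = True
        while swapped:
            swapped = False
            for i in range(len(items) - 1):
                if items[i] > items[i + 1]:
                    items[i], items[i + 1] = items[i + 1], items[i]
                    swapped = True
        return items

    print(bubble_sort([5, 1, 4, 2, 8]))  # [1, 2, 4, 5, 8]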
Bogosort: In computer science, bogosort (also known as permutation sort, stupid sort, slowsort, shotgun sort, or monkey sort) is a highly inefficient sorting algorithm based on the generate and test paradigm. The function successively generates permutations of its input until it finds one that is sorted. It is not useful for sorting, but may be used for educational purposes, to contrast it with more efficient algorithms.
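A sketch of the random-shuffle variant, purely to illustrate the generate-and-test idea:

    import random

    def bogosort(items):
        # Shuffle until the list happens to be sorted; expected O(N * N!) time.
        while any(items[i] > items[i + 1] for i in range(len(items) - 1)):
            random.shuffle(items)
        return items

    print(bogosort([3, 1, 2]))  # [1, 2, 3]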
[1] https://en.wikiversity.org/wiki/Dynamic_programming
[2] https://www.quora.com/What-are-some-real-world-problems-that-have-been-solved-with-dynamic-programming
[3] https://primes.utm.edu/glossary/page.php?sort=BinaryExponentiation