2.2 - algorithms (SLR 25 / 26)
Term | Definition |
---|---|
linear search | brute force method, searches every index in order to check if the item is there |
linear search pre-requisites | none |
Worst case scenario for linear search | the length of the array - O(n) |
binary search | divide and conquer method, checks the middle index, discards the half the item cannot be in, and repeats until the item is found or every item has been checked/discarded |
binary search pre-requisites | the array has to be sorted beforehand |
Worst case scenario for binary search | 1 + the power 2 must be raised to in order to reach the length of the array, rounded down, i.e. ⌊log₂(n)⌋ + 1 comparisons - O(log(n)) |
big O notation | describes how the time (time complexity) or memory (space complexity) an algorithm needs grows as the input size increases |
rules for big O notation | - only the fastest-growing term is kept (looks at the limit as n increases) - describes the general or worst case scenario for a given action - coefficients are ignored, unless they are other variables - logs are generally base 2, unless specified |
general algorithm notice | there's more than one way to make an algorithm! As long as the end goal is achieved, the actual code doesn't matter |
linear search advantages/disadvantages | it's often the easiest algorithm to implement, but can be incredibly slow for bigger arrays. Can be useful for shorter arrays. |
binary search advantages/disadvantages | as the input size increases, the time taken to complete the search grows slowly, but it can be complex to implement and the array needs to be sorted first. Even-length lists need a rule for choosing the middle index. |
general big O examples | O(1) - constant time, e.g. checking a specific index; O(n) - linear time, e.g. checking a whole array; O(n^2) - nested iteration over the array; O(log(n)) - binary search; O(n!) - factorial time, e.g. generating every permutation of a list |
bubble sort | brute force sort - one pass compares the first and second items, swapping if needed, then the second and third, and so on to the end of the array. Passes are repeated until the array is sorted (no swaps were made in a given pass) |
bubble sort advantages / disadvantages | can be easy to implement, but can be extremely inefficient for longer lists |
insertion sort | splits the array into two sections, a "sorted" and an "unsorted" section. Each item is taken from the unsorted section in turn and inserted into its correct position in the sorted section. Repeats until the array is sorted (there are no items in the "unsorted" section) |
insertion sort advantages / disadvantages | more efficient than bubble sort, but can still be inefficient for longer lists. |
bubble sort big O | O(n^2) |
insertion sort big O | O(n^2) |
merge sort | divide and conquer sort - divides the whole array into sub-arrays of length 1, then repeatedly merges pairs of sub-arrays so that their items stay in order, until one array of the original length remains. |
merge sort advantages / disadvantages | highly efficient for large lists, but without first checking whether the list is already sorted it can be more costly/redundant than other sorts due to its complexity. |
merge sort big O | O(nlogn) |
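The linear search card above can be sketched in Python as follows (the function name and return convention are my own choices, not from the notes):

```python
def linear_search(items, target):
    """Brute force: check every index in order until the item is found."""
    for i in range(len(items)):
        if items[i] == target:
            return i  # index of the first match
    return -1  # item is not in the list


print(linear_search([5, 3, 8, 1], 8))  # found at index 2
print(linear_search([5, 3, 8, 1], 9))  # not present: -1
```

In the worst case (item missing or last), the loop runs once per item, which is the O(n) behaviour described above.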
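The binary search card can be sketched like this, assuming the pre-requisite that the list is already sorted (the iterative low/high style here is one common way to write it):

```python
def binary_search(items, target):
    """Divide and conquer: items must be sorted in ascending order."""
    low, high = 0, len(items) - 1
    while low <= high:
        mid = (low + high) // 2  # middle index of the remaining half
        if items[mid] == target:
            return mid
        elif items[mid] < target:
            low = mid + 1   # discard the left half
        else:
            high = mid - 1  # discard the right half
    return -1  # every item has been discarded


print(binary_search([1, 3, 5, 7, 9], 7))  # found at index 3
print(binary_search([1, 3, 5, 7, 9], 4))  # not present: -1
```

Each pass halves the remaining range, giving the ⌊log₂(n)⌋ + 1 worst case from the card.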
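A minimal sketch of the bubble sort pass-until-no-swaps idea from the card (copying the input first is my own choice so the original list is untouched):

```python
def bubble_sort(items):
    """Repeat passes of adjacent swaps until a pass makes no swaps."""
    items = list(items)  # work on a copy
    swapped = True
    while swapped:
        swapped = False
        for i in range(len(items) - 1):
            if items[i] > items[i + 1]:
                items[i], items[i + 1] = items[i + 1], items[i]
                swapped = True  # array was not yet sorted this pass
    return items


print(bubble_sort([5, 1, 4, 2]))  # [1, 2, 4, 5]
```

In the worst case every pass does n - 1 comparisons and up to n passes are needed, hence the O(n^2) on the card.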
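The insertion sort card can be sketched as below; here the "sorted section" is the front of the same list, which is the usual in-place formulation:

```python
def insertion_sort(items):
    """Grow a sorted section at the front, inserting one item per step."""
    items = list(items)  # work on a copy
    for i in range(1, len(items)):  # items[:i] is the sorted section
        current = items[i]          # next item from the unsorted section
        j = i - 1
        while j >= 0 and items[j] > current:
            items[j + 1] = items[j]  # shift larger items right
            j -= 1
        items[j + 1] = current       # insert into its correct position
    return items


print(insertion_sort([5, 1, 4, 2]))  # [1, 2, 4, 5]
```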
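Finally, a sketch of the merge sort card, written recursively (splitting down to length 1 happens through the recursion rather than up front, but the effect is the same):

```python
def merge_sort(items):
    """Divide and conquer: split to length-1 arrays, merge back in order."""
    if len(items) <= 1:
        return list(items)  # a length-0/1 array is already sorted
    mid = len(items) // 2
    left = merge_sort(items[:mid])
    right = merge_sort(items[mid:])

    # merge the two sorted halves so their items stay in order
    merged = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])   # one of these is empty;
    merged.extend(right[j:])  # the other holds the leftovers
    return merged


print(merge_sort([5, 1, 4, 2, 8]))  # [1, 2, 4, 5, 8]
```

There are log₂(n) levels of merging and each level touches all n items, which is where the O(nlogn) on the card comes from.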