Big-O notation
| Question | Answer |
|---|---|
| Precision of complexity orders | Only the order of growth matters, not precise step counts: constant, logarithmic, linear, n log n, quadratic, exponential, etc. Look at the fastest-growing term |
| Big O notation-definition | Describes worst case complexity, relative to input size |
| How to calculate Big-O/asymptotic complexity | 1. Count steps 2. Discard additive and multiplicative constants 3. Focus on dominant terms |
| Comparing nlogn with other complexities | Between linear and quadratic |
| Laws of addition for O(f(n)) | 1. Used with sequential statements 2. O(f(n)) + O(g(n)) is O(f(n) + g(n)) |
| Laws of multiplication for O() | 1. Used with nested statements/loops 2. O(f(n)) * O(g(n)) is O(f(n) * g(n)) |
| Nested loops complexity | Quadratic (for two nested loops, each running n times). |
| Constant complexity-Definition | Complexity independent of inputs. |
| Loops or recursion with constant complexity | Can have loops or recursion but only if independent of input size. |
| Log-linear complexity-Types of algorithms | Many commonly used algorithms are log-linear, including merge sort. |
| Polynomial complexity-Most common leading term | Most often quadratic. |
| Exponential complexity-Patterns of functions | Recursive functions that make more than one recursive call per problem size (e.g. Towers of Hanoi). |
| Complexity classes | 1. Constant 2. Logn 3. Linear 4. Polynomial 5. Exponential 6. Log-linear |
| Types of programs of Linear complexity | simple iterative or recursive programs |
| Logn complexity | problem reduces in size by half each iteration |
| T(n) = O(f(n)) | f(n) describes the upper bound of T(n) (no lower bound implied) |
| T(n) is Ω(f(n)) | Ω(f(n)) describes the lower bound of T(n) |
| T(n) is θ(f(n)) | The complexity of T(n) is between a lower and an upper bound, both of which look like f(n) but may differ by a constant factor (e.g. x² and 1.2x²) |
| T(n) = o(f(n)) | f(n) is a strict upper bound: T(n) always grows strictly slower than f(n) |
| Possible solutions to exponential complexity | Can try approximate solutions to provide the answer more quickly. |
| Constant complexity-Applications | Not many interesting algorithms in this class but sometimes have pieces that fit. |
| Polynomial complexity-Common algorithm structure | Often seen with nested recursive function calls or nested loops. |
| Nested loop complexity-Explanation | For each outer loop, execute the inner loop. |
| Nature of problems with exponential algorithms | Many problems are inherently exponential. |
| Exponential v Quadratic | Quadratic is n², exponential is 2ⁿ |
| NP-hard problem | No known solution with less than exponential complexity |
| Relative cost of exponential problems | Cost can be high. |
| What impact do additive and multiplicative constants generally have on asymptotic complexity | They are typically ignored |
| Does the difference between θ and O matter? | Not in most cases. We usually don't care how a program behaves in its best case, so the lower bound is irrelevant. |
| For θ can two functions differ by an exponent (e.g. X² and X⁴) and still be considered similar looking for Theta? | No. Differing exponents don't count as differing by a constant factor. |
| How different are θ(n) and θ(log n) | As different as θ(n) and θ(2ⁿ) |
| Converting a log into exponential terms | For any c > 0, log n is O(nᶜ) (because a logarithm grows asymptotically slower than any positive power of n, and O only requires an upper bound). |
| How does the growth rate of a fractional exponent compare to linear time? | It is less. √(n) < n |
| How does the growth rate of √n compare to log(n) | It is greater. log n < √n (for large n) |
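The step-counting rules in the table (linear iteration, nested loops giving quadratic growth, halving giving logarithmic growth) can be sketched as small counting functions. This is a minimal illustration, not taken from any particular course material; the function names are invented for the example.

```python
def linear_steps(n):
    """Simple iterative program: one step per element -> O(n)."""
    steps = 0
    for _ in range(n):
        steps += 1
    return steps

def quadratic_steps(n):
    """Nested loops: for each outer iteration, the whole inner
    loop runs -> multiplication law gives O(n * n) = O(n^2)."""
    steps = 0
    for _ in range(n):
        for _ in range(n):
            steps += 1
    return steps

def log_steps(n):
    """Problem reduces in size by half each iteration -> O(log n)."""
    steps = 0
    while n > 1:
        n //= 2
        steps += 1
    return steps
```

For n = 100, the counts come out as 100, 10,000, and about 7 respectively, matching the linear, quadratic, and logarithmic orders in the table.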
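The growth-rate orderings in the last few rows (log n < √n < n) can be checked numerically. This is only an illustrative spot-check at a few values of n, not a proof of the asymptotic claim.

```python
import math

# Spot-check the ordering log n < sqrt(n) < n at increasingly large n.
for n in (10**3, 10**6, 10**9):
    assert math.log(n) < math.sqrt(n) < n
```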