What is an Algorithm?
An algorithm is a step-by-step procedure for solving a problem in a finite amount of time.
Running Time
How does an algorithm perform as the input size grows?
- The running time of an algorithm typically grows with the input size.
- An algorithm can perform differently even on inputs of the same size, depending on their content or order.
Best, Worst, and Average Cases
- Best case: minimum number of steps required.
- Worst case: maximum number of steps required.
- Average case: average number of steps; often difficult to determine.
Example: searching for an element e in an array A of n integers:
| Case | Condition | Number of Steps |
|---|---|---|
| Best | e exists as the first element in A | 1 step |
| Worst | e does not exist in A | n steps |
| Average | e exists in the middle of A | n/2 steps |
```java
boolean contains(int[] A, int e) {
    for (int i = 0; i < A.length; i++)
        if (A[i] == e)
            return true;
    return false;
}
```
Worst Case
- We generally focus on the worst-case running time.
- Easier to analyze.
- Helps estimate the maximum time an algorithm may take, which is crucial for applications such as games, finance, and robotics.
Running Time Analysis
Two techniques:
- Experimental studies
- Theoretical analysis
Experimental Studies
- Implement the algorithm as a program.
- Run the program with inputs of different sizes and compositions.
- Use built-in timing functions to measure the actual running time.
- Plot the results.
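The steps above can be sketched in Java. This is a minimal illustration, not a rigorous benchmark: it times the earlier `contains` linear search on random arrays of growing size, using `System.nanoTime` as the built-in timing function (the class name and the choice of searching for an absent element to force the worst case are this sketch's assumptions):

```java
import java.util.Random;

public class TimingExperiment {
    // Algorithm under test: linear search, as in the contains example.
    static boolean contains(int[] a, int e) {
        for (int x : a)
            if (x == e) return true;
        return false;
    }

    public static void main(String[] args) {
        Random rnd = new Random(42);
        for (int n = 1_000; n <= 1_000_000; n *= 10) {
            int[] a = rnd.ints(n, 0, n).toArray(); // values in [0, n)
            long start = System.nanoTime();        // built-in timing function
            contains(a, -1);                       // -1 is absent: worst case, n steps
            long elapsed = System.nanoTime() - start;
            System.out.println(n + "\t" + elapsed + " ns");
        }
    }
}
```

Plotting the printed (n, time) pairs would show the roughly linear growth — and also the noise and hardware dependence that motivate the limitations listed next.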
Limitations:
- Requires hardware capable of processing large inputs.
- Difficult to find representative inputs.
- Hard to compare algorithms fairly across different hardware/software environments.
Theoretical Analysis
- Uses pseudocode instead of actual code.
- Characterizes running time as a function of the input size n.
- Advantages:
  - Considers all possible inputs.
  - Allows comparing algorithm speed independent of hardware/software.
Pseudocode Example: Finding the Maximum Element
```
Algorithm arrayMax(A, n)
    Input: array A of n integers
    Output: maximum element of A
    currentMax ← A[0]
    for i ← 1 to n - 1 do
        if A[i] > currentMax then
            currentMax ← A[i]
    return currentMax
```
Primitive Operations
Basic computations performed by an algorithm, largely independent of the programming language. Examples:
- Arithmetic operations
- Assigning a value to a variable
- Indexing into an array
- Calling a method
- Returning from a method
- Comparing two variables
Counting Primitive Operations
Example: arrayMax(A, n)
- Worst case: 7n − 2 operations
- Best case: 5n operations
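As a concrete check, the arrayMax pseudocode translates almost line for line into Java (a sketch; the class name is this example's, and it assumes n ≥ 1, as the pseudocode implicitly does when reading A[0]):

```java
public class ArrayMaxDemo {
    // Direct translation of the arrayMax pseudocode: one pass over A.
    static int arrayMax(int[] A, int n) {
        int currentMax = A[0];
        for (int i = 1; i < n; i++)
            if (A[i] > currentMax)
                currentMax = A[i];
        return currentMax;
    }

    public static void main(String[] args) {
        int[] A = {3, 7, 2, 9, 4};
        System.out.println(arrayMax(A, A.length)); // prints 9
    }
}
```

The best case (fewer operations) occurs when A[0] is already the maximum, so the assignment inside the if never executes; the worst case is a strictly increasing array.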
Growth Rate of Running Time
- Fact: Changing the hardware/software environment affects the running time T(n) by a constant factor but does not alter its growth rate.
Growth Rates from Lowest to Highest:
| Growth Type | Notation |
|---|---|
| Constant | O(1) |
| Logarithmic | O(log n) |
| Linear | O(n) |
| N-Log-N | O(n log n) |
| Quadratic | O(n²) |
| Cubic | O(n³) |
| Exponential | O(2ⁿ) |
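To see how quickly these rates diverge, a small sketch that tabulates several of the functions for a few input sizes (the class name and the chosen sizes are this sketch's assumptions):

```java
public class GrowthRates {
    public static void main(String[] args) {
        System.out.printf("%8s %10s %12s %14s %16s%n",
                "n", "log n", "n log n", "n^2", "n^3");
        for (int n = 8; n <= 8192; n *= 8) {
            double log = Math.log(n) / Math.log(2); // log base 2
            System.out.printf("%8d %10.0f %12.0f %14d %16d%n",
                    n, log, n * log, (long) n * n, (long) n * n * n);
        }
    }
}
```

Even at n = 8192, log n is only 13, while n³ is already in the hundreds of billions — which is why an exponential 2ⁿ column is omitted entirely.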
Big-O Notation
- Definition: f(n) is O(g(n)) if there exist positive constants c and n₀ such that f(n) ≤ c · g(n) for all n ≥ n₀.
- Example: 2n + 10 is O(n), since 2n + 10 ≤ cn when we choose c = 3 and n₀ = 10.
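The constants in the example can be checked mechanically: with c = 3 and n₀ = 10, the inequality 2n + 10 ≤ 3n reduces to n ≥ 10. A quick sketch that tests it over a range of n (the class name and the test range are this sketch's assumptions — a finite loop illustrates the definition but of course does not prove it for all n):

```java
public class BigOCheck {
    public static void main(String[] args) {
        long c = 3, n0 = 10;
        for (long n = n0; n <= 1_000_000; n++) {
            // f(n) = 2n + 10, g(n) = n; verify f(n) <= c * g(n)
            if (2 * n + 10 > c * n)
                throw new AssertionError("bound fails at n = " + n);
        }
        System.out.println("2n + 10 <= 3n held for every tested n >= 10");
    }
}
```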
Big-O Rules:
- Drop lower-order terms.
- Drop constant factors.
- Use the smallest possible function class.
- Example: say O(n), not O(n²), if possible.
Asymptotic Algorithm Analysis
- Determines running time using Big-O notation.
- Steps:
  1. Find the worst-case number of primitive operations executed.
  2. Express this function using Big-O notation.
- Example: arrayMax(A, n) runs in O(n) time.
Time Complexity Comparison
| Complexity | Speed |
|---|---|
| O(1) | Fastest |
| O(log n) | Very fast |
| O(n) | Moderate |
| O(n log n) | Slower |
| O(n²) | Slow |
| O(n³) | Very slow |
| O(2ⁿ) | Extremely slow |
Computing Prefix Sums
Algorithm prefixSum1 (Quadratic Time O(n²))
```
Algorithm prefixSum1(X, n)
    Input: array X of n integers
    Output: array A of prefix sums
    A ← new array of n integers
    for i ← 0 to n - 1 do
        s ← X[0]
        for j ← 1 to i do
            s ← s + X[j]
        A[i] ← s
    return A
```
- Its analysis uses the arithmetic progression sum formula: the inner loop runs 0 + 1 + … + (n − 1) = n(n − 1)/2 times in total.
- Runs in O(n²) time.
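A direct Java translation of prefixSum1 (a sketch; the class name is this example's assumption, and overflow of `int` sums is ignored for simplicity):

```java
public class PrefixSum1 {
    // Quadratic prefix sums: each A[i] is recomputed from X[0] onward.
    static int[] prefixSum1(int[] X, int n) {
        int[] A = new int[n];
        for (int i = 0; i < n; i++) {
            int s = X[0];
            for (int j = 1; j <= i; j++)
                s += X[j];
            A[i] = s;
        }
        return A;
    }

    public static void main(String[] args) {
        int[] A = prefixSum1(new int[]{1, 2, 3, 4}, 4);
        System.out.println(java.util.Arrays.toString(A)); // prints [1, 3, 6, 10]
    }
}
```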
Algorithm prefixSum2 (Linear Time O(n))
```
Algorithm prefixSum2(X, n)
    Input: array X of n integers
    Output: array A of prefix sums
    A ← new array of n integers
    s ← 0
    for i ← 0 to n - 1 do
        s ← s + X[i]
        A[i] ← s
    return A
```
- Uses a running sum to compute prefix sums in O(n) time.
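The linear version in Java (a sketch under the same assumptions as the quadratic one): each element is visited exactly once, and the running sum `s` replaces the inner loop entirely.

```java
import java.util.Arrays;

public class PrefixSum2 {
    // Linear prefix sums: carry a running total across one pass.
    static int[] prefixSum2(int[] X, int n) {
        int[] A = new int[n];
        int s = 0;
        for (int i = 0; i < n; i++) {
            s += X[i];
            A[i] = s;
        }
        return A;
    }

    public static void main(String[] args) {
        int[] A = prefixSum2(new int[]{1, 2, 3, 4}, 4);
        System.out.println(Arrays.toString(A)); // prints [1, 3, 6, 10]
    }
}
```

Both versions produce identical output; only the amount of repeated work differs, which is exactly what the O(n²) vs O(n) distinction captures.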
Summary of Asymptotic Notation
| Notation | Meaning |
|---|---|
| Big-O (O) | Upper bound on growth rate |
| Big-Omega (Ω) | Lower bound on growth rate |
| Big-Theta (Θ) | Exact (tight) growth rate |
| Little-oh (o) | Strictly slower growth rate |
| Little-omega (ω) | Strictly faster growth rate |
Conclusion
- Time complexity matters when choosing algorithms.
- Asymptotic analysis helps compare algorithm efficiency.
- Big-O notation simplifies the representation of complexity.
- Faster algorithms enable processing larger inputs efficiently.