fix(curriculum): correct grammar in Big O notation lecture (#64715)

Co-authored-by: Huyen Nguyen <25715018+huyenltnguyen@users.noreply.github.com>
Aaditya Raj Baniya
2025-12-17 13:40:55 -08:00
committed by GitHub
parent aa0b18e2e7
commit 4ec66689e5


@@ -64,7 +64,7 @@ In Big O notation, we usually denote input size with the letter `n`. For example
 Constant factors and lower-order terms are not taken into account to find the time complexity of an algorithm based on the number of operations. That's because as the size of `n` grows, the impact of these smaller terms in the total number of operations performed will become smaller and smaller.
-The term that will dominate the overall behavior of the algorithm will the term with `n`, the input size.
+The term that will dominate the overall behavior of the algorithm will be the highest order term with `n`, the input size.
 For example, if an algorithm performs `7n + 20` operations to be completed, the impact of the constant `20` on the final result will be smaller and smaller as `n` grows. The term `7n` will tend to dominate and this will define the overall behavior and efficiency of the algorithm.
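The dominance claim in the lecture text above can be checked numerically. A minimal sketch (illustrative only, not part of the lecture's curriculum files), using the hunk's own `7n + 20` example to show the constant's share of the total shrinking as `n` grows:

```python
def total_ops(n):
    """Total operations for the example algorithm in the lecture: 7n + 20."""
    return 7 * n + 20

# As n grows, the dominant term 7n accounts for nearly all of the work,
# so the constant 20 has a vanishing impact on the total.
for n in (10, 1_000, 1_000_000):
    share = (7 * n) / total_ops(n)
    print(f"n = {n:>9}: total = {total_ops(n):>9}, 7n share = {share:.4%}")
```

Running this shows the share of `7n` approaching 100%, which is exactly why Big O keeps only the highest-order term.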