diff --git a/curriculum/challenges/english/blocks/lecture-working-with-common-data-structures/68420c314cdf5c6863ca8330.md b/curriculum/challenges/english/blocks/lecture-working-with-common-data-structures/68420c314cdf5c6863ca8330.md
index 09c30d748fe..eb5b65f92ac 100644
--- a/curriculum/challenges/english/blocks/lecture-working-with-common-data-structures/68420c314cdf5c6863ca8330.md
+++ b/curriculum/challenges/english/blocks/lecture-working-with-common-data-structures/68420c314cdf5c6863ca8330.md
@@ -64,7 +64,7 @@ In Big O notation, we usually denote input size with the letter `n`. For example
 Constant factors and lower-order terms are not taken into account to find the time complexity of an algorithm based on the number of operations. That's because as the size of `n` grows, the impact of these smaller terms in the total number of operations performed will become smaller and smaller.
 
-The term that will dominate the overall behavior of the algorithm will the term with `n`, the input size.
+The term that will dominate the overall behavior of the algorithm will be the highest order term with `n`, the input size.
 
 For example, if an algorithm performs `7n + 20` operations to be completed, the impact of the constant `20` on the final result will be smaller and smaller as `n` grows. The term `7n` will tend to dominate and this will define the overall behavior and efficiency of the algorithm.
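The `7n + 20` example the patched paragraph refers to can be checked numerically; a minimal Python sketch (not part of the lecture file, with a hypothetical `total_operations` helper) showing that the constant `20` becomes negligible relative to the dominant `7n` term as `n` grows:

```python
# Hypothetical operation count from the lecture's example: 7n + 20.
def total_operations(n: int) -> int:
    return 7 * n + 20

# As n grows, the ratio of the total count to the dominant term 7n
# approaches 1, so the lower-order constant 20 can be ignored.
for n in [10, 1_000, 1_000_000]:
    ratio = total_operations(n) / (7 * n)
    print(f"n={n}: total={total_operations(n)}, total/(7n)={ratio:.6f}")
```

For `n = 10` the constant still contributes noticeably (90 vs 70 operations), but by `n = 1_000_000` the ratio is effectively 1, which is why Big O keeps only the highest-order term.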