Almost right.
3) takes sqrt(n) steps to complete, so it's in O(sqrt(n)) (which is contained in O(n))

For everybody who hasn't heard of O-Notation before...
From a mathematical point of view, it's a way to describe how fast a function grows.
When programming, O-Notation can be used to describe both the time and space complexity of an algorithm.
Of course you could measure how long an algorithm takes to complete with a stopwatch (or by building some timing into your code), but that result would depend on the speed of your computer.
With O-Notation, the only "factor" in the equation is the size of the input.

Algorithm 1) (see above) completes in one step, and that step is independent of the input length. That's why it's in O(1), which is also called "constant time complexity". Even if it completed in 100 steps, it would still be in O(1), as long as those steps were independent of the length of the input word.
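The original algorithm 1) isn't quoted here, but any function like this Python sketch would qualify: one step, no matter how long the input is.

```python
def first_char(s: str) -> str:
    # A single indexing operation, regardless of len(s): O(1), constant time.
    return s[0]

print(first_char("hello"))
```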

In algorithm 2) there is a for-loop that iterates over the entire input string. If the length of that string is "n", then the time complexity is in O(n). This is also called "linear time complexity".
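Again the original code isn't shown, but a typical linear-time loop over a string looks like this Python sketch (counting vowels is just an arbitrary example):

```python
def count_vowels(s: str) -> int:
    # One pass over the whole input: the loop body runs exactly len(s) times,
    # so for n = len(s) this is O(n), linear time.
    count = 0
    for ch in s:
        if ch in "aeiou":
            count += 1
    return count

print(count_vowels("programming"))
```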

In algorithm 3) the input is the number "x". Since the program calculates the integer square root of x, it takes about sqrt(x) steps to complete. The execution time depends on the value of x: with n = x, that's O(sqrt(n)), and since sqrt(n) < n, it's also contained in O(n), but O(sqrt(n)) is the tighter bound and the one you'd normally write.
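The original code isn't quoted, so the exact loop is an assumption, but an integer square root computed by counting up looks roughly like this in Python:

```python
def int_sqrt(x: int) -> int:
    # Increment r until (r + 1)^2 would exceed x. The loop body runs about
    # sqrt(x) times, so with n = x this is O(sqrt(n)).
    r = 0
    while (r + 1) * (r + 1) <= x:
        r += 1
    return r

print(int_sqrt(16))
```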

In algorithm 4) you have two for-loops from 1 to N that are nested, so that already gives you n<sup>2</sup>. The innermost for-loop from 2 to (N div I) runs about N div I times; summed over I = 1 to N that's roughly N*log(N) (the harmonic series 1/1 + 1/2 + ... + 1/N is about log(N)), so on average the inner loop contributes a factor of log(n). Since all three loops are nested, you get O(n<sup>2</sup>log(n)).
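The original code isn't shown, so this Python sketch is a guessed reconstruction of the loop structure described above (the names i and j and the exact bounds are assumptions); it just counts how often the innermost body runs:

```python
def count_steps(n: int) -> int:
    # Two nested loops over 1..n, plus an innermost loop from 2 to n // i.
    # The inner loop runs about n // i times, which summed over i gives
    # roughly n * log(n); all three nested -> about n^2 * log(n) steps total.
    steps = 0
    for i in range(1, n + 1):
        for j in range(1, n + 1):
            for k in range(2, n // i + 1):
                steps += 1
    return steps

print(count_steps(2))
```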

O-Notation is very useful for comparing algorithms, for example when comparing sorting algorithms.
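As a small illustration of that, here's a Python sketch that counts the comparisons made by a simple O(n<sup>2</sup>) bubble sort and prints them next to n*log2(n), the order of growth of the faster sorting algorithms (the choice of bubble sort is just an example, not something from the posts above):

```python
import math
import random

def bubble_sort_comparisons(data):
    # Sort a copy of data with a plain bubble sort and count every
    # comparison: the loops always make n*(n-1)/2 of them, i.e. O(n^2).
    a = list(data)
    comparisons = 0
    for i in range(len(a)):
        for j in range(len(a) - 1 - i):
            comparisons += 1
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
    return comparisons

for n in (100, 200, 400):
    data = [random.random() for _ in range(n)]
    # Comparison count grows quadratically, n*log2(n) much more slowly.
    print(n, bubble_sort_comparisons(data), round(n * math.log2(n)))
```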
Thought I'd share this with you... :lol: