Suppose you have an algorithm whose computational complexity you don't know. How should I evaluate it? A concrete example would be operations on a std::list<int>.
I'm not asking just about measuring running time or memory consumption, but about generalizing the result into an expression such as O(n).
As a premise, an algorithm's complexity cannot be derived without knowing the algorithm itself. Actual measurements depend on the specific input data and are influenced by factors other than the algorithm; for example, if dynamic memory allocation is involved, memory-management overhead shows up in the measured times alongside the algorithm's own work.
That said, if you want a rough picture of the practical behavior of a function whose source you don't have, you can plot execution times for inputs of various sizes in the range you actually care about, and fit the curve to representative forms such as n, n log n, or n^2.
It may be hard to tell O(n^1.5) from O(n log n) in experimental results, but if you can't distinguish them within the range of input sizes you actually use, you don't need to distinguish them. The difference between the two matters only once n grows much larger; it doesn't show up in the data that matters in practice. Of course, a value obtained this way can't serve as the theoretical complexity of the algorithm; for that, you do need to distinguish them properly.
While "theoretically rigorous" calculations cannot be obtained experimentally, "practically necessary" calculations can be roughly estimated from the results of the experiment.
© 2024 OneMinuteCode. All rights reserved.