I often use cross-validation for model evaluation.
For high-variance algorithms such as decision trees, I always wonder whether a cross-validation estimate is actually meaningful.
My understanding was that cross-validation is primarily for finding the best hyperparameters. To evaluate the model as the final output, don't we need a separate held-out validation dataset?
machine-learning
I think it depends on what you want to "evaluate".
If you only care about fit to the training data, skip cross-validation and settle for whichever model fits the training data best.
If you want to apply a model built on the training data to new data, you should test it on data it has never seen and compare the predictions against the actual outcomes.
The latter is what matters in most cases, which is why cross-validation is standard practice: each held-out fold stands in for "new data" in turn.
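As a concrete sketch of that workflow (assuming scikit-learn and a synthetic dataset, neither of which appears in the question): cross-validate a decision tree on the training portion to get a fold-averaged estimate, then report one final score on an untouched test set. The spread of the fold scores also shows how much a single tree's score varies, which speaks to the variance concern above.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score, train_test_split
from sklearn.tree import DecisionTreeClassifier

# Synthetic data stands in for a real dataset.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

# Hold out a test set for the final, one-shot evaluation.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

model = DecisionTreeClassifier(random_state=0)

# Cross-validation on the training data: each fold plays "new data",
# and averaging damps the fold-to-fold variance of a single tree.
scores = cross_val_score(model, X_train, y_train, cv=5)
print(f"CV accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")

# Final evaluation on data the model never touched.
model.fit(X_train, y_train)
print(f"Test accuracy: {model.score(X_test, y_test):.3f}")
```

The mean of the fold scores is the estimate you act on; the standard deviation tells you how far any single-split evaluation could have been from it.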