Is Cross-Validation Meaningful for Evaluating High-Variance Algorithms?

Asked 2 years ago, Updated 2 years ago, 56 views

I often use cross-validation for model evaluation.
For algorithms with high variance, such as decision trees, I have always wondered whether evaluation by cross-validation is really meaningful.

My understanding is that cross-validation is primarily aimed at finding the best hyperparameters. To evaluate the model as the final output, don't we need a separate held-out validation dataset?
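To make the distinction concrete, here is a minimal sketch of the usual workflow: cross-validation on the training portion for hyperparameter selection, and a held-out test set for the final evaluation. It assumes scikit-learn with a decision tree; the dataset and parameter grid are illustrative choices, not from the original question.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.tree import DecisionTreeClassifier

# Illustrative dataset, standing in for whatever data the question has in mind.
X, y = load_breast_cancer(return_X_y=True)

# Keep a test set that is never touched during model selection.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)

# 5-fold cross-validation over the training data picks the hyperparameters.
search = GridSearchCV(
    DecisionTreeClassifier(random_state=0),
    param_grid={"max_depth": [2, 4, 8, None]},  # hypothetical grid
    cv=5,
)
search.fit(X_train, y_train)

print("CV score of best parameters:", search.best_score_)
# The single evaluation on unseen data is the number to report.
print("Held-out test score:", search.best_estimator_.score(X_test, y_test))
```

In this view, the cross-validation score guides model selection, while the held-out test score is the final evaluation.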

machine-learning

2022-09-30 14:06

1 Answer

I think it depends on what you want to "evaluate".

If you only care about fitting the training data, there is no need to cross-validate; just end with the model that fits the training data well.

If you want to apply a model built from training data to new data, try both of the following on new data and compare their predictions against the actual results (a rough sketch follows below):

  • a model trained without cross-validation
  • a model trained with cross-validation

In most cases the latter wins, which is why cross-validation is generally used.
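Here is a rough sketch of that comparison, under illustrative assumptions (scikit-learn, a synthetic dataset, a decision tree): one tree tuned only against the training data, one whose depth is chosen by 5-fold cross-validation, both scored on data neither has seen.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.tree import DecisionTreeClassifier

# Synthetic data standing in for "new data" arriving after training.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_new, y_train, y_new = train_test_split(
    X, y, test_size=0.5, random_state=0
)

# "Without cross-validation": a fully grown tree fit to the training data.
no_cv = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)

# "With cross-validation": tree depth chosen by 5-fold CV on the same training data.
with_cv = GridSearchCV(
    DecisionTreeClassifier(random_state=0),
    param_grid={"max_depth": [2, 4, 6, 8, None]},  # hypothetical grid
    cv=5,
).fit(X_train, y_train)

print("Training accuracy, no CV:   ", no_cv.score(X_train, y_train))  # near 1.0
print("New-data accuracy, no CV:   ", no_cv.score(X_new, y_new))
print("New-data accuracy, with CV: ", with_cv.score(X_new, y_new))
```

Typically the cross-validated model generalizes better to the new data, which matches the answer's point.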


2022-09-30 14:06

