In deep learning and transfer learning, does a deeper network in principle give higher prediction accuracy?
I am currently fine-tuning Keras' VGG16, but accuracy is higher when I train only one fully connected layer than when I train all three fully connected layers. I thought the strength of deep learning and transfer learning was that deeper networks can adapt to more complex problems, so why does the shallower head give better accuracy?
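A minimal sketch of this kind of setup (a reconstruction for illustration, not the original code; num_classes and the input shape are assumed placeholders):

from tensorflow.keras.applications import VGG16
from tensorflow.keras import layers, models

num_classes = 10  # hypothetical; replace with your dataset's class count

# Pre-trained convolutional base, frozen so only the new head is trained.
base = VGG16(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False

def build_model(deep_head):
    # deep_head=False: a single fully connected output layer.
    # deep_head=True: three fully connected layers, as in the question.
    x = layers.Flatten()(base.output)
    if deep_head:
        x = layers.Dense(256, activation="relu")(x)
        x = layers.Dense(128, activation="relu")(x)
    outputs = layers.Dense(num_classes, activation="softmax")(x)
    model = models.Model(base.input, outputs)
    model.compile(optimizer="adam", loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model

shallow = build_model(deep_head=False)  # trains one fully connected layer
deep = build_model(deep_head=True)      # trains three fully connected layers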
"The deeper the network, the better the performance."
That was the expectation during the earlier neural-network boom, but in practice it turned out that making networks deeper often lowered performance rather than raising it, and that boom died down.
However, it was later found that, under the right conditions (model architecture, training data, and so on), multi-layer models can achieve high performance, and that discovery drove the recent deep learning boom. The point is that it depends on those conditions: blindly stacking layers does not by itself improve performance, since deeper networks are harder to optimize and, with limited training data, more prone to overfitting.
Also, even with a good model architecture, it is often hard to obtain enough high-quality training data for your own problem. However, it was found that a model properly trained on one task can show high performance on other tasks with relatively little additional data and customization. This is transfer learning. It is not all-powerful either; whether it gives you sufficient performance again depends on the conditions.
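For reference, a minimal transfer-learning sketch under the same assumptions as the question's setup (the dataset objects and the 5-class output are hypothetical placeholders), first training only a new head on the frozen pre-trained base, then optionally fine-tuning the top convolutional block at a low learning rate:

from tensorflow.keras.applications import VGG16
from tensorflow.keras import layers, models, optimizers

# Stage 1: reuse the ImageNet-pretrained convolutional base, frozen, and
# train only a small new classifier head on the target task.
base = VGG16(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(5, activation="softmax"),  # hypothetical 5-class target task
])
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=5)  # your data here

# Stage 2 (optional): unfreeze only the top convolutional block and retrain
# with a low learning rate so the pre-trained weights are not destroyed.
base.trainable = True
for layer in base.layers[:-4]:  # block5 is the last 4 layers of VGG16
    layer.trainable = False
model.compile(optimizer=optimizers.Adam(learning_rate=1e-5),
              loss="categorical_crossentropy", metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=5)

Whether stage 2 helps again depends on how much data you have; with very little data, stopping after stage 1 often generalizes better.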