I'm doing fine-tuning with VGG16: I trained only the fully connected layers while keeping the convolutional layers frozen. However, once I unfroze the convolutional layers and made them trainable, the accuracy jumped from about 80% to about 90% all at once. Is this normal behavior for training, or a sign that something is wrong? I'm really worried.
In transfer learning, is it possible to increase accuracy by making the convolutional layers trainable?
When training data is scarce, the convolutional layers are often frozen to prevent overfitting; the more training data you have, the more accuracy you can gain by unfreezing them and fine-tuning end to end.
If both the training accuracy and the validation accuracy have improved, there should be no problem.
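The two-stage procedure described above (freeze the backbone, train the head, then unfreeze) can be sketched roughly as follows. This is a minimal illustration in PyTorch, not the asker's actual code; `TinyVGG` is a hypothetical stand-in for VGG16 with the same `features`/`classifier` split that torchvision's VGG16 uses:

```python
import torch.nn as nn

class TinyVGG(nn.Module):
    """Hypothetical stand-in for VGG16: conv backbone + FC head."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(      # convolutional backbone
            nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.classifier = nn.Linear(8, 2)   # fully connected head

    def forward(self, x):
        return self.classifier(self.features(x))

model = TinyVGG()

# Stage 1: freeze the convolutional backbone and train only the head.
for p in model.features.parameters():
    p.requires_grad = False

# Only the classifier's parameters receive gradients at this point.
frozen_names = [n for n, p in model.named_parameters() if not p.requires_grad]

# Stage 2: once the head has converged, unfreeze the backbone and
# fine-tune end to end (typically with a smaller learning rate).
for p in model.features.parameters():
    p.requires_grad = True
```

A common design choice is to use a much lower learning rate in stage 2, so the pretrained convolutional weights are only nudged rather than overwritten.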
© 2024 OneMinuteCode. All rights reserved.