In deep learning, how to make training progress even when the network is deep

Asked 2 years ago, Updated 2 years ago, 79 views

We currently use a model combining a CNN and an RNN to classify videos.
The model is underfitting, so I would like to increase its complexity.
However, the CNN currently has only about four layers, and when I try to add more, training stagnates in the early stages and the training error stops decreasing at all.
What approaches can make training effective in such cases?
I have heard that enabling batch normalization can help, but GPU memory limits the batch size to 1, so batch normalization is probably not usable.
I would appreciate advice on other ways to keep training stable.
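For context on the two problems raised here: normalization layers that do not use batch statistics, such as GroupNorm, behave the same at batch size 1 where BatchNorm's statistics degenerate, and skip (residual) connections are a common remedy when training stagnates as depth grows. A minimal sketch, assuming PyTorch (the question does not name a framework); none of this comes from the thread itself:

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Conv block with a skip connection and GroupNorm.

    GroupNorm normalizes over groups of channels within each sample,
    so unlike BatchNorm it works at batch size 1; the skip connection
    helps gradients reach early layers in deeper stacks.
    """
    def __init__(self, in_ch, out_ch, groups=8):
        super().__init__()
        self.conv1 = nn.Conv2d(in_ch, out_ch, 3, padding=1)
        self.norm1 = nn.GroupNorm(groups, out_ch)  # groups must divide out_ch
        self.conv2 = nn.Conv2d(out_ch, out_ch, 3, padding=1)
        self.norm2 = nn.GroupNorm(groups, out_ch)
        # 1x1 projection so the skip path matches the output channel count
        self.skip = (nn.Identity() if in_ch == out_ch
                     else nn.Conv2d(in_ch, out_ch, 1))
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        h = self.act(self.norm1(self.conv1(x)))
        h = self.norm2(self.conv2(h))
        return self.act(h + self.skip(x))

# Runs with batch size 1, where BatchNorm statistics would be degenerate.
x = torch.randn(1, 3, 64, 64)
print(ResidualBlock(3, 32)(x).shape)  # torch.Size([1, 32, 64, 64])
```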

machine-learning deep-learning

2022-09-30 15:32

1 Answer

Besides increasing the number of layers, you can also increase the number of features (the CNN's output channels). These settings are called "hyperparameters", and at present there is no choice but to tune them by trial and error. There is an automatic tuning tool called Optuna, but what it does is actually train the model for each combination and see which one performs best. It is automated, but it is still trial and error.
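For illustration, a minimal sketch of the trial-and-error loop described above, using Optuna to tune the number of layers and the channel width; `train_and_evaluate` is a hypothetical stand-in for the questioner's actual training code:

```python
import optuna

def train_and_evaluate(n_layers, n_channels):
    # Hypothetical placeholder for the real training code: build the
    # CNN+RNN with this depth/width, train it, return validation loss.
    return 1.0 / (n_layers * n_channels)  # dummy value so the sketch runs

def objective(trial):
    # Optuna samples one hyperparameter combination per trial.
    n_layers = trial.suggest_int("n_layers", 4, 10)
    n_channels = trial.suggest_categorical("n_channels", [32, 64, 128])
    return train_and_evaluate(n_layers, n_channels)

# Each trial trains the model once and reports its loss, so this
# automates, rather than eliminates, the trial and error.
study = optuna.create_study(direction="minimize")
study.optimize(objective, n_trials=20)
print(study.best_params)
```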


2022-09-30 15:32
