[Deep Learning] A question about the structure of GoogLeNet's Inception module

Asked 2 years ago, Updated 2 years ago, 51 views

I'm asking this question because something suddenly seemed off while I was reading a review of the GoogLeNet v1 paper.

GoogLeNet applies convolutions of several kernel sizes in parallel inside each Inception module.

But when a feature map passes through convolutions of different kernel sizes,

the resulting feature maps will also have different spatial sizes.

At the end of the Inception module, these feature maps are concatenated along the channel dimension.

Is there no problem with concatenating feature maps of different sizes like this?

Or is there a step I'm not aware of?

deep-learning

2022-09-20 19:22

1 Answer

If operations with different kernel_size values are applied to the same input, the output sizes will differ, so in general the outputs cannot be concatenated.

In practice, padding is added to each branch so that the outputs keep the same spatial size, and then they are concatenated into one tensor along the channel dimension.
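As a small illustration (not part of the original answer), the standard convolution output-size formula shows why this works: with stride 1 and padding of (k - 1) / 2, every odd kernel size produces an output the same size as the input, so the branches can be concatenated. The 28x28 input below is a hypothetical example, roughly matching the spatial size at GoogLeNet's inception (3a) stage.

```python
def conv_out_size(in_size, kernel_size, stride=1, padding=0):
    # Standard convolution output-size formula:
    # out = floor((in + 2*padding - kernel) / stride) + 1
    return (in_size + 2 * padding - kernel_size) // stride + 1

in_size = 28  # hypothetical input size for illustration

# Inception branches use 1x1, 3x3, and 5x5 convolutions.
# With "same" padding p = (k - 1) // 2 and stride 1,
# every branch keeps the 28x28 spatial size.
for k in (1, 3, 5):
    p = (k - 1) // 2
    print(k, conv_out_size(in_size, k, stride=1, padding=p))
    # each branch outputs 28, so channel-wise concatenation is possible
```

Without that padding, the 3x3 branch would output 26x26 and the 5x5 branch 24x24, which could not be stacked with the 1x1 branch's 28x28 output.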


2022-09-20 19:22



© 2024 OneMinuteCode. All rights reserved.