"Number of input channels does not match corresponding dimension of filter" error in TensorFlow

Asked 2 years ago, Updated 2 years ago, 88 views

I'm writing a program with TensorFlow and I'm getting the following error:

number of input channels does not match corresponding dimension of filter, 2!=11

The relevant code is attached below.

def make_test(self):

    inputs=Input(shape=(28,28,11), name='label')
    x = Conv2D(64,(5,5), padding='same')(inputs)
    x = LeakyReLU()(x)
    x = Conv2D(128,(5,5), kernel_initializer='he_normal', strides=[2,2])(x)
    x = LeakyReLU()(x)
    x = Conv2D(128,(5,5), kernel_initializer='he_normal', padding='same', strides=[2,2])(x)
    x = LeakyReLU()(x)
    x = Flatten()(x)
    x = Dense(1024, kernel_initializer='he_normal')(x)
    x = LeakyReLU()(x)
    x = Dense(1, kernel_initializer='he_normal')(x)
    model = Model(inputs=[inputs], outputs=[x])

    return model

def make_hoge(self):

    noise_shape=(110,)

    inputs=Input(shape=noise_shape)
    x = Dense(1024)(inputs)
    x = LeakyReLU()(x)
    x = Dense(128*7*7)(x)
    x = BatchNormalization()(x)
    x = LeakyReLU()(x)
    if K.image_data_format() == 'channels_first':
        x = Reshape((128,7,7), input_shape=(128*7*7,))(x)
        bn_axis = 1
    else:
        x = Reshape((7,7,128), input_shape=(128*7*7,))(x)
        bn_axis=-1
    x = Conv2DTranspose(128,(5,5), strides=2, padding='same')(x)
    x = BatchNormalization(axis=bn_axis)(x)
    x = LeakyReLU()(x)
    x = Conv2D(64,(5,5), padding='same')(x)
    x = BatchNormalization(axis=bn_axis)(x)
    x = LeakyReLU()(x)
    x = Conv2DTranspose(64,(5,5), strides=2, padding='same')(x)
    x = BatchNormalization(axis=bn_axis)(x)
    x = LeakyReLU()(x)
    x = Conv2D(1,(5,5), padding='same', activation='tanh')(x)
    model = Model(inputs=[inputs], outputs=[x])

    return model

def loss(self, y_true, y_pred):

    return K.mean(y_true*y_pred)

def __init__(self):
    self.test=self.make_test()
    self.hoge=self.make_hoge()

    for layer in self.test.layers:
        layer.trainable=False
    self.test.trainable=False

    input_noise=Input(shape=(11,))
    generated_images=self.hoge(input_noise)
    input_label=Input(shape=(28,28,10,))
    inputs = concatenate([generated_images, input_label], axis=3)
    outputs = self.test(inputs)
    self.train_model = Model(inputs=[input_noise, input_label], outputs=[outputs])
    self.train_model.compile(optimizer=Adam(0.0001, beta_1=0.5, beta_2=0.9), loss=self.loss)

The algorithm is a GAN. I renamed the variables for brevity when posting, so I apologize if that introduced any mistakes.
The environment is Python 2.7 with TensorFlow 1.12.
For reference, the shapes of inputs, generated_images, and input_label are (?, 28, 28, 11), (?, ?, ?, 1), and (?, 28, 10), respectively.
I verified that it works if I feed generated_images into self.test without concatenating; feeding input_label did not work.
It also works if I remove the Conv layers.

Also, the error is raised near line 836 of TensorFlow's nn_ops.py. When I printed input_shape, filter_shape, and num_spatial_dims there, they were (?, 28, 28, 2, 2), (5, 5, 11, 64), and 2. The check filter_shape[num_spatial_dims] != input_shape[num_spatial_dims] is what fails.
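
For illustration, here is a minimal sketch of this kind of channel mismatch (the shapes are made up for the example, not taken from my actual model, and the import path may differ depending on whether you use keras or tensorflow.keras):

from keras.layers import Input, Conv2D
from keras.models import Model

# A model whose first Conv2D is built for 11-channel input
disc_in = Input(shape=(28, 28, 11))
disc_out = Conv2D(64, (5, 5), padding='same')(disc_in)
disc = Model(disc_in, disc_out)

# Calling it on a tensor whose last dimension is not 11 raises a
# ValueError complaining about the channel dimension (2 vs. 11)
wrong = Input(shape=(28, 28, 2))
disc(wrong)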

Any help would be appreciated.

python tensorflow keras

2022-09-30 19:37

1 Answer

I'm the original poster.
The input shape was incorrect.
Sorry for the trouble.
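
The answer doesn't show the corrected code, but going by the snippets above the mismatch is presumably in __init__: make_hoge is built for a (110,) noise vector and make_test for an 11-channel 28x28 input, so the relevant lines of __init__ would need to look roughly like this (a sketch of the assumed fix, not the poster's confirmed code):

# Assumed fix: the noise Input must match make_hoge's noise_shape=(110,),
# and the label Input must be 4-D so that concatenating it with the
# (?, 28, 28, 1) generated images along axis=3 yields the (?, 28, 28, 11)
# tensor that make_test expects.
input_noise = Input(shape=(110,))
generated_images = self.hoge(input_noise)    # -> (?, 28, 28, 1)
input_label = Input(shape=(28, 28, 10))
inputs = concatenate([generated_images, input_label], axis=3)  # -> (?, 28, 28, 11)
outputs = self.test(inputs)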


2022-09-30 19:37
