Keras raises IndexError and AttributeError from the same code

Asked 2 years ago, Updated 2 years ago, 99 views

I tried my best to write a simple autoencoder by myself.
However, the following error appears:
C:\Users\yudai\Desktop\keras_AE.py:62: UserWarning: Update your `Model` call to the Keras 2 API: `Model(in..., outputs=Tensor("de...)`
  autoencoder = Model(input=input_word, output=decoded)
Traceback (most recent call last):
  File "C:\Users\yudai\Desktop\keras_AE.py", line 70, in <module>
    shuffle=False)
  File "C:\Users\yudai\Anaconda3\envs\pyMLgpu\lib\site-packages\keras\engine\training.py", line 1039, in fit
    validation_steps=validation_steps)
  File "C:\Users\yudai\Anaconda3\envs\pyMLgpu\lib\site-packages\keras\engine\training_arrays.py", line 139, in fit_loop
    if issparse(ins[i]) and not K.is_sparse(feed[i]):
IndexError: list index out of range

If anyone knows the cause, I would appreciate your help.
(I am also asking this question on teratail.)

Note:
https://github.com/keras-team/keras/issues/7602

Following the issue above, I changed

autoencoder = Model(input=input_word, output=decoded)

to

autoencoder = Model(inputs=input_word, outputs=decoded)

However, I get the same error.
There were some other errors, but I corrected them.

To find out whether the input data is the problem, I tried the text with and without spaces between kanji, and with and without spaces between hiragana, and all variants produced the same error.

On a different Windows 10 PC,
python 3.6.5
tensorflow 1.8.0
keras 2.1.5

C:\Users\hoge\Desktop\keras_AE.py:62: UserWarning: Update your `Model` call to the Keras 2 API: `Model(in..., outputs=Tensor("de...)`
  autoencoder = Model(input=input_word, output=decoded)
Traceback (most recent call last):
  File "C:\Users\hoge\Desktop\keras_AE.py", line 70, in <module>
    shuffle=False)
  File "C:\Program Files (x86)\Microsoft Visual Studio\Shared\Anaconda3_64\lib\site-packages\keras\engine\training.py", line 1630, in fit
    batch_size=batch_size)
  File "C:\Program Files (x86)\Microsoft Visual Studio\Shared\Anaconda3_64\lib\site-packages\keras\engine\training.py", line 1487, in _standardize_user_data
    in zip(y, sample_weights, class_weights, self._feed_sample_weight_modes)]
  File "C:\Program Files (x86)\Microsoft Visual Studio\Shared\Anaconda3_64\lib\site-packages\keras\engine\training.py", line 1486, in <listcomp>
    for (ref, sw, cw, mode)
  File "C:\Program Files (x86)\Microsoft Visual Studio\Shared\Anaconda3_64\lib\site-packages\keras\engine\training.py", line 540, in _standardize_weights
    return np.ones((y.shape[0],), dtype=K.floatx())
AttributeError: 'NoneType' object has no attribute 'shape'

So the same code produces a different error on that machine.

# -*- coding: utf-8 -*-
from keras.layers import Input, Dense
from keras.models import Model
from keras.utils.data_utils import get_file
import numpy as np
import codecs

# Data preprocessing
# Loading the data
with codecs.open(r'C:\Users\yudai\Desktop\poem.txt', 'r', 'utf-8') as f:
    for text in f:
        text = text.strip()
# Length of the corpus
print('corpus length:', len(text))
# Sort the text to count characters
chars = sorted(list(set(text)))
# Display the total number of characters
print('total chars:', len(chars))
# Character-to-ID conversion
char_indices = dict((c, i) for i, c in enumerate(chars))
# ID-to-character conversion
indices_char = dict((i, c) for i, c in enumerate(chars))
# Read the text 17 characters at a time
maxlen = 17
# Sampling step
step = 3
sentences = []
next_chars = []
for i in range(0, len(text) - maxlen, step):
    sentences.append(text[i:i + maxlen])
    next_chars.append(text[i + maxlen])
# Display the number of sequences to learn
print('Sequences:', len(sentences))

# Vectorize
print('Vectorization...')
x = np.zeros((len(sentences), maxlen, len(chars)), dtype=np.bool)
y = np.zeros((len(sentences), len(chars)), dtype=np.bool)
for i, sentence in enumerate(sentences):
    for t, char in enumerate(sentence):
        x[i, t, char_indices[char]] = 1
    y[i, char_indices[next_chars[i]]] = 1

# Build the model
print('Build model...')
# Encoder dimension
encoding_dim = 128
# Input variable
input_word = Input(shape=(x, y))
# Encoded representations
encoded_h1 = Dense(128, activation='relu')(input_word)
encoded_h2 = Dense(64, activation='relu')(encoded_h1)
encoded_h3 = Dense(32, activation='relu')(encoded_h2)
# Latent variable (effectively a principal component analysis)
latent = Dense(8, activation='relu')(encoded_h3)
# Reconstruct the encoded data
decoded_h1 = Dense(32, activation='relu')(latent)
decoded_h2 = Dense(64, activation='relu')(decoded_h1)
decoded_h3 = Dense(128, activation='relu')(decoded_h2)

decoded = Dense(100, activation='relu')(decoded_h3)

autoencoder = Model(input=input_word, output=decoded)
# Adam optimizer with categorical_crossentropy loss
autoencoder.compile(optimizer='Adam',
                    loss='categorical_crossentropy')

# Train the autoencoder
autoencoder.fit(x_train,
                epochs=1000,
                batch_size=256,
                shuffle=False)
# Observe the progress of training
def on_epoch_end(epoch):
    print()
    print('Epoch: %d' % epoch)

# Save the model structure
model_json = autoencoder.to_json()
with open('keras_AE.json', 'w') as json_file:
    json_file.write(model_json)
# Save the trained model weights
autoencoder.save_weights('AE.h5')

decoded_word = autoencoder.predict(word_test)

X_embedded = model.predict(X_train)
autoencoder.fit(X_embedded, X_embedded, epochs=10,
                batch_size=256, validation_split=.1)

C:\Users\hoge\Desktop\poem.txt contains about 29,000 haiku extracted from the web, one per line, and segmented with the MeCab morphological analyzer.
Example:
Nine steps in the morning fog. As soon as it rains warmly, the leaves wither. Vegetable flowers and bright towns
The sound of the tides flowing into the autumn wind and Iyo
You can see the sea in the quietness and the hole in the shoji. The young sweetfish went up in two hands
When I finish the autumn I'm going to go to, I'll kick the deer.
Mushroom hunting became the wind of my own voice
It's cold every year at the beginning of the equinox.
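
For reference, the segmentation step described above might look like this (a minimal sketch, assuming the mecab-python3 package; haiku_raw.txt is a hypothetical input file with one haiku per line, not a file from the original post):

import MeCab

# Sketch only: '-Owakati' makes MeCab output space-separated tokens (wakati-gaki).
tagger = MeCab.Tagger('-Owakati')

# haiku_raw.txt is a hypothetical raw input file; poem.txt matches the name used above.
with open('haiku_raw.txt', encoding='utf-8') as src, \
     open('poem.txt', 'w', encoding='utf-8') as dst:
    for line in src:
        line = line.strip()
        if line:
            dst.write(tagger.parse(line).strip() + '\n')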

Windows 10

python 3.7.0
tensorflow-gpu 1.9.0
keras 2.2.4

python3 machine-learning keras

2022-09-30 16:44

1 Answer

tiitoi answered on teratail.

First of all, an autoencoder learns with the same data as both input and target.
The 'NoneType' object has no attribute 'shape' error occurs because only the input was passed, as fit(x); the same data must also be passed as the target, i.e. fit(x, x):

autoencoder.fit(x, x,
                epochs=1000,
                batch_size=256,
                shuffle=False)

Next, since the autoencoder should output the same data as its input, the shape of the output layer must match the input. Instead of

decoded = Dense(128, activation='relu')(encoded)

the number of units should match the input's last dimension, len(chars) (12 for this data):

decoded = Dense(12, activation='relu')(encoded)
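
As a quick sanity check (a minimal sketch with illustrative layer sizes, not the original code), the last Dense layer's unit count has to equal the input's last dimension so that the model's output shape matches its input shape:

from keras.layers import Dense, Input
from keras.models import Model

# Illustrative sketch only: n_chars stands in for len(chars), 17 for maxlen.
n_chars = 12
inp = Input(shape=(17, n_chars))             # (maxlen, len(chars))
h = Dense(32, activation='relu')(inp)
out = Dense(n_chars, activation='relu')(h)   # units == input's last dimension
m = Model(inputs=inp, outputs=out)
assert m.output_shape == m.input_shape       # both (None, 17, 12)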

Full code:

import numpy as np
import codecs
from keras.layers import Activation, Dense, Input
from keras.models import Model

# Loading the data
with open(r'test.txt', encoding='utf-8') as f:
    poems = f.read().splitlines()
text = poems[0]  # first poem
print(text)

# Corpus length
print('corpus length:', len(text))

# Sort the text to count characters
chars = sorted(list(set(text)))

# Display the total number of characters
print('total chars:', len(chars))

# Character-to-ID conversion
char_indices = dict((c, i) for i, c in enumerate(chars))

# ID-to-character conversion
indices_char = dict((i, c) for i, c in enumerate(chars))

# Read the text 17 characters at a time
maxlen = 17
# Sampling step
step = 3
sentences = []
next_chars = []
for i in range(0, len(text) - maxlen, step):
    sentences.append(text[i:i + maxlen])
    next_chars.append(text[i + maxlen])
# Display the sequences to learn
print('Sequences:', sentences)
print('next_chars:', next_chars)

# Vectorize
print('Vectorization...')
x = np.zeros((len(sentences), maxlen, len(chars)), dtype=np.bool)
y = np.zeros((len(sentences), len(chars)), dtype=np.bool)
for i, sentence in enumerate(sentences):
    for t, char in enumerate(sentence):
        x[i, t, char_indices[char]] = 1
    y[i, char_indices[next_chars[i]]] = 1

# Build the model
print('Build model...')
# Encoder dimension
encoding_dim = 128
# Input variable
input_word = Input(shape=(maxlen, len(chars)))
# Encoded representations
encoded = Dense(128, activation='relu')(input_word)
encoded = Dense(64, activation='relu')(encoded)
encoded = Dense(32, activation='relu')(encoded)
# Latent variable (effectively a principal component analysis)
latent = Dense(8, activation='relu')(encoded)
# Reconstruct the encoded data
decoded = Dense(32, activation='relu')(latent)
decoded = Dense(64, activation='relu')(decoded)
decoded = Dense(12, activation='relu')(encoded)
autoencoder = Model(input=input_word, output=decoded)
# Adam optimizer with categorical_crossentropy loss
autoencoder.compile(optimizer='Adam', loss='categorical_crossentropy')
autoencoder.summary()

print(x.shape)
# Train the autoencoder
autoencoder.fit(x, x,
                epochs=1000,
                batch_size=256,
                shuffle=False)

# Save the model structure
model_json = autoencoder.to_json()
with open('keras_AE.json', 'w') as json_file:
    json_file.write(model_json)
# Save the trained model weights
autoencoder.save_weights('AE.h5')
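
To inspect what the trained autoencoder reconstructs, one option (a sketch, not part of the original answer; it reuses x, indices_char, and autoencoder from the code above) is to take the argmax over the character axis and map the IDs back to characters:

# Sketch: decode reconstructions back into characters (reuses variables above).
pred = autoencoder.predict(x)    # shape: (n_samples, maxlen, len(chars))
ids = pred.argmax(axis=-1)       # most likely character ID at each position
for row in ids[:3]:              # print the first few reconstructions
    print(''.join(indices_char[i] for i in row))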


2022-09-30 16:44


