I tried to run a program that I normally use in Spyder on Google Colaboratory, but I got an error.
import numpy as np
import tensorflow as tf
from tensorflow.keras.models import Sequential
from keras.layers.core import Dense, Activation
from keras.layers.recurrent import LSTM
from tensorflow.keras.optimizers import Adam
from sklearn.model_selection import train_test_split
data = []
data1 = []
data2 = []
target = []
kairi = []
jousyou = []
maxlen = 60
day1 = []
day2 = []
day3 = []
owarine = []
# ... (omitted) ...
'''
model setting
'''
n_in=len(X[0][0])
n_hidden = 100
n_out=len(Y[0])
def weight_variable(shape, name=None):
    return np.random.normal(scale=.01, size=shape)
model = Sequential()
model.add(LSTM(n_hidden,
               kernel_initializer="random_uniform",
               input_shape=(maxlen, n_in)))
model.add(Dense(n_hidden, kernel_initializer="random_uniform"))
model.add(Activation('sigmoid'))
model.add(Dense(n_out, kernel_initializer="random_uniform"))
model.add(Activation('sigmoid'))
optimizer = Adam(lr=0.001, beta_1=0.9, beta_2=0.999)
model.compile(loss='mean_squared_error',
              optimizer=optimizer)
'''
model learning
'''
epochs=500
batch_size=1000
model.fit(X_train, Y_train,
          batch_size=batch_size,
          epochs=epochs,
          validation_split=0.25)
'''
That's it for learning.
'''
The error seems to occur at the model.fit call in this code.
The error message is as follows:
UnimplementedError                        Traceback (most recent call last)
in ()
    173     batch_size=batch_size,
    174     epochs=epochs,
--> 175     validation_split=0.25)
Thank you for your cooperation.
python tensorflow keras google-colaboratory
When I tried the code below, I was able to run it on Google Colab (runtime type: GPU, Python 3.6.9, Keras 2.4.3).
I chose the sizes of X and Y below somewhat arbitrarily; are they correct? If they are, then the code ran in my environment, so the problem may lie in your development environment.
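For reference, the versions in a Colab runtime can be checked with something like this (a minimal check, not part of the original post):
import sys
import tensorflow as tf
import keras

# Print the runtime's Python / TensorFlow / Keras versions to compare with
# the environment mentioned above (Python 3.6.9, Keras 2.4.3).
print(sys.version)
print(tf.__version__)
print(keras.__version__)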
'''
model setting
'''
'''
※ Add the following two lines for operation verification
'''
X = np.random.random([700, 60, 200])
Y = np.random.random([700, 100])
n_in=len(X[0][0])
n_hidden = 100
n_out=len(Y[0])
def weight_variable(shape, name=None):
    return np.random.normal(scale=.01, size=shape)
model = Sequential()
model.add(LSTM(n_hidden,
               kernel_initializer="random_uniform",
               input_shape=(maxlen, n_in)))
model.add(Dense(n_hidden, kernel_initializer="random_uniform"))
model.add(Activation('sigmoid'))
model.add(Dense(n_out, kernel_initializer="random_uniform"))
model.add(Activation('sigmoid'))
optimizer = Adam(lr=0.001, beta_1=0.9, beta_2=0.999)
model.compile(loss='mean_squared_error',
              optimizer=optimizer)
'''
※ Add the following two lines for operation verification
'''
X_train = np.random.random([700, 60, 200])
Y_train = np.random.random([700, 100])
'''
model learning
'''
epochs=500
batch_size=1000
model.fit(X_train, Y_train,
          batch_size=batch_size,
          epochs=epochs,
          validation_split=0.25)
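If you want to verify that your real data matches the sizes assumed above, a small check along these lines (using the variable names X, Y, X_train, Y_train from the question's code) could be run before model.fit:
import numpy as np

# Suggested sanity check, not part of the original answer: confirm that the
# arrays are numeric and have the expected shapes, i.e. X / X_train of shape
# (samples, maxlen, n_in) and Y / Y_train of shape (samples, n_out).
for name, arr in [("X", X), ("Y", Y), ("X_train", X_train), ("Y_train", Y_train)]:
    a = np.asarray(arr)
    print(name, a.shape, a.dtype)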