Please tell me how to run TensorFlow 1.x code


I received the following code, but it was written for TensorFlow 1.x, so I keep getting errors when I run it on version 2.4.1.

I know that 2.x no longer supports Session and placeholder, but I don't know how to convert the code....

Please give me some advice.

# Create a Session object named sess to evaluate values in TensorFlow
sess = tf.Session()

# Placeholder for the pixel data entering the network
# shape=(None, 7): None because the number of input samples is not fixed; 7 = number of features per sample
# Number of input-layer nodes = 7
X = tf.placeholder(tf.float32, shape=([None,7]),name="X")

# Placeholder for the actual label of each sample fed into X
# shape=(None, 1): number of output-layer nodes = 1
Y = tf.placeholder(tf.int32, shape=([None,1]),name="Y")

# One-hot encoding is used so the output layer can distinguish class 0 from class 1
# (i.e., whether the audience exceeded 2.5 million viewers)
Y_one_hot = tf.one_hot(Y, 2, name="Y_one_hot")  

# Reshape to [-1, 2]: -1 infers the number of samples, 2 distinguishes the two classes (0 and 1)
Y_one_hot = tf.reshape(Y_one_hot, [-1, 2])

# Hidden layer 1
# Weights connecting each input-layer node to the hidden layer
# W1: 7 input nodes -> 3 nodes in the first hidden layer
W1 = tf.Variable(tf.truncated_normal([7,3]),name='W1')

# Bias added after the input-layer weights
# b1: bias of each of the 3 nodes in the first hidden layer
b1 = tf.Variable(tf.truncated_normal([3]),name='b1')

# Hidden layer 1: input data multiplied by the weights, plus the bias
H1_logits = tf.matmul(X, W1) + b1

# Hidden layer 2: hidden layer 1 output multiplied by the weights, plus the bias
W2 = tf.Variable(tf.truncated_normal([3,2]),name="W2")

b2 = tf.Variable(tf.truncated_normal([2]), name='b2')

# Hidden layer 2 computation (final logits)
logits = tf.matmul(H1_logits, W2) + b2

# Hypothesis: the function that represents the relationship (pattern) between the input data and the output
hypothesis = tf.nn.softmax(logits)

# Compute the error between the logits and the actual (one-hot) labels
cost_i = tf.nn.softmax_cross_entropy_with_logits_v2(logits=logits,labels=Y_one_hot)

#Average of total errors
cost = tf.reduce_mean(cost_i) 

# Train using gradient descent
# learning_rate=0.05: step size used when moving down the slope; smaller = narrower steps, larger = wider steps
optimization = tf.train.GradientDescentOptimizer(learning_rate=0.05).minimize(cost)

#The result of classification based on the hypothesis function we selected
prediction = tf.argmax(hypothesis, 1) 

#Whether the results of Prediction match the actual Label value
correct_prediction = tf.equal(prediction, tf.argmax(Y_one_hot, 1)) 

# Node that stores the accuracy of the predictions
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32)) 

# Initialize TensorFlow variables
sess.run(tf.global_variables_initializer()) 
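
For reference, code written in this 1.x graph/Session style can usually still run on TensorFlow 2.x through the tf.compat.v1 compatibility module after eager execution is turned off. Below is a minimal sketch of that approach for the graph above; the random x_data/y_data arrays are made-up stand-ins, only there to check that the graph builds and runs.

import numpy as np
import tensorflow as tf

# Restore 1.x-style graph/Session semantics
tf.compat.v1.disable_eager_execution()

sess = tf.compat.v1.Session()

# 1.x symbols removed from the top level live under tf.compat.v1
X = tf.compat.v1.placeholder(tf.float32, shape=[None, 7], name="X")
Y = tf.compat.v1.placeholder(tf.int32, shape=[None, 1], name="Y")
Y_one_hot = tf.reshape(tf.one_hot(Y, 2, name="Y_one_hot"), [-1, 2])

W1 = tf.Variable(tf.compat.v1.truncated_normal([7, 3]), name="W1")
b1 = tf.Variable(tf.compat.v1.truncated_normal([3]), name="b1")
H1_logits = tf.matmul(X, W1) + b1

W2 = tf.Variable(tf.compat.v1.truncated_normal([3, 2]), name="W2")
b2 = tf.Variable(tf.compat.v1.truncated_normal([2]), name="b2")
logits = tf.matmul(H1_logits, W2) + b2

# In 2.x, tf.nn.softmax_cross_entropy_with_logits provides the old _v2 behavior
cost = tf.reduce_mean(
    tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=Y_one_hot))
optimization = tf.compat.v1.train.GradientDescentOptimizer(learning_rate=0.05).minimize(cost)

prediction = tf.argmax(tf.nn.softmax(logits), 1)
correct_prediction = tf.equal(prediction, tf.argmax(Y_one_hot, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))

sess.run(tf.compat.v1.global_variables_initializer())

# Dummy data only to confirm that the session runs; replace with the real dataset
x_data = np.random.rand(10, 7).astype(np.float32)
y_data = np.random.randint(0, 2, size=(10, 1))
_, c, acc = sess.run([optimization, cost, accuracy], feed_dict={X: x_data, Y: y_data})
print("cost:", c, "accuracy:", acc)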

tensorflow python deep-learning

2022-09-20 17:52

1 Answer

pip install "tensorflow<2"

ERROR: Could not find a version that satisfies the requirement tensorflow<2, ERROR: No matching distribution found for tensorflow<2 - dongahaeum

This error occurs because no TensorFlow 1.x build was published for the Python version you are using. TensorFlow 1.x was superseded by 2.x quite a while ago, so this message can appear if you are running a recent Python such as 3.8 or 3.9.

At https://pypi.org/project/tensorflow/1.15.0/#files you can see which Python versions are supported by each TensorFlow release.

Try installing 64-bit Python 3.7. (Removing the existing Python installation first is recommended.)
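
After installing Python 3.7 and TensorFlow 1.x (running pip install "tensorflow<2" again should then find a matching wheel), a quick check like the sketch below confirms that the interpreter and TensorFlow versions line up; the exact version strings printed will depend on what was installed.

import sys
import tensorflow as tf

# TensorFlow 1.15 wheels cover roughly Python 3.5-3.7 (check the PyPI files page above),
# so the interpreter version printed here has to be one of those.
print("Python     :", sys.version.split()[0])
print("TensorFlow :", tf.__version__)

# On a 1.x install this runs; on 2.x, tf.Session no longer exists at the top level
with tf.Session() as sess:
    print(sess.run(tf.constant("TF 1.x Session works")))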


2022-09-20 17:52


