I am currently learning deep learning using TensorFlow in Python.
In TensorFlow, if you hand the loss to an optimizer it gets minimized automatically, but is it possible to look at and extract the values of the weights w and their gradients in the middle of optimization?
Also, ordinary optimization only needs the gradients with respect to the weights, but I want the gradient of the loss with respect to the input. The optimizer probably won't give me that, so I think I can manage with the gradients() function, but I don't understand it well.
Thank you in advance.
deep-learning tensorflow
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate)
# compute_gradients returns a list of (gradient, variable) pairs
grads_and_vars = optimizer.compute_gradients(cost)
train_op = optimizer.apply_gradients(grads_and_vars)
You can read the gradient tensors (and the matching variables) out of grads_and_vars.
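For example, here is a minimal self-contained sketch, assuming TF 1.x graph mode; the toy linear model, its placeholder names, and the example data are made up purely for illustration. It fetches the gradient values in the same run as the training step, so you can watch them during optimization.

import numpy as np
import tensorflow as tf

# Toy model: y = x*w + b with a squared-error loss (illustrative only).
x = tf.placeholder(tf.float32, [None, 1])
y_true = tf.placeholder(tf.float32, [None, 1])
w = tf.get_variable("w", [1, 1], tf.float32)
b = tf.get_variable("b", [1], tf.float32)
cost = tf.reduce_mean(tf.square(tf.matmul(x, w) + b - y_true))

optimizer = tf.train.AdamOptimizer(learning_rate=0.01)
grads_and_vars = optimizer.compute_gradients(cost)   # [(grad, var), ...]
train_op = optimizer.apply_gradients(grads_and_vars)
grad_tensors = [g for g, v in grads_and_vars]

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    feed = {x: np.array([[1.0], [2.0]], np.float32),
            y_true: np.array([[2.0], [4.0]], np.float32)}
    # Fetching the gradient tensors together with train_op returns the
    # gradients used for this training step.
    _, grad_values = sess.run([train_op, grad_tensors], feed_dict=feed)
    for (g, v), g_val in zip(grads_and_vars, grad_values):
        print(v.name, "gradient:", g_val)
    # The updated variable value can be read with a separate run.
    print("w after step:", sess.run(w))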
params = tf.trainable_variables()
gradients = tf.gradients(loss, params)
train_op = tf.train.AdamOptimizer(0.1).apply_gradients(zip(gradients, params), global_step=global_step)
Once you have computed the gradients this way, applying them performs the optimization step.
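The same tf.gradients call also answers the second part of your question: it can differentiate the loss with respect to any tensor, including the input, not just trainable variables. Below is a small sketch, again assuming TF 1.x graph mode, where the placeholder shapes and the fed data are just illustrative.

import numpy as np
import tensorflow as tf

x = tf.placeholder(tf.float32, [None, 2])
y_true = tf.placeholder(tf.float32, [None, 1])
w = tf.get_variable("w_in", [2, 1], tf.float32)
loss = tf.reduce_mean(tf.square(tf.matmul(x, w) - y_true))

# Gradient of the loss with respect to the *input* x (same shape as x).
grad_x = tf.gradients(loss, [x])[0]

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    g = sess.run(grad_x, feed_dict={x: np.ones((3, 2), np.float32),
                                    y_true: np.zeros((3, 1), np.float32)})
    print(g)   # d(loss)/d(x), evaluated at the fed input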
If you want to see the weight or bias values,
w = tf.get_variable("weight", [input_dim, output_dim], tf.float32)
b = tf.get_variable("bias", [output_dim], tf.float32)
print(session.run([w, b]))
and running the variables through session.run prints their current values.
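If the variables were already created elsewhere in the graph, you do not need to hard-code their names; a short sketch (TF 1.x graph mode assumed) that lists every trainable variable:

import tensorflow as tf

w = tf.get_variable("weight", [3, 2], tf.float32)
b = tf.get_variable("bias", [2], tf.float32)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    # Print every trainable variable by name, without referencing w and b directly.
    for var in tf.trainable_variables():
        print(var.name, sess.run(var))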