TensorFlow: stopping a specific session during deep learning (GPU training).

Asked 2 years ago, Updated 2 years ago, 137 views

We are currently learning deep learning with TensorFlow on GPU. To train several models at the same time within the GPU quota, we initialize a new Graph and Session for each run. I want to stop a long-running training job in the middle, but I can't find a way to do it.

The first thing I tried was to assign each session an ID and make the one I wanted the default, but no matter how much I searched, sessions don't seem to be given separate IDs like that.

Next, I tried having the stop button create a log file at a specific path, and ending the training with a break statement whenever that file existed, but I couldn't, because all of the training happens inside the built-in history = fit() call.

I also tried setting a variable from a Keras callback (for example, when the loss improves too slowly during training, or stops changing at all), but couldn't get it to work.

I would appreciate it if you could share a way to close only the session I want while training is running.

Executable.py

import tensorflow as tf
import keras
from keras.backend.tensorflow_backend import clear_session

def IoT():

    g1 = tf.get_default_graph()
    s1 = tf.get_default_session()
    del g1, s1
    # Drop references to the existing graph and session
    # (so a new graph and session can be created; intended as a memory release)
    g = tf.Graph()  # declare a new graph to create a session for GPU work
    with g.as_default():
        s = tf.Session(config=config)  # config (e.g. GPU options) is defined elsewhere
        with s.as_default():
            ....
            # Build the model inside this session
            model11 = Model(x, fix_names([xpred, yfake, yreal], ["xpred", "yfake", "yreal"]))
            ....
            model = AdversarialModel(base_model=model11,
                                     player_params=[generative_params, discriminator.trainable_weights],
                                     player_names=["generator", "discriminator"])
            ....
            model.adversarial_compile(adversarial_optimizer=adversarial_optimizer,
                                      player_optimizers=[Adam(3e-4, decay=1e-4), Adam(1e-3, decay=1e-4)],
                                      loss={"yfake": "binary_crossentropy", "yreal": "binary_crossentropy",
                                            "xpred": "mean_squared_error"},
                                      player_compile_kwargs=[{"loss_weights": {"yfake": 1e-1, "yreal": 1e-1,
                                                                               "xpred": 1e2}}] * 2)
            ....
            # Print and store the training history
            csv_logger = keras.callbacks.CSVLogger(log_path)
            history = model.fit(x=xtrain, y=y, validation_data=(xtest, ytest), nb_epoch=epoch,
                                batch_size=int(batch), callbacks=[csv_logger])

    clear_session()
    s.close()
    del g, s
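For context on "closing only the session you want": in TF 1.x, each tf.Session bound to its own tf.Graph can be closed independently, which is what the pattern above relies on. A minimal sketch (written against tf.compat.v1 so it also runs under TF 2.x; the constants are purely illustrative, not from the original code):

```python
import tensorflow as tf

tf1 = tf.compat.v1
tf1.disable_eager_execution()  # use graph/session semantics under TF 2.x

# Two independent graphs, each with its own session.
g1, g2 = tf1.Graph(), tf1.Graph()

with g1.as_default():
    a = tf1.constant(2) * tf1.constant(3)
with g2.as_default():
    b = tf1.constant(10) + tf1.constant(5)

s1 = tf1.Session(graph=g1)
s2 = tf1.Session(graph=g2)

v1 = s1.run(a)
s1.close()       # releases only this session's resources
v2 = s2.run(b)   # s2 is unaffected by closing s1
s2.close()
```

Closing s1 does not disturb s2, so in principle one training run's session can be shut down while another keeps going.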

Stop working.py

def workstop():
    pass  # stop the training here (body elided in the original post)
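One way to fill in the workstop stub is to have it write a sentinel file that a training callback polls for. A minimal sketch (the path stop.flag and the helper names are assumptions, not from the original post):

```python
import os

STOP_FLAG = "stop.flag"  # hypothetical sentinel path polled by a training callback

def workstop():
    # Signal the running training to stop by creating an empty sentinel file.
    with open(STOP_FLAG, "w"):
        pass

def clear_workstop():
    # Remove the sentinel so the next training run does not stop immediately.
    if os.path.exists(STOP_FLAG):
        os.remove(STOP_FLAG)
```

Because the file system is shared, this works even when the stop request comes from a different script or process than the one running fit().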

tensorflow gpu session

2022-09-22 18:43

1 Answer

Fixed by creating a user-defined function that can be checked during modeling. We wrote a custom Callback, passed it to the model's fit(), and when the condition in an if statement inside the callback is satisfied, a stop function ends the training session.
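The answer doesn't show the callback itself. A minimal sketch of the idea, assuming the stop condition is "a sentinel file exists" (the class name, file path, and condition are assumptions; in real code you would subclass keras.callbacks.Callback, whose on_batch_end hook Keras calls automatically, and setting self.model.stop_training = True makes fit() return cleanly):

```python
import os

class StopRequestCallback:
    """Sketch of the stop logic. In a real project, inherit from
    keras.callbacks.Callback so Keras invokes on_batch_end for you;
    plain Python is used here only so the logic stands alone."""

    def __init__(self, stop_file="stop.flag"):
        self.stop_file = stop_file
        self.model = None  # Keras assigns the model being trained before fit() runs

    def on_batch_end(self, batch, logs=None):
        # If the stop condition holds, ask Keras to end training cleanly.
        if os.path.exists(self.stop_file):
            self.model.stop_training = True

# Usage with the question's code would look like:
# history = model.fit(..., callbacks=[csv_logger, StopRequestCallback()])
```

Checking in on_batch_end (rather than on_epoch_end) means the stop request takes effect within one batch instead of waiting out a long epoch.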


2022-09-22 18:43


© 2024 OneMinuteCode. All rights reserved.