ResourceExhaustedError after using GPU in Google Colab

Asked 1 year ago, Updated 1 year ago, 116 views

I am running a notebook that uses Keras in Google Colab.

When running without a GPU there were no errors, but with the GPU runtime enabled I get the following error:

ResourceExhaustedError: OOM when allocating tensor of shape [3,3,256,512] and type float
     [[Node: training_1/SGD/zeros_14 = Const[dtype=DT_FLOAT, value=Tensor<type: float shape: [3,3,256,512] values: [[[0 0]]]...>, _device="/job:localhost/replica:0/task:0/device:GPU:0"]()]]

During handling of the above exception, another exception occurred:

ResourceExhaustedError                    Traceback (most recent call last)
<ipython-input-23-1e944e6043cb> in <module>()
----> 1 hist_obj = deconvNet.Train()

I gather the message is saying that resources are insufficient.

Any help would be appreciated.

Source Code Notebook

!pip install pydot
!apt-get install -y -qq software-properties-common python-software-properties module-init-tools
!add-apt-repository -y ppa:alessandro-strada/ppa 2>&1 > /dev/null
!apt-get update -qq 2>&1 > /dev/null
!apt-get -y install -qq google-drive-ocamlfuse fuse
from google.colab import auth
auth.authenticate_user()
from oauth2client.client import GoogleCredentials
creds = GoogleCredentials.get_application_default()
import getpass
!google-drive-ocamlfuse -headless -id={creds.client_id} -secret={creds.client_secret} < /dev/null 2>&1 | grep URL
vcode = getpass.getpass()
!echo {vcode} | google-drive-ocamlfuse -headless -id={creds.client_id} -secret={creds.client_secret}



import keras
from keras.layers import Input, Dense, Lambda, Flatten, Reshape, Activation
from keras.layers import concatenate
from keras.layers import Conv2D, Conv2DTranspose, ZeroPadding2D, MaxPooling2D, Cropping2D, BatchNormalization
from keras.models import Model
from keras import metrics
from keras import backend as K
from keras import optimizers
from keras import losses
from keras.utils import plot_model
from keras.callbacks import Callback
from keras.applications.vgg16 import VGG16
from keras.applications.vgg16 import preprocess_input
import tensorflow as tf
import random
import time
import numpy as np
import skimage.io as io
import matplotlib
import matplotlib.pyplot as plt
%matplotlib inline
import pydot
from pyemd import emd, emd_samples

keras gpu google-colaboratory

2022-09-30 21:33

1 Answer

There is a related question and answer on the English-language Stack Overflow:

Google Colaboratory: misleading information about its GPU (only 5% RAM available to some users)

In short, some users are allocated only about 500 MB of GPU RAM. That is most likely why you got the out-of-resources message.
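
As a sanity check (my own back-of-the-envelope arithmetic, not part of the original answer): the tensor named in the error message is tiny on its own, so the OOM comes from the sum of all weights, gradients, optimizer slots, and activations, not from that single allocation.

# Size of the [3,3,256,512] float32 tensor from the error message.
size_mb = 3 * 3 * 256 * 512 * 4 / 1024 ** 2  # float32 = 4 bytes per element
print("%.1f MB" % size_mb)  # -> 4.5 MB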

The only way to deal with this is to modify the model so that training fits within that 500 MB of GPU RAM, for example as sketched below.
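
Here is a minimal sketch of two common ways to shrink GPU memory use, assuming Keras on the TF 1.x runtime Colab shipped at the time; the batch size and the model.fit call are placeholders, not taken from the question.

import tensorflow as tf
from keras import backend as K

# Let TensorFlow allocate GPU memory on demand instead of reserving it all up front.
config = tf.ConfigProto()
config.gpu_options.allow_growth = True
K.set_session(tf.Session(config=config))

# Activation memory scales with batch size, so a smaller batch often avoids OOM.
# Hypothetical call; substitute your own model and data:
# model.fit(x_train, y_train, batch_size=4, epochs=10)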

Before anything else, it is convenient to check the GPU's memory with the following code, taken from the English-language question. I ran it just before writing this answer and found I could use the full 11 GB of GPU RAM.

# memory footprint support libraries/code
!ln -sf /opt/bin/nvidia-smi /usr/bin/nvidia-smi
!pip install gputil
!pip install psutil
!pip install humanize
import psutil
import humanize
import os
import GPUtil as GPU

GPUs = GPU.getGPUs()
# XXX: only one GPU on Colab, and its availability isn't guaranteed
gpu = GPUs[0]

def printm():
    process = psutil.Process(os.getpid())
    print("Gen RAM Free: " + humanize.naturalsize(psutil.virtual_memory().available), "| Proc size: " + humanize.naturalsize(process.memory_info().rss))
    print("GPU RAM Free: {0:.0f}MB | Used: {1:.0f}MB | Util {2:3.0f}% | Total {3:.0f}MB".format(gpu.memoryFree, gpu.memoryUsed, gpu.memoryUtil * 100, gpu.memoryTotal))

printm()

Gen RAM Free: 12.6 GB | Proc size: 140.1 MB
GPU RAM Free: 11439MB | Used: 0MB | Util 0% | Total 11439MB
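
As a quick cross-check, the same figures can also be read directly from the driver tool that the symlink above exposes:

!nvidia-smi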


2022-09-30 21:33


