CUDA used to be an acronym for Compute Unified Device Architecture, but NVIDIA has since dropped the expansion; now it's just CUDA. CUDA is essentially C for GPUs: just as NumPy operations are fast because they run in C under the hood, CUDA operations are fast because they run on the GPU.

Check if GPUs are available

python -c "import tensorflow as tf; print('Num GPUs Available: ', len(tf.config.experimental.list_physical_devices('GPU')))"

Specify Which GPU to Use

You can specify which GPU to use via the CUDA_VISIBLE_DEVICES environment variable. From the command line:

CUDA_VISIBLE_DEVICES="0" python -m my_trainer

Or you can set it from within Python. If you do, be sure to set it before you import TensorFlow or PyTorch:

import os
os.environ["CUDA_VISIBLE_DEVICES"] = "0"

If you want to set it in your VS Code launch.json file, it will look like this:

"env": {
        "CUDA_VISIBLE_DEVICES": "0",
        },
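
For context, here is a minimal sketch of a full launch.json configuration showing where the "env" block sits; the name and module fields are placeholders carried over from the command-line example above.

{
    "version": "0.2.0",
    "configurations": [
        {
            "name": "Python: my_trainer",
            "type": "python",
            "request": "launch",
            "module": "my_trainer",
            "env": {
                "CUDA_VISIBLE_DEVICES": "0"
            }
        }
    ]
}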

Test if TensorFlow is working on the GPU

You can see all your physical devices like so:

import tensorflow as tf
tf.config.experimental.list_physical_devices()

and you can limit the listing to GPUs:

tf.config.experimental.list_physical_devices('GPU')
print("Num GPUs Available: ", len(tf.config.experimental.list_physical_devices('GPU')))

Accessing the GPU in PyTorch

import torch

torch.cuda.current_device()
0

How many are available?

torch.cuda.device_count()
1

What’s the name of the GPU I’m using?

torch.cuda.get_device_name(0)
'NVIDIA GeForce GTX 960'

Is a GPU available?

torch.cuda.is_available()
True
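
In practice you usually don't hard-code the device. A common pattern, shown here as a minimal sketch, is to fall back to the CPU when no GPU is available:

import torch

# Use the GPU when one is available, otherwise run on the CPU
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

x = torch.randn(3, 3).to(device)  # tensors (and models) are moved with .to(device)
print(x.device)                   # e.g. 'cuda:0' on a GPU machine, 'cpu' otherwise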

How much memory is being used?

print('Allocated:', round(torch.cuda.memory_allocated(0)/1024**3,1), 'GB')
Allocated: 0.0 GB
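
The 0.0 GB above just means nothing has been allocated yet. As a rough illustration (a sketch that assumes device 0 exists; exact numbers depend on the GPU and the caching allocator), allocating a tensor makes the counter move, and torch.cuda.memory_reserved shows how much the allocator has reserved from the driver:

import torch

# Allocate roughly 1 GB of float32 on GPU 0 and check the allocator's counters
x = torch.zeros(256, 1024, 1024, device="cuda:0")

print('Allocated:', round(torch.cuda.memory_allocated(0) / 1024**3, 1), 'GB')  # ~1.0 GB
print('Reserved: ', round(torch.cuda.memory_reserved(0) / 1024**3, 1), 'GB')   # >= allocated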