Setting up my GPU for ML and DL (tensorflow)

In this blog, I want to share my experience of setting up my GPU to run ML (machine learning) and DL (deep learning) models. Please note that this is not a guide, and I don't recommend that anyone follow the exact same process to set up their GPU; I will only describe the process that I followed.

Introduction

CPUs are like the generalists in your computer: they can do lots of different tasks, but not super fast. Python programs run on the CPU by default.

GPUs are like specialists, especially good at certain jobs. They can be really fast for tasks like graphics and machine learning because they can do many things at once.

Imagine that a CPU is good for everyday office work, while a GPU is a powerhouse for tasks that require lots of math, like creating 3D graphics or training AI models.

So, when you have a task that needs lots of math and can be broken into smaller pieces, you tell Python to use the GPU, because its many cores can do the work faster.

My system config

I have done all this in a separate Python environment; it's always recommended to use a separate Python environment for this kind of setup.

  • My Python version is 3.8.15

  • I downloaded CUDA version 11.2 and cuDNN version 8.2 (CUDA and cuDNN are what allow TensorFlow to use the GPU)

  • My TensorFlow version is 2.10.1

Note: The versions of Python, CUDA, cuDNN, and TensorFlow must be compatible with each other. If you have Python 3.10 or 3.11, you need to install the corresponding versions of CUDA, cuDNN, and TensorFlow.

https://www.tensorflow.org/install/pip - check this official TensorFlow documentation for setting up the GPU.
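If you want to confirm which CUDA and cuDNN versions your TensorFlow build was compiled against, a quick check like the one below can help. This is a minimal sketch; it assumes tf.sysconfig.get_build_info(), which reports build details in recent TensorFlow 2.x releases.

import sys
import tensorflow as tf

# Versions installed in the current Python environment
print("Python:", sys.version.split()[0])
print("TensorFlow:", tf.__version__)

# CUDA/cuDNN versions this TensorFlow build was compiled against
build_info = tf.sysconfig.get_build_info()
print("CUDA:", build_info.get("cuda_version"))
print("cuDNN:", build_info.get("cudnn_version"))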

Program

import tensorflow as tf
if tf.test.gpu_device_name():
    print("GPU is available.")
else:
    print("No GPU is available.")

This will tell you whether your GPU is correctly configured for TensorFlow; if it is not, check your configuration or try a different (compatible) Python version.

gpus = tf.config.experimental.list_physical_devices('GPU')

if gpus:
    for gpu in gpus:
        print("GPU Name:", gpu.name)

This will print the name of your GPU.
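If you want a bit more detail than the device string, the snippet below is one way to get it. This is a minimal sketch using tf.config.experimental.get_device_details() and tf.config.experimental.set_memory_growth(), which are available in recent TensorFlow 2.x releases.

import tensorflow as tf

for gpu in tf.config.list_physical_devices('GPU'):
    # Query the card's marketing name and compute capability
    details = tf.config.experimental.get_device_details(gpu)
    print("Name:", details.get("device_name"))
    print("Compute capability:", details.get("compute_capability"))

    # Optional: allocate GPU memory on demand instead of grabbing it all at once.
    # This must be called before the GPU is first used by the program.
    tf.config.experimental.set_memory_growth(gpu, True)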

Testing CPU and GPU


import time

import tensorflow as tf

# Define the matrix sizes
matrix_size = 100  # You can adjust this based on your needs

# Create random matrices
matrix_a = tf.random.normal(shape=(matrix_size, matrix_size))
matrix_b = tf.random.normal(shape=(matrix_size, matrix_size))

# CPU multiplication
start_time = time.time()
with tf.device('/CPU:0'):
    result_cpu = tf.matmul(matrix_a, matrix_b)
cpu_time = time.time() - start_time

# GPU multiplication
start_time = time.time()
with tf.device('/GPU:0'):
    result_gpu = tf.matmul(matrix_a, matrix_b)
gpu_time = time.time() - start_time

# Check if the results match (within a small tolerance, since CPU and GPU
# floating-point results can differ in the last few bits)
if tf.reduce_all(tf.abs(result_cpu - result_gpu) < 1e-5):
    print("Results match!")

print(f"Matrix size: {matrix_size}x{matrix_size}")
print(f"CPU Time: {cpu_time} seconds")
print(f"GPU Time: {gpu_time} seconds")

This is a simple 100x100 matrix multiplication to determine which one is faster, the CPU or the GPU.

Here is the output:

Matrix size: 100x100
CPU Time: 0.0897984504699707 seconds
GPU Time: 0.0009963512420654297 seconds
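One caveat: timing a single, tiny operation like this can be noisy, because the very first run on each device includes one-time setup overhead, and GPU work in eager mode finishes asynchronously. A fairer comparison warms up both devices, forces the computation to complete, and averages several runs over different matrix sizes. Here is a rough sketch of what that could look like (time_matmul is a hypothetical helper I made up, not part of TensorFlow):

import time

import tensorflow as tf

def time_matmul(device, size, repeats=10):
    # Average the time of repeated matrix multiplications on one device
    with tf.device(device):
        a = tf.random.normal((size, size))
        b = tf.random.normal((size, size))
        tf.matmul(a, b).numpy()  # warm-up run (not timed)
        start = time.time()
        for _ in range(repeats):
            # .numpy() copies the result back to the host, so the device has finished
            tf.matmul(a, b).numpy()
        return (time.time() - start) / repeats

for size in (100, 1000, 4000):
    print(f"{size}x{size} | CPU: {time_matmul('/CPU:0', size):.6f} s "
          f"| GPU: {time_matmul('/GPU:0', size):.6f} s")

With larger matrices, the gap between the CPU and the GPU usually grows much wider.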

Conclusion

We can see that the GPU is faster for this program. As I said before, the GPU is good at work that can be broken down into many parallel pieces, like building a model; for general-purpose work, the CPU is better.

See you soon in the next blog...