
How to clear Cuda memory in PyTorch

Hello guys, how are you all? Hope you are all fine. Today we are going to learn how to clear CUDA memory in PyTorch in Python. Here I explain the possible methods.

Without wasting your time, let's start this article.

Table of Contents

How to clear Cuda memory in PyTorch?

  1. Method 1

Method 1

I figured out where I was going wrong. I am posting the solution as an answer for others who might be struggling with the same problem.

Basically, whenever I pass data through my network, PyTorch builds a computational graph and stores the intermediate results in GPU memory in case I want to calculate the gradient during backpropagation. But since I only wanted to perform a forward pass, I simply needed to wrap the call in torch.no_grad() for my model.

Thus, the for loop in my code could be rewritten as:

right = []  # accumulate results on the CPU instead of the GPU

for i, left in enumerate(dataloader):
    print(i)
    with torch.no_grad():                      # no graph, no stored activations
        temp = model(left).view(-1, 1, 300, 300)
    right.append(temp.to('cpu'))               # move the result off the GPU
    del temp                                   # drop the last GPU reference
    torch.cuda.empty_cache()                   # return cached blocks to the driver

Wrapping the forward pass in torch.no_grad() tells PyTorch not to record or store any intermediate computations, which frees that GPU memory.
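To see the effect in isolation, here is a minimal, self-contained sketch of the same pattern. It uses a tiny hypothetical torch.nn.Linear model (not the author's network) and runs on CPU as well; torch.cuda.empty_cache() is only called when a GPU is actually available:

import torch

# Hypothetical toy model, purely for illustration.
model = torch.nn.Linear(4, 2)
model.eval()  # put layers like dropout/batchnorm into inference mode

batch = torch.randn(8, 4)

# Inside no_grad(), autograd records no graph, so no intermediate
# activations are kept alive in (GPU) memory.
with torch.no_grad():
    out = model(batch)

print(out.requires_grad)  # False: no graph was built for this output

# empty_cache() releases PyTorch's cached, unused GPU blocks back to
# the driver; it is only meaningful when CUDA is in use.
if torch.cuda.is_available():
    torch.cuda.empty_cache()

Note that empty_cache() does not free tensors you still hold references to; it only returns already-freed cached memory, which is why the loop above deletes temp first.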

Summary

That's all for this issue. I hope these methods helped you. Comment below with your thoughts and queries, and let me know which method worked for you. Thank you.
