
[Solved] Pytorch AssertionError: Torch not compiled with CUDA enabled

Hello guys, how are you all? Hope you are all fine. Today I got the following error in Python: Pytorch AssertionError: Torch not compiled with CUDA enabled. So here I am explaining all the possible solutions.

Without wasting your time, let's start this article to solve this error.

How Does the Pytorch AssertionError: Torch not compiled with CUDA enabled Error Occur?

Today I got the following error while running my code in Python: Pytorch AssertionError: Torch not compiled with CUDA enabled.
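For context, here is a minimal sketch (not from the original project) of the kind of call that raises this error on a CPU-only PyTorch build:

import torch

# On a PyTorch build installed without CUDA support (for example the default
# macOS wheel), moving a tensor to the GPU raises:
# AssertionError: Torch not compiled with CUDA enabled
x = torch.randn(2, 3)
x = x.cuda()  # fails here on a CPU-only build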

How To Solve Pytorch AssertionError: Torch not compiled with CUDA enabled Error?

  1. How To Solve Pytorch AssertionError: Torch not compiled with CUDA enabled Error?

    To solve this error, pass pin_memory=False when calling the get_iterator function. The get_iterator function in data.py builds a DataLoader with pin_memory=True by default, and pinned memory requires CUDA (see Solution 1 below).

  2. Pytorch AssertionError: Torch not compiled with CUDA enabled

    If your machine has no CUDA support (for example on macOS), removing the .cuda() calls also fixes this error (see Solution 2 below).

Solution 1

If you look into the data.py file, you can see the function:

def get_iterator(data, batch_size=32, max_length=30, shuffle=True, num_workers=4, pin_memory=True):
    cap, vocab = data
    # pin_memory defaults to True here, which requires CUDA pinned memory
    return torch.utils.data.DataLoader(
        cap,
        batch_size=batch_size, shuffle=shuffle,
        collate_fn=create_batches(vocab, max_length),
        num_workers=num_workers, pin_memory=pin_memory)

which is called twice in the main.py file to get an iterator for the train and dev data. If you look at the DataLoader class in PyTorch, there is a parameter called:

pin_memory (bool, optional) – If True, the data loader will copy tensors into CUDA pinned memory before returning them.

which is True by default in the get_iterator function, and as a result you are getting this error. You can simply pass pin_memory=False when you call the get_iterator function, as follows.

train_data = get_iterator(get_coco_data(vocab, train=True),
                          batch_size=args.batch_size,
                          ...,
                          ...,
                          ...,
                          pin_memory=False)
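If you still want pinned memory when a GPU is actually present, one option (this is an assumption, not part of the original repository) is to set the flag conditionally:

import torch

# Only request pinned memory when a CUDA device is actually available.
pin = torch.cuda.is_available()
train_data = get_iterator(get_coco_data(vocab, train=True),
                          batch_size=args.batch_size,
                          pin_memory=pin)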

Solution 2

Removing the .cuda() calls works for me on macOS, since the macOS builds of PyTorch are not compiled with CUDA support.
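A more general fix is to avoid hard-coding .cuda() and pick the device at runtime. Here is a minimal device-agnostic sketch (the model and tensor names are placeholders, not from the original code):

import torch

# Select CUDA when available, otherwise fall back to the CPU (e.g. on macOS).
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = torch.nn.Linear(10, 2).to(device)   # replaces model.cuda()
inputs = torch.randn(4, 10).to(device)      # replaces inputs.cuda()
outputs = model(inputs)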

Summary

That's all about this issue. I hope one of the solutions helped you. Comment below with your thoughts and your queries. Also, comment below which solution worked for you. Thank you.
