Problem trying to use PyTorch with CUDA on Windows
I’m working on code that uses Whisper, and I need PyTorch with CUDA to speed up model execution. I have CUDA installed (verified with nvidia-smi, which reports CUDA 12.6), and I installed PyTorch with the command
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu121
but when I run import torch in Python, I get the error:
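Whatever the exact error text is, a quick sanity check in the same Python environment the wheel was installed into usually narrows this kind of problem down. A minimal sketch, using only standard PyTorch calls (nothing specific to this setup):

import torch

# The cu121 wheel should report a "+cu121" build string; a driver that
# shows CUDA 12.6 in nvidia-smi is recent enough to run that runtime.
print(torch.__version__)

# True only if the CUDA build of PyTorch is installed and the driver is usable.
print(torch.cuda.is_available())

if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))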
Running PyTorch cuDNN and CUDA on macOS Metal
I am trying to run a Python script with the configuration below on my MacBook M1 laptop, using its built-in GPU.
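For context, Apple-silicon GPUs do not support CUDA or cuDNN; PyTorch reaches the M1 GPU through its MPS backend instead. A minimal sketch of the usual device selection, assuming a recent PyTorch build with MPS support:

import torch

# On Apple silicon the GPU is addressed as the "mps" device, not "cuda".
device = torch.device("mps" if torch.backends.mps.is_available() else "cpu")

x = torch.ones(3, device=device)
print(x.device)  # prints mps:0 when the MPS backend is available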
PyTorch: move nn.Module to CUDA including submodules and wrapper modules
I am new to PyTorch. I would like to move my module to CUDA.
There is a code example:
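The example itself is not included above; purely as an illustration (the Wrapper, backbone, and head names below are made up), a single .to(device) or .cuda() call on the top-level module recursively moves every registered submodule and its parameters:

import torch
import torch.nn as nn

class Wrapper(nn.Module):
    def __init__(self):
        super().__init__()
        # Submodules assigned as attributes are registered automatically,
        # so they move together with the parent module.
        self.backbone = nn.Sequential(nn.Linear(16, 32), nn.ReLU())
        self.head = nn.Linear(32, 4)

    def forward(self, x):
        return self.head(self.backbone(x))

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = Wrapper().to(device)  # moves backbone and head as well
print(next(model.head.parameters()).device)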
Whisper on CPU is 10x faster than CUDA
I’m facing a really weird situation where running Whisper on CPU is 10x faster than on CUDA.
Code on CUDA:
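The snippet itself is not shown; a minimal sketch of the kind of comparison being described, assuming the openai-whisper package and a placeholder audio.mp3 input file:

import time
import torch
import whisper  # openai-whisper package

# Time transcription on CUDA (if available) and on CPU with the same model size.
for device in (["cuda"] if torch.cuda.is_available() else []) + ["cpu"]:
    model = whisper.load_model("base", device=device)
    start = time.time()
    result = model.transcribe("audio.mp3")  # placeholder input file
    print(f"{device}: {time.time() - start:.1f}s, {len(result['text'])} chars")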
Assertion Error during Audio Generation: CUDA Error
I am getting this error:
Error during audio generation: CUDA error: CUBLAS_STATUS_EXECUTION_FAILED when calling cublasGemmStridedBatchedEx( handle, opa, opb, m, n, k, (void*)(&falpha), a, CUDA_R_16F, lda, stridea, b, CUDA_R_16F, ldb, strideb, (void*)(&fbeta), c, CUDA_R_16F, ldc, stridec, num_batches, CUDA_R_32F, CUBLAS_GEMM_DEFAULT_TENSOR_OP)
Error during audio generation: CUDA error: device-side assert triggered
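For what it's worth, CUDA errors like these are reported asynchronously, so the cuBLAS call in the trace is often not the operation that actually failed. A common first debugging step, not specific to this code, is to force synchronous kernel launches before CUDA is initialised:

import os

# Must be set before torch initialises CUDA; makes kernel launches synchronous
# so the device-side assert points at the call that actually triggered it.
os.environ["CUDA_LAUNCH_BLOCKING"] = "1"

import torch  # the rest of the generation script runs as usual after this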