I am trying to run a model with CUDA from a Linux terminal, inside a conda virtual environment. The model uses device_map="auto" for device assignment, like the following:
model = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto")
The problem arises when the code reaches model.generate(...).to(cuda). Forgive me for not providing the full code, but I can promise that no other part touches CUDA or device placement at all.
Here are the things that I have checked:
The nvidia-smi command shows that the driver supports up to CUDA 12.0.
The nvcc --version command shows I currently have CUDA 11.8.
I installed the corresponding packages using the command generated at https://pytorch.org/ for my system.
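For reference, the pip command the pytorch.org selector generates for Linux with CUDA 11.8 wheels is roughly the following (my exact invocation may have differed slightly depending on the options selected):

```shell
# Install PyTorch built against CUDA 11.8 from the official wheel index
pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118
```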
Yet when I run the code, it keeps raising AssertionError: Torch not compiled with CUDA enabled, and torch.cuda.is_available() returns False as well.
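To make the symptom concrete, here is a small diagnostic snippet (not part of my actual code) that shows what I see. As far as I understand, CPU-only PyTorch wheels report torch.version.cuda as None and often carry a "+cpu" tag in the version string:

```python
import torch

# What the installed build reports about itself
print("torch version:", torch.__version__)
print("built with CUDA:", torch.version.cuda)        # None => CPU-only build
print("CUDA available:", torch.cuda.is_available())  # False in my environment

# A CPU-only wheel would explain the "Torch not compiled with CUDA enabled" error
if torch.version.cuda is None or "+cpu" in torch.__version__:
    print("This torch build was compiled without CUDA support.")
```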
I have tried a lot of other solutions: installing whatever was recommended, uninstalling packages and reinstalling them, all to no avail. Any suggestions on what I might have missed would be appreciated.