Programmatically detect GPU availability in llama-cpp-python
I’ve been using the `llama-cpp-python` library for some time now, and in earlier versions I could easily check whether a GPU was available by inspecting the `GGML_USE_CUBLAS` variable or the `ggml_init_cublas` attribute of the `llama` shared library, as follows:
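Roughly, the check looked something like the sketch below. The exact module internals (e.g. the `llama_cpp.llama_cpp` submodule and its `_lib` handle) are assumptions about older builds and may not match current releases:

```python
# Sketch of the old-style check. GGML_USE_CUBLAS and ggml_init_cublas are the
# names mentioned above; whether they exist depends on the installed version.
import llama_cpp

def cuda_gpu_available() -> bool:
    # Older builds exposed a GGML_USE_CUBLAS flag directly on the Python module.
    if getattr(llama_cpp, "GGML_USE_CUBLAS", False):
        return True
    # Otherwise, probe the loaded shared library for the CUDA init symbol.
    # llama_cpp.llama_cpp._lib is an assumption about the internal module layout.
    lib = getattr(getattr(llama_cpp, "llama_cpp", None), "_lib", None)
    return lib is not None and hasattr(lib, "ggml_init_cublas")

print("CUDA-enabled GPU support:", cuda_gpu_available())
```

In newer versions these attributes no longer seem to be exposed, so this check silently reports no GPU even on CUDA builds. Is there a supported way to detect GPU (CUDA) availability programmatically in the current `llama-cpp-python` API?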