Programmatically detect GPU availability in llama-cpp-python

I’ve been using the llama-cpp-python library for some time now, and in earlier versions I could easily check whether a GPU was available by inspecting the GGML_USE_CUBLAS variable or by probing for the ggml_init_cublas attribute of the llama shared library, as follows:
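A minimal sketch of that older-style check is shown below. It relies on the private ctypes handle `llama_cpp.llama_cpp._lib`, an internal detail whose name and location have varied between releases of llama-cpp-python, so treat it as illustrative rather than a stable API.

```python
# Sketch of the older-style GPU check, assuming the internal ctypes
# handle llama_cpp.llama_cpp._lib (a private detail that differs
# between llama-cpp-python versions).
import llama_cpp.llama_cpp as llama_lib


def gpu_available() -> bool:
    # ggml_init_cublas is only compiled into the shared library when
    # it was built with GGML_USE_CUBLAS, so probing for the symbol
    # reveals whether this build has CUDA support.
    return hasattr(llama_lib._lib, "ggml_init_cublas")


if __name__ == "__main__":
    print("CUDA-enabled build:", gpu_available())
```

Probing with `hasattr` works here because ctypes resolves symbols lazily: attribute access on the loaded library raises `AttributeError` when the symbol is absent, which `hasattr` converts into a clean boolean.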