How to solve the “Torch was not compiled with flash attention” warning?
I am using the Vision Transformer as part of the CLIP model, and I keep getting the following warning: