GGUF model in LM Studio returns broken answers
I'm trying to run the GGUF model QuantFactory/T-lite-instruct-0.1-GGUF, specifically its quantized version T-lite-instruct-0.1.Q2_K.gguf, in LM Studio.
Sometimes it works fine, but sometimes it returns “squares” in the answer.
I assume this is an encoding problem, but how do I avoid it when using LM Studio? There is no model setting related to encoding, and I'm stuck.
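In case it helps narrow this down, here is a minimal sketch of how I could check the same quant outside LM Studio using llama-cpp-python (the file path and prompt are just placeholders, and I'm assuming the “squares” are U+FFFD replacement characters produced by a broken UTF-8 decode):

```python
from llama_cpp import Llama

# Load the same quantized GGUF file directly (path is a placeholder -- adjust as needed)
llm = Llama(model_path="T-lite-instruct-0.1.Q2_K.gguf", n_ctx=2048)

# Cyrillic prompt, since multi-byte UTF-8 is where decoding issues usually show up
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Привет! Расскажи о себе."}],
    max_tokens=200,
)

text = out["choices"][0]["message"]["content"]
print(text)

# U+FFFD is the replacement character that typically renders as a square/box
if "\ufffd" in text:
    print("Output contains replacement characters -> broken UTF-8 in the generated text")
```

My thinking is that if the same squares show up here as well, the problem would be in the Q2_K quant itself rather than in any LM Studio setting, but I'm not sure how to confirm that from within LM Studio.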