HuggingFaceEmbeddings vs Specific Embeddings Instruction
I am new to LLMs and am trying to get a local RAG pipeline running on my machine. During my research, I came across an embedding model I want to use: BAAI/bge-m3. The model's instructions say I should use FlagEmbedding to run inference with it. However, I already have HuggingFaceEmbeddings set up in my LlamaIndex environment, so I don't want to switch to FlagEmbedding.

Note that I am able to use bge-m3 with HuggingFaceEmbeddings just fine. My question is: what is the difference between running the model through HuggingFaceEmbeddings versus FlagEmbedding (or whatever library a model's authors specifically recommend)? Do I get worse performance, less control over the model's parameters, or something else?
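For reference, here is a sketch of the two setups I am comparing. The function names are mine; the imports and calls are what I understand each library's API to look like (LlamaIndex's wrapper is actually called `HuggingFaceEmbedding`, singular, in recent versions). I have not benchmarked either path, so treat this as illustrative only:

```python
def embed_with_llamaindex(texts):
    """What I have now: LlamaIndex's HuggingFace wrapper.

    As far as I can tell, this returns only one dense vector per text.
    """
    from llama_index.embeddings.huggingface import HuggingFaceEmbedding

    model = HuggingFaceEmbedding(model_name="BAAI/bge-m3")
    return [model.get_text_embedding(t) for t in texts]


def embed_with_flagembedding(texts):
    """What the model card instructs: FlagEmbedding's BGEM3FlagModel.

    Per the bge-m3 model card, encode() can return the dense vectors plus
    the sparse (lexical weights) and multi-vector (ColBERT) outputs that
    the plain dense wrapper above does not expose.
    """
    from FlagEmbedding import BGEM3FlagModel

    model = BGEM3FlagModel("BAAI/bge-m3", use_fp16=True)
    return model.encode(
        texts,
        return_dense=True,
        return_sparse=True,       # per-token lexical weights
        return_colbert_vecs=True, # multi-vector representations
    )
```

So one concrete difference I can already see is that the FlagEmbedding path exposes extra output types. What I can't tell is whether the dense vectors themselves differ (pooling, normalization, query instructions, etc.) between the two.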