How to install / pull Ollama models in a Docker container
I have created a chatbot application based on Python 3.10.10, langchain_community==0.2.5, an Ollama LLM model, and an Ollama Embeddings model. When I run this application on my local computer (which does not have a GPU), it works fine.
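For context, the usual pattern with the official `ollama/ollama` Docker image is to start the container first and then pull models into it with `docker exec`. A minimal sketch (the model name `llama3` is just an example, substitute whatever models your chatbot uses):

```shell
# Start the Ollama server container (CPU-only); the named volume
# persists downloaded models across container restarts
docker run -d -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama

# Pull the models into the running container
docker exec -it ollama ollama pull llama3

# Verify the model is available
docker exec -it ollama ollama list
```

The application can then reach the server at `http://localhost:11434` (or at the container's hostname when both run on the same Docker network).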