How to print input requests and output responses in Ollama server?

I’m working with the LangChain and CrewAI libraries to gain an in-depth understanding of system prompting. Currently, I’m running the Ollama server manually (ollama serve) and trying to intercept the messages flowing through it with a proxy server I’ve written.

The goal is to log or print the input requests and output responses for debugging and analysis purposes.
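For reference, a minimal version of such a logging proxy might look like the sketch below (not my exact code; it uses only the standard library, assumes Ollama is on its default port 11434, and picks 8000 for the proxy port arbitrarily):

```python
import json
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

OLLAMA_URL = "http://localhost:11434"  # default Ollama address
PROXY_PORT = 8000                      # arbitrary port for the proxy

class LoggingProxy(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        body = self.rfile.read(length)

        # Log the incoming request (model, messages/prompt, options, ...)
        print(f"--- request to {self.path} ---")
        try:
            print(json.dumps(json.loads(body), indent=2))
        except ValueError:
            print(body.decode("utf-8", errors="replace"))

        # Forward the request to the real Ollama server.
        # Note: this buffers the whole response, so Ollama's default
        # streaming (newline-delimited JSON) is collapsed into one read.
        req = urllib.request.Request(
            OLLAMA_URL + self.path,
            data=body,
            headers={"Content-Type": "application/json"},
            method="POST",
        )
        with urllib.request.urlopen(req) as resp:
            resp_body = resp.read()

        # Log the response, then relay it back to the client
        print(f"--- response from {self.path} ---")
        print(resp_body.decode("utf-8", errors="replace"))
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(resp_body)

if __name__ == "__main__":
    HTTPServer(("localhost", PROXY_PORT), LoggingProxy).serve_forever()
```

Clients are then pointed at the proxy instead of Ollama directly, e.g. by setting the base_url of LangChain's Ollama wrapper to http://localhost:8000.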

Can anyone suggest a better way to achieve this?
