Passing Additional Information in LangChain abatch Calls
Given an abatch call on a LangChain chain, I need to pass additional information, beyond just the content, to the function so that this information is available in the callback, specifically in the on_chat_model_start method.
LangChain: how to prevent the language model response from being prefixed with AI: or Assistant:
I have the following LangChain prompt setup for a RAG-based chat application:
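This prefix usually appears when a chat-style prompt gets flattened into a single string and the model imitates the role labels it sees. Besides tightening the prompt, a defensive post-processing step can strip the label. A framework-free sketch (the regex and function name are my own):

```python
import re

# Matches a leading "AI:" or "Assistant:" label, case-insensitively.
ROLE_PREFIX = re.compile(r"^\s*(?:AI|Assistant)\s*:\s*", re.IGNORECASE)


def strip_role_prefix(text: str) -> str:
    """Remove a leading 'AI:' or 'Assistant:' label, if present."""
    return ROLE_PREFIX.sub("", text, count=1)


print(strip_role_prefix("AI: Paris is the capital of France."))
# Paris is the capital of France.
```

In an LCEL chain this can be appended as a final step, e.g. chain | RunnableLambda(strip_role_prefix).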
How to Migrate Code from LangChain 0.0.145 to Version >0.1 (Breaking Changes) #22481
Langchain with Redis responding only to the previous question
I am using LangChain with Redis as the persistence layer. It mostly works, but I see strange behavior: the chain responds to the previous question rather than the current one.
How to use a finetuned LLM with langchain?
I’m working on finetuning an LLM such as Llama 2 or Llama 3 for a specific task: recommending questionnaires based on the purpose of the study and the data researchers want to collect.
system prompt repeats in the response after upgrading LangChain to 0.1.20
I’m using the Llama 3 8B model in my RAG architecture. Before upgrading LangChain, model responses were good, but after upgrading to 0.1.20 the system prompt and context are repeated in the response.
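With Hugging Face text-generation pipelines, this symptom usually means the pipeline is returning the full text, prompt included; passing return_full_text=False when building the transformers pipeline is the usual fix. As a framework-free fallback, the echoed prompt can also be stripped manually (the function name and sample strings are my own):

```python
def strip_prompt_echo(prompt: str, completion: str) -> str:
    """If the model echoed the prompt verbatim, drop it from the completion."""
    if completion.startswith(prompt):
        return completion[len(prompt):].lstrip()
    return completion


sent_prompt = "System: be terse.\nUser: hi\n"
raw_output = "System: be terse.\nUser: hi\nHello there!"
print(strip_prompt_echo(sent_prompt, raw_output))
# Hello there!
```

This only handles a verbatim echo; if the model paraphrases the system prompt, the underlying prompt template needs fixing instead.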
embedding(): argument ‘indices’ (position 2) must be Tensor, not ChatPromptValue
I want to add message history for my agent in LangChain, and I’m following the LangChain documentation.
LangChain – MessageHistory
How to invoke a RAG chain with two inputs?
I’m trying to invoke the RAG chain with two input variables, but I’m getting an error.
Which LangChain chain should I use to create a JSON object from raw text?
I have raw invoice text, and I’m trying to produce a desired JSON object from it using any open-source or paid model.
I’m stuck: I have a prompt, but I don’t know which LangChain chain to use for this task.