How to properly use the Claude API prompt cache to store a prompt
I am running the same instructions every time, just on different user input, so I am using the beta prompt caching feature to reduce the input token count.
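For reference, the usual pattern is to put the shared instructions in a `system` content block marked with `cache_control`, so repeated calls reuse the cached prefix. This is a minimal sketch, assuming the `anthropic` Python package and an API key in `ANTHROPIC_API_KEY`; the model ID and instruction text are placeholders:

```python
# Sketch: prompt caching with the Anthropic Python SDK.
# Only the request payload is built here; the actual call is commented out.

SHARED_INSTRUCTIONS = "You are an analyst. Apply the same rubric to each input."

def build_request(user_input: str) -> dict:
    """Build kwargs for client.messages.create with a cacheable system prompt."""
    return {
        "model": "claude-3-5-sonnet-20241022",  # placeholder model id
        "max_tokens": 1024,
        "system": [
            {
                "type": "text",
                "text": SHARED_INSTRUCTIONS,
                # Marks the end of the cacheable prefix; everything up to
                # this block can be served from the (short-lived) cache
                # on subsequent calls with the identical prefix.
                "cache_control": {"type": "ephemeral"},
            }
        ],
        # Only this part changes per call, so it is not cached.
        "messages": [{"role": "user", "content": user_input}],
    }

# To send it:
# import anthropic
# client = anthropic.Anthropic()
# response = client.messages.create(**build_request("first user input..."))
```

Note that caching only pays off when the prefix up to the `cache_control` marker is byte-identical between calls, so keep the instructions completely static and put all per-request variation in `messages`.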
Optimizing LLM-based Analysis of Multiple Interview Transcripts
I’m developing a project to analyze multiple interview transcripts using Large Language Models (LLMs), specifically Claude 3.5 via the Claude API. While I’ve successfully analyzed individual interviews, I’m facing challenges in optimizing the process and maintaining context across multiple interviews, particularly given the stateless nature of the API. Here are my main questions:
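Since each API call is independent, one common workaround is to carry a running set of per-interview summaries forward in the prompt itself. This is only a sketch of that pattern; the function name and prompt wording are illustrative, not part of any SDK:

```python
# Sketch: threading cross-interview context through a stateless API.
# Each call gets the summaries of earlier interviews prepended to the
# new transcript; after the call, the model's summary of this transcript
# would be appended to prior_summaries for the next iteration.

def make_messages(transcript: str, prior_summaries: list[str]) -> list[dict]:
    """Build a messages list that includes context from earlier interviews."""
    context = "\n\n".join(
        f"Summary of interview {i + 1}:\n{s}"
        for i, s in enumerate(prior_summaries)
    )
    prompt = (
        (f"Context from earlier interviews:\n{context}\n\n" if context else "")
        + f"New transcript to analyze:\n{transcript}"
    )
    return [{"role": "user", "content": prompt}]
```

The trade-off is that the context block grows with each interview, so for long studies you would summarize aggressively or batch interviews; combining this with a cached system prompt keeps the static instructions cheap while only the summaries and transcript vary.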
Multiple-interview analysis using the Claude API
I’m a newbie in the LLM world and doing some research of my own.
I have multiple text documents, all related to interviews I did with my colleagues.