Best practices for handling prompts that are too long for an LLM API (e.g., Anthropic, OpenAI)?
I am working with the Anthropic API to process text prompts, but I keep encountering the following error when my prompt exceeds the maximum token limit:
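To show roughly what I'm doing now, here is a minimal sketch of a pre-flight length check I've tried. All names (`MAX_INPUT_TOKENS`, `CHARS_PER_TOKEN`, the helper functions) are my own placeholders, and the 4-characters-per-token ratio is only a crude heuristic, not the provider's real tokenizer:

```python
# Rough guard against over-long prompts before calling the API.
# The chars-per-token ratio is a heuristic for English text; for exact
# counts you'd use the provider's tokenizer or token-counting endpoint.

MAX_INPUT_TOKENS = 100_000  # assumed limit; check your model's actual context window
CHARS_PER_TOKEN = 4         # crude average, not exact

def estimate_tokens(text: str) -> int:
    """Cheap estimate of the token count based on character length."""
    return len(text) // CHARS_PER_TOKEN + 1

def truncate_to_budget(text: str, max_tokens: int = MAX_INPUT_TOKENS) -> str:
    """Trim the prompt so the estimate fits the budget, keeping the start."""
    budget_chars = max_tokens * CHARS_PER_TOKEN
    return text if len(text) <= budget_chars else text[:budget_chars]
```

This avoids the error, but blindly truncating loses context from the end of the prompt, so I'm hoping there is a better-established pattern (chunking, summarization, retrieval, etc.).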