How does the LangChain AgentExecutor serve the LLM?
I followed [this tutorial](https://github.com/zenml-io/zenml-projects/tree/main/llm-agents).
Issue: Token Generation Halts with Emoji in Text Using LangChain Agents and Mistral Large
When I wrote a prompt asking the LLM to "reply with emoji", I noticed that generation halts at the exact point where an emoji token should be produced. Likewise, if the text sent to the LLM already contains an emoji, generation stops as soon as the model reaches that emoji.
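Here is a minimal sketch of the kind of setup I mean (not the exact tutorial code): a tool-calling agent backed by Mistral Large, with a system prompt that asks for emoji in replies. The model name, the prompt wording, and the `echo` tool are placeholders for illustration.

```python
# Minimal reproduction sketch (assumptions: mistral-large-latest model name,
# a placeholder echo tool, and a simplified prompt instead of the tutorial's).
from langchain.agents import AgentExecutor, create_tool_calling_agent
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.tools import tool
from langchain_mistralai import ChatMistralAI


@tool
def echo(text: str) -> str:
    """Return the input text unchanged (placeholder tool)."""
    return text


llm = ChatMistralAI(model="mistral-large-latest")  # assumed model name

prompt = ChatPromptTemplate.from_messages(
    [
        ("system", "You are a helpful assistant. Reply with emoji."),
        ("human", "{input}"),
        ("placeholder", "{agent_scratchpad}"),
    ]
)

agent = create_tool_calling_agent(llm, [echo], prompt)
executor = AgentExecutor(agent=agent, tools=[echo])

# Output is cut off where an emoji would appear, e.g. with input that
# already contains one:
print(executor.invoke({"input": "Say hello 👋 and add a smiley."}))
```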