Token Limit Management

When you work with large language model (LLM) APIs such as OpenAI's gpt-4, each API call is bounded by the model's maximum token limit (for example, 8,192 or 128,000 tokens, depending on the model). As conversations grow longer, especially when you maintain history for context, you can quickly approach or exceed these limits.
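
One common way to stay under the limit is to count the tokens in the conversation history and drop the oldest turns before each call. The sketch below assumes the `tiktoken` library for token counting; `MAX_CONTEXT_TOKENS`, `RESPONSE_RESERVE`, `count_tokens`, and `trim_history` are illustrative names, not part of any official API, and the per-message overhead is only an estimate.

```python
# A minimal sketch: count tokens with tiktoken and drop the oldest
# non-system messages until the conversation fits under a budget.
import tiktoken

MAX_CONTEXT_TOKENS = 8192   # context window of the target model (e.g., gpt-4)
RESPONSE_RESERVE = 1024     # leave room for the model's reply

def count_tokens(messages, model="gpt-4"):
    """Approximate token count for a list of chat messages."""
    enc = tiktoken.encoding_for_model(model)
    total = 0
    for message in messages:
        # Rough per-message overhead for the chat format; the exact
        # value varies by model, so treat this as an estimate.
        total += 4
        total += len(enc.encode(message.get("role", "")))
        total += len(enc.encode(message.get("content", "")))
    return total

def trim_history(messages, budget=MAX_CONTEXT_TOKENS - RESPONSE_RESERVE):
    """Drop the oldest non-system messages until the history fits."""
    trimmed = list(messages)
    while count_tokens(trimmed) > budget and len(trimmed) > 1:
        # Keep the system prompt (index 0) and remove the oldest turn after it.
        del trimmed[1]
    return trimmed

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "First question..."},
    {"role": "assistant", "content": "First answer..."},
    {"role": "user", "content": "Latest question..."},
]
messages = trim_history(messages)
```

Dropping whole turns keeps the request valid and preserves the most recent context; other strategies, such as summarizing older turns instead of discarding them, trade extra API calls for better long-range memory.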