LLM Knowledge Base

Token Limit

Token limit refers to the maximum number of tokens that a Generative AI model can process or generate in a single operation. The token limit is a crucial factor in determining the complexity and length of the content that the AI can handle (see also "Context Window").

LLM providers apply token limits in different ways. Some enforce a single limit on the total of input plus output tokens (e.g. OpenAI), while others enforce separate limits on input and output independently (e.g. Gemini).
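The combined-limit case can be sketched as a simple budget check. This is a minimal illustration, not a real tokenizer: the 4-characters-per-token heuristic and the function names are assumptions for the example; in practice you would count tokens with the provider's own tokenizer.

```python
def approx_token_count(text: str) -> int:
    # Rough heuristic: ~4 characters per token for English text.
    # Real tokenizers (BPE-based) vary; use the provider's tokenizer in practice.
    return max(1, len(text) // 4)

def fits_combined_limit(prompt: str, max_output_tokens: int, token_limit: int) -> bool:
    # Combined-limit model: input tokens plus the output tokens you reserve
    # must together stay within the single token limit.
    return approx_token_count(prompt) + max_output_tokens <= token_limit

prompt = "Summarize the following article in three sentences: ..."
print(fits_combined_limit(prompt, max_output_tokens=500, token_limit=4096))
```

Under a separate-limits scheme, the same check would instead compare the prompt against the input limit and the reserved output against the output limit independently.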

PROMPTMETHEUS © 2024