LLM Knowledge Base

Token

In the context of Large Language Models, a token is the smallest unit of data that a model can understand and process. A token can be as short as a single character or as long as a whole word; most modern tokenizers split text into sub-word pieces, and the exact segmentation depends on the language and the specific model. Tokens break input text down into manageable pieces, enabling the model to analyze, understand, and generate language.
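As a concrete illustration, the sketch below uses OpenAI's open-source tiktoken library, assuming it is installed (this is one tokenizer among many; other model families ship their own, so the choice is an assumption made for illustration):

```python
# A minimal sketch using tiktoken (pip install tiktoken).
# "cl100k_base" is one example encoding; other models use different tokenizers.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")

text = "Tokens are the smallest unit of data a model can process."
token_ids = enc.encode(text)                   # text -> list of integer token IDs
tokens = [enc.decode([t]) for t in token_ids]  # decode each ID back to its text piece

print(token_ids)  # integer IDs, one per token
print(tokens)     # the corresponding text pieces, e.g. whole words and sub-words
```

Running this shows that common English words often map to a single token each, while rarer words are split into several sub-word tokens.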

For most state-of-the-art models, one token corresponds on average to roughly four characters of English text.
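As a rough illustration of this rule of thumb, the snippet below (a hypothetical helper, not part of any library) estimates a token count from character length:

```python
# Quick token-count estimate using the ~4 characters-per-token rule of thumb.
# This is only an approximation; the exact count depends on the model's tokenizer.
def estimate_tokens(text: str, chars_per_token: float = 4.0) -> int:
    return max(1, round(len(text) / chars_per_token))

prompt = "How many tokens does this prompt roughly contain?"
print(estimate_tokens(prompt))  # a rough estimate, not an exact count
```

An estimate like this is useful for quick cost or context-window budgeting; for exact counts, run the model's actual tokenizer.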
