Input tokens are the individual units of information fed into an AI model as part of the prompt for processing. Depending on the model's tokenizer, these units can be whole words, characters, or subwords. They serve as the basis for the model to understand, analyze, and generate output. The number of input tokens largely determines the computational cost of processing a prompt, as more tokens require more processing power.
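As a rough illustration, the sketch below counts the input tokens in a prompt using the open-source tiktoken library. The encoding name is an assumption for demonstration purposes; each model family ships its own tokenizer, so the exact count varies between models.

```python
import tiktoken

# "cl100k_base" is an assumed encoding for illustration; real models
# use their own tokenizers, so counts differ from model to model.
encoding = tiktoken.get_encoding("cl100k_base")

prompt = "Input tokens are the units an LLM reads."
token_ids = encoding.encode(prompt)  # text -> list of token IDs

print(f"Token IDs: {token_ids}")
print(f"Input token count: {len(token_ids)}")
```

Note that the token count is usually higher than the word count, since a single word is often split into several subword tokens.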
Input tokens count against a model's token limit, which may be a combined limit on input and output tokens (the context window) or a separate limit on input tokens alone.
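A minimal sketch of how such a limit check might look, assuming a combined (input + output) limit; the window size, output reservation, and encoding name are hypothetical values chosen for illustration:

```python
import tiktoken

encoding = tiktoken.get_encoding("cl100k_base")  # assumed encoding

CONTEXT_WINDOW = 8192  # hypothetical total limit: input + output tokens
MAX_OUTPUT = 1024      # hypothetical budget reserved for the reply

def fits_in_context(prompt: str) -> bool:
    """Check whether a prompt leaves enough room for the reply
    within the combined (input + output) token limit."""
    input_tokens = len(encoding.encode(prompt))
    return input_tokens + MAX_OUTPUT <= CONTEXT_WINDOW

print(fits_in_context("Summarize the following report..."))  # True
```

Consult your model's documentation for its actual context window and whether input tokens are limited separately from output tokens.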