Bite-sized explanations for commonly used terms and abbreviations related to Large Language Models and Generative AI.
AI Agent
AI Programming Interface (AIPI)
AI Safety
Artificial General Intelligence (AGI)
Artificial Intelligence (AI)
Chatbot
Context
Context Window
Data Loader
Deep Learning
Evaluation
Few-shot Prompt
Fine-tuning
Foundation Model
Frequency Penalty
Generative AI
Generative Pre-trained Transformer (GPT)
Inference
Input Token
Integrated Prompting Environment (IPE)
Jailbreak
Language Processing Unit (LPU)
Large Language Model (LLM)
LLM API
LLM API Provider
LLM OS
LLM Priming
Logit
Machine Learning (ML)
Natural Language Processing (NLP)
One-shot Prompt
Open-source Model
Open-weights Model
Output Token
Perceptron
Playground
Presence Penalty
Prompt
Prompt Chaining
Prompt Engineering
Prompt IDE
Prompt Injection Attack
Prompt Optimization
Proprietary Model
Reinforcement Learning from Human Feedback (RLHF)
Retrieval-Augmented Generation (RAG)
System Message
Temperature
Tensor Processing Unit (TPU)
Token
Token Limit
Tokenization
Top P
Transformer
Vector Embedding
Zero-shot Prompt
Resources
Prompt Engineering Guide
Everything you need to know about Prompt Engineering on one page
LLM Index
Overview of supported providers and models
Prompt Engineering Demo
A short video introduction to how the Promptmetheus IDE works