Bite-sized explanations for commonly used terms and abbreviations related to Large Language Models and Generative AI.
AI Agent
AI Programming Interface (AIPI)
AI Safety
Artificial General Intelligence (AGI)
Artificial Intelligence (AI)
Atomic Prompt
Attention
Chain-of-Thought Prompting
Chatbot
Compound Prompt
Context
Context Window
Data Loader
Deep Learning
Direct Preference Optimization (DPO)
Evaluation
Evaluation-driven Development
Few-shot Prompt
Fine-tuning
Foundation Model
Frequency Penalty
Generative AI
Generative Pre-trained Transformer (GPT)
GPT Wrapper
Graph RAG
Inference
Input Token
Integrated Prompting Environment (IPE)
Jailbreak
Language Processing Unit (LPU)
Large Language Model (LLM)
LLM API
LLM API Provider
LLM OS
LLM Priming
Logit
Lost-in-the-Middle Effect
Machine Learning (ML)
Natural Language Processing (NLP)
Neural Network
One-shot Prompt
Open-source Model
Open-weights Model
Output Token
Perceptron
Playground
Presence Penalty
Prompt
Prompt Chaining
Prompt Engineering
Prompt IDE
Prompt Injection Attack
Prompt Optimization
Proprietary Model
Proximal Policy Optimization (PPO)
Reinforcement Learning from Human Feedback (RLHF)
Retrieval-Augmented Generation (RAG)
Role Prompting
Supervised Fine-Tuning (SFT)
System Message
Temperature
Tensor Processing Unit (TPU)
Token
Token Limit
Tokenization
Top P
Transformer
Vector Embedding
Vector Search
Vector Store
Zero-shot Prompt