The goal of this guide is to distill everything you need to know to get started with Prompt Engineering down to a single page. No technical knowledge is required, and all the essential questions will be answered. Once you're done reading, you'll be ready to tackle your first Prompt Engineering project.
A few notes:
- All technical terms are deep-linked to their respective definitions in our LLM Knowledge Base. If you don't know what a certain term means, just click on it to get a bite-sized explanation.
- ...
Let's start with the basics.
What is a GPT and how does it work?
A Generative Pre-trained Transformer (GPT) is a specific type of Large Language Model (LLM) that uses Deep Learning to generate human-like text.
The best way to understand how a GPT works under the hood is to stop right here and invest 27 minutes of your time in watching this excellent video by 3Blue1Brown:
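If you'd rather see the core idea in code, here is a minimal, deliberately simplified Python sketch of the loop every GPT runs: predict the next token, append it to the text so far, and repeat. The `predict_next_token` function and the tiny `VOCAB` list are placeholders invented for this illustration; a real GPT replaces the random choice with probabilities computed by its transformer layers over a vocabulary of tens of thousands of subword tokens.

```python
import random

# Toy vocabulary standing in for a real model's vocabulary of subword tokens.
VOCAB = ["the", "cat", "sat", "on", "a", "mat", "<end>"]

def predict_next_token(tokens: list[str]) -> str:
    # Placeholder: a real GPT runs the whole token sequence through its
    # transformer layers and samples from the resulting probability distribution.
    return random.choice(VOCAB)

def generate(prompt: str, max_new_tokens: int = 20) -> str:
    # Real models use subword tokenizers; whitespace splitting is a simplification.
    tokens = prompt.split()
    for _ in range(max_new_tokens):
        next_token = predict_next_token(tokens)
        if next_token == "<end>":  # models emit a special stop token when they are done
            break
        tokens.append(next_token)  # the new token becomes part of the context for the next step
    return " ".join(tokens)

print(generate("Once upon a time"))
```

The output here is gibberish, of course, but the structure is the point: everything a GPT produces is generated one token at a time, with each new token conditioned on the prompt plus everything generated so far. That is exactly why the prompt matters so much, and why Prompt Engineering works.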
What is Prompt Engineering and why do I need it?
Coming soon...
Which LLM API provider should I choose for my project?
Coming soon...
How do I select the right model?
Take a look at our related post "How to choose the right LLM for your use case".
What is the difference between one-shot prompts and agents?
Coming soon...
What is RAG and when is it useful?
Coming soon...