LLM Knowledge Base

Prompt Chaining

Prompt chaining refers to the process of using the output from one LLM completion as the input, or "prompt", for another completion in a sequence. This technique allows for the creation of more complex and nuanced responses, as each subsequent completion can build upon and refine the previous one. It is often used in natural language processing tasks, such as text generation, to improve the coherence and relevance of the generated content. In principle there is no limit on how many LLM completions can be chained together, and a different model can be used for each step in the chain.
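A minimal sketch of the idea, assuming a hypothetical `call_llm` function standing in for whatever model API is actually used (here it just echoes its input so the chaining logic is runnable offline):

```python
def call_llm(prompt: str) -> str:
    # Hypothetical stand-in for a real LLM API call.
    # A real implementation would send `prompt` to a model endpoint;
    # this stub echoes so the example runs without network access.
    return f"[model output for: {prompt}]"


def chain_prompts(initial_prompt: str, step_templates: list[str]) -> str:
    """Run a chain of completions, feeding each output into the next prompt.

    Each template in `step_templates` must contain a `{previous}`
    placeholder, which is filled with the prior completion's output.
    """
    output = call_llm(initial_prompt)
    for template in step_templates:
        output = call_llm(template.format(previous=output))
    return output


result = chain_prompts(
    "Summarize the article in one paragraph.",
    [
        "Refine this summary for clarity: {previous}",
        "Rewrite the refined summary as three bullet points: {previous}",
    ],
)
print(result)
```

Each step could just as easily call a different model, which is often done in practice, e.g. a cheap model for a first draft and a stronger model for refinement.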