Prompting Methodology | Acronym | Description | Input Example | Landmark paper |
---|---|---|---|---|
Input–Output Prompting | IOP | The classic form of prompting: simple input, simple output | “Tell me what an LLM is.” | (P. Liu et al., 2021) |
Chain-of-Thought Prompting | CoT | The AI is instructed to reason step by step, spelling out how it arrives at a given response | “Take a deep breath and tell me step-by-step how to solve this problem.” | (Wei et al., 2023) |
Role-Play or Expert-Prompting | EP | The AI is asked to assume the role of a person or an expert before providing an answer | “Imagine that you are a particle physicist who knows everything about quantum physics. Now give me an introduction to neutrinos.” | (Xu et al., 2023) |
Self-Consistency Prompting | SC | The AI generates several responses and then judges for itself which one is the best answer | “Provide me step-by-step with five ideal answers and discuss which one would be the best. Explain why.” | (Wang et al., 2023) |
Automatic Prompt Engineer | APE | The AI model is provided with several examples and asked to find an ideal prompt that would produce them (we can then work further with the resulting prompt) | “Here are some images. Please tell me what a good prompt would look like to generate pictures in this style.” | (Zhou et al., 2023) |
Generated Knowledge Prompting | GKn | Before prompting the AI with our actual task, we first let the model generate knowledge about the topic so that it has already set the scene for its responses | “Provide me with ten facts about dolphins. Then, using these facts, write a poem about dolphins that is factually accurate.” | (Liu et al., 2022) |
Tree-of-Thought Prompting | ToT | The AI is given a complex setting in which it explores its arguments like a chess game: it follows several lines of thought, backtracks when it finds inconsistencies, and eventually converges on the best response | There is no simple example of ToT prompting (see below): first, the ToT context is provided; second, the task is given within the confines of that context | (Yao et al., 2023) |
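The self-consistency (SC) approach from the table can be sketched programmatically: sample the model several times and keep the majority answer. The sketch below uses a stubbed sampler in place of a real LLM call; `self_consistency` and `fake_sampler` are illustrative names, not part of any library, and a real implementation would send a chain-of-thought prompt with non-zero temperature on each call.

```python
from collections import Counter
from itertools import cycle

def self_consistency(sample_answer, n_samples=5):
    """Self-consistency: draw several chain-of-thought answers
    and return the most frequent final answer (majority vote)."""
    answers = [sample_answer() for _ in range(n_samples)]
    return Counter(answers).most_common(1)[0][0]

# Stub standing in for an LLM call; it deterministically cycles
# through canned answers so the example is runnable offline.
_fake_answers = cycle(["42", "42", "41", "42", "40"])

def fake_sampler():
    return next(_fake_answers)

print(self_consistency(fake_sampler, n_samples=5))  # majority answer: 42
```

The vote over the five sampled answers (“42”, “42”, “41”, “42”, “40”) yields “42”, illustrating how sampling diversity plus aggregation can filter out occasional reasoning errors.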