Prompting techniques

Prompting techniques based on provided examples

The "x-shot" terminology in the context of prompting language models like GPT refers to the number of examples, or "shots", provided to the model within the prompt. Let's break down the concept:

Zero-shot prompts

A zero-shot prompt is a way of interacting with GPT where you provide a prompt or instruction without explicit examples or training data for that particular task.

Prompt: Tell a joke.

Since no example is provided, the model would answer based on its pre-trained knowledge of what jokes are like. The answer might be something like this:

Output: What do you call fake spaghetti? An impasta! 🍝😄

Here are a few more examples of zero-shot prompts:

Prompt: Summarize the following paragraph: [Insert paragraph here].

Prompt: Translate the text to French.

Prompt: Classify a movie into drama, comedy or romance. Movie: Titanic

Prompt: Tell me what type this Pokémon is: [Insert Pokémon name].
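
If you're sending zero-shot prompts through an API rather than a chat UI, the whole request is just the instruction. Here's a minimal sketch using the OpenAI Python SDK; the model name is an assumption, so substitute whichever chat model you have access to.

from openai import OpenAI

client = OpenAI()  # assumes the OPENAI_API_KEY environment variable is set

# Zero-shot: a single instruction, no examples.
response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model name
    messages=[{"role": "user", "content": "Tell a joke."}],
)
print(response.choices[0].message.content)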

One-shot prompts

A one-shot prompt is similar to a zero-shot prompt in that it instructs a language model like GPT to perform a task, but with one difference: you provide the model with a single example or demonstration of the task you want it to perform.

Prompt: Tell a joke. For example: Knock, knock! Who's there? [...]

Since you provided an example, the model may have a better idea of what its task is:

Output: Knock, knock! Who's there? Lettuce. Lettuce who? Lettuce in, it's cold out here! 😄

Here are more examples of one-shot prompts:

Prompt: Translate the following English text to French: ‘Hello, how are you?’ Example: ‘Good morning’ in French is ‘Bonjour’.

Prompt: Compose a poem about the beauty of a sunrise. For example, a poem about the moon might go: ‘Glowing orb in the night’s sky, casting shadows, shining bright.’

Prompt: Classify the Pokémon into categories. Provide information such as its Generation, Type, and any additional relevant details. For example: Pikachu – Generation I, Electric type.
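
Programmatically, a one-shot prompt simply embeds that single demonstration in the prompt text. A minimal sketch using the OpenAI Python SDK (the model name is an assumption):

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set

# One-shot: the prompt carries exactly one demonstration of the task.
one_shot_prompt = (
    "Translate the following English text to French: 'Hello, how are you?'\n"
    "Example: 'Good morning' in French is 'Bonjour'."
)
response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model name
    messages=[{"role": "user", "content": one_shot_prompt}],
)
print(response.choices[0].message.content)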

Few-shot prompts

Few-shot prompting involves providing the model with a small number of examples to guide its understanding of a specific task. This additional context helps the model generalize more effectively.

Prompt
Here are some quotes from Shakespeare's plays.
1. "To be, or not to be, that is the question:"
2. "All the world's a stage, and all the men and women merely players."
3. "We are such stuff as dreams are made on, and our little life is rounded with a sleep."
Generate text on making breakfast in the style of Shakespeare.
Output
To cook, or not to cook, that is the morning quandary: Whether 'tis nobler in the mind to suffer the grumbles and rumbles of an empty stomach, Or to take arms against a sea of hunger, and by opposing, end it? To fry: to scramble; No more; and by a scramble, to say we end the heartburn and the thousand natural pangs That flesh is heir to; 'tis a consummation devoutly to be wished.

Here are more examples of few-shot prompts:

Prompt
If the user asks you to wait (utterances like: A moment, please! / Just a second. / Hold on. etc.), answer with "Sure, take your time, no problem."
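
In code, the few-shot examples can simply be folded into the prompt text. A minimal sketch using the OpenAI Python SDK, reusing the Shakespeare example above (the model name is an assumption):

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set

# Few-shot: a handful of examples precede the actual request.
quotes = [
    '"To be, or not to be, that is the question:"',
    '"All the world\'s a stage, and all the men and women merely players."',
    '"We are such stuff as dreams are made on, and our little life is rounded with a sleep."',
]
prompt = (
    "Here are some quotes from Shakespeare's plays.\n"
    + "\n".join(f"{i}. {q}" for i, q in enumerate(quotes, start=1))
    + "\nGenerate text on making breakfast in the style of Shakespeare."
)
response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model name
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
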
Multi-shot prompts

Multi-shot, or N-shot, learning is based on the idea of providing the model with N examples, or shots, of a particular task or concept.

The more diverse and representative your examples are, the better the model can grasp the underlying pattern.

Prompt
Classify the sentiment of a given sentence. Focus on the nuances, for example:
    The new game is pretty shit. Meaning: Negative
    Turn up the radio! That's my shit. Meaning: Positive
    Girrrl, I live. I've almost shit myself. Meaning: Positive
    What type of shit is this? Meaning: Ambiguous
    You must try these fries. They're the shit! Meaning: Positive
    What a mess! Pick your shit and clean up! Meaning: Negative
    DnD? That's that rolling dice, role-playing elves, orcs and shit, right? Meaning: Neutral
Sentence to classify: I don't remember shit, bro! 
Output
The sentiment of the sentence "I don't remember shit, bro!" can be classified as Neutral. This classification aligns with the context of expressing a lack of memory without a strong positive or negative emotional tone.
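
Instead of packing all N examples into one prompt string, you can also supply them as alternating user/assistant message pairs, which chat models treat as prior turns to imitate. A minimal sketch of that pattern, reusing a few of the sentiment examples above (the model name is an assumption):

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set

# Each (sentence, label) pair becomes a user turn followed by an assistant turn.
examples = [
    ("The new game is pretty shit.", "Negative"),
    ("Turn up the radio! That's my shit.", "Positive"),
    ("What a mess! Pick your shit and clean up!", "Negative"),
]
messages = [{
    "role": "system",
    "content": "Classify the sentiment of the given sentence. Focus on the nuances. Respond with one word.",
}]
for sentence, label in examples:
    messages.append({"role": "user", "content": sentence})
    messages.append({"role": "assistant", "content": label})
messages.append({"role": "user", "content": "I don't remember shit, bro!"})

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model name
    messages=messages,
)
print(response.choices[0].message.content)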

Be aware that language models may struggle with ambiguous or poorly defined tasks. Provide enough context in your examples to guide the model in the right direction.

While providing multiple examples is a powerful strategy for improving the chances of getting the desired output, it's not necessarily a guarantee of 100% success.

The process often involves iterations and fine-tuning. Even with multiple examples, it might be necessary to adjust the prompt, refine the examples, or experiment with different approaches.

Purpose of x-shot prompting

  • Flexibility:

    X-shot prompting provides a way to interact with language models at varying levels of specificity. Depending on the task and the complexity of the instruction, you can choose the appropriate number of shots.

  • Guidance for the Model:

    The number of shots helps guide the model's understanding of the task. Zero-shot relies solely on pre-existing knowledge, while one-shot and few-shot prompts provide specific examples to influence the model's behaviour.

  • Task Adaptability:

    X-shot prompting allows the model to adapt to a wide range of tasks without extensive task-specific training. It leverages the model's pre-trained knowledge and generalization abilities.

  • User Control:

    Users can control the level of specificity and guidance they provide to the model based on the task requirements. This gives users a versatile tool for various natural language processing tasks.


Prompting techniques based on chaining

Chaining techniques break a task down into smaller sub-steps and link them together in a series, guiding the model's responses in a coherent and context-aware manner.

Chain-of-thought prompting

The chain-of-thought prompting technique is a powerful way to guide the conversation coherently and logically. Within a single prompt, it builds gradually on the previous parts of a response, creating a natural flow of thoughts and guiding the model to follow a coherent chain of reasoning in one continuous interaction.

Prompt:
You are a virtual assistant helping a user plan their day. [...] Once you have this information, [...]. Additionally, inquire about [...] and how the user plans to [...].

🧠 Here's a step-by-step guide on how you can effectively use this technique:

  • Begin with a clear and concise introduction to set the context for the conversation. This helps both you and the model understand the focus.

Start of a prompt
Prompt:
You are a software developer working on a new project. Describe briefly the initial steps you would take to plan and organize the development process.
  • Use the initial part of the response as a foundation for generating a coherent continuation.

Continuation #1
Once you have the project requirements, how would you prioritize tasks in the roadmap?
  • Elaborate. Encourage the model to provide more detailed and specific information by asking follow-up questions.

Continuation #2
Can you elaborate on how you identify dependencies and manage the critical path in a software development project?
  • Introduce scenarios or challenges. Incorporate realistic scenarios or challenges to test the model's problem-solving skills.

Continuation #3
Imagine you encounter a situation where a critical task is delayed. How would you adjust the project plan to mitigate the impact?
  • If the model provides a vague or unclear response, guide it by specifying the type of information you're looking for. This helps to get more precise answers.

Final prompt
Prompt:
You are a software developer working on a new project. Describe briefly the initial steps you would take to plan and organize the development process. Once you have the project requirements, how would you prioritize tasks in the roadmap? Can you elaborate on how you identify dependencies and manage the critical path in a software development project? Imagine you encounter a situation where a critical task is delayed. How would you adjust the project plan to mitigate the impact?
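
Because the chain is built incrementally, it can be convenient to keep the introduction and each continuation as separate pieces and join them only when sending the final prompt. A minimal sketch of that assembly using the OpenAI Python SDK (the model name is an assumption):

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set

# The chain of thoughts: an introduction plus the continuations built on top of it.
chain = [
    "You are a software developer working on a new project. Describe briefly the initial steps you would take to plan and organize the development process.",
    "Once you have the project requirements, how would you prioritize tasks in the roadmap?",
    "Can you elaborate on how you identify dependencies and manage the critical path in a software development project?",
    "Imagine you encounter a situation where a critical task is delayed. How would you adjust the project plan to mitigate the impact?",
]
response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model name
    messages=[{"role": "user", "content": " ".join(chain)}],
)
print(response.choices[0].message.content)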

More chain-of-thought prompt examples:

Prompt:
You're a Pokédex expert tasked with providing info about Pokémon.
Check the user's utterance: {utterance}, to see if any Pokémon are mentioned.
Which of these types: fire, water, grass, electric, normal, rock, ground, fairy, bug, psychic, flying, steel, legendary, ice, fighting, poison, or ghost is the mentioned Pokémon? Respond only with the name of the type (one word, lowercase). If no Pokémon is mentioned, respond "none". If the Pokémon has more than one type, respond with <all-listed-comma-separated-types>. If the Pokémon is of a type other than those listed, respond "none".
Prompt:
You're a skilled translator and a playful, creative copywriter who knows many languages. Translate the given input into French. Since the output is meant for a younger audience, feel free to tone it up a little. Considering the sentiment of the translated sentence, you may add some emoji to positive statements.

Sentence to translate: {input}
Respond only with your translation.
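
Prompts like these, with placeholders such as {utterance} or {input}, can be filled in at runtime with ordinary string formatting before being sent to the model. A small sketch using the translator prompt above (the model name and the example input are assumptions):

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set

TRANSLATOR_PROMPT = (
    "You're a skilled translator and a playful, creative copywriter who knows many languages. "
    "Translate the given input into French. Since the output is meant for a younger audience, "
    "feel free to tone it up a little. Considering the sentiment of the translated sentence, "
    "you may add some emoji to positive statements.\n\n"
    "Sentence to translate: {input}\n"
    "Respond only with your translation."
)

# Fill the {input} placeholder, then send the completed prompt.
prompt = TRANSLATOR_PROMPT.format(input="What a wonderful day for a picnic!")
response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model name
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)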

Limitations:

  • CoT prompting heavily relies on the initial thought; if that is off-kilter, subsequent thoughts follow suit.

  • When the model reaches a dead end, it can't step back a few steps to reiterate the process and choose another route. For example, if the initial translation is wrong, the output is doomed to be unsatisfactory: tweaking the copywriting, adding emoji, or any other afterwork won't lead to the desired result.

Tree-of-thoughts prompting

Tree of Thoughts prompting is an innovative technique used in the realm of large language models (LLMs) to enhance their problem-solving capabilities.

🧠 Let's break down the basics for you:

  1. Thought Decomposition: ToT breaks down the problem-solving process into smaller thought steps. These thoughts should be substantial enough to evaluate their usefulness but small enough to generate diverse samples.

  2. Thought Generator: This part generates potential next thoughts for each state in the problem-solving tree. There are two strategies:

    • Independent Thoughts: Sample independent thoughts from a Chain of Thought (CoT) prompt. This works well for rich thought spaces like paragraphs.

    • Sequential Proposals: Propose thoughts sequentially using a “propose prompt.” This approach is better suited for constrained thought spaces like single words or lines.

  3. State Evaluator: Evaluate the progress made by each state in the tree. This serves as a heuristic for the search algorithm to decide which states to explore further. Two evaluation strategies are:

    • Value each state independently by reasoning about it and generating a scalar value or classification.

    • Vote across states by comparing different states and voting for the most promising one.

  4. Search Algorithm:

    • Breadth-First Search (BFS): Maintains a set of the most promising states per step. Useful for problems with limited tree depth.

    • Depth-First Search (DFS): Explores the most promising state first until the final output is reached or the state evaluator deems it impossible. DFS backtracks to the parent state for continued exploration.
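
To make these moving parts concrete, here is a minimal breadth-first sketch of the ToT loop in Python: generate candidate thoughts, score the resulting states, keep the most promising ones, and repeat. The helper names, the 0-10 scoring prompt, and the model name are all illustrative assumptions rather than a fixed recipe.

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set

def llm(prompt: str) -> str:
    # One completion call; the model name is an assumption, swap in any chat model.
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content.strip()

def generate_thoughts(problem: str, state: str, k: int = 3) -> list[str]:
    # Thought generator: propose k candidate next steps for the current partial solution.
    return [
        llm(f"Problem: {problem}\nSteps so far:\n{state}\nPropose the next step (one sentence).")
        for _ in range(k)
    ]

def score_state(problem: str, state: str) -> float:
    # State evaluator: ask the model how promising this partial solution looks (0-10).
    reply = llm(
        f"Problem: {problem}\nPartial solution:\n{state}\n"
        "Rate how promising this is on a scale of 0 to 10. Answer with the number only."
    )
    try:
        return float(reply.split()[0])
    except (ValueError, IndexError):
        return 0.0

def tree_of_thoughts(problem: str, depth: int = 3, beam: int = 2) -> str:
    # Breadth-first search: keep the `beam` most promising states at each level.
    frontier = [""]
    for _ in range(depth):
        candidates = []
        for state in frontier:
            for thought in generate_thoughts(problem, state):
                candidates.append(state + "- " + thought + "\n")
        candidates.sort(key=lambda s: score_state(problem, s), reverse=True)
        frontier = candidates[:beam]
    return frontier[0]

print(tree_of_thoughts("Plan a weekend getaway near the mountains on a small budget."))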

Example ToT prompt
System prompt:
You're a travel agent. Your role is to help users brainstorm ideas for trips, help them pick the right one, and plan the trip. Be brief in your answers. Respond in English.

User prompt:
Problem: You’re planning a weekend getaway in {general_area}. List some factors to consider when choosing a destination.
Step 1: For each factor, propose different destinations that align with it.
Step 2: Evaluate the pros and cons of each destination.
Step 3: Estimate and evaluate cost. Filter out the most expensive option.
Step 4: Rank the destinations based on fun/cost ratio and your preferences and priorities.
Step 5: Provide a brief resumé with recommendations of the top 3 destinations.

Limitations:

  • Dependency on Initial Prompts: ToT heavily relies on well-crafted prompts; poorly designed ones result in suboptimal exploration.

  • State Evaluation Heuristics: Effective state evaluators are crucial; inaccurate ones can lead to suboptimal exploration.

  • Lack of Global Context: ToT evaluates states locally, struggling with long-term planning or coordination across multiple steps.

  • Interpretable State Representations: Understanding state meanings in the tree can be challenging; transparent representations are desirable.

  • Trade-offs in Exploration Strategies: Choosing between BFS and DFS involves trading off broad exploration of many branches against deep commitment to a single promising path.

Prompt chaining

Prompt chaining is a technique that leverages large language models to accomplish tasks by breaking them into multiple smaller prompts. The output of one prompt serves as the input for the next, streamlining the interaction with the AI model. Think of it as assembling a series of building blocks to construct a complete solution.

How is it different from chain-of-thought prompting? While both techniques involve multiple steps, they serve different purposes: chain-of-thought prompting builds a coherent flow of reasoning within a single prompt, whereas prompt chaining splits the task across several separate prompts, with each output feeding the next.

Initial prompt 1
System prompt:
You are a librarian, and an expert on recommending books. Your role is to find the best book recommendation based on the user's interests.

User prompt:
This is user input: {user_input}
Based on the input, retrieve books from the database that align with the user's interests. The output is a list of books (name, author, year of publication).

The next step is to take the output of the initial prompt and use it as an input for the next in the chain.

Prompt 2
System prompt: [...]

User prompt:
Here's a list of recommended books: {list_of_book}
Arrange the list in alphabetical order by book name.

Let's add another prompt to the chain.

Prompt 3
System prompt: [...]

User prompt:
To each book in {arranged_list_of_books}, add from 1 to 5 stars indicating how good said book is (5 stars being the best).

And another one.

Prompt 4 (final)
System prompt: [...]

User prompt:
Write a short resumé on the recommended {arranged_list_of_books_with_stars}. Focus on what's common in all the books and the ways in which they're unique.
At the end, suggest other topics that may be interesting for the user to explore, in a short paragraph.

📔 Here's what the final output of chained prompts looks like with the initial input being "books about artificial intelligence".

Output
"Artificial Intelligence: A Modern Approach" by Stuart Russell and Peter Norvig - ★★★★☆
"Life 3.0: Being Human in the Age of Artificial Intelligence" by Max Tegmark - ★★★★★
"Superintelligence: Paths, Dangers, Strategies" by Nick Bostrom - ★★★★☆
"The Age of Em: Work, Love, and Life when Robots Rule the Earth" by Robin Hanson - ★★★☆☆
"The Master Algorithm: How the Quest for the Ultimate Learning Machine Will Remake Our World" by Pedro Domingos - ★★★★☆
Short resumé on the recommended books:

These books collectively offer a comprehensive exploration of artificial intelligence, covering topics ranging from its fundamental principles and modern approaches to the societal and ethical implications of its advancement. "Life 3.0" by Max Tegmark stands out with its visionary perspective on the future of humanity in the AI age, while "The Master Algorithm" by Pedro Domingos provides insights into the transformative potential of machine learning. "Artificial Intelligence: A Modern Approach" serves as a foundational resource for understanding the core concepts, while "Superintelligence" and "The Age of Em" delve into the potential risks and strategies associated with advanced artificial intelligence. Together, they present a nuanced view of AI, acknowledging both its promises and challenges.

Suggestions for other topics to explore:

- Ethical Considerations in Technology: Delve deeper into the ethical aspects of emerging technologies beyond AI.
- Sci-Fi and AI: Explore science fiction novels that envision the impact of artificial intelligence on society and individuals.
- Robotics and Automation: Extend your exploration into the world of robotics and automation, examining their roles in various industries.
- Cognitive Science: Understand the interdisciplinary nature of AI by exploring books on cognitive science and its intersection with technology.
- Future Technologies: Broaden your horizon by exploring books on other cutting-edge technologies shaping the future.
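
In application code, a chain like this is just a sequence of calls in which each response is substituted into the next prompt. A minimal sketch of the book-recommendation chain using the OpenAI Python SDK (the model name and the helper function are assumptions):

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set

SYSTEM = "You are a librarian, and an expert on recommending books."

def ask(user_prompt: str) -> str:
    # One link in the chain: a single system + user exchange.
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model name
        messages=[
            {"role": "system", "content": SYSTEM},
            {"role": "user", "content": user_prompt},
        ],
    )
    return response.choices[0].message.content

user_input = "books about artificial intelligence"

# Each step feeds its output straight into the next prompt.
books = ask(f"This is user input: {user_input}\n"
            "Retrieve books that align with the user's interests. "
            "Output a list of books (name, author, year of publication).")
arranged = ask(f"Here's a list of recommended books: {books}\n"
               "Arrange the list in alphabetical order by book name.")
rated = ask(f"To each book in {arranged}, add from 1 to 5 stars "
            "indicating how good said book is (5 stars being the best).")
summary = ask(f"Write a short resumé on the recommended {rated}. Focus on "
              "what's common in all the books and ways in which they're unique.")
print(summary)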

Limitations:

  • Complexity Management: Balancing prompt granularity can be challenging; overcomplicating or oversimplifying prompts affects performance. Each prompt in the chain also increases the overall time needed to reach the final output, adding latency to responses.

  • Dependency on Previous Outputs: Errors in early prompts propagate downstream. Requires careful design and testing.

Designed with prompt chaining in mind, our AI NODE modal allows you to effortlessly string together a system prompt and several user/assistant prompts. Start with a contextual system message, then seamlessly add prompts to create a fluid dialogue series or a step-by-step process. Dive in and elevate your chatbot conversations with simplicity and finesse! 🚀 Click here to learn all there is to know about the Generative AI node!
