Basics

For large language models (LLMs), prompting serves as a critical interface. It allows developers to provide these sophisticated AI systems with tailored instructions, enabling them to generate narratives, answer queries, and perform a wide range of linguistic tasks.

The accuracy of a large language model's output is highly dependent on the quality of the prompt provided. Well-designed prompts act as clear instructions that define the desired content. A concise and effective prompt goes beyond simply conveying instructions; it serves as a strategic roadmap, guiding the model to generate results that meet specific stylistic, tonal, and contextual requirements.

Mastering the art of prompting is a game-changer, allowing developers to fine-tune responsiveness, creativity, and task relevance.

What is a prompt?

💡 In the context of language models, a "prompt" serves as the input or instruction given to the model to generate a specific output. It acts as the catalyst for the model's response, guiding it toward producing text that aligns with the user's expectations.

A prompt can take various forms, ranging from a simple sentence to a detailed set of instructions. It is the user's means of conveying the desired task or information to the language model. A well-crafted prompt is crucial for achieving accurate and contextually relevant results. In the Flow Editor, you can leverage the AI Node to harness the power of generative language models. Within the modal, you can customize the behaviour of the generative model by providing specific instructions or prompts. These custom instructions guide the model in generating content tailored to your requirements.


Crafting effective prompts

In the realm of prompting, the art lies in constructing instructions that yield precise and relevant model outputs. This section explores key principles for crafting effective prompts, ensuring developers can harness the full potential of language models.

Here are some general guidelines:

Focus on clarity and conciseness

Prompts should be clear, concise, and devoid of unnecessary complexity. Precision in language enhances the model's ability to interpret and generate accurate responses. Avoid ambiguity and aim for straightforward instructions that align with the desired task.

Unclear: "Build a conversational script."

Clear: "Develop a chatbot script for assisting users with product inquiries."

Provide contextual information

Provide the necessary context for the model to understand the task. Context enhances the relevance and coherence of generated content. Include essential details that set the stage for the desired output, guiding the model in the right direction.

Insufficient context: "Answer questions."

Sufficient context: "Design responses for a chatbot to address customer queries about account management."

Balance specificity and generality

Strike a balance between specificity and generality. While specific prompts yield focused outputs, overly constraining the model might limit creativity. Tailor prompts to guide the model without stifling its ability to generate diverse and contextually relevant content.

Overly constrained: "Create a scenario for handling customer complaints about a specific product with a red logo."

Balanced: "Develop a scenario capable of addressing customer concerns and feedback related to our product line."

Don't fear experimentation

Foster a culture of experimentation by urging developers to explore diverse prompt formulations. Variations in wording, structure, and length open the door to a nuanced understanding of the model's responsiveness. This iterative approach serves as a dynamic tool for developers to uncover the most effective prompts tailored to specific tasks. Try variations like "Craft a conversation about..." and "Construct a dialogue for..." to observe how the chatbot adapts to different user inputs and scenarios.
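The iterative approach above can be sketched as a simple loop that runs the same task under several phrasings and compares the outputs side by side. In this sketch, `generate` is a hypothetical stand-in for whatever call invokes the language model on your platform:

```python
def generate(prompt):
    # Placeholder: in practice this would call your platform's
    # language model and return its response.
    return f"<model output for: {prompt}>"

# Variations in wording and structure for the same underlying task.
variants = [
    "Craft a conversation about returning a damaged product.",
    "Construct a dialogue for a customer returning a damaged product.",
]

# Compare how each formulation steers the model.
for prompt in variants:
    print(prompt, "->", generate(prompt))
```

Keeping the task fixed while varying only the wording makes it easy to attribute differences in output to the prompt itself.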

Adjust temperature and max tokens

Experiment with the temperature parameter to control the randomness of the output. Higher values (e.g., 0.8) encourage more randomness, while lower values (e.g., 0.2) produce more focused responses.

🌡️ Higher Temperature (e.g., 0.8):

  • Encourages more randomness and creativity.

  • Yields diverse and imaginative outputs.

  • May result in less coherent or unexpected responses.

  • Best for: Creative writing, brainstorming

❄️ Lower Temperature (e.g., 0.2):

  • Produces more focused and deterministic responses.

  • Generates contextually aligned and predictable outputs.

  • Best for: Tasks requiring consistency, precise responses

Be mindful of the max tokens parameter to control the length of the generated content. Adjust it based on your desired response length.

Higher Max Tokens:

  • Generates longer responses with more details.

  • This may result in verbose outputs.

Lower Max Tokens:

  • Produces shorter, concise responses.

  • Ensures content remains within specified length limits.
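As an illustration, both parameters are typically passed alongside the prompt in a request to the model. The sketch below assumes the common parameter names `temperature` and `max_tokens`; check your platform's documentation for its exact names and ranges:

```python
def build_request(prompt, temperature=0.7, max_tokens=256):
    """Assemble generation settings to send alongside a prompt."""
    return {
        "prompt": prompt,
        "temperature": temperature,  # higher = more random and creative
        "max_tokens": max_tokens,    # caps the length of the response
    }

# Creative task: high temperature, generous length budget.
creative = build_request("Brainstorm slogans for a travel app.",
                         temperature=0.8, max_tokens=400)

# Precise task: low temperature, tight length budget.
precise = build_request("Respond just with a whole number: 12 + 30 =",
                        temperature=0.2, max_tokens=5)
```

Pairing the two parameters this way (high/high for open-ended work, low/low for deterministic answers) is a practical starting point before fine-tuning each independently.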

Try different prompt types

The effectiveness of a prompt significantly influences the output of a language model. GPT responds differently to various prompt structures, making it essential to explore diverse types based on your specific task.

Instructional Prompts:

  • Definition: Instructional prompts guide the model with specific instructions or directives.

  • Use Case: Suitable for tasks requiring precise and structured responses.

  • Example: "Provide a step-by-step guide on how to troubleshoot a software issue."

Question-Answer Prompts:

  • Definition: Question-answer prompts involve posing questions to the model for informative responses.

  • Use Case: Ideal for tasks where obtaining specific information or insights is the primary goal.

  • Example: "What are the key benefits of using renewable energy sources?"

Completion Prompts:

  • Definition: Completion prompts involve incomplete sentences or phrases that the model completes.

  • Use Case: Useful for tasks requiring the model to generate coherent and contextually appropriate content.

  • Example: "The sun sets, and the stars begin to..."

Scenario-Based Prompts:

  • Definition: Scenario-based prompts present a hypothetical situation for the model to respond to.

  • Use Case: Effective for generating narrative or creative content based on given scenarios.

  • Example: "Describe a futuristic city where humans coexist with advanced AI."

Conversation-Style Prompts:

  • Definition: Conversation-style prompts simulate an ongoing dialogue with the model, often involving back-and-forth exchanges.

  • Use Case: Valuable for tasks requiring a conversational tone or interactions.

  • Example: "You are a virtual assistant. A user asks, 'What's the weather like today?' Respond accordingly."
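The five prompt types above can be kept as reusable templates and filled in per task. The template wording below is illustrative, not prescriptive; adapt it to your own use cases:

```python
# One template per prompt type; {braced} fields are filled at call time.
PROMPT_TEMPLATES = {
    "instructional": "Provide a step-by-step guide on how to {task}.",
    "question_answer": "What are the key benefits of {subject}?",
    "completion": "{opening}...",
    "scenario": "Describe {scenario}.",
    "conversation": "You are a {persona}. A user asks: '{question}' Respond accordingly.",
}

def render(prompt_type, **fields):
    """Fill a template of the chosen type with task-specific details."""
    return PROMPT_TEMPLATES[prompt_type].format(**fields)

print(render("instructional", task="troubleshoot a software issue"))
# Provide a step-by-step guide on how to troubleshoot a software issue.
```

Keeping the types side by side like this makes it easy to try the same task under several structures and compare the results.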


Prompt template for beginners

Start small and simple.

System prompt
You are a {role/persona}. Your role is/you are tasked with {general task}. I need you to {needs to fulfill}. Be brief in your answers. Always respond in {language/format}.
Assistant prompt (opt.)
Here is some additional information:
- {context}
- {knowledge}
- {specific details}

Here's what you're going to do:
- {task's steps, details}, e.g. {example}
Please DON'T {forbidden steps}.
In case you {cannot fulfill the task}, respond with "This is a fallback message".

Otherwise, respond with {output format}.
User prompt
This is the user's input: {input_variable}
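The template above can be filled programmatically before it is sent to the model, for example with Python's `str.format`. The field values below are example content, not platform requirements:

```python
# System and user templates mirroring the beginner template's braces.
SYSTEM_TEMPLATE = (
    "You are a {role}. You are tasked with {task}. "
    "Be brief in your answers. Always respond in {language}."
)
USER_TEMPLATE = "This is the user's input: {user_input}"

# Fill the placeholders with concrete values for one use case.
system_prompt = SYSTEM_TEMPLATE.format(
    role="customer support assistant",
    task="answering questions about account management",
    language="English",
)
user_prompt = USER_TEMPLATE.format(user_input="How do I reset my password?")

print(system_prompt)
print(user_prompt)
```

Separating the fixed template from the per-request values keeps the prompt easy to iterate on without touching application code.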


Veteran prompter field notes

  • DO NOT use "be helpful" in your prompt for chatbot behaviour, since it often leads to jailbreaks, as the virtual assistant engages in off-topic conversations in order to comply.

  • When defining the desired output, prefer the verb "respond" over "answer", since the latter leads to lengthy, wordy output. For example: "Respond with two or three sentences maximum." "Respond with 'Sorry, I cannot provide a relevant answer'." "Respond just with a whole number."
