Nothing beats a good conversation. Afterwards, you feel nourished both emotionally and intellectually.
However, for it to work, both parties need to know when to talk and when to listen. And how to keep the conversation flowing. Sometimes you have to double back and add a little context (Oh, I forgot to mention…) so the other person understands.
Interacting with an AI is a bit like this. It’s not a search engine, though it now offers search features.
That’s why crafting prompts is like setting the stage for a meaningful conversation. It’s not about dictating every word but about providing a clear direction that guides the AI toward insightful responses.
If you take the time to articulate your intent in super-specific terms, you invite the AI to engage thoughtfully, leading to outputs that are both relevant and often unexpected.
Because of this, when you’re learning to use AI, such as ChatGPT, you need to understand when and where to craft different types of prompts.
Note that I say ‘craft’ as there is a skill involved in refining your prompt until it gets the results you want.
For this reason, the term prompt engineering is slightly misleading. You’re not engineering in the traditional sense; rather, you’re learning to tailor each prompt so that the response is one step closer to meeting your requirements.
This is also why you need to practice crafting your prompts. Just like when you learn to play an instrument – for instance, jazz guitar – the initial stages may involve a lot of iteration and drills. But then, it begins to click.
5 Characteristics of Prompt Types
In the following course on learning how to write prompts, we’ll look at each prompt type one at a time, so that you know when and how to apply it. Ok, let’s begin.
1. Task Specification
Different prompt types allow you to define tasks with precision. For instance, instructional prompts provide clear directives, while exploratory prompts encourage open-ended responses. This specificity ensures that the AI (ChatGPT or Claude) understands the exact nature of the task, leading to more accurate and relevant outputs.
2. Contextual Relevance
Prompts such as contextual or role-based prompts provide background information or assign a specific role to the AI, ensuring that responses are appropriate to the given scenario. This contextualization is crucial for generating outputs that are coherent and pertinent to the user’s needs.
If you take away one thing from this course, it’s that the more context you can give ChatGPT, the more depth you’ll get in the response. AI isn’t a mind reader. You have to paint a picture of what you’re hoping to achieve.
3. Complexity Management
Certain tasks require ChatGPT to perform multi-step reasoning or handle complex information.
Techniques like chain-of-thought prompting guide it through a logical sequence of steps, enhancing its problem-solving capabilities. This approach is particularly useful for tasks that involve intricate reasoning or calculations.
4. Data Modality Handling
With the advent of models capable of processing various data types, multi-modal prompts enable the integration of text (ChatGPT), images (DALL-E), and other data forms (Whisper). This versatility allows AI systems to perform tasks that require understanding and generating content across different modalities, expanding their applicability.
5. Iterative Refinement
Iterative prompts facilitate a process where the AI’s initial response is refined through subsequent prompts. This iterative approach is beneficial for tasks that require continuous improvement or elaboration, ensuring that the final output meets the desired quality and detail.
I hope that gives you some context on why we have different prompt types. Essentially, it reflects the multifaceted nature of human-AI interactions.
By employing the appropriate prompt type, you can effectively harness its capabilities to achieve specific goals, whether generating text-based content, solving complex math problems, or providing detailed explanations for reports.
Matrix of Prompt Types
In this matrix, we show when to use each of the 15 prompt types, categorized by task complexity, output style, and typical use cases:
| Prompt Type | Best For | Complexity | Examples |
|---|---|---|---|
| Simple Prompts | Quick facts or single tasks | Low | “What is the capital of France?” |
| Complex Prompts | Multi-step or detailed tasks | High | “Summarize this report and suggest three action points.” |
| Instructional Prompts | Giving clear directions | Medium | “Rewrite this email in a friendly tone suitable for a client.” |
| Few-Shot Prompts | Teaching tone or style | Medium | “Rewrite this text in a formal tone. Example: ‘Casual: I’ll do it later. Formal: I will address it promptly.’” |
| Zero-Shot Prompts | Asking for outputs without examples | Low | “Write a motivational quote.” |
| Chain-of-Thought Prompts | Step-by-step reasoning | High | “Explain the pros and cons of hybrid workplaces, considering productivity, morale, and logistics.” |
| Contextual Prompts | Providing background for tailored results | Medium | “Draft an email to a client explaining a delay in delivery due to unforeseen circumstances.” |
| Multi-Modal Prompts | Combining text, visuals, or other data types | High | “Analyze this chart and summarize the trends.” |
| Iterative Prompts | Refining outputs over multiple steps | High | “Make this paragraph simpler.” → “Now make it more engaging.” |
| Role-Based Prompts | Simulating expertise or perspectives | Medium | “Act as a recruiter and write a job description for a product manager.” |
| Comparative Prompts | Evaluating or comparing options | Medium | “Compare these two headlines and suggest which is more engaging.” |
| Conditional Prompts | Handling if/then scenarios | High | “If the meeting is canceled, draft an apology email. Otherwise, send a confirmation email.” |
| Embedded Code Prompts | Generating or debugging code | High | “Write a Python script to calculate the average sales of the last 12 months.” |
| Stylized Prompts | Specifying a particular tone or style | Medium | “Write this article in the style of a New York Times opinion piece.” |
| Exploratory Prompts | Brainstorming or generating ideas | Medium | “List 10 creative ways to increase engagement on our LinkedIn page.” |
This matrix provides a quick reference to match each prompt type to its ideal use case, helping users decide which one to employ based on their specific task or goal.
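To make the Embedded Code Prompts row concrete, here is a minimal sketch of the kind of script that prompt might return. The sales figures and function name are illustrative, not from the course; in practice the AI would adapt the code to wherever your sales data actually lives.

```python
# Illustrative response to: "Write a Python script to calculate
# the average sales of the last 12 months."
# Assumes monthly figures are already available as a list of numbers.

monthly_sales = [1200, 1350, 980, 1100, 1420, 1500,
                 1330, 1280, 1450, 1600, 1390, 1510]  # last 12 months

def average_sales(sales):
    """Return the mean of the monthly sales figures."""
    return sum(sales) / len(sales)

print(f"Average monthly sales: {average_sales(monthly_sales):.2f}")
```

If the first draft hard-codes the data like this, a follow-up iterative prompt (“Now read the figures from a CSV file”) is the natural next step.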
Common Mistakes to Avoid in Prompt Engineering
As someone relatively new to AI, I’ve encountered my fair share of rookie mistakes in prompt engineering. Reflecting on these experiences has been invaluable, and I hope sharing them can help you navigate similar challenges.
1. Crafting Vague Prompts
In my early attempts, I often provided the AI with broad instructions, expecting it to infer my intentions. For instance, I once asked, “Write about our new product.” The AI’s response was generic and lacked the specific features and benefits I wanted to highlight.
Lesson Learned: Being explicit in prompts is crucial. Instead of a broad request, specifying details such as the target audience, key features, and desired tone leads to more relevant outputs.
2. Overloading Prompts with Information
Trying to be thorough, I sometimes packed too much information into a single prompt.
For example, I combined multiple instructions: “Greet the user, ask how their day is going, and offer assistance with our services.”
The AI’s responses were often jumbled or missed key elements.
Lesson Learned: Breaking down complex instructions into simpler, sequential prompts allows the AI to process each task effectively, resulting in clearer and more accurate responses.
3. Neglecting to Provide Context
Assuming ChatGPT would understand implicit context led to misunderstandings. While working on a project to generate technical documentation, I asked, “Explain the installation process.”
The AI’s response was too generic, lacking specifics about the software in question.
Lesson Learned: Providing necessary background information ensures the AI tailors its responses appropriately. Including details like the software name and its environment leads to more precise guidance.
4. Ignoring the AI’s Limitations
I once asked the AI to generate real-time data analysis for a financial report. The outputs were inaccurate because the AI lacked access to current data.
Lesson Learned: Understanding the AI’s capabilities and limitations is essential. For tasks requiring real-time information, integrating external data sources or using specialized tools is necessary.
5. Failing to Iterate and Refine Prompts
In a content creation project, I noticed the AI’s outputs were repetitive and lacked creativity. Initially, I didn’t adjust my prompts, hoping for better results over time.
Lesson Learned: Iteratively refining prompts based on the AI’s responses enhances output quality. Experimenting with different phrasings and structures can lead to more engaging and diverse content.
So, what did I learn?
Non-ambiguous, super specific, and context-rich prompts are fundamental to effective prompt engineering.
By recognizing and adapting to the AI’s strengths and limitations, you can continuously refine your prompts, leading to more successful outcomes.
To learn more, go to Klariti’s Prompt Engineering 101 course for Non-Technical Users.