For those of us involved in software testing, there's been real concern about whether or not to use AI. After all, it does come with risks, such as ensuring the data you feed it isn't shared or reused by the LLM (you can get around this by running a model locally, though not everyone has the know-how to do so), and questions about the accuracy of the material it generates.
I say ‘generates’ because in this Klariti Primer we're going to look at how to create prompts for software testing using a variety of generative AI models. The aim is to show you how to craft simple prompts that give you a foothold in prompt writing, then extend these into more sophisticated prompts.
By the end of this tutorial, you will know how to craft simple, advanced, and complex prompts. Once you understand these three levels, you can adapt them to different scenarios.
Understanding ChatGPT Prompts for Software Testing: Simple Queries to Strategic Insights
Let’s adopt a framework for thinking about prompts, similar to one I’ve used for other prompt writing articles:
- Simple Prompts (Quick Starts): These are your go-to for getting a quick answer, a baseline list, or validating something you already suspect. They require minimal input and deliver straightforward outputs.
- Advanced Prompts (Detailed Analysis): These demand more context from you and ask the AI model (Google Gemini, for example) to perform more detailed analysis, explore dependencies, justify priorities, or align with specific testing principles or standards.
- Complex Prompts (Strategic Insights): These are for when you need the AI model to synthesize information from more than one angle, analyze potential cascading effects, take risks into account, or provide input for test planning.
Now, let’s see how this applies to three common scenarios we face daily: Writing Test Cases, Developing a Test Plan, and Managing an Issue Log.
Scenario 1: Crafting Prompts for Writing Test Cases
Test cases are the foundation of our verification efforts. I’m going to show you how AI can help you generate, refine, and expand them.
1. Simple Prompts (Getting Started with Test Cases)
Use these for initial drafts, brainstorming basic scenarios, or generating test data.
What they achieve: Quickly generate foundational test case elements.
Example Prompts:
"Generate 5 positive test case titles for a user login feature with username and password fields."
Rationale: Gets a quick list of happy path scenarios.
"List 3 common negative test scenarios for an email address input field."
Rationale: Helps quickly identify basic negative paths.
"Suggest 3 sets of valid test data (First Name, Last Name, Email) for a user registration form."
Rationale: Useful for populating test cases or manual execution.
Expected AI Output: Bulleted lists of titles, scenarios, or data sets.
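If you prefer to run prompts like these from a script rather than a chat window, here is a minimal sketch. It assumes the OpenAI Python SDK (version 1.x) with an OPENAI_API_KEY environment variable; the model name is just an example, and any chat-capable model and SDK follows the same pattern.

```python
# Minimal sketch: send a simple test-case prompt to a chat model.
# Assumes the OpenAI Python SDK (openai>=1.0) and OPENAI_API_KEY set
# in the environment; adapt the call for your provider of choice.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY automatically

prompt = (
    "Generate 5 positive test case titles for a user login feature "
    "with username and password fields."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumption: substitute whichever model you use
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```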
2. Advanced Prompts (Building Robust Test Cases)
These prompts provide more context and ask for more detailed test case components, including preconditions, steps, and expected results. The more specific you are with your prompts, the more precise the results.
What they achieve: Develop more complete and specific test cases, considering edge cases and requirements.
Example Prompts:
"Given the requirement: 'The system shall allow users to reset their password via a link sent to their registered email. The link must expire in 1 hour.' Draft 3 detailed test cases (including preconditions, test steps, and expected results) to verify this functionality, covering a successful reset, an expired link, and an invalid link."
Rationale: Ties test cases directly to a requirement and asks for full structure.
"For a file upload feature that accepts only PDF files up to 5MB, generate test cases covering boundary value analysis for file size (e.g., 4.9MB, 5.0MB, 5.1MB) and file type validation (PDF, JPG, TXT)."
Rationale: Focuses on specific test design techniques for thoroughness.
"Review the following test steps for 'adding an item to a shopping cart' and suggest improvements for clarity, testability, and any missing edge cases: [Paste your draft test steps here]."
Rationale: Uses AI as a reviewer to enhance existing work.
Expected AI Output: Formatted test cases, suggestions for specific boundary values, or critiques of existing test steps with recommendations.
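To make the boundary value analysis prompt above concrete, here is roughly what the generated test cases might look like once translated into code. This is a sketch using pytest; validate_upload() is a hypothetical stand-in for your application's real upload handler.

```python
# Sketch of boundary-value tests for the 5MB PDF upload example above.
# validate_upload() is a hypothetical placeholder for the real handler.
import pytest

MAX_SIZE_MB = 5.0
ALLOWED_TYPES = {"pdf"}

def validate_upload(filename: str, size_mb: float) -> bool:
    """Hypothetical upload validator: PDF only, max 5 MB."""
    extension = filename.rsplit(".", 1)[-1].lower()
    return extension in ALLOWED_TYPES and size_mb <= MAX_SIZE_MB

@pytest.mark.parametrize(
    "filename, size_mb, expected",
    [
        ("report.pdf", 4.9, True),   # just under the boundary
        ("report.pdf", 5.0, True),   # exactly on the boundary
        ("report.pdf", 5.1, False),  # just over the boundary
        ("photo.jpg", 1.0, False),   # wrong file type
        ("notes.txt", 1.0, False),   # wrong file type
    ],
)
def test_upload_validation(filename, size_mb, expected):
    assert validate_upload(filename, size_mb) == expected
```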
3. Complex Prompts (Strategic Test Case Generation)
This is where it gets very interesting. You can use these prompts to ask the AI model to think more broadly about test coverage, risk, and integration with other system aspects. Think of this as ‘brainstorming’ with a very experienced software tester.
What they achieve: Identify higher-level test objectives, consider non-functional aspects, and link test cases to potential risks.
Example Prompts:
"Consider a new online payment module integrating with Stripe. Analyze the primary user flows (e.g., successful payment, failed payment, refund request) and identify key areas where security vulnerabilities (like XSS, CSRF, data exposure) might occur. For each area, suggest high-level test objectives that should be covered by detailed test cases."
Rationale: Asks AI to think about risk and map it to test objectives for a complex feature.
"For an e-commerce product search feature with filtering options (price range, brand, category), outline a test strategy that incorporates functional testing, usability testing (considering ease of filter application), and performance testing (response time with multiple filters). For each testing type, suggest 3 critical test case scenarios."
Rationale: Combines different testing types and asks for scenarios that address broader quality attributes.
Expected AI Output: Analysis of features with potential vulnerabilities, suggested test objectives, or a multi-faceted test approach outline with example scenarios.
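As an illustration of turning one such high-level objective ('user input must never be reflected as unescaped HTML') into a detailed test case, here is a small pytest sketch. render_search_results() is a hypothetical function standing in for the page fragment your application actually produces.

```python
# Sketch: one XSS-related test objective expressed as a concrete test.
# render_search_results() is hypothetical; wire in your real rendering code.
import html
import pytest

XSS_PAYLOADS = [
    "<script>alert(1)</script>",
    '"><img src=x onerror=alert(1)>',
]

def render_search_results(query: str) -> str:
    """Hypothetical page fragment; escaping the query is what we verify."""
    return f"<p>Results for: {html.escape(query)}</p>"

@pytest.mark.parametrize("payload", XSS_PAYLOADS)
def test_query_is_escaped(payload):
    # The raw payload must never appear unescaped in the rendered output.
    assert payload not in render_search_results(payload)
```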
Scenario 2: Crafting Prompts for Developing a Test Plan
The Test Plan is our roadmap. You can use ChatGPT, Gemini, Claude or DeepSeek to help you draft sections, identify risks, and ensure comprehensive coverage.
1. Simple Prompts (Kickstarting Your Test Plan)
Ideal for outlining the basic structure or generating initial content for common sections.
What they achieve: Quickly create a skeleton or first draft of test plan sections.
Example Prompts:
"List the standard sections of a Test Plan document based on IEEE 829."
Rationale: Good for a quick structural reminder or starting point.
"Define 3 key test objectives for testing a new mobile banking application's fund transfer feature."
Rationale: Helps articulate the ‘why’ of testing for a specific feature.
"What are 5 common entry criteria for System Integration Testing?"
Rationale: Useful for defining prerequisites.
Expected AI Output: Bulleted lists of sections, objectives, or criteria.
2. Advanced Prompts (Detailing Your Test Plan)
These prompts help flesh out specific areas of the test plan with more detailed considerations.
What they achieve: Develop more specific content for sections like scope, risks, resources, and schedule.
Example Prompts:
"For a project migrating a legacy HR system to a cloud-based SaaS solution, identify 5 potential project risks relevant to the testing phase. For each risk, suggest a mitigation strategy to include in the Test Plan's 'Risks and Contingencies' section."
Rationale: Proactively identifies and plans for potential testing roadblocks.
"Draft the 'Test Environment Requirements' section for testing a web application that needs to support Chrome, Firefox, and Edge browsers on Windows and macOS, and Safari on iOS. Specify browser versions and OS versions where applicable."
Rationale: Helps detail specific environmental needs.
"Outline the 'Test Deliverables' section for a UAT phase, including types of reports, logs, and sign-off documents expected."
Rationale: Clarifies what outputs are expected from a specific test phase.
Expected AI Output: Detailed paragraphs or lists for specific test plan sections, including risk matrices, environment specifications, or lists of deliverables.
3. Complex Prompts (Strategic Test Planning)
These prompts engage the AI in higher-level strategic thinking about the overall test approach and its alignment with business goals.
What they achieve: Develop a more holistic test strategy, consider resource allocation trade-offs, and align testing efforts with broader project objectives.
Example Prompts:
"Develop a high-level test strategy for a new AI-powered customer service chatbot. The strategy should address: key quality attributes to test (accuracy, responsiveness, empathy, security), the mix of automated vs. manual testing, data requirements for training and testing the AI, and metrics to evaluate its effectiveness. Consider the business goal of reducing human agent workload by 30%."
Rationale: Requires AI to consider unique aspects of AI testing and link strategy to business outcomes.
"Given a fixed testing budget of $50,000 and a 3-month timeline for a complex financial reporting application, analyze the trade-offs between investing more in test automation upfront versus hiring more manual testers. What factors should be considered in the Test Plan to justify the chosen resource allocation strategy, ensuring adequate coverage of critical functionalities and compliance requirements (e.g., SOX)?"
Rationale: Asks AI to weigh strategic options and justify them within constraints.
Expected AI Output: A structured test strategy document, analysis of trade-offs with justifications, or considerations for resource allocation.
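To see the kind of trade-off reasoning the budget prompt asks for, here is a deliberately simple break-even model in Python. Every figure in it is an assumed placeholder, not a benchmark; the point is the shape of the calculation, not the numbers.

```python
# Back-of-the-envelope automation-vs-manual trade-off model.
# All figures below are illustrative assumptions; substitute your own.
AUTOMATION_SETUP_COST = 30_000   # assumed upfront framework + scripting cost
COST_PER_AUTOMATED_RUN = 500     # assumed cost per automated regression cycle
MANUAL_COST_PER_RUN = 8_000      # assumed cost per manual regression cycle

def break_even_runs() -> float:
    """Regression cycles after which automation pays for itself."""
    return AUTOMATION_SETUP_COST / (MANUAL_COST_PER_RUN - COST_PER_AUTOMATED_RUN)

print(f"Automation breaks even after {break_even_runs():.1f} regression cycles")
# -> 4.0 cycles with these assumed figures; if the 3-month window allows
#    fewer cycles than that, the model favours manual testing instead.
```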
Scenario 3: Crafting Prompts for an Issue Log
Effectively managing issues found during testing is crucial. AI can help in summarizing, analyzing, and even suggesting next steps.
1. Simple Prompts (Basic Issue Log Tasks)
Good for quick summaries or categorizations.
What they achieve: Quickly process or categorize individual issue reports.
Example Prompts:
"Summarize this detailed bug report into a concise title (max 10 words) and a one-sentence description: [Paste detailed bug report]."
Rationale: Helps in creating clear, brief entries for an issue log.
"Given the bug title 'Application crashes when user clicks Submit on empty form', suggest 3 potential severity levels (e.g., Critical, Major, Minor) and a brief justification for each."
Rationale: Provides initial thoughts on impact.
"List common fields found in a software testing Issue Log."
Rationale: Quick reminder of what information to capture.
Expected AI Output: Concise summaries, suggested categorizations, or lists of fields.
2. Advanced Prompts (In-Depth Issue Analysis)
These prompts ask for more analysis of issue data or help in drafting more detailed issue-related communications.
What they achieve: Analyze trends, assist in prioritization discussions, or help draft clearer bug reports.
Example Prompts:
"Analyze the following set of defect summaries from our current sprint: [Paste 5-10 defect summaries, each with a component like UI, API, DB]. Identify any potential defect clusters or recurring problem areas."
Rationale: Helps spot patterns in logged issues.
"Draft a polite but firm email to the development lead to follow up on critical bug #478 (Login fails for new users), which is blocking further testing and was reported 3 days ago. Request an ETA for the fix."
Rationale: Assists in communication related to issue resolution.
"Given this issue: 'Payment confirmation email not received by user after successful order.' Describe the expected behavior, actual behavior, steps to reproduce, and potential business impact for an Issue Log entry."
Rationale: Helps structure a complete and informative bug report from a brief description.
Expected AI Output: Trend analysis, draft emails, or more detailed and structured bug report content.
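For the defect-clustering prompt above, you can also sanity-check the AI's answer (or skip the AI entirely for small datasets) with a few lines of Python. The defect summaries and component tags below are made-up examples.

```python
# Sketch: count defects per component to spot hot spots locally.
# The summaries and component tags are invented examples.
from collections import Counter

defects = [
    ("Login button misaligned on mobile", "UI"),
    ("500 error on /orders endpoint", "API"),
    ("Date picker accepts 31 Feb", "UI"),
    ("Duplicate rows after bulk import", "DB"),
    ("Tooltip text truncated", "UI"),
]

clusters = Counter(component for _, component in defects)
for component, count in clusters.most_common():
    print(f"{component}: {count} defect(s)")
# UI: 3 defect(s) -- a cluster worth investigating first
```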
3. Complex Prompts (Strategic Issue Management)
These prompts leverage issue log data for broader insights, root cause analysis suggestions, or process improvement ideas.
What they achieve: Extract strategic insights from issue data to improve quality and processes.
Example Prompts:
"Our Issue Log for the past release shows a high concentration of defects in the 'User Profile Management' module, particularly around data validation. Suggest 3 potential root causes for this pattern and recommend 3 preventative actions (process changes, developer training, new static analysis rules) the team could implement to reduce similar defects in future releases."
Rationale: Moves from just logging issues to thinking about prevention and root cause.
"Analyze the trend of defect discovery rate versus defect resolution rate over the last 4 sprints based on this data: [Provide data points for discovered/resolved defects per sprint]. What does this trend suggest about our development and testing process efficiency? What questions should I ask the team to understand any bottlenecks?"
Rationale: Uses issue data for process analysis and improvement.
Expected AI Output: Suggestions for root causes and preventative actions, trend analysis with interpretations, or questions to guide process improvement discussions.
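The discovery-versus-resolution analysis in the last prompt is easy to reproduce locally as a cross-check on the AI's interpretation. A small sketch follows; the sprint figures are illustrative placeholders, not real project data.

```python
# Sketch: compare defects discovered vs resolved per sprint and
# track the running open backlog. Figures are placeholders.
sprints = [
    ("Sprint 1", 24, 18),  # (name, discovered, resolved)
    ("Sprint 2", 30, 20),
    ("Sprint 3", 28, 22),
    ("Sprint 4", 26, 21),
]

backlog = 0
for name, discovered, resolved in sprints:
    backlog += discovered - resolved
    print(f"{name}: +{discovered} found, -{resolved} fixed, backlog now {backlog}")
# A steadily growing backlog suggests resolution capacity is the bottleneck.
```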
General Tips for Crafting Effective Prompts:
- Be Specific and Clear: Avoid ambiguity. The more precise your request, the better the AI can understand and respond appropriately.
- Provide Context: Give the AI relevant background information. For test cases, this might be requirements or user stories. For test plans, project goals or constraints.
- Define the Role (Persona): Sometimes it helps to tell the AI to act as a specific persona (e.g., "Act as an experienced QA Lead...").
- Specify the Format: If you want a list, a table, or a specific document section, ask for it (e.g., "Provide the answer in a bulleted list," or "Draft this as a section for a formal test plan."). A sketch combining this tip with the persona tip follows this list.
- Iterate and Refine: Your first prompt might not be perfect. Don’t be afraid to tweak it, add more detail, or ask follow-up questions based on the AI’s response. This is a conversation.
- Break Down Complex Requests: If a task is very large, consider breaking it into smaller, more manageable prompts.
- Use AI as an Assistant, Not a Replacement: Always review and validate AI-generated content. It’s a powerful tool to augment your skills, not replace your critical thinking and expertise.
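Here is the sketch promised above, combining the persona and format tips in a single API call. As before, it assumes the OpenAI Python SDK and an OPENAI_API_KEY environment variable; the same structure applies to any chat-style API.

```python
# Sketch: persona via the system message, output format via the user
# message. Assumes the OpenAI Python SDK (openai>=1.0).
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumption: use whichever model suits you
    messages=[
        # Persona: tell the model who to be.
        {"role": "system", "content": "Act as an experienced QA Lead."},
        # Format: say exactly what shape the answer should take.
        {"role": "user", "content": (
            "List 5 common entry criteria for System Integration Testing. "
            "Provide the answer as a bulleted list."
        )},
    ],
)
print(response.choices[0].message.content)
```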
By intentionally practicing and refining your prompt engineering skills, you’ll develop a feel for how to make AI a genuine ‘partner’ in software testing. I’d recommend experimenting with these approaches to see what works in your environment.
Learn More
I hope you found this primer useful. It explored how to craft ChatGPT prompts, from simple queries to complex strategic inquiries, so you can weave AI into your software testing process. As we’ve seen, prompt engineering can unlock significant efficiencies and deeper insights across various testing activities, from test case generation to test planning and issue management.
As you integrate these techniques into your daily workflows, I’d encourage you to explore the wealth of resources available here on Klariti.com. Explore our guides, download our professionally designed templates, and refine your understanding of the core disciplines that underpin successful software quality assurance.