(Part 1 of the Klariti Primer on AI for Software Testing)
Welcome to the first installment of Klariti’s Primer on using AI for Software Testing!
This series is designed for hands-on Software Testers like you and me. We’ll explore how Artificial Intelligence can help us streamline our documentation processes, making them faster, smarter, and more accurate. We’ll tackle common templates one by one, and I’ll share practical ways I’ve learned to apply AI.
When I first started hearing about using AI for tasks I’d done manually for years, I was a bit reluctant. Maybe you feel the same? My concerns were probably similar to yours: Would the quality be good enough? Could I trust an AI not to miss crucial details? Was it just another complex tool I’d have to spend ages learning? And, let’s face it, there’s always that little voice wondering about job security.
But I decided to experiment, starting small, and I found that the more I use it, the more ways I discover for it to genuinely help me tackle the tedious parts of my job, freeing me up for more complex testing challenges. This series is about sharing those practical discoveries with you.
The Problem: Are Your Acceptance Criteria Clear Enough?
Let’s talk about Acceptance Criteria (AC).
How often have you received a user story or requirement that felt… vague? You understand the general idea, but translating that into specific, measurable, testable ACs for your Acceptance Criteria Log can feel like pulling teeth. I know I’ve spent countless hours chasing down Product Owners or BAs for clarification, or worse, writing ACs myself hoping they captured the real intent.
This ambiguity is a classic trap: developers build based on assumptions, our tests don’t truly validate the requirement, and stakeholders end up wondering why the feature isn’t quite right.
Trying to maintain consistency and thoroughness across a growing list of criteria becomes a manual slog, prone to errors and oversights. Does that hit close to home?
Scenario/Context: The Ripple Effect of Weak Acceptance Criteria
Think about this common scenario: a user story simply states, “As a user, I want the login process to be faster.” What does “faster” actually mean? Without concrete AC, the developer might make a minor tweak they believe suffices.
As the tester, I might test if it feels faster subjectively, or maybe run a quick comparison against the old version. The Product Owner, however, might have envisioned a specific sub-second response time under peak load conditions.
When the feature ships, users complain it’s still sluggish. Suddenly, we’re dealing with rework, urgent re-testing cycles, potential release delays, and unhappy stakeholders – all stemming from those initial vague ACs in the log.
A poorly defined Acceptance Criteria Log isn’t just a documentation issue; it directly undermines development quality, testing effectiveness, and project success.
The AI Solution: Your Partner in Precision
This is where AI (like ChatGPT, Claude, Gemini, or similar Large Language Models) has become a valuable assistant for me. Here’s how you can use it to significantly improve your Acceptance Criteria Log process:
- Brainstorming ACs: Feed the AI the user story or requirement. It’s surprisingly good at generating a solid first draft.
Prompt:
"Given this user story: 'As a registered user, I want to be able to reset my password via email so that I can regain access to my account if I forget my password.' Generate 5-7 specific, measurable, and testable acceptance criteria for this story. Include positive and negative scenarios."
AI Output (Example):
- Given the user is on the login page, When they click the ‘Forgot Password?’ link, Then they are redirected to the password reset request page.
- Given the user is on the password reset request page, When they enter a valid registered email address and click ‘Submit’, Then they see a confirmation message indicating an email has been sent.
- Given the user enters an invalid or unregistered email address, When they click ‘Submit’, Then they see an error message stating the email was not found.
- Given the user has requested a password reset, When they check their email inbox, Then they receive an email containing a unique password reset link within 5 minutes.
- Given the user clicks the password reset link from the email, When the link is valid and not expired, Then they are directed to a page to enter a new password.
- Given the user is on the new password page, When they enter a new password meeting complexity requirements (e.g., 8+ chars, upper/lower case, number, symbol) and confirm it, Then their password is updated, and they are redirected to the login page with a success message.
- Given the user has an expired or invalid password reset link, When they click it, Then they are shown an error page suggesting they request a new reset link.
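If you run this kind of prompt often, you can script the round trip instead of pasting into a chat window each time. Here’s a minimal sketch, assuming the OpenAI Python client; the model name and the generate_acceptance_criteria helper are illustrative choices of mine, so swap in whatever tool your team actually uses:

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from your environment

    def generate_acceptance_criteria(user_story: str) -> str:
        """Ask the model for specific, measurable, testable ACs."""
        prompt = (
            f"Given this user story: '{user_story}' "
            "Generate 5-7 specific, measurable, and testable acceptance "
            "criteria for this story. Include positive and negative scenarios."
        )
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # assumption: use your preferred model
            messages=[{"role": "user", "content": prompt}],
        )
        return response.choices[0].message.content

    story = (
        "As a registered user, I want to be able to reset my password via "
        "email so that I can regain access to my account if I forget my password."
    )
    print(generate_acceptance_criteria(story))

The output still needs your review, of course; treat it as a first draft, exactly as you would the chat version.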
- Refining Existing ACs: Sometimes my initial draft ACs feel a bit weak. I paste them into the AI and ask for improvements.
Prompt:
"Review these acceptance criteria for the 'User Profile Update' feature. Are they SMART (Specific, Measurable, Achievable, Relevant, Time-bound)? Suggest improvements: [Paste your draft ACs here]."
- Generating Different Formats: Need ACs in Gherkin (Given/When/Then) for BDD, or just a simple checklist? AI handles this conversion quickly.
Prompt:
"Convert the following acceptance criteria into Gherkin format: [Paste your ACs]."
Prompt:
"Create a simple checklist format for these acceptance criteria: [Paste your ACs]."
- Ensuring Coverage: One area where I find AI particularly helpful is identifying gaps. I ask it to think about edge cases.
Prompt:
"Based on this user story [Paste User Story] and these acceptance criteria [Paste ACs], what potential edge cases or negative scenarios should also be considered for testing?"
How I Manage This in My Log:
- I use AI to generate the initial drafts for the criteria entries in my log (whether it’s Excel, Jira, or a dedicated test management tool).
- I copy and paste the AI’s suggestions into the relevant log entries.
- This is crucial: I always review, refine, and validate the AI’s output. I discuss it with the Product Owner/BA and apply my own testing expertise. Think of AI as a helpful junior team member – it provides a great starting point, but needs experienced oversight.
- I also use AI periodically to scan my log entries for consistency in language and terminology; a sketch of how to script that scan follows below.
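If your log lives in a spreadsheet, that periodic consistency scan can be semi-automated. Here’s a minimal sketch, assuming a CSV export with an acceptance_criteria column and the same OpenAI client as earlier; the column name, file name, and prompt wording are all assumptions of mine:

    import csv

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from your environment

    def scan_log_for_consistency(csv_path: str) -> str:
        """Bundle every AC entry into a single consistency-review prompt."""
        with open(csv_path, newline="") as f:
            entries = [row["acceptance_criteria"] for row in csv.DictReader(f)]
        numbered = "\n".join(f"{i + 1}. {ac}" for i, ac in enumerate(entries))
        prompt = (
            "Review these acceptance criteria log entries for consistency in "
            "language, terminology, and Given/When/Then structure. List any "
            "inconsistencies, referencing the entry numbers:\n" + numbered
        )
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # assumption: use your preferred model
            messages=[{"role": "user", "content": prompt}],
        )
        return response.choices[0].message.content

    print(scan_log_for_consistency("acceptance_criteria_log.csv"))

As always, I treat the AI’s findings as prompts for my own judgement, not as automatic corrections.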
Next Steps
So, we’ve seen how AI can take the often frustrating task of crafting solid Acceptance Criteria and turn it into a more collaborative and efficient process. Using AI as a brainstorming and refinement partner helps ensure clarity right from the start, preventing those downstream headaches we all want to avoid. It helps me personally save time and reduce ambiguity in my Acceptance Criteria Log.
My advice? Start small. Pick one user story for your next sprint and try using these prompts. See how it feels and adapt the process to your team’s workflow.
Want more tips like this delivered to your inbox? Sign up for the Klariti Newsletter to stay updated on the latest templates, AI techniques, and best practices!
Next up: Ever finish a testing session or a defect triage meeting and struggle to remember exactly who agreed to do what? We’re diving into the Action Item Log next, exploring how AI can help you capture, assign, track, and follow up on those critical tasks, ensuring nothing falls through the cracks. Stay tuned!
Templates (Free and Paid)
Here are some resources that might be helpful:
- Software Testing Template Pack (MS Word+Excel)
- Acceptance Test Plan
- Installation Plan Template
- Quality Assurance Plan
- Release Notes Templates