Writing Test Cases for Error Messages and Edge Cases
This prompt helps engineering and QA teams write test cases that validate error messages and edge-case handling. It ensures the application provides meaningful feedback and behaves reliably under extreme or unexpected conditions.
Responsible:
Engineering/IT
Accountable, Informed, or Consulted:
Engineering, QA
THE PREP
Effective prompts are tailored with detailed, relevant information and supported by uploaded documents that provide the right context. The prompt is a framework that guides the response; specificity and customization produce the most accurate and helpful results. Use these prep tips to get the most out of this prompt:
Gather documentation on expected error messages and system constraints.
Define edge cases and input boundaries relevant to the application.
Prepare tools for testing system limits, such as stress-testing or fuzzing tools.
THE PROMPT
Help create detailed test cases to validate error messages and edge-case handling for [specific application or feature]. Focus on:
Error Message Accuracy: Recommending validations such as ‘Verify that error messages are clear, actionable, and correspond accurately to the encountered issue (e.g., validation errors, server errors).’
Boundary Testing: Suggesting edge-case scenarios such as ‘Test inputs at the boundaries of acceptable ranges, such as maximum string length or minimum numeric values, to ensure proper handling.’ (The first sketch after this prompt illustrates boundary and invalid-input checks.)
Invalid Input Handling: Including failure scenarios such as ‘Provide test cases for invalid or unexpected inputs, like special characters or unsupported file formats.’
System Resource Limits: Proposing tests such as ‘Simulate resource exhaustion (e.g., memory or disk limits) to validate system behavior under constrained conditions.’
Fallback Mechanisms: Recommending reliability checks such as ‘Test how the application handles service unavailability or dependency failures by providing appropriate fallbacks or retries.’ (The second sketch after this prompt shows one way to test this.)
Provide a comprehensive set of test cases to validate error handling and ensure robust performance under edge cases. If additional details about user inputs, system constraints, or expected behavior are needed, ask clarifying questions to refine the test cases.
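For reference, here is a minimal pytest sketch of the kind of test cases this prompt should produce for error-message accuracy, boundary testing, and invalid inputs. The register_user function, ValidationError class, and the 32-character limit are hypothetical stand-ins, not a real API; a toy implementation is included only so the tests run.

```python
# test_validation_edges.py -- illustrative sketch; register_user,
# ValidationError, and the 32-character limit are hypothetical stand-ins.
import pytest

MAX_USERNAME_LEN = 32  # assumed constraint; substitute your real limit


class ValidationError(Exception):
    """Placeholder for the application's validation exception."""


def register_user(username: str, age: int) -> dict:
    """Toy implementation included only so the tests below run."""
    if not username or len(username) > MAX_USERNAME_LEN:
        raise ValidationError(f"Username must be 1-{MAX_USERNAME_LEN} characters.")
    if age < 0:
        raise ValidationError("Age must be zero or greater.")
    return {"username": username, "age": age}


def test_username_at_max_length_is_accepted():
    # Exactly at the boundary: should succeed.
    name = "a" * MAX_USERNAME_LEN
    assert register_user(name, age=30)["username"] == name


def test_username_over_max_length_is_rejected_with_actionable_message():
    # One past the boundary: should fail, and the message should name the limit.
    with pytest.raises(ValidationError) as exc:
        register_user("a" * (MAX_USERNAME_LEN + 1), age=30)
    assert str(MAX_USERNAME_LEN) in str(exc.value)


def test_minimum_numeric_value_boundary():
    # Minimum acceptable value is allowed; one below it is not.
    assert register_user("alice", age=0)["age"] == 0
    with pytest.raises(ValidationError):
        register_user("alice", age=-1)


def test_empty_username_is_rejected():
    # Unexpected input yields a validation error, not a crash.
    with pytest.raises(ValidationError):
        register_user("", age=30)
```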
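And a companion sketch for the fallback-mechanisms focus area, using unittest.mock to simulate a dependency outage. fetch_profile, the retry count of three, and the cache-based fallback are assumptions for illustration; substitute your service's actual degradation behavior.

```python
# test_fallbacks.py -- illustrative sketch; fetch_profile, the retry
# count, and the cache fallback are assumptions, not a real API.
from unittest import mock


class UpstreamUnavailable(Exception):
    """Placeholder for the error raised when a dependency is down."""


def fetch_profile(user_id, client, cache, retries=3):
    """Toy client: retry the upstream, then fall back to the cache."""
    for _ in range(retries):
        try:
            return client.get(user_id)
        except UpstreamUnavailable:
            continue
    return cache.get(user_id)  # degraded but usable fallback


def test_retries_then_falls_back_to_cache():
    client = mock.Mock()
    client.get.side_effect = UpstreamUnavailable  # dependency always down
    cache = {"u1": {"name": "cached-alice"}}

    result = fetch_profile("u1", client, cache)

    assert client.get.call_count == 3          # retried the expected number of times
    assert result == {"name": "cached-alice"}  # served the cached fallback


def test_recovers_when_dependency_comes_back():
    client = mock.Mock()
    # Fail once, then succeed: the second attempt should return live data.
    client.get.side_effect = [UpstreamUnavailable, {"name": "alice"}]

    assert fetch_profile("u1", client, cache={}) == {"name": "alice"}
```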
Bonus Add-On Prompts
Propose strategies for testing error-handling logic in multi-service or distributed systems.
Suggest methods for validating user-facing error messages for clarity and accessibility.
Highlight techniques for incorporating edge-case scenarios into automated test suites (see the parametrized sketch below).
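As one way to approach that last add-on, here is a hedged sketch of folding a documented edge-case catalogue into an automated suite with pytest.mark.parametrize. parse_quantity, its 0-10,000 range, and the cases themselves are invented for illustration; each new edge case found in the field becomes one appended row that is covered automatically from then on.

```python
# Illustrative sketch; parse_quantity and its 0-10,000 range are invented.
import pytest


def parse_quantity(raw: str) -> int:
    """Toy parser included only so the parametrized cases run."""
    value = int(raw.strip())
    if not 0 <= value <= 10_000:
        raise ValueError("Quantity must be between 0 and 10,000.")
    return value


# Each documented edge case becomes one row in these tables.
ACCEPTED_EDGE_CASES = [
    ("0", 0),           # lower boundary
    ("10000", 10_000),  # upper boundary
    ("  42 ", 42),      # surrounding whitespace
]

REJECTED_EDGE_CASES = ["-1", "10001", "", "abc", "1e3"]


@pytest.mark.parametrize("raw,expected", ACCEPTED_EDGE_CASES)
def test_edge_case_is_accepted(raw, expected):
    assert parse_quantity(raw) == expected


@pytest.mark.parametrize("raw", REJECTED_EDGE_CASES)
def test_edge_case_is_rejected(raw):
    with pytest.raises(ValueError):
        parse_quantity(raw)
```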
Use AI responsibly by verifying its outputs, as it may occasionally generate inaccurate or incomplete information. Treat AI as a tool to support your decision-making, ensuring human oversight and professional judgment for critical or sensitive use cases.
SUGGESTIONS TO IMPROVE
Focus on error handling for specific features, like form validation or API requests.
Include tips for validating internationalized error messages in multi-language systems (see the locale-coverage sketch after this list).
Propose ways to document recurring edge cases to improve development practices.
Highlight tools like Postman or FuzzDB for automating error and edge-case testing (see the wordlist-replay sketch after this list).
Add suggestions for testing error messages in both frontend and backend layers.
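For the internationalization suggestion above, a minimal sketch that assumes one JSON message catalog per locale (e.g., locales/en.json, locales/de.json); the layout is an assumption, so adapt it to your i18n framework.

```python
# Illustrative sketch; assumes one JSON catalog per locale under locales/
# with error-message keys, and English as the source of truth.
import json
from pathlib import Path

LOCALES_DIR = Path("locales")  # placeholder path
BASE_LOCALE = "en"             # locale treated as the source of truth


def load_catalog(locale: str) -> dict:
    return json.loads((LOCALES_DIR / f"{locale}.json").read_text(encoding="utf-8"))


def test_every_locale_covers_every_error_message():
    base_keys = set(load_catalog(BASE_LOCALE))
    for path in LOCALES_DIR.glob("*.json"):
        missing = base_keys - set(load_catalog(path.stem))
        assert not missing, f"{path.stem} is missing translations: {sorted(missing)}"
```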
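And for the tooling suggestion: FuzzDB is a collection of attack-payload wordlists rather than a test runner, so a thin harness is needed to replay its payloads. The TARGET URL and WORDLIST path below are placeholders; point them at your own staging environment and a locally checked-out FuzzDB file. Requires the third-party requests package.

```python
# Illustrative harness; TARGET and WORDLIST are placeholders -- point
# them at your staging environment and a local FuzzDB checkout.
# Requires the third-party requests package (pip install requests).
import pytest
import requests

TARGET = "https://staging.example.com/api/search"  # placeholder endpoint
WORDLIST = "fuzzdb/attack/example-wordlist.txt"    # placeholder path


def load_payloads(path):
    with open(path, encoding="utf-8", errors="replace") as fh:
        return [line.rstrip("\n") for line in fh if line.strip()]


@pytest.mark.parametrize("payload", load_payloads(WORDLIST))
def test_payload_is_rejected_gracefully(payload):
    resp = requests.post(TARGET, json={"query": payload}, timeout=5)
    # Rejecting the input (4xx) is fine; crashing (5xx) or leaking a
    # stack trace in the response body is not.
    assert resp.status_code < 500
    assert "Traceback" not in resp.text
```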
WHEN TO USE
During development to ensure reliable handling of unexpected scenarios.
To validate user feedback mechanisms for application errors.
When testing critical systems that must maintain reliability under stress.
WHEN NOT TO USE
For non-user-facing systems with minimal error-handling requirements.
If error-handling requirements are undefined or incomplete.