Writing Test Cases
Writing Test Cases for Load and Performance Testing
This prompt helps engineering and QA teams create test cases for evaluating the performance of applications under different levels of load. It focuses on identifying performance bottlenecks, ensuring stability, and validating the system's behavior under peak and sustained conditions.
Responsible:
Engineering/IT
Accountable, Informed or Consulted:
Engineering, QA
THE PREP
Effective prompts are tailored with detailed, relevant information and supported by uploaded documents that provide context. The prompt acts as a framework to guide the response; specificity and customization produce the most accurate and helpful results. Use these prep tips to get the most out of this prompt:
Define expected traffic patterns and peak usage scenarios (a minimal sketch follows this list).
Set up monitoring tools to measure system performance metrics during tests.
Configure a testing environment that closely mirrors production settings.
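For the first tip, it can help to capture the expected traffic patterns as data rather than prose, so every test script and reviewer starts from the same assumptions. A minimal Python sketch; the scenario names and figures are hypothetical placeholders, not measured values:

```python
# Hypothetical sketch: expected traffic patterns expressed as structured data
# that load-test scripts can be parameterized from. All values are placeholders.
from dataclasses import dataclass

@dataclass
class LoadScenario:
    name: str
    concurrent_users: int   # peak simultaneous users expected
    ramp_up_seconds: int    # time to ramp to peak
    duration_seconds: int   # how long to hold the load

SCENARIOS = [
    LoadScenario("normal_weekday", concurrent_users=200, ramp_up_seconds=120, duration_seconds=1800),
    LoadScenario("campaign_peak", concurrent_users=2000, ramp_up_seconds=300, duration_seconds=3600),
    LoadScenario("overnight_batch_window", concurrent_users=50, ramp_up_seconds=60, duration_seconds=14400),
]
```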
THE PROMPT
Help create detailed test cases to evaluate the load and performance of [specific application or feature]. Focus on:
Baseline Performance: Recommending initial tests, such as ‘Measure response times, throughput, and resource usage under normal operating conditions to establish a baseline.’
Stress Testing: Suggesting high-load scenarios, like ‘Simulate peak user loads or data input volumes to identify the point at which performance begins to degrade.’
Scalability Checks: Including validation steps, such as ‘Test the system’s ability to scale resources dynamically in response to increased load using horizontal or vertical scaling.’
Recovery Testing: Proposing resilience assessments, such as ‘Evaluate the system’s ability to recover gracefully after a crash or high-load event without data loss.’
Long-Term Stability: Recommending endurance testing, such as ‘Simulate sustained usage over extended periods to detect memory leaks, resource exhaustion, or gradual slowdowns.’
Provide a structured set of performance test cases that help identify bottlenecks and ensure the system meets performance requirements under all conditions. If additional details about expected loads or infrastructure are needed, ask clarifying questions to refine the test cases.
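To make the expected output more concrete, here is a minimal sketch of what an automated baseline test case might look like in Locust (one of the tools named later in this guide). The host, endpoints, task weights, and user counts are hypothetical placeholders; only the Locust constructs themselves are real.

```python
# Minimal Locust sketch of a baseline-performance test case.
# Endpoints, host, and load figures are hypothetical placeholders.
from locust import HttpUser, task, between

class BaselineUser(HttpUser):
    # Pause 1-3 seconds between requests to approximate normal user pacing.
    wait_time = between(1, 3)

    @task(3)
    def browse_catalog(self):
        self.client.get("/api/products")  # hypothetical read-heavy endpoint

    @task(1)
    def view_product(self):
        self.client.get("/api/products/42")  # hypothetical single-item lookup
```

Run headless at the expected normal load to establish the baseline, for example `locust -f baseline.py --headless -u 50 -r 5 -t 10m --host https://staging.example.com` (the host and numbers are illustrative); raising the user count in later runs turns the same script into a stress test.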
Bonus Add-On Prompts
Propose strategies for automating load testing using tools like JMeter or Gatling.
Suggest methods for testing performance across different environments, such as staging and production.
Highlight techniques for incorporating performance metrics into CI/CD pipelines.
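For the last add-on, a common pattern is a small gate script that runs after the load test and fails the build when agreed budgets are exceeded. A sketch, assuming a hypothetical JSON summary file; the file format, keys, and thresholds are assumptions to adapt to whatever your load tool actually emits:

```python
# Hypothetical CI gate: fail the pipeline when load-test results exceed budgets.
# The summary-file format and thresholds are assumptions, not the standard
# output of any particular tool.
import json
import sys

P95_BUDGET_MS = 500       # assumed latency budget
ERROR_RATE_BUDGET = 0.01  # assumed acceptable failure ratio

def main(path: str) -> int:
    with open(path) as f:
        summary = json.load(f)  # e.g. {"p95_ms": 420, "error_rate": 0.002}
    failures = []
    if summary["p95_ms"] > P95_BUDGET_MS:
        failures.append(f"p95 {summary['p95_ms']}ms exceeds {P95_BUDGET_MS}ms budget")
    if summary["error_rate"] > ERROR_RATE_BUDGET:
        failures.append(f"error rate {summary['error_rate']:.2%} exceeds budget")
    for msg in failures:
        print(f"PERFORMANCE GATE FAILED: {msg}")
    return 1 if failures else 0

if __name__ == "__main__":
    sys.exit(main(sys.argv[1]))
```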
Use AI responsibly by verifying its outputs, as it may occasionally generate inaccurate or incomplete information. Treat AI as a tool to support your decision-making, ensuring human oversight and professional judgment for critical or sensitive use cases.
SUGGESTIONS TO IMPROVE
Focus on performance testing for specific subsystems, such as databases or APIs.
Include tips for testing geographic load balancing with users in different regions.
Propose ways to simulate user behavior more realistically in load tests.
Highlight tools like LoadRunner, Apache JMeter, or Locust for automating tests.
Add suggestions for documenting test results to track performance trends over time.
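For the last suggestion, results are easier to trend if every run appends one summary row to a shared file. A minimal sketch; the column names and the record_run helper are hypothetical:

```python
# Hypothetical sketch: append one summary row per test run to a CSV so
# performance trends can be tracked over time. Column names are assumptions.
import csv
from datetime import datetime, timezone
from pathlib import Path

TREND_FILE = Path("performance_trends.csv")
FIELDS = ["timestamp", "scenario", "p95_ms", "throughput_rps", "error_rate"]

def record_run(scenario: str, p95_ms: float, throughput_rps: float, error_rate: float) -> None:
    new_file = not TREND_FILE.exists()
    with TREND_FILE.open("a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if new_file:
            writer.writeheader()
        writer.writerow({
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "scenario": scenario,
            "p95_ms": p95_ms,
            "throughput_rps": throughput_rps,
            "error_rate": error_rate,
        })

# Example: record_run("normal_weekday", p95_ms=420, throughput_rps=180, error_rate=0.002)
```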
WHEN TO USE
To validate system performance before large-scale deployments or feature releases.
When troubleshooting slowdowns or failures during high-traffic events.
During regular performance reviews to ensure system reliability and scalability.
WHEN NOT TO USE
For applications with minimal load or performance requirements.
If testing environments lack sufficient resources to simulate expected loads.