Performance Test Result Analysis

Compared with scripting and execution, performance test result analysis is the most crucial and technically complex part of performance testing. Completing this phase and wrapping up the testing calls on a tester's real skill, because it involves a number of related activities: graph analysis, metrics verification, bottleneck identification, comparison of results against the specified NFRs, and drawing the test conclusion. A minor error in the analysis, often the result of inexperience, can have serious consequences in a real-world setting. For that reason, a senior performance tester or manager should review the performance test data before the analysis is signed off.

Purpose:

In this phase, performance testers identify bottlenecks and possible fixes, and highlight them in the test report at the relevant level (business, middleware, application, infrastructure, network, etc.).

The next topic is how to begin analyzing performance test results so that a thorough test report can be written.

To keep the subject straightforward and easy to comprehend, it is divided into three levels:

  1. Basic level: describes the common graphs used in performance testing.
  2. Intermediate level: shows how to analyze client-side, server-side, and network-side graphs.
  3. Advanced level: describes the different approaches to analyzing performance test results.

Before starting performance test result analysis, there are some important points to keep in mind:

  • “Think Time” should be excluded from the response-time graphs and statistics (see the first sketch after this list).
  • If the tool includes it in the measured time, “Pacing” should also be removed from the graphs and statistics.
  • There shouldn’t be any tool-specific errors, such as memory problems or load generator failure.
  • During the test, there should not be any network-related problems, such as a network failure or load generators (LGs) disconnecting from the network.
  • The test must run continuously for the allotted amount of time.
  • At the conclusion of the test, the results should be correctly compiled.
  • The percentage of CPU and memory utilization should be recorded before the test (for at least 1 hour), during the test itself, and after the test (for at least 1 hour); a sampling sketch follows this list.
  • Use the right level of granularity to identify peaks and lows.
  • A graph-merging option in the tool is optional but very useful. Analyzing graphs in separate windows is complicated and time-consuming; merging several graphs onto a single time axis makes it much easier to correlate metrics and trace the underlying cause of a bottleneck (see the last sketch after this list).
  • Do not attempt to extrapolate the outcome based on skewed numbers.
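To illustrate the think-time point, below is a minimal sketch of how raw transaction timings could be cleaned before statistics are computed. It assumes a hypothetical CSV export (results.csv) with per-transaction columns elapsed_ms and think_time_ms; the file name, column names, and layout are assumptions for illustration, not any specific tool's format.

    import pandas as pd

    # Hypothetical raw export: one row per executed transaction.
    # Assumed columns: transaction, elapsed_ms, think_time_ms
    df = pd.read_csv("results.csv")

    # Exclude think time so the statistics reflect pure response time.
    df["response_ms"] = df["elapsed_ms"] - df["think_time_ms"]

    def p90(s):
        return s.quantile(0.90)

    def p95(s):
        return s.quantile(0.95)

    # Summary per transaction: average, 90th/95th percentile, and maximum.
    summary = df.groupby("transaction")["response_ms"].agg(["mean", p90, p95, "max"])
    print(summary.round(2))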
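For the CPU and memory point, a simple sampler such as the following could record utilization before, during, and after the test. This is only a sketch: it assumes the psutil package is available on the monitored server, and the duration, interval, and output file names are placeholders.

    import csv
    import time
    import psutil

    def sample_utilization(duration_s, interval_s, out_file):
        """Record CPU and memory utilization percentages at a fixed interval."""
        with open(out_file, "w", newline="") as f:
            writer = csv.writer(f)
            writer.writerow(["timestamp", "cpu_pct", "mem_pct"])
            end = time.time() + duration_s
            while time.time() < end:
                cpu = psutil.cpu_percent(interval=interval_s)  # blocks for interval_s
                mem = psutil.virtual_memory().percent
                writer.writerow([time.strftime("%H:%M:%S"), cpu, mem])

    # Run once for each window: pre-test (>= 1 hour), the test itself, post-test (>= 1 hour).
    sample_utilization(duration_s=3600, interval_s=5, out_file="pre_test_utilization.csv")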
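Finally, for the graph-merging point, two metrics can be overlaid on a shared time axis even when the tool itself has no merge feature. The CSV exports and column names below are assumptions used only to show the idea of correlating a client-side metric with a server-side metric.

    import pandas as pd
    import matplotlib.pyplot as plt

    # Assumed exports: time series of average response time and server CPU utilization.
    resp = pd.read_csv("response_time.csv", parse_dates=["time"])   # columns: time, avg_resp_s
    cpu = pd.read_csv("cpu_utilization.csv", parse_dates=["time"])  # columns: time, cpu_pct

    fig, ax1 = plt.subplots()
    ax1.plot(resp["time"], resp["avg_resp_s"], color="tab:blue", label="Avg response time (s)")
    ax1.set_ylabel("Response time (s)")

    # Second y-axis so both metrics share one time axis - the "merged" graph.
    ax2 = ax1.twinx()
    ax2.plot(cpu["time"], cpu["cpu_pct"], color="tab:red", label="CPU %")
    ax2.set_ylabel("CPU utilization (%)")

    fig.legend(loc="upper left")
    plt.title("Response time vs. CPU utilization")
    plt.savefig("merged_graph.png")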