Performance Test Execution

What is Performance Test Execution?

The term “performance test execution” refers to the process of using a performance testing tool to run the tests specific to performance testing, such as load tests, soak tests, stress tests, spike tests, etc. All the tests that must be run within the performance testing window are described in detail in the performance test plan. This phase essentially consists of two sub-phases:

  • Executing the planned performance tests.
  • Analysing the results to evaluate the test outcome and create an interim test report.

Purpose:

The following tasks must be accomplished during the Performance Test execution phase:

  • Execute the recorded and approved performance tests.
  • Examine the performance test outcome.
  • Check the outcome against the specified NFRs.
  • Create an interim performance test report.
  • Based on the interim test results, decide whether to proceed with or repeat the test cycle.

Accountability:

A performance test analyst or engineer runs the tests according to the testing schedule. A performance test lead or manager is responsible for reviewing the test results and planning the next steps based on them. A performance test analyst or engineer can also analyse the results if he has enough experience to understand the finer points of performance testing. In that case, it is the duty of the performance test lead or manager to review the report before it is shared with the project team.

Approach for Test Execution:

Pre-execution activities:

There are a few prerequisites that a performance tester should fulfil before beginning the test execution. He must be mindful of the following checkpoints (a small automation sketch for some of them follows the list):

  1. Verify all the performance test scripts locally.
  2. Verify each test scenario.
  3. Verify all of the test script’s external file paths. The file path must correspond to the location of the file at the load generator.
  4. Verify the amount of free disk space on the controller and the load generators.
  5. Restart all of the load generators and the controllers, if possible.
  6. All performance tests should be run against the application’s latest build.
  7. To ensure that the application is free of functional bugs, only the build that passed QA (functional testing) should be deployed in the performance test environment.
  8. Before beginning the actual load test, conduct a smoke test to confirm the script on the load generator.
  9. Verify all test data, if possible, to ensure that the test does not fail owing to a problem with the test data.
  10. Before beginning a test, restart the web, application, and database servers.
  11. Delete old server logs as necessary.
  12. Check the environment’s stability with a fast health check and make sure all the necessary monitors are operational.
  13. Verify the run-time settings and parameter files.
  14. If the test is scheduled, make sure the system time and the testing tool time are in sync in order for the test to start at the appropriate time.
  15. A performance tester can click the “Run” button to launch the test after making sure all the checkpoints are accurate.
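
To illustrate, here is a minimal Python sketch that automates two of these checkpoints: verifying free disk space and the presence of external test data files. The file names and the threshold are hypothetical placeholders, not part of any specific tool:

    # pre_flight_check.py - a minimal sketch of automated pre-execution checks.
    # All paths and thresholds are hypothetical; adapt them to your environment.
    import os
    import shutil

    DATA_FILES = ["users.csv", "accounts.csv"]   # test data referenced by scripts (hypothetical)
    MIN_FREE_GB = 10                             # minimum free disk space on this machine

    def check_disk_space(path: str = ".") -> bool:
        """Verify the controller/load generator has enough free disk space."""
        free_gb = shutil.disk_usage(path).free / (1024 ** 3)
        print(f"Free disk space: {free_gb:.1f} GB")
        return free_gb >= MIN_FREE_GB

    def check_data_files() -> bool:
        """Verify every external file referenced by the scripts exists at the expected path."""
        missing = [f for f in DATA_FILES if not os.path.isfile(f)]
        for f in missing:
            print(f"Missing test data file: {f}")
        return not missing

    if __name__ == "__main__":
        ok = all([check_disk_space(), check_data_files()])
        print("Pre-flight checks:", "PASSED" if ok else "FAILED - do not start the test")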

Test Execution:

Once the test has begun, watch the graphs and statistics on the testing tool’s live monitors. A performance tester must track some fundamental metrics such as active users, transactions per second, hits per second, throughput, error count, error type, etc. He must also compare the actual user behaviour against the defined workload. The test should finally reach a proper conclusion, and the results should be collated correctly at the designated location.
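
As a rough illustration, the following Python sketch polls a set of fundamental metrics and warns when they drift from the defined workload. The get_live_stats() function is a hypothetical stand-in for whatever live-data API your testing tool exposes, and the thresholds are illustrative only:

    # live_monitor.py - a sketch of watching fundamental live metrics during a run.
    # get_live_stats() is a hypothetical stand-in for your tool's live-data API.
    import random
    import time

    def get_live_stats() -> dict:
        """Hypothetical feed; replace with your testing tool's live-monitor API."""
        return {
            "active_users": random.randint(90, 110),
            "tps": random.uniform(40, 60),        # transactions per second
            "hits_per_sec": random.uniform(200, 300),
            "error_count": random.randint(0, 5),
        }

    THRESHOLDS = {"tps": 45.0, "error_count": 3}  # illustrative limits only

    def monitor(interval_sec: int = 5, samples: int = 3) -> None:
        for _ in range(samples):
            stats = get_live_stats()
            print(stats)
            if stats["tps"] < THRESHOLDS["tps"]:
                print("WARNING: TPS below the expected workload")
            if stats["error_count"] > THRESHOLDS["error_count"]:
                print("WARNING: error count rising")
            time.sleep(interval_sec)

    if __name__ == "__main__":
        monitor()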

Post-execution activities: After the performance test finishes, a performance tester begins the result analysis, which is the main post-execution activity. To analyse the results of a performance test, he should follow the steps below.

Approach for Test Result Analysis:

Result Analysis is the second sub-phase of the performance test execution stage, as noted at the beginning of this article. It is a crucial and more technical part of performance testing. To identify bottlenecks and their potential solutions at the right layer of the software system (business, middleware, application, infrastructure, network, etc.), performance test result analysis demands real expertise.

Pre-result analysis activities:

A performance tester should remember these crucial points before beginning the analysis of the performance test results (a small sketch illustrating points 2 to 4 follows the list):

  1. The test needs to run for the allotted amount of time.
  2. Remove the ramp-up and ramp-down durations from the analysis window.
  3. Exclude “Think Time” from the graphs and statistics.
  4. Exclude “Pacing” from the graphs and statistics (if the tool counts it).
  5. There shouldn’t be any tool-specific errors, such as memory problems or load generator failure.
  6. During the test, there shouldn’t be any network-related issues, such as network failure, load generators disconnecting from the network, etc.
  7. The testing tool must collate the results from every load generator and produce a consolidated test report.
  8. CPU and memory utilisation percentages should be recorded for the pre-test period (at least 1 hour), the test itself, and the post-test period (at least 1 hour).
  9. Use the right granularity to find the true peaks and valleys.
  10. If there are any unwanted transactions, filter them out.
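
Points 2 to 4 can be illustrated with a small Python sketch that keeps only the steady-state window and strips think time before computing statistics. The sample data, the window boundaries, and the assumption that raw samples still include think time are all hypothetical:

    # trim_results.py - a sketch of excluding ramp-up/ramp-down and think time
    # before computing statistics. Sample data and field layout are hypothetical.
    RAMP_UP_END = 120        # seconds from test start (illustrative)
    RAMP_DOWN_START = 1080

    # (timestamp_sec, response_time_sec including think time, think_time_sec)
    samples = [(30, 2.5, 1.0), (300, 1.8, 1.0), (600, 2.1, 1.0), (1150, 3.9, 1.0)]

    steady_state = [
        (ts, rt - think)                         # strip think time from the measurement
        for ts, rt, think in samples
        if RAMP_UP_END <= ts <= RAMP_DOWN_START  # keep only the steady-state window
    ]

    avg = sum(rt for _, rt in steady_state) / len(steady_state)
    print(f"Steady-state samples: {len(steady_state)}, avg response time: {avg:.2f}s")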

Result Analysis:

Start the analysis with some fundamental metrics (a minimal NFR-comparison sketch follows this list):

Number of Users: The actual steady-state user load should meet the user-load NFR.

Response Time: The actual steady-state response time should meet the response-time NFR. Response time should be measured both for individual transactions and end-to-end; if NFRs are available at both levels, the results should satisfy both.

Transactions per second / Iterations per hour: If either of these metrics is defined, the actual figure should match the stated one.

Throughput: Throughput should be comparable (not identical) across the same set of tests.

Errors: The error count should not exceed the specified error tolerance level.

Passed transaction count: Ideally, the passed count of the first transaction should equal the passed count of the last transaction. If that is not the case, investigate the failed transactions.
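
The comparison against NFRs can be sketched in a few lines of Python. The NFR targets and measured figures below are illustrative only; a nearest-rank percentile is used for the response-time check:

    # nfr_check.py - a sketch comparing measured results against NFRs.
    # The NFR values and result figures are illustrative only.
    def percentile(values: list[float], pct: float) -> float:
        """Nearest-rank percentile of a list of response times."""
        ordered = sorted(values)
        rank = max(1, round(pct / 100 * len(ordered)))
        return ordered[rank - 1]

    response_times = [1.2, 1.5, 1.1, 2.9, 1.4, 1.3, 1.8, 1.6, 1.7, 3.2]  # seconds
    total_txn, failed_txn, duration_sec = 12000, 36, 600

    NFRS = {"rt_90th_sec": 3.0, "tps": 18.0, "error_pct": 1.0}  # hypothetical NFRs

    actual = {
        "rt_90th_sec": percentile(response_times, 90),
        "tps": (total_txn - failed_txn) / duration_sec,
        "error_pct": failed_txn / total_txn * 100,
    }

    for metric, target in NFRS.items():
        # TPS must meet or exceed the target; the other metrics must stay under it.
        ok = actual[metric] >= target if metric == "tps" else actual[metric] <= target
        print(f"{metric}: actual={actual[metric]:.2f}, target={target} -> {'PASS' if ok else 'FAIL'}")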

Analyse the graphs:

  • Set the appropriate level of granularity for each graph (see the granularity sketch after this list).
  • Review each graph carefully and note the important points.
  • Check the graph’s highs and lows.
  • Correlate the different graphs to find the root cause of a problem.
  • If the performance testing tool and the monitoring tool are not integrated, note the time the problem occurred and sync the graphs produced by both tools.
  • Do not extrapolate the outcome from faulty statistical data.
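
The granularity point is worth a small demonstration. In the Python sketch below, a synthetic five-second spike in hits per second disappears when the series is averaged over 60-second buckets but is clearly visible at 5-second granularity:

    # granularity.py - a sketch showing how granularity changes what a graph reveals.
    # The series is synthetic: a short spike at t=32..36 on top of a flat baseline.
    from collections import defaultdict

    series = [(t, 100) for t in range(60)]                       # per-second hit counts
    series = [(t, 400) if 32 <= t <= 36 else (t, h) for t, h in series]

    def bucket_average(data, width_sec):
        """Average the series over fixed-width time buckets (the graph granularity)."""
        buckets = defaultdict(list)
        for t, v in data:
            buckets[t // width_sec].append(v)
        return {b * width_sec: sum(vs) / len(vs) for b, vs in sorted(buckets.items())}

    print("60s granularity:", bucket_average(series, 60))        # spike averaged away
    print("5s peak value:  ", max(bucket_average(series, 5).values()))  # spike visible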

Analyse the other reports:

  • Take a heap dump during the test and inspect the Java heap.
  • Analyse thread dumps to look for blocked or deadlocked threads (a small dump-scanning sketch follows this list).
  • Review the garbage collector’s logs.
  • Analyse the AWR report to identify long-running database queries.
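
As an illustration of the thread-dump check, here is a small Python sketch that scans a jstack-style dump for BLOCKED threads and for the deadlock banner that jstack prints. The file name is a hypothetical placeholder:

    # scan_thread_dump.py - a sketch scanning a jstack-style thread dump for
    # BLOCKED threads and deadlock reports. The file name is hypothetical.
    import re

    def scan_dump(path: str) -> None:
        with open(path, encoding="utf-8", errors="replace") as fh:
            text = fh.read()
        # A thread header line is quoted; its state appears on the following line.
        blocked = re.findall(r'^"([^"]+)".*\n\s*java\.lang\.Thread\.State: BLOCKED',
                             text, flags=re.MULTILINE)
        print(f"BLOCKED threads: {len(blocked)}")
        for name in blocked:
            print(f"  - {name}")
        if "Found one Java-level deadlock" in text:
            print("Deadlock detected - review the deadlock section of the dump")

    if __name__ == "__main__":
        scan_dump("thread_dump.txt")  # e.g. output of: jstack <pid> > thread_dump.txt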

Post-result analysis activities:

A performance tester gathers client-side and server-side statistics and then analyses the data, comparing the outcomes against the established NFRs. After each test, he creates an interim test report, which is reviewed by a Performance Test Lead or Manager.

Some key points for reporting:

  • Creating a separate test report for each test is a recommended practice.
  • Create a template for the test report and produce every report from that template (a minimal templating sketch follows this list).
  • Highlight the observations and defects in the test report.
  • Prepare an interim test report if the performance testing tool lacks reporting capabilities (a template link is provided in the post’s deliverables section).
  • Attach all pertinent documents, such as the heap dump analysis report and the AWR report, to the interim test report.
  • Describe each defect and provide the defect ID.
  • State whether the outcome is a Pass or a Fail.
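
To illustrate the template point, here is a minimal Python sketch that produces an interim report from a fixed template, which can help when the testing tool lacks reporting capabilities. All field values, including the defect ID, are illustrative:

    # interim_report.py - a sketch of producing an interim test report from a
    # fixed template. All field values below are illustrative placeholders.
    from string import Template

    TEMPLATE = Template("""Interim Performance Test Report
    Test ID      : $test_id
    Test Type    : $test_type
    Result       : $verdict
    Observations : $observations
    Defects      : $defects
    """)

    report = TEMPLATE.substitute(
        test_id="C1R1Load",              # matches the naming used in the example below
        test_type="Load Test",
        verdict="Fail",
        observations="90th percentile response time breached the NFR at peak load",
        defects="DEF-101 (hypothetical defect ID)",
    )
    print(report)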

If a performance tester finds a performance bottleneck in the results, he raises the problem and sends it to the appropriate team for further root-cause analysis. Root-cause analysis is truly a team effort: performance testers, system administrators, technical specialists, and DBAs all have important roles to play. Both the bottleneck analysis and the test execution process follow a cyclical pattern.

Once the application has been tuned, the same test is run again to verify its performance. If the problem persists, the application is tuned again until it meets the NFRs.

Example:

In the previous two steps, Perftest developed the test scripts and scenarios, and he is now ready to begin the test execution cycle. He verifies all the prerequisites before starting a test. In accordance with the Performance Test Plan, he must run several types of tests on the application, including load, stress, soak, and spike tests. The table below shows the planned cycles, rounds, and test IDs:

  Cycle     | Round     | Test Type   | Test ID
  Cycle 01  | Round 01  | Load Test   | C1R1Load
  Cycle 01  | Round 01  | Stress Test | C1R1Stress
  Cycle 01  | Round 01  | Soak Test   | C1R1Soak
  Cycle 01  | Round 01  | Spike Test  | C1R1Spike
  Cycle 01  | Round 02  | Load Test   | C1R2Load
  Cycle 01  | Round 02  | Stress Test | C1R2Stress
  Cycle 01  | Round 02  | Soak Test   | C1R2Soak
  Cycle 01  | Round 02  | Spike Test  | C1R2Spike
  Cycle 02  | Round 01  | Load Test   | C2R1Load
  Cycle 02  | Round 01  | Stress Test | C2R1Stress
  Cycle 02  | Round 01  | Soak Test   | C2R1Soak
  Cycle 02  | Round 01  | Spike Test  | C2R1Spike
  Cycle 02  | Round 02  | Load Test   | C2R2Load
  Cycle 02  | Round 02  | Stress Test | C2R2Stress
  Cycle 02  | Round 02  | Soak Test   | C2R2Soak
  Cycle 02  | Round 02  | Spike Test  | C2R2Spike
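
The naming convention in the table above (C<cycle>R<round><Type>) is easy to generate programmatically; a tiny Python sketch reproduces it:

    # test_ids.py - a sketch reproducing the Cycle/Round/Type naming convention
    # from the table above (C<cycle>R<round><Type>).
    test_types = ["Load", "Stress", "Soak", "Spike"]

    for cycle in (1, 2):
        for rnd in (1, 2):
            for t in test_types:
                print(f"Cycle {cycle:02d} | Round {rnd:02d} | {t} Test | C{cycle}R{rnd}{t}")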

When each test is complete, he evaluates the application’s performance against the established NFRs. He also analyses the bottlenecks and logs any defects found in the defect management tool.
