Steps to Design Workload Model in Performance Testing

The last post (Performance Test Workload Modelling) covered the fundamentals of workload modelling and the tasks involved in this stage, which is also known as the design of Performance Test Scenarios. Let's use some real-world examples to understand this phase. This article outlines the steps for creating a workload model for performance testing.

The NFRs for the example project are given below:

| NFR ID | Category | Description | Impact On |
|---|---|---|---|
| NFR01 | Application | The solution must be able to support 500 active users: Admin: 4 (2 for seller approval and 2 for product approval), Seller: 50, Buyer: 438, Call Center: 8 | Admin, Seller, Buyer, Call Center |
| NFR02 | Application | The solution must be able to support the future volume of active users, i.e. 2634: Admin: 10, Seller: 100, Buyer: 2500, Call Center: 24 | Admin, Seller, Buyer, Call Center |
| NFR03 | Application | The solution must perform well over a long period of time with an average volume of users, i.e. 304: Admin: 3, Seller: 15, Buyer: 278, Call Center: 8 | Admin, Seller, Buyer, Call Center |
| NFR04 | Application | The solution must be able to support the spike load of buyers and sellers during the sale period: Admin: 3, Seller: 23, Buyer: 834, Call Center: 8 | Admin, Seller, Buyer, Call Center |
| NFR05 | Application | Admins receive an average of 200 requests per hour. | Admin |
| NFR06 | Application | The number of orders: Peak Hour Volume: 1340, Sale Hour Volume: 2830, Future Volume: 7500, Average Volume: 600. Note: 4% of the users cancel their order in every scenario. | Buyer |
| NFR07 | Application | Sellers add an average of 180 products per hour and delete 18 existing products per hour. | Seller |
| NFR08 | Application | The call center employees receive 40 complaints per hour. | Call Center |
| NFR09 | Application | The response time of any page must not exceed 3 seconds (except in the stress test). | Admin, Seller, Buyer, Call Center |
| NFR10 | Application | The error rate of transactions must not exceed 1%. | Admin, Seller, Buyer, Call Center |
| NFR11 | Server | The CPU utilization must not exceed 60%. | Web, App, DB |
| NFR12 | Server | The memory utilization must not exceed 15% (compare pre-test, post-test, and steady-state memory status). | Web, App, DB |
| NFR13 | Server | There must not be any memory leakage. | Web, App, DB |
| NFR15 | Application | Buyers order at an average rate of: Peak Hour Rate: 3.06 products per hour, Sale Hour Rate: 3.39 products per hour, Future Volume Rate: 3 products per hour, Average Volume Rate: 2.15 products per hour. | Buyer |

Steps to Design an Efficient Workload Model in Performance Testing:

The Load Test must simulate a load of 500 users. Hence, as per NFR01, the load is distributed among the scripts as follows:

| Role | User Count | Script Name | User Distribution |
|---|---|---|---|
| Admin | 4 | adm_seller_request | 2 |
|  |  | adm_product_request | 2 |
| Seller | 50 | slr_add_product | 45 |
|  |  | slr_delete_product | 5 |
| Buyer | 438 | byr_buy_product | 420 |
|  |  | byr_cancel_order | 18 |
| Call Center | 8 | cce_register_complain | 8 |

Calculation of User distribution:

  • According to NFR01, two administrators are in charge of product approval and two are in charge of seller approval, so the admin user count is split 50/50: two users run the adm_seller_request script during the test and the other two run the adm_product_request script.
  • According to NFR07, sellers delete products at 10% of the rate at which they add them (18 deletions against 180 additions per hour). Hence 5 sellers (10% of 50) run the slr_delete_product script, while the other 45 sellers run the slr_add_product script to add new products.
  • According to NFR06, 4% of all buyers cancel their orders. Therefore 18 buyers (4% of 438, rounded up) run the byr_cancel_order script to cancel their purchases, and the remaining 420 buyers run the byr_buy_product script to place orders.
  • There is no further split in the call center scenario since all 8 call center employees perform the same task. The sketch below reproduces these calculations.
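
To make the arithmetic explicit, here is a minimal Python sketch that reproduces the user-distribution figures above. The percentage splits come straight from the NFRs; the rounding rule for buyers is an assumption inferred from the table.

```python
import math

# Active users per role (NFR01)
TOTAL_USERS = {"Admin": 4, "Seller": 50, "Buyer": 438, "Call Center": 8}

# Admin: 50/50 split between seller approval and product approval (NFR01)
adm_seller_request = TOTAL_USERS["Admin"] // 2
adm_product_request = TOTAL_USERS["Admin"] - adm_seller_request

# Seller: deletions run at 10% of the addition rate (18 of 180 per hour, NFR07)
slr_delete_product = round(TOTAL_USERS["Seller"] * 18 / 180)
slr_add_product = TOTAL_USERS["Seller"] - slr_delete_product

# Buyer: 4% of buyers cancel their order (NFR06), rounded up
byr_cancel_order = math.ceil(TOTAL_USERS["Buyer"] * 0.04)
byr_buy_product = TOTAL_USERS["Buyer"] - byr_cancel_order

# Call Center: all users run the same script (NFR08)
cce_register_complain = TOTAL_USERS["Call Center"]

print(adm_seller_request, adm_product_request)  # 2 2
print(slr_add_product, slr_delete_product)      # 45 5
print(byr_buy_product, byr_cancel_order)        # 420 18
print(cce_register_complain)                    # 8
```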

Get the Iterations per Second metric:

The next step is to obtain the iterations-per-second value for each script. The iteration count is given by the request or order count, which is available in NFR05, NFR06, NFR07, and NFR08. Divide the hourly iteration count by 3600 to obtain the iterations-per-second metric (this applies only when the iteration count is expressed per hour). A small sketch after the table reproduces the numbers.

| Role | User Count | Script Name | User Distribution | Requests/Orders per Hour (Iterations per Hour) | Iterations per Second = Iterations per Hour / 3600 |
|---|---|---|---|---|---|
| Admin | 4 | adm_seller_request | 2 | 100 | 0.028 |
|  |  | adm_product_request | 2 | 100 | 0.028 |
| Seller | 50 | slr_add_product | 45 | 180 | 0.05 |
|  |  | slr_delete_product | 5 | 18 | 0.005 |
| Buyer | 438 | byr_buy_product | 420 | 1340 | 0.372 |
|  |  | byr_cancel_order | 18 | 54 | 0.015 |
| Call Center | 8 | cce_register_complain | 8 | 40 | 0.011 |
| Total | 500 |  | 500 | 1832 | 0.51 |
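
As a quick check, the per-second rates in the table can be derived directly; here is a minimal sketch with the hourly figures taken from the NFRs:

```python
# Hourly iteration counts per script (NFR05-NFR08)
iterations_per_hour = {
    "adm_seller_request": 100,    # 200 admin requests/hour split over 2 scripts (NFR05)
    "adm_product_request": 100,
    "slr_add_product": 180,       # NFR07
    "slr_delete_product": 18,     # NFR07
    "byr_buy_product": 1340,      # peak-hour order volume (NFR06)
    "byr_cancel_order": 54,       # 4% of 1340 orders are cancelled (NFR06)
    "cce_register_complain": 40,  # NFR08
}

for script, per_hour in iterations_per_hour.items():
    print(f"{script}: {per_hour / 3600:.3f} iterations/second")

total = sum(iterations_per_hour.values())
print(f"Total: {total} iterations/hour = {total / 3600:.2f} iterations/second")
```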

Get the number of transactions metric for each script:

| Role | User Count | Script Name | User Distribution | Requests/Orders per Hour | No. of Transactions in Each Iteration |
|---|---|---|---|---|---|
| Admin | 4 | adm_seller_request | 2 | 100 | 7 |
|  |  | adm_product_request | 2 | 100 | 7 |
| Seller | 50 | slr_add_product | 45 | 180 | 7 |
|  |  | slr_delete_product | 5 | 18 | 7 |
| Buyer | 438 | byr_buy_product | 420 | 1340 | 8 |
|  |  | byr_cancel_order | 18 | 54 | 6 |
| Call Center | 8 | cce_register_complain | 8 | 40 | 7 |
| Total | 500 |  | 500 | 1832 |  |
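
To illustrate what "transactions in each iteration" means, here is a purely hypothetical breakdown of the byr_buy_product script; only the transaction count (8) comes from the table, and the actual steps depend on the application under test.

```python
# Hypothetical transaction breakdown of the byr_buy_product script.
byr_buy_product_transactions = [
    "T01_Launch",
    "T02_Login",
    "T03_Search_Product",
    "T04_View_Product",
    "T05_Add_To_Cart",
    "T06_Checkout",
    "T07_Confirm_Order",
    "T08_Logout",
]
assert len(byr_buy_product_transactions) == 8  # matches the table above
```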

Get the End-to-End Response Time metric:

The next metric to determine is the end-to-end response time. In this context, end-to-end response time refers to the time needed to complete one full iteration (loop) of a script. Initially this metric is an estimate rather than an actual measurement, so the performance tester must run each script with a single user to capture the real response time. The response time recorded while building the script may not match the response time observed during the test, and if so, it would throw off the calculated iteration counts. To avoid such a situation, a performance tester must devise a sanity test to record the real response time; executing the scripts without accounting for response time increases the likelihood of over- or under-hitting the server.

The main purpose of the sanity test is simply to obtain the end-to-end response time, so it can be run without think time or pacing.

| Role | Script Name | End-to-End Response Time (in seconds) |
|---|---|---|
| Admin | adm_seller_request | 15 |
|  | adm_product_request | 15 |
| Seller | slr_add_product | 18 |
|  | slr_delete_product | 10 |
| Buyer | byr_buy_product | 51 |
|  | byr_cancel_order | 34 |
| Call Center | cce_register_complain | 29 |
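
For illustration, a single-user sanity run can be timed as below; this is a minimal sketch in which run_transaction is a hypothetical stand-in for replaying one transaction of a script against the application under test.

```python
import time

def run_transaction(name: str) -> None:
    """Hypothetical placeholder: replay one transaction of the script."""
    ...

def measure_end_to_end(transactions: list[str]) -> float:
    """Run one full iteration with a single user, without think time or
    pacing, and return the end-to-end response time in seconds."""
    start = time.perf_counter()
    for name in transactions:
        run_transaction(name)
    return time.perf_counter() - start

# Example: time one iteration of a 7-transaction script
elapsed = measure_end_to_end([f"T{i:02d}" for i in range(1, 8)])
print(f"End-to-end response time: {elapsed:.2f} s")
```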

Calculation of the Think Time metric:

PerfMate executed the scripts and measured the end-to-end response time for each test scenario. Because the scripts ran without think time, they did not represent real user behaviour. To make the scenario realistic, PerfMate must add think time between each step (transaction). He can estimate it from the information on, and the actions users take on, a given page: the time between two pages represents a user pausing on the previous page to read the content, fill out a form, wait for the whole page to load, and so on. Depending on the capabilities of the performance testing tool, the think time can be random or fixed.

With a fixed think time of 3 seconds between transactions, the total think time will be:

| Role | Script Name | No. of Transactions | Think Time (in seconds) | Total Think Time = Think Time × (No. of Transactions − 1) |
|---|---|---|---|---|
| Admin | adm_seller_request | 7 | 3 | 18 |
|  | adm_product_request | 7 | 3 | 18 |
| Seller | slr_add_product | 7 | 3 | 18 |
|  | slr_delete_product | 7 | 3 | 18 |
| Buyer | byr_buy_product | 8 | 3 | 21 |
|  | byr_cancel_order | 6 | 3 | 15 |
| Call Center | cce_register_complain | 7 | 3 | 18 |

Total Think Time = Individual Think Time × (No. of Transactions − 1)

If the think time values differ between transactions, simply sum all the individual think time values to get the total think time; both cases are shown in the sketch below.

Total Think Time = (Think Time 1) + (Think Time 2) + (Think Time 3) + … + (Think Time N)
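
A minimal sketch of both formulas, using the 3-second fixed think time from the table above:

```python
def total_think_time_fixed(think_time: float, num_transactions: int) -> float:
    """Fixed think time between consecutive transactions:
    Total = think_time * (num_transactions - 1)."""
    return think_time * (num_transactions - 1)

def total_think_time_varied(think_times: list[float]) -> float:
    """A different think time after each transaction: just sum them."""
    return sum(think_times)

print(total_think_time_fixed(3, 8))           # byr_buy_product -> 21
print(total_think_time_varied([2, 4, 3, 5]))  # example values -> 14
```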

Calculation of the Pacing metric:

Pacing is the wait time that fills out each iteration so that the script achieves its target iteration rate:

Pacing = (No. of Users / Iterations per Second) − (End-to-End Response Time + Total Think Time)

| Role | Script Name | No. of Users | Iterations per Second | End-to-End Response Time (s) | Total Think Time (s) | Pacing (s) |
|---|---|---|---|---|---|---|
| Admin | adm_seller_request | 2 | 0.028 | 15 | 18 | 38.42 |
|  | adm_product_request | 2 | 0.028 | 15 | 18 | 38.42 |
| Seller | slr_add_product | 45 | 0.05 | 18 | 18 | 864 |
|  | slr_delete_product | 5 | 0.005 | 10 | 18 | 972 |
| Buyer | byr_buy_product | 420 | 0.372 | 51 | 21 | 1057 |
|  | byr_cancel_order | 18 | 0.015 | 34 | 15 | 1151 |
| Call Center | cce_register_complain | 8 | 0.011 | 29 | 18 | 680.27 |
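
The pacing column can be reproduced with the formula above; here is a minimal sketch with the figures taken from the earlier tables:

```python
def pacing(users: int, iter_per_second: float, e2e_rt: float, total_tt: float) -> float:
    """Pacing = (users / iterations-per-second) - (end-to-end RT + total think time)."""
    return users / iter_per_second - (e2e_rt + total_tt)

# (users, iterations/second, end-to-end RT in s, total think time in s)
scripts = {
    "adm_seller_request":    (2,   0.028, 15, 18),
    "adm_product_request":   (2,   0.028, 15, 18),
    "slr_add_product":       (45,  0.05,  18, 18),
    "slr_delete_product":    (5,   0.005, 10, 18),
    "byr_buy_product":       (420, 0.372, 51, 21),
    "byr_cancel_order":      (18,  0.015, 34, 15),
    "cce_register_complain": (8,   0.011, 29, 18),
}

# Minor rounding differences against the table (e.g. 38.43 vs 38.42)
# come from the rounded iterations-per-second inputs.
for name, args in scripts.items():
    print(f"{name}: pacing = {pacing(*args):.2f} s")
```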

Decide Steady State, Ramp-up & Ramp-down Time:

According to PerfProject's Performance Test Plan, the steady-state period of a load test lasts one hour with all users ramped up. The ramp-up rate is determined by the number of users in a given scenario. Accordingly, PerfMate chooses the following initial delay, ramp-up, steady-state, and ramp-down times. The one thing to bear in mind is that, in a scenario involving several scripts, every script should enter its steady state at the same time, so that the required load is applied simultaneously and the application's true performance is assessed. The sketch after the table below checks this for the chosen schedule.

| Script Name | No. of Users | Initial Delay (in minutes) | Ramp-up | Steady State (in minutes) | Ramp-down |
|---|---|---|---|---|---|
| adm_seller_request | 2 | 9 | 1 user per minute | 60 | 1 user per minute |
| adm_product_request | 2 | 9 | 1 user per minute | 60 | 1 user per minute |
| slr_add_product | 45 | 6 | 9 users per minute | 60 | 10 users per minute |
| slr_delete_product | 5 | 8 | 2 users per minute | 60 | 1 user per minute |
| byr_buy_product | 420 | 0 | 20 users per 30 sec | 60 | 30 users per 10 sec |
| byr_cancel_order | 18 | 6 | 4 users per minute | 60 | 6 users per minute |
| cce_register_complain | 8 | 6 | 2 users per minute | 60 | 4 users per minute |
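
As a quick sanity check, the time at which each script reaches full load (initial delay plus ramp-up duration) can be computed as below; a minimal sketch with the rates taken from the table:

```python
import math

# (users, initial delay in minutes, users per ramp step, step length in minutes)
ramp_plan = {
    "adm_seller_request":    (2,   9, 1,  1.0),
    "adm_product_request":   (2,   9, 1,  1.0),
    "slr_add_product":       (45,  6, 9,  1.0),
    "slr_delete_product":    (5,   8, 2,  1.0),
    "byr_buy_product":       (420, 0, 20, 0.5),  # 20 users per 30 sec
    "byr_cancel_order":      (18,  6, 4,  1.0),
    "cce_register_complain": (8,   6, 2,  1.0),
}

# Every script reaches full load at roughly the same time (about 10-11
# minutes), so all scripts enter the steady state together.
for name, (users, delay, step_users, step_minutes) in ramp_plan.items():
    ramp_duration = math.ceil(users / step_users) * step_minutes
    print(f"{name}: full load at ~{delay + ramp_duration:g} minutes")
```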

PerfProject’s workload model for Load Testing:

PerfProject’s workload model for Stress Testing:

PerfProject’s workload model for Spike Testing:

PerfProject’s workload model for Endurance Testing:
