Calculate target virtual users and ramp-up plans for load tests. Size your performance tests from peak concurrent users with safety factors.
Properly sizing a load test is critical for meaningful performance testing results. Too few virtual users (VUs) won't reveal bottlenecks; too many will overwhelm your system before you can identify specific issues. This calculator determines the optimal number of VUs based on your expected peak concurrent users and a safety factor.
The calculator also generates a ramp-up plan, gradually increasing load to help identify the specific threshold where performance degrades. This approach provides far more actionable data than simply hitting the system with maximum load from the start.
Whether you use k6, Locust, JMeter, or Gatling, this calculator provides the target VU count and ramp-up stages you need to configure meaningful load tests that reveal real-world performance characteristics.
Accurate sizing also feeds capacity planning and performance budgeting, helping teams align infrastructure resources with real demand and growth projections rather than assumptions about system behavior.
Improperly sized load tests waste time and resources. This calculator provides a structured approach to deriving VU counts and ramp-up schedules from production data, so your performance tests generate actionable insights, and the resulting metrics support incident postmortems, architecture reviews, and roadmap discussions with engineering leadership and product teams.
Target VUs = Peak Concurrent Users × Safety Factor. Ramp-up stages: 25% → 50% → 75% → 100% of target VUs over the ramp-up period.
Result: 1,500 target VUs with 4-stage ramp-up
With 1,000 peak concurrent users and 1.5x safety factor, the target is 1,500 VUs. The ramp-up plan: Stage 1 (0–2.5 min) = 375 VUs, Stage 2 (2.5–5 min) = 750 VUs, Stage 3 (5–7.5 min) = 1,125 VUs, Stage 4 (7.5–10 min) = 1,500 VUs.
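The formula and worked example above can be sketched in a few lines of Python. The function name and parameters are illustrative, not from any load-testing library:

```python
def size_load_test(peak_concurrent, safety_factor, ramp_minutes, stages=4):
    """Return the target VU count and a list of (start_min, end_min, vus) stages."""
    target_vus = round(peak_concurrent * safety_factor)
    stage_len = ramp_minutes / stages
    plan = []
    for i in range(1, stages + 1):
        # Each stage targets an equal fraction of the total: 25% -> 50% -> 75% -> 100%
        vus = round(target_vus * i / stages)
        plan.append((stage_len * (i - 1), stage_len * i, vus))
    return target_vus, plan

target, plan = size_load_test(peak_concurrent=1000, safety_factor=1.5, ramp_minutes=10)
print(target)  # 1500
print(plan)    # [(0.0, 2.5, 375), (2.5, 5.0, 750), (5.0, 7.5, 1125), (7.5, 10.0, 1500)]
```

The returned stages map directly onto the `stages` array of a k6 `ramping-vus` scenario or the step levels of a JMeter thread group.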
A load test with too few virtual users confirms what you already know: the system works under light load. A test with too many VUs from the start overwhelms the system before you can identify which component fails first. Proper sizing with gradual ramp-up reveals the exact load level where degradation begins.
Linear ramp-up increases VUs at a constant rate. Step ramp-up holds VU count steady at each level before increasing. Step ramp-up provides clearer data at each load level but takes longer. Choose based on what data you need from the test.
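The difference between the two ramp shapes can be made concrete with a short sketch that emits the VU count at each scheduler tick (function names and the tick interval are assumptions for illustration):

```python
def linear_ramp(target_vus, ramp_seconds, tick_seconds=30):
    """VU count at each tick for a constant-rate (linear) ramp."""
    ticks = ramp_seconds // tick_seconds
    return [round(target_vus * t / ticks) for t in range(1, ticks + 1)]

def step_ramp(target_vus, steps, hold_ticks):
    """Hold each load level for hold_ticks ticks before stepping up."""
    schedule = []
    for s in range(1, steps + 1):
        schedule += [round(target_vus * s / steps)] * hold_ticks
    return schedule

print(linear_ramp(1500, 300))  # [150, 300, 450, ..., 1500] — one increment per 30 s
print(step_ramp(1500, 4, 3))   # 375 held 3 ticks, then 750, 1125, 1500
```

The step schedule is longer for the same number of levels, which is exactly the trade-off described above: steadier measurements per level at the cost of test duration.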
VU count alone does not determine test realism. Each VU should simulate realistic user behavior: variable think times (3–10 seconds between actions), natural navigation patterns, and a mix of user journeys. Unrealistic scripts produce unreliable results.
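A minimal sketch of what one realistic VU iteration might look like, with variable think times and a weighted mix of journeys. The journey names, weights, and think-time range here are hypothetical examples, not values from the text:

```python
import random

# Hypothetical journey mix: 60% browse, 30% search, 10% checkout.
JOURNEYS = [("browse", 0.6), ("search", 0.3), ("checkout", 0.1)]

def pick_journey(rng):
    """Weighted random choice over the journey mix."""
    r, acc = rng.random(), 0.0
    for name, weight in JOURNEYS:
        acc += weight
        if r < acc:
            return name
    return JOURNEYS[-1][0]

def think_time(rng, low=3.0, high=10.0):
    """Uniform think time in seconds between user actions (3-10 s per the text)."""
    return rng.uniform(low, high)

rng = random.Random()
print(pick_journey(rng), round(think_time(rng), 1))
```

In Locust the same idea is expressed declaratively with `wait_time = between(3, 10)` and `@task(weight)` decorators; in k6 you would call `sleep()` between requests.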
Look for the "knee" in the performance curve: the load level where response times start increasing exponentially or error rates begin climbing. This is your system's practical capacity limit and the data point that drives infrastructure decisions.
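Detecting the knee can be automated with a simple heuristic: flag the first load level whose latency jumps by more than some ratio over the previous level. The function, threshold, and sample numbers below are illustrative assumptions:

```python
def find_knee(levels, p95_ms, jump_ratio=1.5):
    """Return the first load level where p95 latency exceeds the previous
    level's p95 by more than jump_ratio; None if no knee is found.
    levels and p95_ms are parallel lists, one entry per ramp stage."""
    for prev, cur, load in zip(p95_ms, p95_ms[1:], levels[1:]):
        if cur > prev * jump_ratio:
            return load
    return None

loads = [375, 750, 1125, 1500]
p95 = [120, 135, 150, 480]     # ms; latency degrades sharply at the last stage
print(find_knee(loads, p95))   # 1500
```

Real analyses usually also check error rate and throughput saturation at the same level before declaring it the capacity limit.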
1.5x is standard for load tests validating capacity. 2x is appropriate for stress tests finding breaking points. 3x+ is used for extreme stress tests or preparing for known traffic spikes like product launches.
Typically 5–15 minutes for most tests. Shorter ramp-ups (2–5 min) may miss gradual degradation. Longer ramp-ups (15–30 min) provide more data at each load level but extend test duration. Match ramp-up to your real-world traffic growth rate.
A virtual user (VU) simulates one active user session. Each VU executes a scripted scenario with realistic think times and actions. VUs maintain session state (cookies, tokens) just like real users.
Check your analytics platform for real-time user metrics, query load balancer connection counts, or estimate from total sessions using the Concurrent Users Calculator. Use the highest value from the past month as your peak.
Load testing in production gives the most accurate results but carries risk. Many teams test in staging with production-like data and infrastructure instead. If testing in production, use canary testing, run during low-traffic periods, and have circuit breakers in place.
Track response times (p50, p95, p99), error rates, throughput (RPS), and resource utilization (CPU, memory, I/O, network). Look for inflection points where performance degrades as load increases. These points indicate bottlenecks.
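As a rough sketch of how those percentile metrics are derived from raw samples, here is a nearest-rank percentile over synthetic latencies; real tools like k6 and JMeter compute these for you, and the sample data below is made up for illustration:

```python
def percentile(samples, p):
    """Nearest-rank percentile: the value at rank ceil(p% of n) in sorted order."""
    s = sorted(samples)
    k = max(0, min(len(s) - 1, round(p / 100 * len(s)) - 1))
    return s[k]

# Synthetic run: 90 fast responses, 9 slow ones, 1 outlier (all in ms).
latencies_ms = [100] * 90 + [300] * 9 + [900]
print(percentile(latencies_ms, 50))  # 100
print(percentile(latencies_ms, 95))  # 300
print(percentile(latencies_ms, 99))  # 300
```

Note how p50 hides the slow tail entirely; this is why the text recommends tracking p95 and p99 alongside it when looking for inflection points.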