Estimate Ansible playbook execution time based on number of hosts, tasks, SSH connection overhead, and parallelism.
Ansible playbook execution time depends on several factors: the number of target hosts, the number of tasks, SSH connection overhead per host, task execution time, and the forks (parallelism) setting. For large fleets of hundreds or thousands of hosts, playbook execution can take minutes to hours.
This calculator estimates playbook runtime based on your infrastructure parameters. It considers Ansible's execution model: tasks execute sequentially within each play, while hosts are processed in parallel batches determined by the forks setting.
Understanding execution time helps plan maintenance windows, set CI/CD timeouts, and determine when to optimize with strategies like free strategy, pipelining, or mitogen acceleration.
An accurate runtime estimate is a practical input to capacity planning: it lets teams size maintenance windows, set realistic CI/CD timeouts, and judge when optimization work is worthwhile, so scheduling decisions rest on data rather than assumptions about how long a run will take.
Long playbook runs block maintenance windows and CI/CD pipelines. This estimator helps predict execution time and identify when to increase forks, enable pipelining, or split playbooks. Comparing estimates against actual run times over successive releases also gives you a baseline for catching performance regressions before they overrun a window or a pipeline timeout.
Batches = ceil(hosts / forks)
Per-batch time = (tasks × avg_task_time) + ssh_overhead
Total time = batches × per-batch time
Result: ~310 seconds (5.2 minutes)
Batches: ceil(50 / 10) = 5 batches. Per-batch: (20 × 3s) + 2s = 62 seconds. Total: 5 × 62 = 310 seconds (5.2 minutes).
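The batch model above can be sketched as a small Python function; the parameter names are illustrative, not part of any Ansible API.

```python
import math

def estimate_runtime(hosts, tasks, avg_task_time, ssh_overhead, forks):
    """Estimate total playbook runtime in seconds using the batch model."""
    batches = math.ceil(hosts / forks)                 # host batches processed in parallel
    per_batch = tasks * avg_task_time + ssh_overhead   # sequential tasks + connection setup
    return batches * per_batch

# Worked example from the text: 50 hosts, 20 tasks, 3 s/task, 2 s SSH overhead, 10 forks
print(estimate_runtime(hosts=50, tasks=20, avg_task_time=3, ssh_overhead=2, forks=10))  # → 310
```

Varying `forks` in this function shows why raising parallelism helps: at 50 forks the same fleet needs only one batch, so the estimate drops to 62 seconds.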
Ansible processes hosts in batches (determined by forks). Within each batch, tasks run sequentially per play. Total time is therefore the batch count × per-batch time, where per-batch time is the task count × average task time plus connection overhead. Understanding this model is key to optimization.
From highest to lowest impact: (1) Enable pipelining to reduce SSH connections, (2) Increase forks to process more hosts in parallel, (3) Use mitogen for 2–7x speedup, (4) Enable fact caching, (5) Use free strategy for heterogeneous fleets, (6) Optimize individual tasks.
For 1,000+ hosts, consider Ansible Tower/AWX for job distribution, or use pull-based tools (ansible-pull) for self-service configuration. At this scale, execution time becomes a primary constraint and creative batching strategies are essential.
Ansible defaults to 5 forks, meaning it processes 5 hosts in parallel. This is conservative for most use cases. Increase to 20–50 for large fleets. Set it in ansible.cfg or with the -f flag. Higher values use more local CPU and memory.
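Either setting works; a config-file sketch (values are illustrative, tune them for your controller's CPU and memory):

```ini
# ansible.cfg — raise the default fork count from 5
[defaults]
forks = 25
```

The equivalent one-off override on the command line is `ansible-playbook -f 25 site.yml`.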
By default, Ansible opens a new SSH connection for each task on each host. With 20 tasks and 50 hosts, that's 1,000 SSH connections, each with handshake overhead of 0.5–2 seconds. Enable pipelining (together with OpenSSH ControlPersist) so tasks execute over a persistent connection instead of renegotiating one each time.
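A minimal configuration for this, using Ansible's standard `ssh_connection` settings (the 60s ControlPersist window is an illustrative value):

```ini
# ansible.cfg
[ssh_connection]
pipelining = True
# Keep the master SSH connection open between tasks via OpenSSH ControlPersist
ssh_args = -o ControlMaster=auto -o ControlPersist=60s
```

Note that pipelining requires disabling `requiretty` in sudoers on managed hosts if it is set.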
The default 'linear' strategy waits for all hosts to complete a task before moving to the next. The 'free' strategy lets each host proceed independently, so fast hosts don't wait for slow ones. This can reduce total time by 20–40% for heterogeneous fleets.
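The strategy is set per play; a minimal sketch (host group and task are placeholders):

```yaml
# site.yml — let each host proceed through the play at its own pace
- hosts: webservers
  strategy: free
  tasks:
    - name: Update package cache
      ansible.builtin.apt:
        update_cache: true
```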
Mitogen replaces Ansible's default SSH+shell execution with a persistent Python connection. This eliminates per-task SSH overhead and reduces data transfer. Typical speedups are 2–7x. It's a drop-in replacement with minimal configuration.
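Enabling Mitogen is a two-line config change; the plugin path below is illustrative and should point at your actual Mitogen checkout:

```ini
# ansible.cfg — path is illustrative
[defaults]
strategy_plugins = /path/to/mitogen/ansible_mitogen/plugins/strategy
strategy = mitogen_linear
```

Mitogen also ships `mitogen_free` if you want the free-strategy behavior on top of the persistent-connection speedup.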
Run the playbook on a small subset (5–10 hosts) with per-task profiling enabled: set ANSIBLE_CALLBACK_WHITELIST=profile_tasks (renamed ANSIBLE_CALLBACKS_ENABLED in ansible-core 2.11+). This reports the duration of each task; average the results for your per-task estimate. Note that idempotent tasks that find the host already configured complete much faster than first runs.
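A sample invocation, assuming an inventory group named `webservers` and a playbook called `site.yml` (both hypothetical):

```shell
# Profile per-task timing on the first 5 hosts of a group
ANSIBLE_CALLBACK_WHITELIST=profile_tasks \
  ansible-playbook -i inventory site.yml --limit 'webservers[0:5]'
```

The profile_tasks callback prints a sorted summary of task durations at the end of the run.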
Split when: (1) total runtime exceeds your maintenance window, (2) different host groups need different tasks, (3) some tasks are idempotent and can run independently, (4) you want to parallelize across CI/CD jobs. After splitting, compare estimated and actual run times to confirm the change delivered the expected savings.