Choosing What to Automate: ROI, Stability, and Frequency of Execution

Overview

Automated testing can significantly speed up QA cycles and improve product quality — but not everything should be automated. A strategic approach is essential to determine what is worth automating, based on factors such as:

  • Return on Investment (ROI)
  • Test Stability
  • Frequency of Execution

This ensures your automation efforts deliver maximum value with minimal maintenance.


1. ROI (Return on Investment)

What It Means:

ROI measures how much value or cost savings you gain by automating a test compared to the effort (time, tools, maintenance) spent building and maintaining it.

Consider Automating When:

  • The test is executed frequently
  • Manual execution is time-consuming
  • Defect prevention in this area would reduce rework or production incidents
  • Automation will free up manual testers for exploratory or complex testing

Example Use Cases:

  • Regression tests that take 2+ hours manually and run every sprint
  • Smoke tests for critical user flows (login, payments)
  • API contract validation that breaks often with backend changes
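A back-of-the-envelope ROI estimate can make these trade-offs concrete. The sketch below compares manual effort saved against automation effort spent; the numbers (a 120-minute suite, 16 hours to build, an hour of upkeep per sprint) are illustrative assumptions, not benchmarks.

```python
def automation_roi(manual_minutes: float, runs_per_cycle: int, cycles: int,
                   build_minutes: float, upkeep_minutes_per_cycle: float) -> float:
    """Return ROI as (manual effort saved) / (automation effort spent)."""
    saved = manual_minutes * runs_per_cycle * cycles
    spent = build_minutes + upkeep_minutes_per_cycle * cycles
    return saved / spent

# Hypothetical numbers: a 120-minute regression suite run twice per sprint
# over 10 sprints, vs. 16 hours (960 min) to build plus 1 hour of upkeep
# per sprint.
roi = automation_roi(120, 2, 10, 960, 60)
print(f"ROI: {roi:.2f}x")  # prints "ROI: 1.54x" -- above 1.0, so it pays off
```

Anything above 1.0 means automation pays for itself over the chosen horizon; note how the result flips for a rarely run test with high build cost.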

When NOT to Automate:

  • One-off or rarely executed tests
  • Tests that require significant UI changes between builds (high maintenance)
  • Exploratory or subjective validations (e.g., visual aesthetics, UX)

2. Test Stability

What It Means:

A test is considered stable if it interacts with predictable, consistent functionality and produces deterministic results.

Choose Stable Targets for Automation:

  • Well-defined APIs with stable schemas
  • Core UI flows that don’t change frequently
  • Backend services with minimal third-party dependencies

Avoid Automating:

  • Flaky or unstable features (e.g., under development)
  • Components that are undergoing constant redesign (e.g., frontend A/B tests)
  • Tests that produce frequent false positives (they create alert fatigue)

Pro Tip:

If a test fails often but the feature is stable, consider improving the test script (e.g., wait logic, better selectors) before abandoning automation.
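The "wait logic" mentioned above usually means replacing fixed sleeps with an explicit, polling wait. A minimal, framework-agnostic sketch (the `wait_until` helper and its parameters are my own naming, not a library API):

```python
import time

def wait_until(condition, timeout: float = 5.0, interval: float = 0.1) -> bool:
    """Poll `condition` until it returns True or `timeout` seconds elapse.

    Explicit waits like this replace fixed sleeps, a common cause of flaky
    tests: the test proceeds as soon as the app is ready, and fails with a
    clear timeout instead of a race-dependent assertion error.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if condition():
            return True
        time.sleep(interval)
    return condition()  # one final check at the deadline

# Simulated async backend: becomes "ready" on the third poll.
state = {"calls": 0}
def backend_finished() -> bool:
    state["calls"] += 1
    return state["calls"] >= 3

assert wait_until(backend_finished, timeout=2.0)
```

UI frameworks ship equivalents (e.g. Selenium's explicit waits); the point is the same: make the test deterministic before concluding the feature is untestable.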


3. Frequency of Execution

What It Means:

How often the test case is run across the development lifecycle — per build, daily, weekly, or only once.

Automate High-Frequency Tests:

  • Regression test suites
  • Smoke tests for deployment pipelines
  • Sanity checks post-deployment
  • Health checks for production APIs

Don’t Automate:

  • Setup or migration scripts used once or very rarely
  • Ad hoc validations for marketing or campaigns

Rule of Thumb:

If a test runs more than 3 times per release cycle and takes more than a few minutes to execute manually, it’s likely a good automation candidate.
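That rule of thumb fits in a few lines. The 5-minute threshold below is an assumed stand-in for "a few minutes"; tune it per team.

```python
def is_automation_candidate(runs_per_cycle: int, manual_minutes: float) -> bool:
    """Rule of thumb: automate if the test runs more than 3 times per
    release cycle AND each manual run takes more than a few minutes."""
    FEW_MINUTES = 5  # assumed threshold, not a standard
    return runs_per_cycle > 3 and manual_minutes > FEW_MINUTES

print(is_automation_candidate(runs_per_cycle=6, manual_minutes=20))  # True
print(is_automation_candidate(runs_per_cycle=1, manual_minutes=90))  # False
```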


Prioritization Matrix

Criteria       | High Priority to Automate | Low Priority to Automate
ROI            | Saves time and effort     | High effort, low reusability
Stability      | Stable feature            | Rapidly changing or unstable
Frequency      | Run often (CI/CD)         | One-time or rarely executed
Business Value | High impact on users      | Low-risk, internal only
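The matrix can also be applied as a simple weighted score, useful for ranking a backlog of automation candidates. The weights and 0–2 scale below are illustrative assumptions, not an industry standard.

```python
# Score each criterion 0 (low) to 2 (high); weights are illustrative.
WEIGHTS = {"roi": 3, "stability": 3, "frequency": 2, "business_value": 2}

def priority_score(roi: int, stability: int, frequency: int,
                   business_value: int) -> int:
    """Weighted sum over the four matrix criteria (max 20 with these weights)."""
    scores = {"roi": roi, "stability": stability,
              "frequency": frequency, "business_value": business_value}
    return sum(WEIGHTS[name] * value for name, value in scores.items())

# Login workflow: high on every axis -> strong candidate.
print(priority_score(roi=2, stability=2, frequency=2, business_value=2))  # 20
# Marketing popup: low ROI and stability, modest frequency and value.
print(priority_score(roi=0, stability=0, frequency=1, business_value=1))  # 4
```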

Practical Examples

Test Scenario            | Automate? | Reason
Login workflow           | Yes       | High frequency, stable, core business flow
Visual layout alignment  | No        | Hard to automate reliably, better for manual validation
REST API response format | Yes       | Stable interface, easy to validate
Marketing popup display  | No        | UI changes frequently, limited ROI
Checkout process         | Yes       | Business-critical, repeated across sprints
Bug reproduction steps   | No        | One-time use, not reusable

Summary

Choosing what to automate isn’t just about what can be automated — it's about what should be.

  • Prioritize automation based on ROI, feature stability, and execution frequency
  • Avoid automating volatile, low-value, or rarely executed scenarios
  • Let automation cover the routine, and manual testers focus on intelligence-driven testing

“Test automation is a strategy, not a silver bullet. Choose wisely, and it will pay dividends over time.”