About the author
Albert Row is a senior software engineer from the San Francisco Bay Area with over 12 years of experience as an individual contributor, technical lead, architect, open-source contributor, and manager.
Albert has been a certified reviewer on PullRequest since December 2019.
Fail fast
It is your build system’s job to make a go/no-go decision on the build, so have your builds fail at the first sign of trouble. Developers can run the test suite locally when they need a full list of failures, and killing builds early when they aren’t green frees up resources for the next build.
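As a minimal sketch of the fail-fast idea, assuming a hypothetical pipeline of named checks: run them in order and abort at the first failure rather than accumulating every error.

```python
# Minimal sketch of a fail-fast build gate (hypothetical checks).
def run_checks(checks):
    """Run callables until one fails; return (passed_count, failed_name)."""
    for i, (name, check) in enumerate(checks):
        if not check():
            return i, name  # kill the build early, freeing resources
    return len(checks), None

checks = [
    ("lint", lambda: True),
    ("unit-tests", lambda: False),   # first sign of trouble
    ("integration", lambda: True),   # never runs
]
print(run_checks(checks))  # (1, 'unit-tests')
```

Real runners expose the same switch directly; in pytest, for example, `--maxfail=1` stops the suite at the first failure.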
Fix your flakes
Nothing kills your build times like having to re-run a flaky test suite. Make a concerted effort to fix flakes quickly so that everyone has faith in the test suite and you avoid wasteful reruns of the same build. It’s often a good idea to track your failures over time so that flakes surface automatically, without relying on individual engineers to spot patterns on their own.
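Tracking failures over time can be as simple as the sketch below, assuming hypothetical CI history data: a test that both passes and fails on the same revision is a flake candidate.

```python
# Sketch: surface likely flakes from historical CI runs (hypothetical data).
# A test with both a pass and a fail on the same revision is a flake
# candidate worth fixing before it erodes trust in the suite.
def flake_candidates(runs):
    """runs: list of (revision, test_name, passed) tuples."""
    outcomes = {}
    for revision, test, passed in runs:
        outcomes.setdefault((revision, test), set()).add(passed)
    # flaky = both True and False observed for the same (revision, test)
    return sorted({test for (_, test), seen in outcomes.items() if len(seen) == 2})

history = [
    ("abc123", "test_login", True),
    ("abc123", "test_login", False),   # same commit, different outcome
    ("abc123", "test_signup", True),
    ("abc123", "test_signup", True),
]
print(flake_candidates(history))  # ['test_login']
```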
Look for hot spots
You probably already track code coverage to ensure your test suite sufficiently covers your codebase. But you can also use coverage metrics to find hot spots - places in your codebase that run thousands of times on each test run. These are prime candidates for optimization or mocking to speed up your test suite.
Only run the relevant tests
There should be no need to run unit tests on files that haven’t changed between builds - if you’ve been disciplined in structuring your test files appropriately, you can reap the rewards of lower build times by excluding files that you know will pass from your run list for certain builds. This is particularly helpful for automated code checks - if lint passed on a previous run, and the file is the same as it was previously, then you’re good!
Remove useless tests
Some organizations, particularly those with devoted TDD acolytes, will have a number of tests that were useful in the development process but are much less valuable as regression tests. Consider removing tests that are too closely tied to their implementation, or that are so specific that they are unlikely to ever catch a regression.
There are many different testing philosophies out there, so this may or may not be relevant to your organization, and it may or may not be an acceptable strategy in the context of your team.
Favor unit tests over integration tests
Integration tests help you feel confident that your whole system works together, but they can be really slow: they frequently fall through to datastores and generate far more IO. You can often reach a similar level of confidence in many code paths with unit tests that verify components in isolation.
Unit tests are frequently more thorough than integration tests, and isolating the component under test makes it much easier to avoid expensive operations. When you do this, make sure to…
Check your test data
One of the slowest operations in most test suites is loading data into your database to support testing, and cleaning it out between tests to keep them isolated. You can speed this up by loading only the data each test truly requires, rather than one giant dataset meant to suit every test, and by cleaning up through transaction rollbacks rather than new delete or truncate statements.
Whenever possible, insert your data, run the test, then roll back the transaction, instead of inserting, committing, testing, and then deleting and committing again. If you’re currently using global fixtures, consider fixtures scoped to each individual test to reduce the total number of database writes your suite performs.
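The rollback pattern looks like this sketch, using the standard library’s sqlite3 as a stand-in for your database:

```python
import sqlite3

# Sketch of per-test data via transaction rollback, using sqlite3 as a
# stand-in: insert, assert, then roll back - no delete or truncate
# statements between tests.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.commit()

# -- one test --
conn.execute("INSERT INTO users VALUES ('alice')")   # not committed
count = conn.execute("SELECT COUNT(*) FROM users").fetchone()[0]
assert count == 1                                    # test sees its own data
conn.rollback()                                      # instant cleanup

# -- next test starts clean --
count = conn.execute("SELECT COUNT(*) FROM users").fetchone()[0]
print(count)  # 0
```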
Mock or stub your writes
When you’re writing unit tests, mock out or stub database writes and other IO operations. This avoids expensive operations and speeds up your overall suite; it also avoids expensive data resets between tests, which saves even more time.
Generally, the code that performs database or IO writes comes from another system - an ORM or HTTP library, say - that is extremely well tested on its own and needs no additional coverage from your system’s test suite. Save those expensive test cycles for your own business and validation logic.
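A sketch with Python’s `unittest.mock`, using a hypothetical `save_user` function and `db` object: the test exercises our validation logic while the write layer is a stub.

```python
from unittest import mock

# Sketch: stub the write layer so the unit test covers only our own
# validation logic (save_user and db are hypothetical).
def save_user(db, name):
    if not name:
        raise ValueError("name required")
    db.insert("users", {"name": name})   # expensive IO in production
    return True

db = mock.Mock()                          # no real database touched
assert save_user(db, "alice") is True
db.insert.assert_called_once_with("users", {"name": "alice"})
```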
Avoid testing external services and components
Network calls are some of the most expensive operations in your test suite. If you’re testing anything that crosses a network boundary, stop! Either record the results of HTTP calls and replay them in place of live requests, or stub out the calls entirely. Network calls are anathema to fast build times.
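Record/replay tools exist for most HTTP stacks; a plain stub is the simplest version of the same idea. A sketch, with a hypothetical `fetch_rate` helper standing in for the HTTP call:

```python
from unittest import mock

# Sketch: stub out the network boundary (fetch_rate is hypothetical).
# The unit test verifies our conversion logic without a live request.
def price_in_eur(price_usd, fetch_rate):
    return round(price_usd * fetch_rate("USD", "EUR"), 2)

fake_fetch = mock.Mock(return_value=0.9)   # replaces the real HTTP call
assert price_in_eur(10.0, fake_fetch) == 9.0
fake_fetch.assert_called_once_with("USD", "EUR")
```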
Run jobs and tests in parallel
Most test suites can be broken up into portions and run in parallel. Sometimes this means spinning up a different database or other persistence layer per shard of the suite, so a balance is needed here. Most suites will hit diminishing returns from parallelization at some point, based on the relative cost of running the test shard vs. the cost of spinning up the environment, but some parallelization is almost always helpful.
Consider some steps as prerequisites - if you’re working in a compiled language, for instance, it would be better to compile once and then run your tests in parallel than to compile your code in each test shard.
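Sharding can be sketched as below: assign each test to a shard deterministically so every parallel worker runs a stable subset. (crc32 keeps the assignment consistent across runs, unlike Python’s built-in `hash()` for strings.)

```python
import zlib

# Sketch: deterministically partition tests across parallel workers.
def shard(tests, num_shards, shard_index):
    return [t for t in tests
            if zlib.crc32(t.encode()) % num_shards == shard_index]

tests = ["test_a", "test_b", "test_c", "test_d"]
# Every test lands in exactly one shard, so the shards cover the suite.
covered = sorted(shard(tests, 2, 0) + shard(tests, 2, 1))
print(covered == sorted(tests))  # True
```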
Throw hardware at the problem
Once you’ve parallelized, make sure that you’ve provisioned enough hardware to manage the load. If you’ve saturated the CPU on your build machines, greater parallelization won’t help.
There’s an obvious balance to strike here between fast builds and excessive expense - consider provisioning more build machines during work hours, monitor usage, and tune your scaling rules to match the needs and budget of your team.
Break up monoliths
Monolithic codebases have advantages and disadvantages - one of the disadvantages is really long build times. If you find that you have clear chunks of your codebase that could be broken out into separate services, you can potentially save a lot of build time by splitting those services out. Smaller projects tend to build more quickly and are frequently easier to change.
Beware the migration cost, though - there are frequently huge costs associated with splitting out a new service. Make sure this strategy makes sense in the context of your organization’s overall technical strategy.