Testcontainers has become an essential tool in many Java projects to run integration and system tests in realistic environments. However, when test scenarios grow complex, developers often run into limitations—especially around configuration, parallelization, and resource usage. An open-source framework built on top of Testcontainers addresses some of these challenges with powerful extensions.
Testcontainers Infrastructure (TCI) Framework
1. Improve customizability and parallelization
The framework uses the factory pattern for creating containers, with the goal of making it easier for developers to adjust containers as needed.
| Without TCI | With TCI |
|---|---|
| `static final MySQLContainer MY_SQL_CONTAINER; static { MY_SQL_CONTAINER = new MySQLContainer(); MY_SQL_CONTAINER.start(); }` | `static final DBTCIFactory DB_INFRA_FACTORY = new DBTCIFactory(); void startInfra() { this.dbInfra = DB_INFRA_FACTORY.getNew(...); }` |
For each “infrastructure” (abbreviated TCI), there is a factory that creates new instances. The factory can be easily configured and handles tasks such as container creation, PreStarting, and tracking the created TCIs for the additional features described below.
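As a rough sketch of how a test might use such a factory: the class name `DBTCI`, the `close()` method, and the exact `getNew` signature below are assumptions based on the snippet above, not necessarily the framework's real API.

```java
import org.junit.jupiter.api.AfterEach;
import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.api.Test;

class UserRepositoryIT {

    // One factory per infrastructure type, shared across tests so it can
    // track created TCIs, pre-start instances, etc.
    static final DBTCIFactory DB_INFRA_FACTORY = new DBTCIFactory();

    private DBTCI dbInfra;

    @BeforeEach
    void startInfra() {
        // Hands out freshly created (or pre-started) infrastructure
        this.dbInfra = DB_INFRA_FACTORY.getNew();
    }

    @AfterEach
    void stopInfra() {
        // Terminate the infrastructure so the leak detection stays green
        this.dbInfra.close();
    }

    @Test
    void createsUser() {
        dbInfra.createUser("alice");
        // ... assertions against the database
    }
}
```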
Customizing containers/infrastructure looks like this in the TCI factory class:
```java
@Override
public void start(final String containerName) {
    super.start(containerName);
    if (doMigrate) {
        this.migrateDatabase(BASELINE_FOR_TESTS);
    }
}

void migrateDatabase(String version) {
    // Migrate database with e.g. Flyway
}
```
This makes it possible to add non-container-related code, such as clients or common helper methods (e.g. createUser), without modifying the container itself. This follows the composition over inheritance design principle.
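To make the composition idea concrete, here is a minimal sketch of what such an infrastructure class could look like; `DBTCI` and its helpers are illustrative assumptions, not the framework's actual classes:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import org.testcontainers.containers.MySQLContainer;

// The infrastructure object composes the container instead of extending it,
// so clients and common helper methods can be added without touching the container.
public class DBTCI implements AutoCloseable {

    private final MySQLContainer<?> container;

    public DBTCI(MySQLContainer<?> container) {
        this.container = container;
    }

    // Non-container-related helper that many tests can reuse
    public void createUser(String name) {
        try (Connection con = DriverManager.getConnection(
                container.getJdbcUrl(), container.getUsername(), container.getPassword());
             PreparedStatement ps = con.prepareStatement("INSERT INTO users(name) VALUES (?)")) {
            ps.setString(1, name);
            ps.executeUpdate();
        } catch (Exception e) {
            throw new IllegalStateException("Could not create user " + name, e);
        }
    }

    @Override
    public void close() {
        container.stop();
    }
}
```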
By using factories, the framework can also improve performance by handling parallelization and PreStarting.
2. Running tests as fast as possible
Why is this important in the first place?
Running tests as fast as possible has multiple advantages:
- When run by a developer: Usually, there is not much else you can do while tests are running (except maybe getting some coffee). You can start another task, but then you risk losing focus on the original one and having to think your way back into the topic later.
- When run by a CI:
  - If you are paying for computing on demand (e.g. minute-based billing for something like spot instances), running tests faster (without enlarging the machine used) can save a lot of money due to lower rental times.
  - If you are paying for a fixed amount of computing, running tests faster means more time is available for other jobs on the CI. If the amount of saved time is high enough, you can also consider scaling down the required computing power.
  - Faster test feedback: when, for example, full integration test success is required before a release, this can cut the time needed to ship it.
The framework is explicitly designed for parallelization and provides multiple features to speed up tests:
2.1. PreStarting mechanism
When running tests, there are usually phases in which the available resources are barely utilized.

PreStarting maintains a cached pool of infrastructure and uses these idle times to fill and replenish that pool. When new infrastructure is requested, there is no need to wait for its creation: an already started instance from the pool can be used, if one is available.
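The following is a simplified sketch of the mechanism (a real implementation also has to deal with pool sizing, shutdown, and the lifecycle of the started containers); the class and method names are illustrative only:

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.function.Supplier;

// Simplified PreStarting pool: idle times are used to start infrastructure
// in the background so that getNew() rarely has to wait for container startup.
public class PreStartPool<T> {

    private final BlockingQueue<T> prestarted = new LinkedBlockingQueue<>();
    private final ExecutorService background = Executors.newFixedThreadPool(2);
    private final Supplier<T> factory;

    public PreStartPool(Supplier<T> factory) {
        this.factory = factory;
    }

    // Called during idle phases to fill/replenish the pool
    public void replenish(int count) {
        for (int i = 0; i < count; i++) {
            background.submit(() -> prestarted.add(factory.get()));
        }
    }

    // Returns an already started instance if available, otherwise creates one on demand
    public T getNew() {
        T instance = prestarted.poll();
        return instance != null ? instance : factory.get();
    }
}
```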
Additional performance
When implemented correctly, this approach can significantly reduce test duration. See the performance comparison for more details.
There is also a live example (using GitHub Actions), which yields the following results:
| Case | Parallelization | PreStarting enabled? | Time to run all test |
|---|---|---|---|
| A | – | ❌ | 8m 50s |
| B | – | ✔ | 5m 30s |
| C | 2 | ❌ | 6m |
| D | 2 | ✔ | 4m 50s |
As shown above, the best configuration (D) cuts the total run time from 8m 50s to 4m 50s compared to the baseline (A), a reduction of roughly 45%.
2.2. Optimized Testcontainers Network
An optimized implementation of Testcontainers Network is used:
| Before | After |
|---|---|
| `NetworkImpl` code from Testcontainers 1.20 (see below): the network is only created when its id is first accessed, which takes a moment. | `LazyNetworkPool` provides a pool of networks that are created in the background. No time is lost waiting for network creation. |
```java
@Override
public synchronized String getId() {
    if (initialized.compareAndSet(false, true)) {
        boolean success = false;
        try {
            // Network is created when id is accessed
            // Takes a moment
            id = create();
            success = true;
        } finally {
            ...
        }
    }
    return id;
}
```
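The idea behind such a pool can be sketched with plain Testcontainers as follows; the framework's actual `LazyNetworkPool` will differ in detail:

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.LinkedBlockingQueue;
import org.testcontainers.containers.Network;

// Simplified sketch: networks are created in the background so that tests never
// block on the lazy creation that happens inside Network#getId().
public class NetworkPool {

    private final BlockingQueue<Network> pool = new LinkedBlockingQueue<>();

    // Fill the pool asynchronously; calling getId() forces the docker network
    // to be created right away instead of on first use.
    public void prepare(int count) {
        for (int i = 0; i < count; i++) {
            CompletableFuture.runAsync(() -> {
                Network network = Network.newNetwork();
                network.getId();
                pool.add(network);
            });
        }
    }

    // Hand out an already created network, or fall back to creating one on demand
    public Network getNetwork() {
        Network network = pool.poll();
        return network != null ? network : Network.newNetwork();
    }
}
```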
2.3. Container leak detection
This detects whether containers started by a test have also been terminated, which prevents the test machine from running out of resources.
In the following example, the Testcontainer is created, but never terminated.
```java
@Test
void test() {
    DummyTCI tci = DUMMY_FACTORY.getNew(...);
    ...
}
```
After running this test with the framework, the following error shows up in the logs:
```
ERROR s.x.tci.leakdetection.TCILeakAgent - !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
ERROR s.x.tci.leakdetection.TCILeakAgent - ! PANIC: DETECTED CONTAINER INFRASTRUCTURE LEAK !
ERROR s.x.tci.leakdetection.TCILeakAgent - !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
ERROR s.x.tci.leakdetection.TCILeakAgent - All test are finished but some infrastructure is still marked as in use:
DummyTCIFactory leaked 1x [container-ids=[c1b6be852fac3bf65ac8f2739ab161d7f95bc4c62699c698ccc8b74da1be8a3d]]
```
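Conceptually, the detection boils down to tracking what each factory hands out and checking whether everything was released once all tests are finished. A simplified sketch (not the framework's actual code):

```java
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

// Simplified leak detection: factories register every container they hand out
// and deregister it when it is terminated; leftovers are reported at the end.
public class LeakRegistry {

    private static final Set<String> IN_USE = ConcurrentHashMap.newKeySet();

    public static void register(String containerId) {
        IN_USE.add(containerId);
    }

    public static void deregister(String containerId) {
        IN_USE.remove(containerId);
    }

    // Called after all tests have finished, e.g. from a JUnit extension or shutdown hook
    public static void reportLeaks() {
        if (!IN_USE.isEmpty()) {
            System.err.println("DETECTED CONTAINER INFRASTRUCTURE LEAK: " + IN_USE);
        }
    }
}
```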
3. Quality of Life
The framework also provides some minor enhancements:
3.1. Human-readable names for containers
All started containers have a unique, human-readable name, which makes identification easier when tracing or debugging.
| Before | After |
|---|---|
| `docker stats` shows random names such as `eager_rubin`, `vigilant_archimedes`, `practical_haibt`, `ecstatic_sanderson`, `serene_einstein`, `great_saha`, `agitated_dhawan`, `strange_montalcini` | `docker stats` shows the human-readable names assigned by the framework |
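With plain Testcontainers, a readable and unique name can be assigned through the create-command modifier; the naming scheme below is only an example of the general approach, not the framework's own convention:

```java
import java.util.concurrent.atomic.AtomicInteger;
import org.testcontainers.containers.GenericContainer;

class NamedContainers {

    private static final AtomicInteger COUNTER = new AtomicInteger();

    // Gives the container a readable, unique name instead of Docker's random one
    static GenericContainer<?> namedNginx() {
        return new GenericContainer<>("nginx:1.27")
                .withCreateContainerCmdModifier(cmd ->
                        cmd.withName("nginx-tci-" + COUNTER.incrementAndGet()));
    }
}
```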
3.2. Test run time statistics
A tracing mechanism makes it easier to find bottlenecks and similar problems.
Example (each entry shows average duration / number of invocations / total time):

```
[main] [i.tracing.TCITracingAgent] === Test Tracing Info ===
Duration: 2m 43.608s
Tests: 20.656s / 15 / 5m 9.84s
BrowserTCIFactory-firefox:
  bootNew - 1ms / 6 / 5ms
  connectToNetwork - 515ms / 5 / 2.575s
  getNew - 574ms / 5 / 2.87s
  infraStart(async) - 14.575s / 6 / 1m 27.448s
  postProcessNew - 54ms / 5 / 270ms
  warmUp - 2.448s / 1 / 2.448s
...
```
Further reading
The Testcontainers Infrastructure (TCI) Framework is available on GitHub. Visit the “Usage” section to get started. Several demo projects are also provided.
Questions or feedback can be submitted via the GitHub issue tracker.