When implementing the fundamentals of my.typo3.org, an API based on Symfony was built to feed data to applications in the TYPO3 universe, e.g. the Certification Platform. Since this API needs to be rock-solid, it has decent test coverage for every single piece gluing the application together, which involves controllers, services and "decorative accessories". As development progressed, the number of tests increased, including tests for API endpoints which get their data from a connected database. In our test scenarios, we use an SQLite database as this needs no setup.

I'm developing on a 2019 Dell XPS 15 7590 with a hexa-core Intel i7-9750H CPU, 32 GB RAM and Ubuntu 20.10, where every project runs in ddev.

At some point in the development process, executing tests became slower and slower as the number of test cases and the respective number of fixtures increased:

Time: 10:06.275, Memory: 916.00 MB

OK (771 tests, 3120 assertions)

The full test run takes ~10 minutes and consumes over 900 MB of RAM. Of course I don't have to re-run the whole test suite when changing a single controller, but some changes are more low-level, and trial & error by pushing to GitHub until all tests pass is not really feasible.

Use the RAM, Luke

Reading data from and writing data back to the M.2 NVMe SSD shouldn't be that time-consuming, but I'm not deep enough into Symfony and SQLite internals to properly explain what's going on here. Luckily, Symfony allows storing the database in RAM very easily by setting the database URL to sqlite:///:memory: instead. However, the first run didn't go well:

Tests: 771, Assertions: 1575, Errors: 39, Failures: 299.

All tests fail with the exception Doctrine\DBAL\Exception\TableNotFoundException: the tables created while priming the database were no longer available once the fixtures had been imported. After some research I found out that Symfony keeps the database in RAM only until its kernel gets shut down, either on purpose or when a new kernel is created. This happened at three specific places:

  • before priming the database
  • after importing the fixtures
  • when starting a client to call the API endpoints in the tests

A typical setUp() looked like this:

protected function setUp(): void
{
    $this->prime(); // calls static::bootKernel() as well
}


Once the issue had been identified, solving it was relatively easy. The kernel is now booted first in the tests' setUp() methods, and the primer demands an already booted kernel. If the primer doesn't find one, a \LogicException is thrown, which reveals test classes that have not been adapted yet. The primer is a trait imported into the test classes extending \Symfony\Bundle\FrameworkBundle\Test\KernelTestCase, so checking for a booted kernel is straightforward:

public function prime(): void
{
    if (!self::$booted) {
        throw new \LogicException('Could not find a booted kernel');
    }

    // ...
}
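For context, the wiring in a concrete test class could look like the following sketch; the trait name DatabasePrimerTrait and the class name ApiEndpointTest are assumptions for illustration, not taken from the actual code base:

```php
use Symfony\Bundle\FrameworkBundle\Test\KernelTestCase;

// Hypothetical test class; DatabasePrimerTrait and ApiEndpointTest are assumed names
final class ApiEndpointTest extends KernelTestCase
{
    use DatabasePrimerTrait;

    protected function setUp(): void
    {
        static::bootKernel(); // boot first, so the in-memory database is created once
        $this->prime();       // safe now: the primer finds the already booted kernel
    }
}
```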

The fixtures are imported by Doctrine's EntityManager, calling its persist() and flush() methods. This revealed another issue: the imported records could not be found in the database right away. The reason is that Doctrine maintains an identity map of the records, which needs to be reset by calling clear() at the end of the process. This solution is described in the Doctrine documentation as well:

Sometimes you want to clear the identity map of an EntityManager to start over. We use this regularly in our unit-tests to enforce loading objects from the database again instead of serving them from the identity map. You can call EntityManager#clear() to achieve this result.

For reference, here's the reduced importFixture() method:

public function importFixture(string $fileName): void
{
    // ...

    $fixtureConfiguration = require $file->getRealPath();
    foreach ($fixtureConfiguration as $model => $records) {
        foreach ($records as $record) {
            // ... create the entity and persist() it
        }
    }

    // ... flush() the changes, then clear() the identity map
}
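To illustrate the shape this method expects, a fixture file could look like the following sketch: a plain PHP file returning an array that maps model class names to record data. The file name and the entity class are made up for illustration:

```php
<?php
// Hypothetical fixture file, e.g. fixtures/users.php (name and entity are assumptions)
return [
    \App\Entity\User::class => [
        ['username' => 'jane', 'email' => 'jane@example.com'],
        ['username' => 'john', 'email' => 'john@example.com'],
    ],
];
```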

Calling the API

The last nut was tough to crack, as it affected the way API endpoints are called in the test scope. Remember, the Symfony kernel must never get shut down in order to keep the database in RAM. However, the official Symfony documentation recommends using static::createClient(), which does exactly that. The aforementioned method fetches the client from the service named test.client in the dependency injection container and performs some assertions; we skip that and get the client by calling static::$kernel->getContainer()->get('test.client') directly. Additionally, it was required to boot all bundles again per web request in a simple loop:

public function execute(Instruction $testInstruction): Response
{
    $kernel = $this->client->getKernel();
    foreach ($kernel->getBundles() as $bundle) {
        $bundle->boot();
    }

    $request = Request::create(
        // ...
    );

    // ...
}
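The direct client lookup mentioned above could be wrapped in a small helper like this sketch; the method name getClient() is an assumption:

```php
use Symfony\Bundle\FrameworkBundle\KernelBrowser;

// Hypothetical helper: fetch the client from the already booted kernel's
// container instead of static::createClient(), which would shut the kernel
// down and wipe the in-memory database
private function getClient(): KernelBrowser
{
    return static::$kernel->getContainer()->get('test.client');
}
```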

Final tests

Once these changes were done, it was time for a new test run. I expected a performance gain, but not one like this:

Time: 00:26.634, Memory: 230.00 MB

OK (771 tests, 3120 assertions)

The previous runs with an on-disk database always took roughly 10 minutes; now a run takes less than 30 seconds, an improvement of ~95%, while RAM usage went down by ~75%. However, this change has a major drawback: it is practically impossible to inspect the database in the middle of a test run in case something is off with the records. In that case, switching back to the on-disk variant currently seems to be without alternative.
