Adrián Ferrera

Slice Tests

Introduction

A lot of times, when we want to test our application we face the same problems and questions: How can I test it? Is this a good approach? Should I use mocks? Is it a good idea to test directly against the database?

As usual, the answer is: it depends. Not every challenge can be solved in the same way with the same success. However, we are going to share a few concepts that might help you make better decisions when testing your projects.

We will assume that you know the difference between unit tests and e2e tests. But when you have to check your database queries, how have you been doing it until now?

In our experience, many developers consider e2e tests a good approach for checking database queries, because they don't need to consider the application's domain factors: the tests are simply based on data snapshots from the tables that produce the different application states.

Another approach is to write unit tests for the repository consumer, using a mock of the repository and trusting that it works, because it is not part of our domain: it is actually part of the infrastructure that we have chosen.
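
For illustration, a minimal sketch of this second approach could look like the following, assuming the Library entity and LibraryRepository interface that appear later in this post, a hypothetical LibraryService that consumes the repository, and the MockK library:

import dev.afergon.kotlinhibernateperformance.application.entities.Library
import io.mockk.every
import io.mockk.mockk
import org.junit.jupiter.api.Assertions.assertEquals
import org.junit.jupiter.api.Test

// Hypothetical consumer of the repository, for illustration only
class LibraryService(private val libraryRepository: LibraryRepository) {
    fun allLibraries(): List<Library> = libraryRepository.findAll()
}

internal class LibraryServiceUnitTest {

    // The repository is mocked, so the real query never runs
    private val libraryRepository: LibraryRepository = mockk()
    private val libraryService = LibraryService(libraryRepository)

    @Test
    fun `should return the libraries provided by the mocked repository`() {
        every { libraryRepository.findAll() } returns listOf(Library(name = "irrelevant name"))

        val actual = libraryService.allLibraries()

        assertEquals(1, actual.size)
    }
}

The domain logic is verified, but notice that the real SQL behind findAll is never executed.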

Both options are valid. However, each of them has its own implications:

  • Following the e2e approach, we will notice that these tests are slow, brittle and don't cover all the possible cases. (And if you introduce more test cases, you will spend even more time maintaining and running them.)
  • At the unit level we will see that our domain is working, but we are not covering any issue introduced by the database query itself. This is a big risk in projects where the database is one of the key factors of the business. Whether that is a good approach or not is a completely different discussion.

Then... what can we do? Well, today we will introduce the concept of slicing.

What is a Slice test?

A slice test is a kind of integration test. As the name reflects, it follows a slicing strategy.

How? We take the most isolated part of the application that we need to test and run it in a specific environment that is as close as possible to the real conditions where the code will run.

Reading this, you have probably thought something like: "Okay, so you are suggesting to have a dedicated database for testing purposes". Yes, you are right; however, it can be done in different ways. You can:

  • Have an isolated database for testing purposes (with its own resources and a simpler configuration)
  • Use an in-memory database different from the real one. However, that probably means using a different technology or changing the query system, so it is not a real solution.
  • Use the real database, adding an extra column that marks the data inserted by tests so it can be removed later.

You can use any of these if they work for you; nevertheless, we have a different suggestion:

What if, in the Docker setup for your database, you were able to create a schema dedicated to testing, without noticeably increasing memory or disk usage, which:

  • Uses your same migration system
  • Has the same resources as your real database
  • Runs in the given environment
  • Keeps all the testing data isolated from the real data, and
  • Discards everything stored in it when your tests finish?

That is possible and we are going to see how to do it.

Requirements

For the next example we will use a PostgreSQL database with a Kotlin backend built on Spring Boot, using Liquibase as the migrations tool. However, the concepts we will explain apply in a very similar way to other tools and languages, such as MongoDB, TypeScript or Express.

Creating the database

First, assuming that you are using a docker-compose.yml file to set up your local environment, you will need a strategy to create multiple schemas when the database is created.

With PostgreSQL we will use the approach suggested in this GitHub repository. The main idea is to provide a .sh file that is executed the first time the container starts and automatically creates several databases.

To do this, you will need to provide a new environment variable called POSTGRES_MULTIPLE_DATABASES. You can join multiple database names with commas. In our case, we will define two: the real one, and the same one with the suffix _test:

version: '3'
services:
  database:
    image: "postgres"
    ports:
      - "5432:5432"
    env_file:
      - docker/database/database.env
    volumes:
      - database-data:/var/lib/postgresql/data/
      - ./docker/database/init-scripts:/docker-entrypoint-initdb.d

Given the database.env file:

POSTGRES_USER=afergon
POSTGRES_PASSWORD=afergon
POSTGRES_MULTIPLE_DATABASES=comic_libraries,comic_libraries_test

Get the script provided in the GitHub repository.

💡 If you are not using PostgreSQL, you can research how to achieve the same with your current database.

Configuring the migrations

This is an optional step; however, if you are using migrations, you will want to run them against your new test database too. We have chosen a Spring Boot application in which we are using Liquibase for migrations.

You can find out how to do it by following this post.

The only thing you need to consider is creating an application.properties or application.yml file for your tests too.

Connecting to the test database

To connect to the test database, you will need to override the configuration file provided to the application. In Spring this is very simple: you only need to create a new application.yml file in your test resources directory with the following content:

spring:
  test:
    database:
      replace: none

  datasource:
    driver-class-name: org.postgresql.Driver
    password: afergon
    url: jdbc:postgresql://localhost:5432/comic_libraries_test
    username: afergon

  liquibase:
    change-log: classpath:database/liquibase-changelog.xml

Now all your tests will point to the test database.

Removing data

The strategy for removing data is to wrap each test in a transaction and roll it back after each one. This keeps the data consistent and the database clean at all times, because your tests never commit anything to your tables.

With JPA this is very simple: you only need to add the @DataJpaTest annotation to the test class, and it will do the work for you. If not, you can always implement your own beforeEach and afterEach methods, as in the sketch below.
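
As a minimal sketch of that manual approach, assuming a Spring-managed test class where a PlatformTransactionManager can be injected (the TransactionalSliceTest base class name is just illustrative):

import org.junit.jupiter.api.AfterEach
import org.junit.jupiter.api.BeforeEach
import org.springframework.beans.factory.annotation.Autowired
import org.springframework.transaction.PlatformTransactionManager
import org.springframework.transaction.TransactionStatus
import org.springframework.transaction.support.DefaultTransactionDefinition

abstract class TransactionalSliceTest {

    @Autowired
    lateinit var transactionManager: PlatformTransactionManager

    private lateinit var transaction: TransactionStatus

    @BeforeEach
    fun openTransaction() {
        // Open a transaction before each test
        transaction = transactionManager.getTransaction(DefaultTransactionDefinition())
    }

    @AfterEach
    fun rollBackTransaction() {
        // Roll it back afterwards, so nothing is ever committed
        transactionManager.rollback(transaction)
    }
}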

Creating the test

The final step is to create a test, so we will keep it simple. Let's check the findAll method that comes by default with Spring Data JPA and verify that everything is working:

package dev.afergon.kotlinhibernateperformance.application.repositories

import dev.afergon.kotlinhibernateperformance.application.entities.Library
import org.junit.jupiter.api.Assertions.assertEquals
import org.junit.jupiter.api.Test
import org.springframework.beans.factory.annotation.Autowired
import org.springframework.boot.test.autoconfigure.orm.jpa.DataJpaTest

@DataJpaTest
internal class LibraryRepositorySliceTest {

    @Autowired
    lateinit var libraryRepository: LibraryRepository

    @Test
    fun `should return list of libraries`() {

        libraryRepository.save(Library(name = "irrelevant name"))

        val actual = libraryRepository.findAll()

        assertEquals(1, actual.size)
    }

}

And that's all. From now on you can implement complex queries and cover all their edge cases without being afraid of breaking the real database, as in the sketch below.
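
For example, here is a sketch of such a test; findByNameContaining is a hypothetical derived query method that you would declare on LibraryRepository, letting Spring Data generate the query from the method name:

package dev.afergon.kotlinhibernateperformance.application.repositories

import dev.afergon.kotlinhibernateperformance.application.entities.Library
import org.junit.jupiter.api.Assertions.assertEquals
import org.junit.jupiter.api.Test
import org.springframework.beans.factory.annotation.Autowired
import org.springframework.boot.test.autoconfigure.orm.jpa.DataJpaTest

@DataJpaTest
internal class LibraryQuerySliceTest {

    @Autowired
    lateinit var libraryRepository: LibraryRepository

    @Test
    fun `should only return libraries whose name contains the fragment`() {

        // Two rows, only one of them matching the search term
        libraryRepository.save(Library(name = "comic library"))
        libraryRepository.save(Library(name = "board games club"))

        val actual = libraryRepository.findByNameContaining("comic")

        assertEquals(1, actual.size)
    }
}

Everything these tests write is rolled back as well, so they can run against the _test schema as often as you like.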

Conclusion

Following a divide-and-conquer strategy always allows us to reduce the complexity and maintenance cost of our applications. It can be applied to tests too, and it will help us keep their scope as simple as possible.

Isolating our query tests from our domain layer will allow the team to keep the focus on the business layers without having to be a guru of the database technology.

You can take a look at the code (and more examples) in the GitHub repository.

I hope this post helps you resolve some doubts and keep improving the scope of the tests in your application.

Thank you for reading, and don't forget to share this post if you found it helpful!


Thanks to Mireia Scholz and Maria Soria for helping me with the translation and the revision process. It is a pleasure to work every day with these amazing people at Lean Mind.

Photo by Ivan Torres on Unsplash