
Database Persistence in Kotlin

Kotlin is bringing a whole new world of solutions to the backend. In this post, I’m going to guide you through the features of two popular database frameworks: SqlDelight and Exposed. By the end, evaluating their features should help us choose between the two.

There are many database frameworks we can use in our stack to solve the persistence problem. So, in order to pick the right one, we need to know what needs to be covered, and the dependencies we have.

In the case of Spring Boot with Kotlin, the documentation states:

The Spring Framework provides extensive support for working with SQL databases, from direct JDBC access using JdbcTemplate to complete “object relational mapping” technologies such as Hibernate. Spring Data provides an additional level of functionality: creating Repository implementations directly from interfaces and using conventions to generate queries from your method names.

So yes, Spring has many different solutions. But in case we change our mind and decide to build our backend with Ktor instead, we would prefer to use a more flexible solution not attached to the framework. In that regard, two of the persistence frameworks that we’ve found interesting are SqlDelight and Exposed, which we are detailing in this blog post.

John Travolta confused meme - SqlDelight vs Exposed

Let’s jump into the setups! The business domain for modeling the data layer is storing technologies along with tags, so in both frameworks I’m using the same entity model, TechnologyEntity, which contains information such as the id of the entity, the name, description, and a list of keywords related to that technology.

Also, both setups are the same: the same Controller and the same Service (algebra), with only a few changes, and even the same database engine (PostgreSQL). Only the database framework changes.

SqlDelight

Developed by Square, SQLDelight generates typesafe Kotlin APIs from your SQL statements. It verifies your schema, statements, and migrations at compile-time and provides IDE features like autocomplete and refactoring, which make writing and maintaining SQL simple (documentation).

It’s SQL first, meaning that, by defining the entities and queries in SQL format, the framework will automatically generate for us all data classes (entity models) and interfaces to access the SQL statements defined.

In order to use the framework, we need to add some dependencies. The PostgreSQL dependency is added as well, since it’s the database engine we are using in this example (this dependency will be used in both setups).

Along with the dependencies, we can add the database configuration: the name of the database schema, the package name, the dialect, and a handy task dependency so that, whenever we change our code and compile, our code is generated again:

build.gradle.kts

plugins {
  id("com.squareup.sqldelight")
}

sqldelight {
  database("BlogpostDb") {
    packageName = "com.fortysevendegrees.blogpost.db"
    dialect = "postgresql"
  }
}

// Generates database interface at compile-time
tasks.withType<KotlinCompile> {
  dependsOn("generateMainBlogpostDbInterface")
}

dependencies {
  implementation("com.squareup.sqldelight:jdbc-driver:${Version.sqlDelight}")
  runtimeOnly("org.postgresql:postgresql")
}

That would be enough for the configuration. Next, we should create our entity and its queries. As already mentioned, we are going to be writing SQL. This is how it works:

We need to create our table with its queries in a .sq file at:

yourModule -> src -> main -> sqldelight -> yourPathForEntities -> TechnologyEntity.sq

import kotlin.collections.List;

CREATE TABLE IF NOT EXISTS technologyEntity (
    id SERIAL PRIMARY KEY,
    name VARCHAR NOT NULL UNIQUE,
    description VARCHAR NOT NULL,
    keywords VARCHAR AS List<String> NOT NULL
);

insert:
INSERT INTO technologyEntity(name, description, keywords)
VALUES (?, ?, ?);

By writing that SQL, the SqlDelight compiler will autogenerate the entities as data classes and the interface with its queries, like:

TechnologyEntity.kt

data class TechnologyEntity(
  val id: Int,
  val name: String,
  val description: String,
  val keywords: List<String>
) {
  override fun toString(): String = """
  |TechnologyEntity [
  |  id: $id
  |  name: $name
  |  description: $description
  |  keywords: $keywords
  |]
  """.trimMargin()

  class Adapter(
    val keywordsAdapter: ColumnAdapter<List<String>, String>
  )
}

TechnologyEntityQueries.kt

interface TechnologyEntityQueries : Transacter {
  fun insert(
    name: String,
    description: String,
    keywords: List<String>
  )
}

But it will also autogenerate the database schema because of the configuration shown before:

BlogpostDb.kt

interface BlogpostDb : Transacter {
  val technologyEntityQueries: TechnologyEntityQueries

  companion object {
    val Schema: SqlDriver.Schema
      get() = BlogpostDb::class.schema

    operator fun invoke(driver: SqlDriver, technologyEntityAdapter: TechnologyEntity.Adapter): BlogpostDb =
      BlogpostDb::class.newInstance(driver, technologyEntityAdapter)
  }
}

That looks appropriate, right? Then let’s continue with the setup 😁

Oh, wait one second! What’s that TechnologyEntity.Adapter? And how do you store a list of tags in a single column? We can store the list of items, or any other type, directly by providing an adapter/mapper that converts between the two types. In this case, the adapter would look like:

  @Bean
  fun provideStringListColumnAdapter(): ColumnAdapter<List<String>, String> =
    adapter(
      encode = { it.joinToString(separator = ",") },
      decode = { it.takeIf { it.trim().isNotEmpty() }?.split(",") ?: emptyList() }
    )

  @Bean
  fun provideTechnologyKeywordAdapter(keywords: ColumnAdapter<List<String>, String>): TechnologyEntity.Adapter =
    TechnologyEntity.Adapter(keywords)

Doing that is pretty common, so we’ve created this helper function for encoding and decoding from one type to another:

inline fun <A : Any, B> adapter(
  crossinline encode: (A) -> B,
  crossinline decode: (B) -> A
): ColumnAdapter<A, B> =
  object : ColumnAdapter<A, B> {
    override fun decode(databaseValue: B): A =
      decode(databaseValue)

    override fun encode(value: A): B =
      encode(value)
  }
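To see the helper in action outside of any framework, here is a self-contained sketch. Note that the `ColumnAdapter` interface below is a minimal stand-in reproducing the shape of SqlDelight’s interface, so the snippet runs on its own:

```kotlin
// Minimal stand-in for SqlDelight's ColumnAdapter interface, so this sketch
// compiles without the library on the classpath.
interface ColumnAdapter<A : Any, B> {
  fun decode(databaseValue: B): A
  fun encode(value: A): B
}

// Same helper as above: builds an adapter from two lambdas.
inline fun <A : Any, B> adapter(
  crossinline encode: (A) -> B,
  crossinline decode: (B) -> A
): ColumnAdapter<A, B> =
  object : ColumnAdapter<A, B> {
    override fun decode(databaseValue: B): A = decode(databaseValue)
    override fun encode(value: A): B = encode(value)
  }

fun main() {
  val stringListAdapter: ColumnAdapter<List<String>, String> = adapter(
    encode = { it.joinToString(separator = ",") },
    decode = { value -> value.takeIf { it.trim().isNotEmpty() }?.split(",") ?: emptyList() }
  )

  // Round-trip: list -> column value -> list.
  check(stringListAdapter.encode(listOf("kotlin", "sql")) == "kotlin,sql")
  check(stringListAdapter.decode("kotlin,sql") == listOf("kotlin", "sql"))
  // An empty column value decodes to an empty list rather than listOf("").
  check(stringListAdapter.decode("") == emptyList<String>())
}
```

The `takeIf` guard in `decode` matters: without it, splitting an empty string would yield `listOf("")` instead of an empty list.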

Finally, we need to provide our database and create the schema with all the tables. Of course, the adapter we just talked about also needs to be provided:

  @Bean
  fun provideDatabase(dataSource: DataSource, techAdapter: TechnologyEntity.Adapter): BlogpostDb =
    BlogpostDb(
      driver = dataSource.asJdbcDriver().apply {
        BlogpostDb.Schema.create(this)
      },
      technologyEntityAdapter = techAdapter
    )

And that would be everything for setting it up. Then, to make use of it, we’ve created an algebra to interact with the generated database queries by using suspend functions. For example:

interface TechnologyAlgebra {

  suspend fun save(element: PostBodyTechnology): Unit

  companion object {
    operator fun invoke(database: BlogpostDb) = object : TechnologyAlgebra {
      override suspend fun save(element: PostBodyTechnology): Unit =
        database.technologyEntityQueries.insert(
          name = element.name,
          description = element.description,
          keywords = element.keywords
        )
    }
  }
}

The algebra in this case is acting as a Service, and you can use it within your controller to achieve what you want. Simple, right?
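The pattern used above, an interface plus a companion `invoke` operator returning an anonymous implementation, can be sketched on its own. This is a hypothetical in-memory stand-in, not the post’s actual service: `findAll` is an invented extra, and the `suspend` modifiers and real `BlogpostDb` type are omitted so the snippet runs without any framework:

```kotlin
// Simplified stand-in for the request body type used in the post.
data class PostBodyTechnology(
  val name: String,
  val description: String,
  val keywords: List<String>
)

interface TechnologyAlgebra {
  fun save(element: PostBodyTechnology)
  fun findAll(): List<PostBodyTechnology>

  companion object {
    // The companion's invoke operator builds an anonymous implementation,
    // so call sites read like a constructor call: TechnologyAlgebra(...).
    // In the real setup, the dependency would be BlogpostDb instead of a list.
    operator fun invoke(store: MutableList<PostBodyTechnology> = mutableListOf()) =
      object : TechnologyAlgebra {
        override fun save(element: PostBodyTechnology) { store += element }
        override fun findAll(): List<PostBodyTechnology> = store.toList()
      }
  }
}

fun main() {
  val algebra = TechnologyAlgebra()
  algebra.save(PostBodyTechnology("Kotlin", "A modern JVM language", listOf("jvm", "backend")))
  check(algebra.findAll().single().name == "Kotlin")
}
```

The nice property of this design is that the interface is the only thing the controller sees, so swapping the in-memory store for the real database-backed implementation requires no changes at the call sites.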

Alright, so having the controller with an endpoint and a service (TechnologyAlgebra) to manage the database, we should be ready to go!

Oh wait, just one more thing! Before running the app, make sure PostgreSQL is installed and running on the local machine, and that the database exists; otherwise, the app will crash 🙆🏻‍♂️. Here is the link to install it. And remember, to create the database, you will have to execute the SQL statement for doing so: open a terminal from the Postgres dashboard and run:

create database blogpostdb;

Exposed

Exposed, developed by JetBrains, is an ORM framework for Kotlin that offers two levels of database access: a typesafe SQL-wrapping DSL and lightweight data access objects. It can mimic a variety of database engines, helping you build database applications without hard dependencies on any specific engine and switch between them with little or no change to your code. In this case, we care about PostgreSQL, since it’s the database engine we are using in this example too (documentation).

Let’s see how to set it up and how to define the database schema. First, we need to add the Gradle dependencies:

build.gradle.kts

dependencies {
  implementation("org.jetbrains.exposed:exposed-core:${Version.exposed}")
  implementation("org.jetbrains.exposed:exposed-dao:${Version.exposed}")
  implementation("org.jetbrains.exposed:exposed-jdbc:${Version.exposed}")
  runtimeOnly("org.postgresql:postgresql")
}

Once that is done, we can use the framework to create our first table and its columns. The main difference from SqlDelight is that, instead of writing SQL, we now define our entities in Kotlin.

A DB table is represented by an object inherited from org.jetbrains.exposed.sql.Table like this:

object TechnologyEntities : Table() {
  val id = integer("id").autoIncrement()
  val name = varchar("name", 50).uniqueIndex()
  val description = varchar("description", 150)
  val keywords = text("keywords")
  override val primaryKey = PrimaryKey(id, name = "PK_TechnologyEntities_Id") // PK_TechnologyEntities_Id is optional here
}

Well, we can rework that a little, because tables that contain an Int id column named id can be declared like this:

object TechnologyEntities : IntIdTable() {
  val name = varchar("name", 50).uniqueIndex()
  val description = varchar("description", 150)
  val keywords = text("keywords")
}

Simple, right? That is how we define a table. Now let’s see how to define the rows for that table. An entity instance in the table is defined as a class instance like:

class TechnologyEntity(id: EntityID<Int>) : IntEntity(id) {
  companion object : IntEntityClass<TechnologyEntity>(TechnologyEntities) {
    const val SEPARATOR = ","
  }

  var name by TechnologyEntities.name
  var description by TechnologyEntities.description
  var keywords by TechnologyEntities.keywords.transform(
    toColumn = { it.joinToString(SEPARATOR) },
    toReal = { it.takeIf { it.trim().isNotEmpty() }?.split(SEPARATOR) ?: emptyList() }
  )
}
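To make the `transform` idea concrete without pulling in Exposed, here is a minimal property-delegate sketch. This is not Exposed’s API; `TransformedColumn` and `DemoEntity` are invented names illustrating the mechanism: the backing column stores a String, while the entity property exposes a List<String>:

```kotlin
import kotlin.properties.ReadWriteProperty
import kotlin.reflect.KProperty

const val SEPARATOR = ","

// Hypothetical stand-in for what Exposed's transform does under the hood:
// a delegate that converts between the column type and the property type.
class TransformedColumn(
  private var stored: String,
  private val toColumn: (List<String>) -> String,
  private val toReal: (String) -> List<String>
) : ReadWriteProperty<Any?, List<String>> {
  val raw: String get() = stored  // what would actually hit the database column

  override fun getValue(thisRef: Any?, property: KProperty<*>): List<String> =
    toReal(stored)

  override fun setValue(thisRef: Any?, property: KProperty<*>, value: List<String>) {
    stored = toColumn(value)
  }
}

class DemoEntity {
  val keywordsColumn = TransformedColumn(
    stored = "",
    toColumn = { it.joinToString(SEPARATOR) },
    toReal = { value -> value.takeIf { it.trim().isNotEmpty() }?.split(SEPARATOR) ?: emptyList() }
  )
  var keywords: List<String> by keywordsColumn
}

fun main() {
  val entity = DemoEntity()
  check(entity.keywords == emptyList<String>())     // empty column decodes to an empty list
  entity.keywords = listOf("kotlin", "backend")
  check(entity.keywordsColumn.raw == "kotlin,backend")
  check(entity.keywords == listOf("kotlin", "backend"))
}
```

Reads go through `toReal` and writes through `toColumn`, which is exactly the round-trip the `keywords` property in `TechnologyEntity` performs.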

Remember how we stored the list of keywords in SqlDelight with an adapter? For Exposed, we are doing the same; if you look at the example above, it’s mostly identical. There is also another option for defining a list of values: defining our own ArrayColumnType.

Okay, so we now know how to create tables and how rows are defined. Next, if we want to insert a new TechnologyEntity in the table, we have two options:

  • Using the DSL API: The DSL (Domain Specific Language) API of Exposed is similar to actual SQL statements, with the type safety that Kotlin offers. Using this API looks very similar to the SqlDelight insert function.
  val technologyId = TechnologyEntities.insertAndGetId {
    it[name] = element.name
    it[description] = element.description
    it[keywords] = adapter.toString(element.keywords)
  }
  • Using the DAO API: The DAO (Data Access Object) API of Exposed is similar to ORM frameworks like Hibernate with a Kotlin-specific API.
  val technology = TechnologyEntity.new {
    name = element.name
    description = element.description
    keywords = element.keywords
  }

Using the DAO API here, we don’t need to define the insert statement as we did for SqlDelight. This is because we get all the basic CRUD operations for free out of the box, since it’s part of the API from EntityClass.

After this, we need to provide the database in our configuration so we can create the database connection and also create the database schema:

  @Bean
  fun provideDatabase(dataSource: DataSource): Database =
    Database.connect(dataSource)
      .also { transaction { SchemaUtils.create(TechnologyEntities) } }

Alright, with all that, we should have everything ready to be used in our service (TechnologyAlgebra):

interface TechnologyAlgebra {

  suspend fun save(element: PostBodyTechnology): Technology

  @Component
  companion object : TechnologyAlgebra {
    override suspend fun save(element: PostBodyTechnology): Technology =
      newSuspendedTransaction(Dispatchers.IO) {
        TechnologyEntity.new {
          name = element.name
          description = element.description
          keywords = element.keywords
        }.toDomain()
      }
  }
}

After obtaining a connection, as shown in the configuration where the database is provided, do not forget to place the SQL statements inside a transaction ☝️. CRUD operations in Exposed must be called from within a transaction. Transactions encapsulate a set of DSL operations; to create and execute a transaction with default parameters, simply pass a function block to the transaction function (as shown above with newSuspendedTransaction).

Transactions are executed synchronously on the current thread, so they will block other parts of your application! If you need to execute a transaction asynchronously, consider running it on a separate thread. If you are working with coroutines, there are bridge functions that give you a safe way to interact with Exposed within suspend blocks: newSuspendedTransaction and Transaction.suspendedTransaction take the same parameters as the blocking transaction function, but also let you provide the CoroutineDispatcher in which the function will be executed. If no context is provided, your code will be executed within the current coroutineContext.

❗️Not providing a CoroutineDispatcher to the transaction would be bad practice, so keep in mind that such calls should run on Dispatchers.IO.

Conclusions

With this simple use-case, we’ve only grasped a few features of these two nice frameworks. If you want to expand on this knowledge, I would recommend you go through the documentation of both for further details 🧐.

Given both setups, which one did you like the most? After all, I think it’s really hard to definitively say: Yes, this framework is so much better than the other. We found both frameworks really nice to work with. By using either, we get:

  • A Kotlin-first design
  • Typesafe SQL
  • Independence from Spring: unlike Spring Data, neither framework ties us to Spring, since both can be used with Ktor as well.
  • Support for our preferred database engine (PostgreSQL, MySQL, H2, SQLite…)
  • An easy setup.

The main difference is that, with SqlDelight, we create our tables/entities by writing SQL, whereas with Exposed we define the table and its rows using objects and classes. They also differ in usage: Exposed offers two levels of abstraction, the DSL and DAO APIs, while in SqlDelight we operate through the generated interfaces.

To choose between one or the other, since both are good, I would point out two things: whether there is a deadline to meet, and whether the team has previous experience with either framework. However, we found some limitations that could serve as another deciding factor:

  • One benefit of SqlDelight over Exposed is the ability to go multiplatform; it is quickly becoming the de facto library for persisting data locally in Kotlin Multiplatform (KMP) projects ♻️.
  • One benefit of Exposed over SqlDelight is that it comes with lots of features out of the box, like CRUD operations. When inserting a new row with Exposed, the new item is returned without any extra work. With SqlDelight, however, we can’t make use of the RETURNING clause (with either PostgreSQL or MySQL), so it’s not possible to insert a new row and get it back directly: we would need an extra query, or our own custom query using JDBC. That said, any limitation we come across in the API can be solved by executing plain SQL ☺️:
  override suspend fun save(element: PostBodyTechnology): Technology? =
    saveReturnQuery(element).executeAsOneOrNull()?.toDomain()

  private fun saveReturnQuery(element: PostBodyTechnology): Query<TechnologyEntity> =
    technologyQuery(
      """
      |INSERT INTO technologyEntity(name, description, keywords)
      |VALUES (?, ?, ?)
      |RETURNING *
      """.trimMargin(),
      parameterCount = 3
    ) {
      element.run {
        bindString(1, name)
        bindString(2, description)
        bindString(3, stringListAdapter.encode(keywords))
      }
    }

  private fun technologyQuery(
    sqlQuery: String,
    parameterCount: Int,
    bind: SqlPreparedStatement.() -> Unit
  ): Query<TechnologyEntity> =
    driver.query(
      sqlQuery,
      parameterCount = parameterCount,
      map = {
        nullable.eager {
          TechnologyEntity(
            getLong(0)?.toInt().bind(),
            getString(1).bind(),
            getString(2).bind(),
            stringListAdapter.decode(getString(3).bind())
          )
        }!!
      },
      bind = bind
    )

  fun <T : Any> SqlDriver.query(
    sqlQuery: String,
    identifier: Int? = null,
    queries: MutableList<Query<*>> = copyOnWriteList(),
    parameterCount: Int,
    map: SqlCursor.() -> T,
    bind: SqlPreparedStatement.() -> Unit
  ): Query<T> =
    object : Query<T>(queries, map) {
      override fun execute(): SqlCursor =
        executeQuery(identifier, sqlQuery, parameterCount, bind)

      override fun toString(): String = sqlQuery
    }
  • About migrations: Exposed provides no method for performing migrations. SqlDelight does offer this option, but, for example, we couldn’t get a working solution for something as simple as renaming a column in the .sqm files: the PostgreSQL statement did not compile (keep in mind that the PostgreSQL dialect is still under construction). So this should not be a deciding factor, and we’ll see how to solve the problem anyway with one of the most widely used migration tools: Flyway.

I hope this helps you decide which framework you want to use. And stay tuned for the following posts in this series, because, for our next topics, we will be talking about migrations and testing (with Docker and Testcontainers).
