How to create a JDBC connection pool based on HikariCP & cache it using Guava Cache in Spring Boot?
Updated: Sep 18, 2021
Hikari is currently the best JDBC connection pool in terms of performance and flexibility of configuration. HikariCP creates JDBC connection pools and manages their life cycle. Today I will show how we can use Hikari to create JDBC connection pools, and how we can create multiple connection pools and cache them in a Google Guava cache.
In cloud-native development, you can have a central microservice that is responsible for maintaining communication with multiple databases, where each database corresponds to a particular customer. This blog post shows how we can create Hikari data sources at runtime and manage several of them in a data source cache; for this purpose we use a Google Guava cache. Since each Hikari data source connects to one particular database, this solution provides a way for multiple clients to connect to multiple databases. You can imagine it as a middleware that manages Hikari connection pools for N clients to M databases (N * M).
Why HikariCP ?
Some of the good features in HikariCP are:
We can change a few of the configuration parameters dynamically after a Hikari data source is created, such as the number of JDBC connections.
We can change the JDBC connection credentials (user, password). Hikari applies the new credentials once the existing connections finish their current work.
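The runtime changes above can be sketched as follows. This is a minimal sketch, assuming `ds` is a `HikariDataSource` created elsewhere; the new credentials are placeholders:

```java
import com.zaxxer.hikari.HikariDataSource;

public class RuntimeTuning {
    // Adjusts an already-running pool; ds is assumed to be created elsewhere.
    public static void retune(HikariDataSource ds) {
        // Resize the pool without recreating it. Pool-size, timeout, and
        // credential properties stay mutable at runtime (via HikariConfigMXBean);
        // most other properties are sealed once the pool has started.
        ds.setMaximumPoolSize(20);

        // Swap credentials; Hikari uses them when it opens new physical
        // connections as old ones are retired from the pool.
        ds.setUsername("newUser"); // placeholder credentials
        ds.setPassword("newPass");
    }
}
```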
This blog post provides an in-depth look at HikariCP and also shows how to create a JDBC connection pool using it.
How to create a JDBC connection pool using HikariCP?
Hikari is a highly configurable API. Using Hikari we can configure the JDBC connections from how the connection pool is created to how long a connection is kept open; many configurations are possible. First we need to create a Hikari configuration. Using the configuration we can create a Hikari data source. A data source wraps a pool of connections in it. Once a data source is created we can simply retrieve a connection from it; we don't even notice that Hikari is managing a pool of connections and handing back an open, free connection object from the pool.
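The steps above can be sketched as follows. The JDBC URL and credentials here are placeholders; in practice they would come from your configuration:

```java
import java.sql.Connection;
import java.sql.SQLException;
import javax.sql.DataSource;

import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;

public class HikariExample {
    public static void main(String[] args) throws SQLException {
        // 1. Create a Hikari configuration.
        HikariConfig config = new HikariConfig();
        config.setJdbcUrl("jdbc:postgresql://localhost:5432/mydb"); // placeholder URL
        config.setUsername("user");   // placeholder credentials
        config.setPassword("pass");
        config.setMaximumPoolSize(12);

        // 2. Create the data source; Hikari opens the pool behind it.
        DataSource dataSource = new HikariDataSource(config);

        // 3. Retrieve a connection -- Hikari hands back a free pooled connection.
        try (Connection connection = dataSource.getConnection()) {
            // use the connection...
        }
    }
}
```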
Hikari Configuration
First we need to create a Hikari configuration that tells Hikari how to manage each JDBC connection and the JDBC connection pool as a whole. Note that each pool provides connections to one database; in Hikari this is called a data source.
We can define these parameters in the application.properties file as shown below.
application.properties file
hikari.cp.conf.autoCommit=true
hikari.cp.conf.connectionTimeout=300000
hikari.cp.conf.idleTimeout=600000
hikari.cp.conf.maxLifetime=1800000
hikari.cp.conf.minimumIdle=2
hikari.cp.conf.maximumPoolSize=12
Then we can initialize the configuration in a spring configuration file. The configuration will be used by the application at run time to create a JDBC connection pool using Hikari.
import java.util.Properties;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Profile;
@Configuration
@Profile("cloud")
public class MyAppConfiguration {
@Bean(name = "hikariDsConfiguration")
public Properties hikariDsConfiguration(
@Value("${hikari.cp.conf.autoCommit}") String autoCommit,
@Value("${hikari.cp.conf.connectionTimeout}") String connectionTimeout,
@Value("${hikari.cp.conf.idleTimeout}") String idleTimeout,
@Value("${hikari.cp.conf.maxLifetime}") String maxLifetime,
@Value("${hikari.cp.conf.minimumIdle}") String minimumIdle,
@Value("${hikari.cp.conf.maximumPoolSize}") String maximumPoolSize) {
Properties properties = new Properties();
properties.setProperty("autoCommit", autoCommit);
properties.setProperty("connectionTimeout", connectionTimeout);
properties.setProperty("idleTimeout", idleTimeout);
properties.setProperty("maxLifetime", maxLifetime);
properties.setProperty("minimumIdle", minimumIdle);
properties.setProperty("maximumPoolSize", maximumPoolSize);
return properties;
}
}
Using this Hikari configuration we can create a Hikari data source. At data source creation time, HikariCP establishes the minimum number of JDBC connections defined in the configuration. After a Hikari data source is created we can ask it for a JDBC connection. We will create data sources in the JdbcManager class.
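A minimal sketch of such a JdbcManager is shown below. The class name comes from this post, but its contents here are an assumption: it injects the `hikariDsConfiguration` Properties bean defined above and uses HikariConfig's `Properties` constructor, which reads pool settings (`maximumPoolSize`, `idleTimeout`, and so on) directly from the property names we configured. The JDBC URL and credentials are hypothetical parameters that would come from the customer's metadata:

```java
import java.util.Properties;
import javax.sql.DataSource;

import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.stereotype.Component;

import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;

@Component
public class JdbcManager {

    private final Properties hikariDsConfiguration;

    public JdbcManager(@Qualifier("hikariDsConfiguration") Properties hikariDsConfiguration) {
        this.hikariDsConfiguration = hikariDsConfiguration;
    }

    // Creates a new pool for one customer database.
    public DataSource createDataSource(String jdbcUrl, String user, String password) {
        // HikariConfig reads pool settings (autoCommit, maximumPoolSize, ...)
        // directly from the Properties bean.
        HikariConfig config = new HikariConfig(hikariDsConfiguration);
        config.setJdbcUrl(jdbcUrl);
        config.setUsername(user);
        config.setPassword(password);
        return new HikariDataSource(config);
    }
}
```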
Cache Configuration
Similarly, to provide the Google Guava cache configuration, we can define the parameter values in application.properties.
application.properties file
datasource.cache.maximumSize=1000
datasource.cache.expireAfterWrite=10
datasource.cache.timeUnit=HOURS
In the same Spring configuration class we initialize the Guava cache.
import java.util.concurrent.TimeUnit;
import javax.sql.DataSource;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Profile;
import com.google.common.cache.Cache;
import com.google.common.cache.CacheBuilder;
@Configuration
@Profile("cloud")
public class MyAppConfiguration {
@Bean(name= "hikariDataSourceCache")
public Cache<DataSourceKey, DataSource> hikariDataSourceCache(
@Value("${datasource.cache.maximumSize}") int maximumSize,
@Value("${datasource.cache.expireAfterWrite}") long expireAfterWrite,
@Value("${datasource.cache.timeUnit}") String timeUnit) {
return CacheBuilder.newBuilder()
.maximumSize(maximumSize)
.expireAfterWrite(expireAfterWrite, TimeUnit.valueOf(timeUnit))
.build();
}
}
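With the cache bean in place, looking up or lazily creating a customer's data source could look like the sketch below. This is an assumption about usage, not code from the post: Guava's `Cache.get(key, loader)` runs the loader only when the key is absent, so each customer's pool is created exactly once. `DataSourceKey` is the key type from the cache bean above; the accessor methods on it are hypothetical:

```java
import java.util.concurrent.ExecutionException;
import javax.sql.DataSource;

import com.google.common.cache.Cache;
import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;

public class DataSourceLookup {

    private final Cache<DataSourceKey, DataSource> cache;

    public DataSourceLookup(Cache<DataSourceKey, DataSource> cache) {
        this.cache = cache;
    }

    // Returns the cached pool for this customer, creating it on first access.
    public DataSource dataSourceFor(DataSourceKey key) throws ExecutionException {
        return cache.get(key, () -> {
            HikariConfig config = new HikariConfig();
            config.setJdbcUrl(key.getJdbcUrl());     // hypothetical accessors
            config.setUsername(key.getUser());       // on DataSourceKey
            config.setPassword(key.getPassword());
            return new HikariDataSource(config);
        });
    }
}
```

When an entry expires (after the configured `expireAfterWrite` period), the next lookup simply recreates the pool; a production version would also close the evicted `HikariDataSource` in a Guava `RemovalListener` so its connections are released.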