
How to create JDBC connection pools based on HikariCP and cache them using Guava Cache?

Updated: Sep 18, 2021



HikariCP is currently the best connection pool library in terms of performance and flexibility of configuration. Hikari creates JDBC connection pools and manages their life cycle. Today I will show how we can use Hikari to create JDBC connection pools, and how we can create multiple connection pools and cache them in a Google Guava cache.

In cloud-native development, you can have a central microservice that is responsible for communicating with multiple databases, where each database corresponds to a particular customer. This blog post shows how we can create Hikari data sources at runtime and manage several of them in a data source cache; for this purpose we use a Google Guava cache. Each Hikari data source connects to one particular database, so this solution provides a way for multiple clients to connect to multiple databases. You can imagine it as a middleware that manages Hikari connection pools for N clients to M databases (N * M).


Why HikariCP?

Some of the good features in HikariCP are:

1. We can change a few of the configuration parameters dynamically after a Hikari data source is created, such as the number of JDBC connections.

2. We can change the JDBC connection credentials (user, password). Hikari applies the new credentials once the existing connections finish their work.


This blog provides an in-depth look at HikariCP and also shows how to create a JDBC connection pool using it.


How to create a JDBC connection pool using HikariCP?

Hikari is a highly configurable API. Using Hikari we can configure everything from how the connection pool is created to how long a connection is kept open; many configurations are possible. First we need to create a Hikari configuration. Using that configuration we can create a Hikari data source. A data source wraps a pool of connections. Once a data source is created we can simply retrieve a connection from it; we don't even notice that Hikari is managing a pool of connections and handing back an open, free connection object from the pool.
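Here is that flow in miniature. This is a hedged sketch: the in-memory H2 JDBC URL, the credentials, and the pool size are assumptions for demonstration only, not part of this post's setup.

```java
import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;

// Minimal sketch of the flow: configuration -> data source -> connection.
// The jdbc:h2:mem URL is a hypothetical stand-in; substitute your own
// database URL and credentials.
class HikariFlowSketch {
    static HikariDataSource create() {
        HikariConfig config = new HikariConfig();
        config.setJdbcUrl("jdbc:h2:mem:demo"); // hypothetical in-memory DB
        config.setUsername("sa");
        config.setPassword("");
        config.setMaximumPoolSize(4);          // pool size chosen arbitrarily
        // HikariDataSource wraps the pool; getConnection() later hands back
        // a free, open connection from that pool.
        return new HikariDataSource(config);
    }
}
```

A caller would then do `try (Connection c = ds.getConnection()) { ... }`; closing the connection returns it to the pool rather than closing the physical connection.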


Initializing Hikari Configuration

First we need to create a Hikari configuration that tells Hikari how to manage each JDBC connection and the connection pool as a whole. Note that each pool provides connections to one database; in Hikari this is called a data source. In the DataSourceConfigurator class we fetch the configuration properties that can be provided while deploying the application, like below.

java -DautoCommit="true" -DconnTimeout="500000" -jar appname

These properties are retrieved using System.getProperty(); if any property is not provided, its default value is used.

import java.util.Properties;

public class DataSourceConfigurator {

    private static final Properties properties = new Properties();

    private final String autoCommit = System.getProperty("autoCommit", "true");
    private final String connTimeout = System.getProperty("connTimeout", "300000");
    private final String idleTimeout = System.getProperty("idleTimeout", "600000");
    private final String maxLifetime = System.getProperty("maxLifetime", "1800000");
    private final String minimumIdle = System.getProperty("minimumIdle", "2");
    private final String maxPoolSize = System.getProperty("maxPoolSize", "12");

    public DataSourceConfigurator() {
        properties.setProperty("autoCommit", autoCommit);
        properties.setProperty("connectionTimeout", connTimeout);
        properties.setProperty("idleTimeout", idleTimeout);
        properties.setProperty("maxLifetime", maxLifetime);
        properties.setProperty("minimumIdle", minimumIdle);
        properties.setProperty("maximumPoolSize", maxPoolSize);
    }

    public Properties getConfiguration() {
        return properties;
    }
}

Using this Hikari configuration we can create a Hikari data source. At data source creation time HikariCP establishes the minimum number of JDBC connections defined in the configuration. After a data source is created we can ask it for a JDBC connection. We will create data sources in the JdbcManager class.


Initializing Cache Configuration

Similarly, the CacheConfigurator class provides the Google Guava cache configuration.

public class CacheConfigurator {

    private final String cacheMaxSize = System.getProperty("cacheMaxSize", "100");
    private final String cacheExpTime = System.getProperty("cacheExpTime", "300000");
    private final String cacheExpTimeUnit = System.getProperty("cacheExpTimeUnit", "HOURS");

    private final int maximumSize;
    private final long expireAfterWrite;

    public CacheConfigurator() {
        maximumSize = Integer.parseInt(cacheMaxSize);
        expireAfterWrite = Long.parseLong(cacheExpTime);
    }

    public int getMaximumSize() { return maximumSize; }
    public long getExpireAfterWrite() { return expireAfterWrite; }
    public String getTimeUnit() { return cacheExpTimeUnit; }
}
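These values feed straight into Guava's CacheBuilder. Below is a minimal sketch of the builder plus the get-or-load pattern that JdbcManager relies on; the helper class and its names are hypothetical, introduced only for illustration.

```java
import java.util.concurrent.ExecutionException;
import java.util.concurrent.TimeUnit;

import com.google.common.cache.Cache;
import com.google.common.cache.CacheBuilder;

// Hypothetical helper showing how the CacheConfigurator values map onto
// Guava's CacheBuilder: a bounded cache whose entries expire a fixed time
// after they were written.
class CacheSketch {
    static Cache<String, String> build(int maximumSize, long expireAfterWrite,
                                       TimeUnit unit) {
        return CacheBuilder.newBuilder()
                .maximumSize(maximumSize)
                .expireAfterWrite(expireAfterWrite, unit)
                .build();
    }

    // get-or-load: the Callable runs only when the key is absent.
    static String getOrLoad(Cache<String, String> cache, String key,
                            String valueIfMissing) {
        try {
            return cache.get(key, () -> valueIfMissing);
        } catch (ExecutionException e) {
            throw new RuntimeException(e);
        }
    }
}
```

The same get-with-loader call is what lets us create a Hikari data source lazily, only on the first request for a given key.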

Since we want to store many data sources in the cache, the data source objects will be the values of the key-value pairs. We also need to define a key, and for this we define our own key object; the DataSourceKey class provides its definition. Because this object is used as a key in the data source cache, we must override equals and hashCode so that the Guava cache can compare keys properly.


Cache Key

import java.util.Properties;

public class DataSourceKey {

    private final String clientID;
    private final String dbHostUrl;
    private final String userName;
    private final String password;
    private final Properties connProperties;

    public DataSourceKey(String clientID, String dbHostUrl, String userName,
                         String password) {
        this.clientID = clientID;
        this.dbHostUrl = dbHostUrl;
        this.userName = userName;
        this.password = password;
        this.connProperties = generateConnProperties(dbHostUrl, userName, password);
    }
    // add getters

    public String getKey() {
        return clientID + "_" + dbHostUrl + "_" + userName;
    }

    private Properties generateConnProperties(String dbHostUrl, String userName,
                                              String password) {
        Properties connectionProperties = new Properties();
        connectionProperties.setProperty("jdbc-url", dbHostUrl);
        connectionProperties.setProperty("user", userName);
        connectionProperties.setProperty("password", password);
        return connectionProperties;
    }

    @Override
    public boolean equals(Object obj) {
        if (obj == this) { return true; }
        if (!(obj instanceof DataSourceKey)) { return false; }

        DataSourceKey foreignObject = (DataSourceKey) obj;
        return foreignObject.getClientID().equals(this.clientID)
                && foreignObject.getDbHostUrl().equals(this.dbHostUrl)
                && foreignObject.getUserName().equals(this.userName)
                && foreignObject.getPassword().equals(this.password);
    }

    @Override
    public int hashCode() {
        String hashString = this.clientID + this.dbHostUrl
                + this.userName + this.password;
        return hashString.hashCode();
    }
}
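To see why both overrides matter, here is a quick self-contained check with a simplified stand-in key (hypothetical, not the full class above): two keys built from the same fields must be equal and must hash identically, otherwise the cache lookup would miss and a duplicate pool would be created for the same database.

```java
import java.util.Objects;

// Simplified stand-in for DataSourceKey, just to demonstrate the
// equals/hashCode contract the cache lookup depends on.
final class SimpleKey {
    private final String clientID;
    private final String dbHostUrl;
    private final String userName;
    private final String password;

    SimpleKey(String clientID, String dbHostUrl, String userName, String password) {
        this.clientID = clientID;
        this.dbHostUrl = dbHostUrl;
        this.userName = userName;
        this.password = password;
    }

    @Override
    public boolean equals(Object obj) {
        if (obj == this) { return true; }
        if (!(obj instanceof SimpleKey)) { return false; }
        SimpleKey other = (SimpleKey) obj;
        return clientID.equals(other.clientID)
                && dbHostUrl.equals(other.dbHostUrl)
                && userName.equals(other.userName)
                && password.equals(other.password);
    }

    @Override
    public int hashCode() {
        // Objects.hash is a concise alternative to string concatenation.
        return Objects.hash(clientID, dbHostUrl, userName, password);
    }
}
```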


Putting it all together

Now let's create a JdbcManager class that creates the data source cache and the Hikari data sources, and provides JDBC connections from those data sources.

import java.sql.Connection;
import java.util.Properties;
import java.util.concurrent.Callable;
import java.util.concurrent.TimeUnit;
import javax.sql.DataSource;

import com.google.common.cache.Cache;
import com.google.common.cache.CacheBuilder;
import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;

public class JdbcManager {

    private final Cache<DataSourceKey, DataSource> hikariDsCache;
    private final Properties hikariDsConfig;

    public JdbcManager(DataSourceConfigurator dsc, CacheConfigurator cc) {
        hikariDsCache = CacheBuilder.newBuilder()
                .maximumSize(cc.getMaximumSize())
                .expireAfterWrite(cc.getExpireAfterWrite(),
                        TimeUnit.valueOf(cc.getTimeUnit()))
                .build();
        hikariDsConfig = dsc.getConfiguration();
    }

    public Connection getConnection(String clientID, String dbHostUrl,
                                    String userName, String password) {
        DataSourceKey dataSourceKey = new DataSourceKey(clientID, dbHostUrl,
                userName, password);
        try {
            // Returns the cached data source, or creates one on a cache miss.
            DataSource dataSource = hikariDsCache.get(dataSourceKey,
                    createDataSource(dataSourceKey, hikariDsConfig));
            return dataSource.getConnection();
        } catch (Exception e) {
            throw new RuntimeException(e.getMessage(), e);
        }
    }

    Callable<DataSource> createDataSource(DataSourceKey dataSourceKey,
                                          Properties dsProperties) {
        return new Callable<DataSource>() {
            @Override
            public DataSource call() throws Exception {
                String url = dataSourceKey.getConnProps().getProperty("jdbc-url");
                String user = dataSourceKey.getConnProps().getProperty("user");
                String pass = dataSourceKey.getConnProps().getProperty("password");
                HikariConfig config = new HikariConfig(dsProperties);
                config.setPoolName(dataSourceKey.getKey());
                config.setJdbcUrl(url);
                config.setUsername(user);
                config.setPassword(pass);
                return new HikariDataSource(config);
            }
        };
    }
}

Executing queries

Finally, the QueryExecuter class uses the JdbcManager to execute queries. Here QueryExecuter is used to fetch all employee details. The Employee class is a plain POJO, so it is not shown here explicitly.


Note that I have not specified the database URL, host and port here. I leave it up to you, the developer, to fetch these details from a secure place. Remember, the goal of this solution is to manage JDBC pools for multiple databases, and we can assume each cloud application instance provisioned for a customer has its own database. Even if there is only one cloud application shared by different users, there should be some separation of user data: a different database, or the same database with different tables or schemas. The clientID is an identifier with which we can identify the client and fetch the corresponding database details.

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.ArrayList;
import java.util.List;

public class QueryExecuter {

    private final JdbcManager jdbcManager;
    private static final DataSourceConfigurator dsc = new DataSourceConfigurator();
    private static final CacheConfigurator cc = new CacheConfigurator();

    public QueryExecuter() {
        this.jdbcManager = new JdbcManager(dsc, cc);
    }

    public List<Employee> getResult(String clientId, String dbHostUrl,
            String userName, String password) throws SQLException {
        String SQL_QUERY = "select * from emp";
        List<Employee> employees = new ArrayList<>();
        try (Connection jdbcConn =
                     jdbcManager.getConnection(clientId, dbHostUrl, userName, password);
             PreparedStatement pst = jdbcConn.prepareStatement(SQL_QUERY);
             ResultSet rs = pst.executeQuery()) {
            while (rs.next()) {
                Employee employee = new Employee();
                employee.setEmpNo(rs.getInt("empno"));
                employee.setEname(rs.getString("ename"));
                employee.setJob(rs.getString("job"));
                employee.setSal(rs.getInt("sal"));
                employee.setDeptno(rs.getInt("deptno"));
                employees.add(employee);
            }
        }
        return employees;
    }
}

Remember, you will need a JDBC driver to communicate with your database; which driver depends on the database you use. The code itself can be leveraged irrespective of the type of database used.
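For example, if your databases are PostgreSQL and you build with Maven, the driver dependency would look like this (the version shown is illustrative; pick a current release):

```xml
<dependency>
    <groupId>org.postgresql</groupId>
    <artifactId>postgresql</artifactId>
    <!-- illustrative version; use a current release -->
    <version>42.2.24</version>
</dependency>
```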
