Caching with JBoss Infinispan

Infinispan is an extremely scalable, highly available key/value data store and data grid platform. It is 100% open source and written in Java.
The purpose of Infinispan is to expose a data structure that is distributed, highly concurrent and designed from the ground up to make the most of modern multi-processor and multi-core architectures. It is often used as a distributed cache, but also as a NoSQL key/value store or object database.

The main features of Infinispan are (a short API sketch follows this list):

  • Two cache modes:
    • Local mode cache (single node)
    • Clustered mode cache (using JGroups), which can be configured in:
      • Replicated mode
      • Invalidation mode
      • Distribution mode
  • Asynchronous API
  • Eviction policy
  • Expiration policy
  • Cache passivation
  • Transaction support
  • Locking and concurrency
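
Several of these features are exposed directly by the Cache API; for example, expiration can be overridden per entry and writes can be performed asynchronously. The following is a minimal sketch of both, assuming Infinispan 5.x and a plain local CacheManager (the class name FeatureSketch is only for illustration):

import java.util.concurrent.TimeUnit;

import org.infinispan.Cache;
import org.infinispan.manager.DefaultCacheManager;

public class FeatureSketch {

	public static void main(String[] args) throws Exception {
		// Default constructor: a CacheManager with a basic local configuration
		DefaultCacheManager cacheManager = new DefaultCacheManager();
		Cache<String, String> cache = cacheManager.getCache();

		// Expiration: override lifespan (30 s) and max idle time (10 s) for this entry only
		cache.put("key", "value", 30, TimeUnit.SECONDS, 10, TimeUnit.SECONDS);

		// Asynchronous API: the put returns a future instead of blocking the caller
		cache.putAsync("other-key", "other-value").get();

		cacheManager.stop();
	}
}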

Infinispan offers both declarative and programmatic configuration.
Declarative configuration comes in the form of an XML document that adheres to the provided Infinispan configuration XML schema.

The main configuration abstractions in Infinispan are (a programmatic sketch follows this list):

  • Global configuration: global settings shared among all cache instances created by a single CacheManager
  • Default configuration: more specific to the caching domain itself. It specifies eviction, locking, transaction, clustering, cache store settings, etc.
  • Named caches: settings for a specific cache
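
The same three levels can also be set up programmatically. The following is a minimal sketch, assuming the Infinispan 5.x builder API, that mirrors the declarative configuration used later in this article (the class name ProgrammaticConfig is only for illustration):

import org.infinispan.configuration.cache.CacheMode;
import org.infinispan.configuration.cache.Configuration;
import org.infinispan.configuration.cache.ConfigurationBuilder;
import org.infinispan.configuration.global.GlobalConfiguration;
import org.infinispan.configuration.global.GlobalConfigurationBuilder;
import org.infinispan.manager.DefaultCacheManager;

public class ProgrammaticConfig {

	public static void main(String[] args) {
		// Global configuration: the cluster transport shared by all caches of this CacheManager
		GlobalConfiguration global = GlobalConfigurationBuilder.defaultClusteredBuilder()
				.transport().addProperty("configurationFile", "jgroups.xml")
				.build();

		// Default configuration: synchronous replication for every cache
		Configuration defaults = new ConfigurationBuilder()
				.clustering().cacheMode(CacheMode.REPL_SYNC)
				.build();

		DefaultCacheManager cacheManager = new DefaultCacheManager(global, defaults);

		// Named cache: expiration settings that apply to "my-cache" only
		Configuration myCacheConfig = new ConfigurationBuilder()
				.read(defaults)
				.expiration().lifespan(5000).maxIdle(10000).wakeUpInterval(1000)
				.build();
		cacheManager.defineConfiguration("my-cache", myCacheConfig);

		cacheManager.stop();
	}
}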

Simple example

Now let's see how to use Infinispan through an example: a simple web application with a servlet that reads and writes a value in a clustered, replicated cache.

Infinispan configuration

Let's start with the Infinispan configuration, defined in a file named infinispan.xml.

<?xml version="1.0" encoding="UTF-8"?>
<infinispan xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
	xsi:schemaLocation="urn:infinispan:config:5.3 http://www.infinispan.org/schemas/infinispan-config-5.3.xsd"
	xmlns="urn:infinispan:config:5.3">

 	<global>
      <transport>
         <properties>
            <property name="configurationFile" value="jgroups.xml"/>
         </properties>
      </transport>
   </global>
   <default>
      <!-- Configure a synchronous replication cache -->
      <clustering mode="replication">
         <sync/>
      </clustering>
   </default>
   
	<namedCache name="my-cache">
		<expiration lifespan="5000" maxIdle="10000" wakeUpInterval="1000" />
	</namedCache>
</infinispan>

This file defines a named cache, my-cache, configures the cache mode as synchronous replication, and sets the expiration policy. It also references the JGroups configuration, used for replication across the cluster. This is an example of a JGroups configuration:

<?xml version="1.0" encoding="UTF-8"?>
<config xmlns="urn:org:jgroups" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
	xsi:schemaLocation="urn:org:jgroups http://www.jgroups.org/schema/JGroups-3.0.xsd">

	<UDP mcast_port="${jgroups.udp.mcast_port:45588}" 
		tos="8"
		ucast_recv_buf_size="20M" 
		ucast_send_buf_size="640K"
		mcast_recv_buf_size="25M" 
		mcast_send_buf_size="640K" 
		loopback="true"
		max_bundle_size="64K"
		max_bundle_timeout="30" 
		ip_ttl="${jgroups.udp.ip_ttl:2}"
		enable_diagnostics="true"
		thread_naming_pattern="cl" 
		timer_type="new" 
		timer.min_threads="4"
		timer.max_threads="10" 
		timer.keep_alive_time="3000"
		timer.queue_max_size="500" 
		thread_pool.enabled="true"
		thread_pool.min_threads="2" 
		thread_pool.max_threads="8"
		thread_pool.keep_alive_time="5000" 
		thread_pool.queue_enabled="true"
		thread_pool.queue_max_size="10000" 
		thread_pool.rejection_policy="discard"
		oob_thread_pool.enabled="true" 
		oob_thread_pool.min_threads="1"
		oob_thread_pool.max_threads="8" 
		oob_thread_pool.keep_alive_time="5000"
		oob_thread_pool.queue_enabled="false" 
		oob_thread_pool.queue_max_size="100"
		oob_thread_pool.rejection_policy="Run" />

	<PING timeout="2000" num_initial_members="3" />

	<MERGE2 max_interval="30000" min_interval="10000" />

	<FD_SOCK />

	<FD_ALL />

	<VERIFY_SUSPECT timeout="1500" />

	<BARRIER />

	<pbcast.NAKACK 
		use_mcast_xmit="true" 
		retransmit_timeout="300,600,1200"
		discard_delivered_msgs="true" />

	<UNICAST/>

	<pbcast.STABLE 
		stability_delay="1000" 
		desired_avg_gossip="50000"
		max_bytes="4M" />

	<pbcast.GMS 
		print_local_addr="true" 
		join_timeout="3000"
		view_bundling="true" />

	<UFC max_credits="2M" min_threshold="0.4" />

	<MFC max_credits="2M" min_threshold="0.4" />

	<FRAG2 frag_size="60K" />

	<pbcast.STATE_TRANSFER />

</config>

CacheManager

This class is used to manage the CacheManager and cache instances.

import java.io.IOException;

import org.infinispan.Cache;
import org.infinispan.manager.DefaultCacheManager;

public class MyCacheManager {
				
	private static DefaultCacheManager cacheManager;
	private static Cache<String, Integer> cache;
	
	private MyCacheManager() { }
	
	public static Cache<String, Integer> getCache() {		
		return cache;
	}
	
	public static void init() throws IOException {
		cacheManager = new DefaultCacheManager("infinispan.xml");
		cache = cacheManager.getCache("my-cache");		
	}
	
	public static void destroy() {
		cacheManager.stop();
	}
}

ServletContextListener

The CacheManager instance continues to live even after the webapp is undeployed. This causes an exception when you try to deploy the webapp again.
As reported in the documentation, when the system shuts down, it should call stop() on the CacheManager. This will ensure all caches within its scope are properly stopped as well.
So, it's important to stop it manually on undeploy.

This can be done using a ServletContextListener, registered either with a <listener> entry in web.xml or, on a Servlet 3.0 container, with the @WebListener annotation.

import java.io.IOException;

import javax.servlet.ServletContextEvent;
import javax.servlet.ServletContextListener;

// log4j is assumed here for the Logger used below
import org.apache.log4j.Logger;

public class InfinispanServletContextListener implements ServletContextListener {
	
	private Logger logger = Logger.getLogger(this.getClass());

	@Override
	public void contextDestroyed(ServletContextEvent arg0) {
		MyCacheManager.destroy();
		logger.debug("CacheManager stopped");		
	}

	@Override
	public void contextInitialized(ServletContextEvent arg0) {
		try {
			MyCacheManager.init();
			logger.debug("Cache initialized");
		} catch (IOException e) {
			e.printStackTrace();
			logger.debug("Cache not initialized");
		}		
	}
}

The servlet

This servlet is responsible for reading and putting an int value in the cache. It also sets that value as an attribute of the HttpServletRequest, so that it can be read from the JSP defined later.

import java.io.IOException;

import javax.servlet.RequestDispatcher;
import javax.servlet.ServletContext;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// log4j is assumed here for the Logger used below
import org.apache.log4j.Logger;

import org.infinispan.Cache;

public class WebController extends HttpServlet {

	private static final long serialVersionUID = 9021818305083263842L;
	
	private Cache<String, Integer> cache;
	
	private Logger logger = Logger.getLogger(this.getClass());

	@Override
	public void init() throws ServletException {
		cache = MyCacheManager.getCache();	
	}

	@Override
	protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws ServletException, IOException {
		
		int value = 0;
		if(cache.containsKey("cacheValue"))
			value = cache.get("cacheValue");

		cache.put("cacheValue", ++value);
		logger.debug("cacheValue update to " + value);
		
		req.setAttribute("cacheValue", value);
		
		ServletContext context = getServletContext();
		RequestDispatcher rd = context.getRequestDispatcher("/index.jsp");
		rd.forward(req, resp);
	}
}
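
The servlet must also be mapped to the webController URL used by the form in the JSP below. A minimal sketch, assuming a Servlet 3.0 container, is to annotate the class as shown here; on older containers an equivalent servlet-mapping entry in web.xml is needed:

import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;

// Maps the servlet to the "webController" URL used by the JSP form (Servlet 3.0+).
// The init() and doGet() methods are the ones shown above, omitted here for brevity.
@WebServlet("/webController")
public class WebController extends HttpServlet {
	// ...
}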

Now you can read and update the cache value from a JSP like this:

<%@ taglib prefix="c" uri="http://java.sun.com/jsp/jstl/core" %>
<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=ISO-8859-1">
<title>Infinispan cache example</title>
</head>
<body>
<c:choose>
	<c:when test="${cacheValue != null}">
		Cache value: <c:out value="${cacheValue}" />
	</c:when>
	<c:otherwise> Cache value: 0</c:otherwise>
</c:choose>
<form action="webController" method="get">
<input type="submit" value="Update">
</form>
</body>
</html>

Test the cache

To test whether the distributed cache is working correctly, you can deploy the webapp on different machines (or on different application servers on the same machine). All the applications will be part of the same cluster, managed by JGroups, and the cache will be replicated on every node. In this way, every application in the cluster will see the value of cacheValue updated by the other nodes.
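
If you prefer to check replication without deploying the webapp twice, the following is a minimal sketch, not part of the webapp, that starts two CacheManager instances in the same JVM with the same infinispan.xml and verifies that a value written through one is visible through the other (depending on the Infinispan version, you may need to allow duplicate JMX domains):

import org.infinispan.Cache;
import org.infinispan.manager.DefaultCacheManager;

public class ReplicationTest {

	public static void main(String[] args) throws Exception {
		// Two cache managers with the same clustered configuration form a two-node cluster
		DefaultCacheManager cm1 = new DefaultCacheManager("infinispan.xml");
		DefaultCacheManager cm2 = new DefaultCacheManager("infinispan.xml");

		Cache<String, Integer> cache1 = cm1.getCache("my-cache");
		Cache<String, Integer> cache2 = cm2.getCache("my-cache");

		// Wait until both managers see the two cluster members
		while (cm1.getMembers().size() < 2 || cm2.getMembers().size() < 2) {
			Thread.sleep(100);
		}

		cache1.put("cacheValue", 42);
		// With synchronous replication the value is immediately visible on the other node
		System.out.println("Read from the second node: " + cache2.get("cacheValue"));

		cm1.stop();
		cm2.stop();
	}
}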
