Enterprise Caching Techniques – Distributed Caching

Standalone caching is a good solution when requests touch only a small subset of the items in the database. But what if the application has a heavy workload and all of the data is accessed all the time? Or what if there is a business need to store a large volume of data in the cache? In such scenarios standalone caching may not be the right solution. Furthermore, standalone caching may not be useful if the application is running in a multi-server environment. This is when the need for distributed caching arises. Distributed caching is a key factor behind many successfully deployed applications and is therefore widely used; it is now accepted as a key component of any scalable application architecture. Memcached, one of the leading and most popular distributed caching frameworks, is used as part of highly scalable architectures at Facebook, Twitter, YouTube and others.

Distributed caching is a form of caching that allows the cache to span multiple servers so that it can grow in size and in transactional capacity. It distributes data across multiple servers while still giving you a single logical view of the cache. The architecture it employs makes distributed caching highly scalable. However, like any other distributed system, distributed caching frameworks are inherently complex due to the involvement of network elements. In fact, a slow network can be the key performance and scalability barrier for distributed caching. Distributed caching is an out-of-process caching service, and from the application's standpoint it acts as an L2 cache.

One of the most widely used tools for distributed caching is Memcached. With Memcached one can configure a distributed architecture where two or more nodes communicate with each other to synchronize data in case of a node failure. This n-to-n network communication can be further tuned into a 1-to-n arrangement, where one node behaves as a master holding a copy of all the data, while each slave carries only its unique share. Both layouts have their own advantages and problems. Below, purely as an example, is a "cluster" on a single machine: two memcached instances running on two different ports. Client code can tie them together into a cluster, as shown further down.

memcached -l 127.0.0.1 -p 1212
memcached -l 127.0.0.1 -p 2705

This cluster can be used with any memcached client API. spymemcached is a Java client, and the examples given here use it.

MemcachedClient c = new MemcachedClient(AddrUtil.getAddresses("127.0.0.1:1212 127.0.0.1:2705"));
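
For a fuller picture, here is a minimal sketch built on the spymemcached client; the key name, value and expiry time are illustrative assumptions, not part of the original example.

import java.util.concurrent.TimeUnit;

import net.spy.memcached.AddrUtil;
import net.spy.memcached.MemcachedClient;

public class MemcachedClusterExample {
    public static void main(String[] args) throws Exception {
        // The two instances form one logical cache; the client hashes
        // each key to exactly one of the nodes
        MemcachedClient client = new MemcachedClient(
                AddrUtil.getAddresses("127.0.0.1:1212 127.0.0.1:2705"));

        // Store a value with a 60 second expiry, waiting for the async write
        client.set("user:42", 60, "John Doe").get(5, TimeUnit.SECONDS);

        // Read it back; get() returns null on a cache miss
        System.out.println(client.get("user:42"));

        client.shutdown();
    }
}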

Database and IO operations are costly when there are millions of application transactions. Distributed caching is one solution for reducing the load on the backend and making it more scalable. The main idea behind distributed caching is to hold a large volume of the stored data in memory. Distributed caching techniques have advanced and matured to the point where they can manage massive data sets completely in memory. A database server usually requires a high-end machine, whereas a distributed cache performs well on lower-cost machines (like those used for Web servers). This allows adding more small machines and scaling the application out easily. Such a cache can even start on a modest server configuration as a small memory entity, using only a small slice of the memory available to the application.

Enterprise Caching Techniques – Standalone Caching

Many application domains are fetch-centric, with very few store operations. In e-commerce, for example, the buyer's search-versus-purchase ratio is 9:1 or sometimes even wider. Such applications require an additional layer of caching in their architecture. Caching is not something new and recently invented; it has been around since the era of hardware evolution started. The L1 and L2 CPU caches we see in any hardware architecture are caching mechanisms, and they are still in use. L1 and L2 sit between the processor and RAM, and contain system-critical information for processing. Fetching data from these caches is faster compared to RAM, but their size is quite small compared to main memory. This helps bifurcate the types of data and helps the CPU decide where each piece should be stored. Caching in enterprise applications derives directly from that same concept, except that here the cache may live in the same machine or in different machines/nodes connected to the parent by very fast network cards. Caching in enterprise applications is mainly divided into two parts: Standalone Caching and Distributed Caching.

Standalone Caching

Standalone caching, sometimes referred to as embedded or in-process caching, is a single-virtual-machine technique for storing frequently requested data. It acts as an L1 cache from the application's perspective and resides in RAM.
The main purpose of standalone caching is to improve the performance of business-critical operations. A standalone cache has limited main memory at its disposal, so only data that is frequently used and important for business-critical functions is cached. Standalone caching products are almost always used as a side-cache for an application's data access layer. Side-cache refers to an architectural pattern in which the application manages the caching of data from a database, a filesystem or any other source, using the cache to temporarily store objects. The application first checks the cache for an existing copy of the data and returns it if present. When the data is not present, the application retrieves it from the data access layer and puts it into the cache for the next incoming request, as sketched below.
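As a minimal sketch of this side-cache pattern, here a plain concurrent map stands in for a real cache product, and the Product type and ProductDao data access layer are hypothetical names introduced only for illustration.

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class ProductSideCache {
    // Hypothetical domain type and data access layer, for illustration only
    public static class Product { }
    public interface ProductDao { Product load(String id); }

    private final Map<String, Product> cache = new ConcurrentHashMap<>();
    private final ProductDao dao;

    public ProductSideCache(ProductDao dao) {
        this.dao = dao;
    }

    public Product findById(String id) {
        // 1. Check the cache for an existing copy and return it if present
        Product cached = cache.get(id);
        if (cached != null) {
            return cached;
        }
        // 2. On a miss, retrieve the data from the data access layer...
        Product loaded = dao.load(id);
        // 3. ...and put it into the cache for the next incoming request
        if (loaded != null) {
            cache.put(id, loaded);
        }
        return loaded;
    }
}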
Any cache needs some mechanism to cope with invalid cached data, that is, data that has been updated at the source but not yet refreshed in the cache. There are several techniques that can be used to deal with invalid data, or to evict unused entries and free memory for other in-demand data.

Such concerns can be handled by writing an API that takes care of invalid cache entries. Caching products like EHCache provide basic functionality for handling invalid data; the application decides at what point cached data should be invalidated. The typical strategy is that whenever data is updated in the store, the application invalidates the cached copy. If it is not vital to update the cached copy on the spot, we can apply other techniques that periodically refresh the cache through a time-based configuration. We can even combine both techniques in a multi-server environment.
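For illustration, here is a sketch against the Ehcache 2.x API combining both approaches: explicit invalidation when the store is updated, plus time-based expiry as a safety net. The cache name, key, value and five-minute TTL are assumptions made for this example.

import net.sf.ehcache.Cache;
import net.sf.ehcache.CacheManager;
import net.sf.ehcache.Element;
import net.sf.ehcache.config.CacheConfiguration;

public class InvalidationExample {
    public static void main(String[] args) {
        CacheManager manager = CacheManager.create();

        // Time-based refresh: entries silently expire after 5 minutes
        Cache products = new Cache(
                new CacheConfiguration("products", 10000) // max 10,000 entries on heap
                        .timeToLiveSeconds(300));
        manager.addCache(products);

        products.put(new Element("sku-42", "Blue Widget"));

        // Explicit invalidation: when the row is updated in the store,
        // the application removes the stale cached copy on the spot
        products.remove("sku-42");

        manager.shutdown();
    }
}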

There are also other ways to update and remove cached data. With a TTL (time-to-live) expiry or an LRU (least recently used) eviction policy, we can monitor individual cache entries and take action on them through the API, as the sketch below shows.
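
An LRU eviction policy itself is simple to sketch. In Java, a LinkedHashMap in access order gives a minimal, non-thread-safe version; the capacity is an arbitrary choice made by the caller.

import java.util.LinkedHashMap;
import java.util.Map;

public class LruCache<K, V> extends LinkedHashMap<K, V> {
    private final int capacity;

    public LruCache(int capacity) {
        super(16, 0.75f, true); // true = order entries by most recent access
        this.capacity = capacity;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        // Evict the least recently used entry once capacity is exceeded
        return size() > capacity;
    }
}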

The problem with a standalone cache is that it is very limited and can only be used within a single node/machine. Hence the need for distributed caching, covered next in this series.

Memory Based Architecture For Enterprise Application – Introduction

We had an architecture discussion in one of the technical meetings at my company recently, and I was assigned to share all the details on Memory Based Architecture. I am sharing the details from those sessions here.

Memory, the changing philosophy of enterprise applications, and Memory Based Architecture:

Main memory is a high-bandwidth, low-latency component that can match the performance of the processor. The bandwidth of main memory is a few GB per second, as opposed to disk, which is around a hundred MB per second. The latency of main memory is in the nanosecond range, whereas that of disk is in the millisecond range. Traditionally, main memory was considered an expensive resource and was therefore used sparingly. However, the perception that RAM is an expensive component is now changing due to the sharp drop in prices over the past several years. At the same time, enterprise applications require more scalable and performance-oriented use of every chunk of available physical memory. Today an enormous amount of main memory is cheaply available, and many applications use memory in gigabytes and terabytes. Main memory empowers application architectures to achieve linear scalability and high performance, qualities that are extremely important to modern enterprise applications for delivering guaranteed high performance under intensive and unpredictable workloads.

As enterprises use more memory, software vendors have flooded the market with several types of memory based products in order to seize this new business opportunity. These products are targeted at supporting various business use cases and architectural scenarios. This series is intended to introduce the various memory based product categories along with the business uses and architectural scenarios they support.

When we think of memory based products, high performance is the first thing that comes to mind. Yes, high performance is the primary reason memory based products are used, but it is not the only reason. Many times they are deployed to reduce IO operations over the network or to address the high latency of disk based products like databases. Typically, in an N-tier architecture, properly designed application code can easily scale out by adding more application servers. The main scalability barrier, however, is the disk based database that is centrally accessed by all the clustered application servers. Here memory based products are typically deployed to overcome the scalability bottleneck posed by the disk based database and make the application servers more scalable. Thus the following can be considered the primary scenarios for any memory based product:

– Improve application performance
– Reduce network & disk IO operations
– Overcome scalability barriers & make application servers more scalable

Memory based products can be broadly classified as Caching (Standalone & Distributed Caching), In-Memory Data Grid (IMDG), Main Memory Database (MMDB), and Application Platforms that enable Space Based Architecture; each is covered in great detail in this series.