Cache
The Cache Service is responsible for providing an application server cache that all processes, whether local or distributed, have access to for non-transactional caching. Objects stored in the cache are eventually available to other members of the cluster (if there are any), whether they run on the same server or on other servers in the cluster.
Modes
The Cloud CMS Application Server can run in one of three modes:
single - single node
cluster - multiple nodes (on a single process)
sticky-cluster - multiple nodes (on a single process) with sticky sessions
In a single node configuration, the broadcast service raises events that are subscribed to by services running within the same process.
In cluster mode (which uses the Node cluster module under the hood), a single message is distributed to multiple Node processes running on the same server or on other servers in the cluster. For each server in the cluster, N workers are established (one worker per CPU for N CPUs). If there are M servers with N CPUs each, you will have M x N total workers.
The broadcast service distributes messages across the cluster such that a message published to any one worker is delivered internally to every worker in the cluster, so that each worker may act upon it individually.
The final option is sticky-cluster, which is essentially the same as cluster but offers one important difference. In some cases where a single server hosts N workers (on N CPUs), you may need to ensure that repeated requests from the same source machine arrive at the same CPU. One such example is Socket.IO, where socket connection state is maintained on the worker in a "socket session".
For this reason, you may opt to use sticky-cluster mode. In this mode, a single port listens for all incoming requests and internally routes them to cluster workers. A sticky calculation is made (based on IP address and network identifier) so that if a second request comes from the same source, it routes to the same worker the next time around.
Note that sticky-cluster only works for a single server. If you have a cluster with M servers (as you likely will), you will still need a physical load balancer ahead of the physical cluster to route requests from the outside world to the correct server. Once the request arrives at that server, sticky-cluster takes it from there.
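As a sketch, a sticky-cluster deployment paired with the Redis cache provider (described below) might be configured as follows, assuming the setup property accepts "sticky-cluster" in the same way it accepts "single" and "cluster" in the examples later in this page; the connection URL is a placeholder:
{
    "setup": "sticky-cluster",
    "cache": {
        "type": "redis",
        "config": {
            "url": "<Redis connection URL>"
        }
    }
}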
Essential Configuration
{
    "cache": {
        "type": "<type>",
        "config": {
            ...configuration
        }
    }
}
There are two types of cache providers:
memory - used for single mode
redis - used for cluster or sticky-cluster mode
For more information about setup modes, please see Application Server Clustering.
Memory Provider
This is a very simplistic cache that is useful for single-process, development deployments. Cache state is stored in memory. This is very fast but only works for the single-process scenario.
If you don't specify a cache configuration, the memory cache provider will be configured for you automatically if you are running in single mode.
Example:
{
    "setup": "single",
    "cache": {
        "type": "memory"
    }
}
Redis Provider
This cache uses Redis (a database) for back-end storage of cache state. All workers on all servers that are members of the cluster have access to this shared database for storage.
This provider is the slowest since it maintains an external database connection. However, it is also the most scalable in that storage of objects is not performed in-memory. In general, for scalability goals, you will want to keep heap usage predictable per server. Offloading to an external database is one way to do this, since less data is retained in memory or passivated to disk (which would introduce storage concerns).
Unlike the memory provider, there is no automatic configuration. You will need to provide your Redis DB connection information.
Example:
{
    "setup": "cluster",
    "cache": {
        "type": "redis",
        "config": {
            "url": "<Redis connection URL>"
        }
    }
}
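For instance, assuming a Redis instance running on the local host at the default port 6379 (illustrative values only), the connection URL would take the standard redis:// form:
{
    "setup": "cluster",
    "cache": {
        "type": "redis",
        "config": {
            "url": "redis://localhost:6379"
        }
    }
}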
Environment Variables
You can also configure the Cache Service using the following environment variables:
CLOUDCMS_CACHE_TYPE - either memory or redis
You can configure the Cache Redis Provider using the following settings:
CLOUDCMS_CACHE_REDIS_DEBUG_LEVEL - either info, warn, error or debug
CLOUDCMS_CACHE_REDIS_URL - the Redis connection URL
Further, the Cache Redis Provider will automatically draw from the following global Redis environment variables if you've set them:
CLOUDCMS_REDIS_DEBUG_LEVEL - either info, warn, error or debug
CLOUDCMS_REDIS_URL - the Redis connection URL
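Putting these together, a Redis-backed cache could be configured entirely through environment variables along these lines (the URL and debug level shown are illustrative values, not requirements):
CLOUDCMS_CACHE_TYPE=redis
CLOUDCMS_CACHE_REDIS_URL=redis://localhost:6379
CLOUDCMS_CACHE_REDIS_DEBUG_LEVEL=warn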