Kits

Cloud CMS ships a number of pre-built kits containing Docker configuration files to help you get started. These
kits are built from customer feedback to cover the most commonly requested scenarios. They can be used straight
away or, at the very least, serve as a useful reference.

To download the Cloud CMS Docker Kits, please visit our Docker download page.

Kits

All of the kits utilize Docker and Docker Compose.

In each kit, you'll find a docker-compose.yml file which describes the services that are to be instantiated
into containers.
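For reference, a minimal docker-compose.yml for a standalone kit might look roughly like the following sketch. The
image names, ports and version tags here are assumptions for illustration; consult the kit you download for the
exact values:

    version: "2"
    services:
      api:
        image: cloudcms/api-server        # assumed image name; check your kit
        ports:
          - "8080:8080"                   # assumed API port
        depends_on:
          - mongodb
          - elasticsearch
      ui:
        image: cloudcms/ui-server         # assumed image name; check your kit
        ports:
          - "80:80"
        depends_on:
          - api
      mongodb:
        image: mongo:3.4                  # standard Docker library image
        volumes:
          - ./data/mongodb:/data/db
      elasticsearch:
        image: elasticsearch:5            # standard library image
        volumes:
          - ./data/elasticsearch:/usr/share/elasticsearch/data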

The following kits are available:

Services

The kits are generally configured with the following services:

API

This is the Cloud CMS API Server. You may choose to launch a single API server or multiple servers. In the latter case,
API servers will self-discover and configure themselves into an elastic cluster. Work is distributed across the cluster and
requests are distributed to servers by a front-end load balancer (typically an AWS load balancer or HA Proxy).

A single API server instance can simultaneously act as both a "request handler" and a "worker". By default, API server
instances play both roles. This can be adjusted through configuration (see the sketch under "API Workers" below).

In general, you can get started with standalone API servers and then split your architecture into "request handlers"
and "workers" later when you have a good understanding of job overhead for complex tasks like indexing, mimetype
transformation and more.

API Request Handlers

These are Cloud CMS API Servers that have been configured purely to handle incoming web requests. A load balancer
sits in front of a cluster of API Request Handlers and distributes load across the cluster. Long-running work items
are off-loaded onto the distributed job queue; API Request Handlers do not process these work items, so
100% of their capacity is devoted to handling and turning around incoming requests.

API Workers

These are Cloud CMS API Servers that have been configured so as not to handle real-time incoming requests but instead
to work on long-running scheduled work items. They participate in the global job queue and pick up items to work on.
For example, extracting metadata from complex document types or indexing large documents are long-running
activities that are processed as background jobs.
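As a rough illustration, splitting the two roles in a kit's docker-compose.yml might look like the sketch below.
The CLOUDCMS_API_ROLE environment variable is purely hypothetical; the actual property names are defined by the
API server configuration shipped in your kit:

    services:
      api-web:
        image: cloudcms/api-server            # assumed image name
        environment:
          - "CLOUDCMS_API_ROLE=handler"       # hypothetical setting; use your kit's actual property
        ports:
          - "8080:8080"
      api-worker:
        image: cloudcms/api-server
        environment:
          - "CLOUDCMS_API_ROLE=worker"        # hypothetical setting
        # no published ports; workers only pull jobs from the distributed queue

A load balancer would route traffic only to api-web; the api-worker containers simply join the cluster and pick up
jobs from the queue.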

UI

This is the Cloud CMS UI Server. It serves the Cloud CMS editorial user interface and administration
console. You may choose to launch a single UI server or multiple servers operating in a cluster. In the latter case,
Redis is used as a backend for distributed cluster messaging, cache management and notifications.

Virtual

This is the Cloud CMS Virtual Hosting Server. The virtual hosting server provides an endpoint to which
Cloud CMS custom APIs, Dust-driven web sites and static assets can be deployed. The virtual server
provides a real-time presentation layer that lets your editorial team instantly preview content changes.

Web Shot

This is the Cloud CMS Web Shot Server. This server is responsible for capturing screenshots and DOM information
from live, running web sites for the purposes of introspection and analytics capture.

MongoDB

This is the MongoDB Server. Cloud CMS uses MongoDB as its primary data store. In the kits provided, MongoDB
is deployed using the standard Docker library image for MongoDB (which is maintained and supported by
the MongoDB company itself).
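A standalone MongoDB service in one of these kits typically amounts to something like the following; the version
tag, port mapping and volume path below are assumptions, so check the kit you are using:

    services:
      mongodb:
        image: mongo:3.4                      # standard Docker library image for MongoDB
        volumes:
          - ./data/mongodb:/data/db           # persist data outside the container
        ports:
          - "27017:27017"                     # default MongoDB port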

In some kits, MongoDB is deployed as a single standalone server. In other kits, it is deployed in a sharded
configuration or using replica sets. These configurations are provided as a means of getting started.
For best practices, guidance and production recommendations for MongoDB, we recommend working with MongoDB
or its partners.

Elastic Search

This is the Elastic Search Server. Cloud CMS uses Elastic Search as a secondary index for the purposes of
full-text search and query via the Elastic Search DSL. In the kits provided, Elastic Search is deployed using the
standard Docker library image for Elastic Search (which is maintained and supported by Elastic, the company
behind Elastic Search).
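A single-node Elastic Search service in a kit looks broadly similar to the MongoDB example above; the version tag
and heap setting here are assumptions rather than requirements:

    services:
      elasticsearch:
        image: elasticsearch:5                # standard library image for Elastic Search
        environment:
          - "ES_JAVA_OPTS=-Xms1g -Xmx1g"      # heap sizing; adjust to your host
        volumes:
          - ./data/elasticsearch:/usr/share/elasticsearch/data
        ports:
          - "9200:9200"                       # HTTP API port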

For guidance and recommendations for production installations of Elastic Search, we recommend working with
Elastic, its community and its partners.

HA Proxy

We use HA Proxy where needed within the kits to demonstrate a load balancer. In practice,
you may opt to use HA Proxy, or you may prefer an AWS Load Balancer or even a hardware load
balancer.

Given that Cloud CMS is stateless in all of its containers, setting up a load balancer of any variety should be
fairly straightforward.
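Within a kit, HA Proxy is simply another service in docker-compose.yml that mounts its own configuration file and
forwards traffic to the API (or UI) containers. A sketch, with an assumed config path and port:

    services:
      haproxy:
        image: haproxy:1.7                    # standard library image for HA Proxy
        volumes:
          - ./haproxy/haproxy.cfg:/usr/local/etc/haproxy/haproxy.cfg:ro
        ports:
          - "80:80"                           # public entry point
        depends_on:
          - api

The mounted haproxy.cfg then defines the backend pool of API containers; because the containers are stateless,
simple round-robin balancing works fine.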

Redis

The Cloud CMS UI and Virtual Server both use the Cloud CMS Node.js-based Application Server under the hood to
deliver clustered configurations. As such, we recommend using Redis as a backend for distributed messaging
and notifications. Redis is therefore included in many of the kits along with a sample configuration
for getting started.
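A minimal sketch of wiring the UI server to Redis in docker-compose.yml follows. The UI image name and the
environment variable used to point the UI at Redis are assumptions, so refer to the sample configuration shipped
in the kit:

    services:
      redis:
        image: redis:3.2                      # standard Docker library image for Redis
        ports:
          - "6379:6379"                       # default Redis port
      ui:
        image: cloudcms/ui-server             # assumed image name
        environment:
          - "CLOUDCMS_CLUSTER_BACKEND=redis"  # hypothetical; use the kit's sample configuration
        depends_on:
          - redis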