How we use Docker at Cloud CMS

At Cloud CMS, we use Docker to provision our cloud infrastructure servers on top of Amazon Web Services. Our stack consists of five different clusters:

  • Cloud CMS API Servers
  • Cloud CMS UI Servers
  • Cloud CMS App Servers
  • Elasticsearch Servers
  • MongoDB Servers

With the exception of MongoDB, all of these clusters sit behind elastic load balancers and are architected so that we can spin up new servers and tear down old ones as demand fluctuates. That is to say, they are fully elastic in design. The product components were all built so that cache state is fully distributed and requests naturally fail over to alternate servers as the configuration changes over time.

Note: MongoDB is the one exception to this, but it is no less scalable; it simply has a different architecture. Here, we run a sharded MongoDB backend with replica sets. The Cloud CMS primary object identifier (_doc) serves as the shard key, distributing objects evenly across the backend shards.
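
As a rough sketch, configuring that kind of shard key in the mongo shell looks something like this (the router host, database and collection names here are hypothetical, not our actual schema):

    # Hypothetical sketch: enable sharding and use _doc as the shard key.
    # Host, database and collection names are illustrative.
    mongo --host mongos0.example.com --eval '
        sh.enableSharding("cloudcms");
        sh.shardCollection("cloudcms.nodes", { "_doc": 1 });
    '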

The DevOps tasks of releasing new product updates, allocating new servers and bouncing users between servers are all highly automated. We made a fundamental design decision early on to be stateless so that server-side session management would not be needed. This means that any server anywhere in the cluster can handle any request. OAuth 2.0 bearer tokens are passed with every request, and a distributed object cache keeps performance consistent without reloading objects each time.
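
For instance, every API call carries its token in a standard Authorization header, so any server behind the load balancer can serve it. The endpoint path below is purely illustrative:

    # Illustrative request; the path and variables are placeholders.
    curl -H "Authorization: Bearer $ACCESS_TOKEN" \
         "https://api.cloudcms.com/repositories/$REPO_ID/branches/master/nodes/$NODE_ID"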

Docker plays a key role in all of this. With Docker, we're able to define the various images that comprise these clusters. Docker build files (Dockerfiles) describe all of the software that must be installed, everything from yum updates to third-party libraries like ImageMagick or FFmpeg. They set up users, lay down permissions and produce an image that is a perfect snapshot of a fresh environment.
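
As a minimal sketch (not our actual build file), a Dockerfile in this style might look like the following; the base image, packages, paths and start script are all assumptions:

    # Minimal illustrative Dockerfile; base image, packages, paths and
    # the start script are assumptions, not our actual build.
    FROM centos:7

    # OS updates plus third-party tooling such as ImageMagick
    RUN yum -y update && \
        yum -y install ImageMagick && \
        yum clean all

    # Set up a dedicated user and lay down permissions
    RUN useradd -m cloudcms
    COPY ./server /opt/cloudcms
    RUN chown -R cloudcms:cloudcms /opt/cloudcms

    USER cloudcms
    EXPOSE 8080
    CMD ["/opt/cloudcms/start.sh"]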

Docker then lets us launch these images into a running configuration locally. On my laptop, I can spin up a full Cloud CMS infrastructure. Every image forms the basis for one or more containers. With a single command (sketched just after the list below), I am able to launch 10 Docker containers:

  • 2 Cloud CMS API Servers (cluster)
  • 2 Cloud CMS UI Servers (cluster)
  • 2 Cloud CMS App Servers (cluster)
  • 2 Elasticsearch Servers (cluster)
  • 2 MongoDB Shards (cluster)

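To make that concrete, the single command could be a small wrapper script along these lines (the image and container names are hypothetical; -P asks Docker to map each exposed port dynamically):

    #!/bin/sh
    # Hypothetical launch script; image and container names are illustrative.
    for i in 1 2; do
        docker run -d -P --name "api-$i"    cloudcms/api
        docker run -d -P --name "ui-$i"     cloudcms/ui
        docker run -d -P --name "app-$i"    cloudcms/app
        docker run -d -P --name "search-$i" cloudcms/elasticsearch
        docker run -d -P --name "mongo-$i"  cloudcms/mongodb
    done
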
Each of these servers exposes a port binding that is dynamically mapped by the Docker launch script. It takes only a few seconds for these containers to spin up, after which I can work against this Cloud CMS infrastructure locally. This is the basis for how we do development and testing.
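
Because the host ports are assigned dynamically, looking up where a given container landed is a one-liner (the container name is taken from the hypothetical script above):

    # Print the host port bound to the container's exposed port 8080
    docker port api-1 8080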

The other beautiful thing about this approach is that we're able to deploy to AWS the exact same containers we use locally. Thus, we can test the precise bits that are about to go into production (ahead of them actually going into production). This gives us great confidence that our deployment model is sound. And frankly, it's one of the reasons we're all able to sleep well at night!
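
In practice, this amounts to pushing the tested image to a registry and pulling the identical bits from the production hosts. The registry host and version tag below are illustrative:

    # Tag and push the image we just tested locally...
    docker tag cloudcms/api registry.example.com/cloudcms/api:1.2.0
    docker push registry.example.com/cloudcms/api:1.2.0

    # ...then pull the exact same bits on the AWS host
    docker pull registry.example.com/cloudcms/api:1.2.0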

In addition, we offer these very same Docker images to our enterprise subscribers. This allows our customers to run Cloud CMS on-premises, either within their own data center or in a private cloud on Amazon AWS. This gives them more control over their backend infrastructure and more autonomy over their costs and services. It runs on their hardware, and they make the decisions about how much bandwidth, storage or data transfer to throw at it.

Cloud CMS supports these customers through a software support model, complete with updated Docker images as they are released. Customers are free to take new images as they wish and are not tied to the schedule of product updates and bug fixes on our public infrastructure.

And finally, we offer a standalone Docker image for development purposes. Enterprise customers can distribute this image to their developers so that they can achieve faster iteration and the same fluidity of development that our engineering team enjoys. They can run Cloud CMS locally. They can bring it up, tear it down, start over and iterate as quickly as they'd like. They can work anywhere, whether on a plane or in a coffee shop, with Cloud CMS running right on their laptops in a Docker container. No internet access required. No stepping on anyone else's toes.
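
Bringing it up is a single command (the image and container names are hypothetical; -p pins a fixed host port for convenience):

    # Run the standalone development image locally
    docker run -d -p 8080:8080 --name cloudcms-dev cloudcms/standalone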

Using the Cloud CMS import/export transfer capabilities, these same developers can then push and pull content to and from their private or public cloud instances. This is fully distributed content replication, very similar in spirit to GitHub: you work on something locally and you push it to the cloud via a command-line tool.
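
A session might look roughly like the following; these commands illustrate the workflow only and are not the actual Cloud CMS tool syntax:

    # Illustrative only -- not the actual Cloud CMS CLI commands.
    # Export local content to an archive, then push it to a cloud instance.
    cloudcms export --branch master --file ./my-content.zip
    cloudcms import --target https://api.cloudcms.com --file ./my-content.zip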

At Cloud CMS, we’re very excited about Docker and appreciate it greatly for all of the DevOps challenges it has solved for us. We spent a lot of time early on experimenting with Chef, Puppet and Ansible only to realize how much faster things are when done with a Docker mindset. So far, we haven’t had to rely on any external tools to manage the construction and assembly of our Docker images. Furthermore, the commitment to Docker within Amazon AWS has been very beneficial. We certainly feel that we’ve picked the right technology to get the job done.