Content Management as a Microservice
May 9
One of the big ideas we pursued when we set out to build Cloud CMS was to design the product so that it was entirely decoupled. Our vision was to have a number of discrete tiers that would consist of either single servers or clusters of servers dedicated to a single class of problems.
For example, the Content API tier is dedicated to powering our JSON API. It does nothing but receive requests, execute them and hand back JSON data responses. It has nothing to do with presentation or rendering of content for consumption.
On the other hand, our Application Server tier does nothing but take that JSON data and convert it into whatever format the web site or mobile app needs. That might be HTML, but it might also be alternative JSON, XML or binary formats.
There are several other tiers as well, including a DB tier, a Search Index tier, a messaging tier, a Job Queue management tier, and tiers of Job executors that handle long-running tasks such as statistics map/reduce and even web page screenshot capture.
By separating things into different tiers, we created more work for ourselves initially, but we believed this would allow Cloud CMS to be more extensible and more testable from a deployment perspective. We also believed that it would enable the platform as a whole to perform better and in a more elastic fashion where individual tiers could scale out as demand grew.
For our customers, this meant consistent, steady performance on modern cloud platforms (like AWS). Pricing would not only be lower (since capacity of machines per tier would scale down with lower demand) but it would also be as close to a utility model as you could achieve (in terms of paying only for what you use).
The Arrival of Docker
What we didn’t anticipate, however, was the proliferation of container technologies circa 2013 and specifically the arrival of Docker. Docker provided Cloud CMS with a way to package up our tiers into services that could easily be deployed in all kinds of interesting configurations. It greatly reduced the heavy Dev/Ops and IT cycle involved with running a cloud infrastructure service.
With Docker, we suddenly found ourselves in a position where our customers could not only run Cloud CMS in a hosted capacity, but they could also run Cloud CMS on their own. They could run it on-premise, in their own data centers, and even on their own laptops.
Docker allowed us to express our tiers as services that could be launched into containers. Docker Swarm then allocates those containers onto a single host or onto many different hosts (EC2 instances) on the fly.
Furthermore, the elastic architecture of Cloud CMS means that new containers can come online at any time and join the pack, while others can go offline at any time. It’s self-healing and largely automated.
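To make that concrete, here is a minimal sketch of what this looks like in a Swarm stack file. The image name cloudcms/api is a placeholder rather than our published image; the point is the deploy section, which tells Swarm how many replicas to keep alive and to replace containers that fail.

```yaml
# Minimal, illustrative Swarm stack fragment (docker-compose file format).
# The image name "cloudcms/api" is a placeholder.
version: "3.8"

services:
  api:
    image: cloudcms/api
    deploy:
      replicas: 3               # Swarm keeps three API containers running
      restart_policy:
        condition: on-failure   # failed containers are replaced automatically
      update_config:
        parallelism: 1          # roll out updates one container at a time
```

Scaling a tier up or down becomes a one-line change to the replicas setting (or a single docker service scale command) rather than a Dev/Ops project.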
Cloud CMS and Microservices
In bundling our tiers into container images, we realized that we had gone a long way toward packaging Cloud CMS up as a set of microservices. While “microservices” is an abstract description of “small autonomous services that work together, modeled around a business domain”, Docker turns this into a reality by providing a way to package service code and automate its testing and deployment.
This automation of testing and deployment is the critical piece. Once the descriptor files are in place to describe the deployment, Docker handles the instantiation of containers as well as the allocation of those containers onto hosts (EC2 instances). It solves the software installation and configuration problem, but it also tackles the far more complex Dev/Ops challenges of network configuration, port binding, volume mounting, log collection, failure handling and more.
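As an illustration, a descriptor for a simplified version of the stack described above might look something like the following. This is a sketch, not our actual deployment descriptor; the service names, image names and environment variables are placeholders, but it shows how port binding, volume mounting and log collection become declarations rather than scripts.

```yaml
version: "3.8"

# Illustrative stack descriptor. Service and image names are placeholders
# standing in for the tiers described earlier.
services:
  api:
    image: cloudcms/api              # Content API tier: JSON in, JSON out
    ports:
      - "8080:8080"                  # port binding is declared, not scripted
    environment:
      - MONGODB_HOST=mongodb         # hypothetical configuration variables
      - ELASTICSEARCH_HOST=search
    logging:
      driver: json-file              # per-service log collection
    depends_on:
      - mongodb
      - search

  ui:
    image: cloudcms/ui               # Application Server tier: HTML/JSON/XML rendering
    ports:
      - "80:80"

  mongodb:
    image: mongo:4.4                 # DB tier
    volumes:
      - mongo-data:/data/db          # volume mounting is also declarative

  search:
    image: elasticsearch:7.17.0      # Search Index tier
    volumes:
      - es-data:/usr/share/elasticsearch/data

volumes:
  mongo-data:
  es-data:
```

From there, a single command such as docker stack deploy (or docker compose up on a single host) is enough to bring the whole stack online.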
For us, this makes it easier to manage and deliver Cloud CMS as a Software as a Service offering. However, it also opens up a world of possibilities for our customers.
It makes it possible for our customers to drop Cloud CMS into their applications, not as a remote API service but as an integrated, low-latency module running on the same subnet, or possibly even in-process. Their application may pull in many microservices this way, each deployed as a container, including perhaps:
- Shipping API
- Order Processing API
- Inventory Management API
- Invoicing API
If our customers need a Content Management API, they just drop in Cloud CMS and away they go.
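In a stack like that, the content service is just one more entry in the application’s descriptor. The sketch below is entirely hypothetical (the acme/* images and service names are invented for illustration), but it shows the shape of such a deployment: every service, Cloud CMS included, sits on the same network and is discovered by name.

```yaml
version: "3.8"

# Hypothetical customer application stack. All image and service names
# are invented for illustration.
services:
  shipping:
    image: acme/shipping-api
  orders:
    image: acme/order-processing-api
  inventory:
    image: acme/inventory-api
  invoicing:
    image: acme/invoicing-api

  content:
    image: cloudcms/api              # the drop-in Content Management API
    ports:
      - "8080:8080"

networks:
  default:
    driver: overlay                  # one shared network, same subnet, low latency
```

The application then calls the content service over the local network by its service name, with none of the latency of a remote SaaS endpoint.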
Conclusion
At Cloud CMS, we’re seeing more customers transition to running Cloud CMS as a microservice. Docker puts the power of the cloud into the hands of the customer. The implications of this for traditional SaaS companies are very interesting.
While SaaS companies will continue for some time to provide real value by hosting their offerings and handling all of the Dev/Ops tasks associated with running the underlying infrastructure (along with data migrations, upgrades and support), we expect to see more customers pull this work on-premise and handle infrastructure management themselves.
Docker and other container technologies make this easier and less costly than ever before. This change in customer preference may push many traditional SaaS companies toward offering a more conventional on-premise support model. We’ve certainly found this to be the case at Cloud CMS.