Deployment Strategies & Release Best Practices
In this article we’ll be covering various options and considerations for deploying code and releasing features.
We’ll discuss patterns for deploying to a fixed set of servers as well as variations where multiple groups of servers can be utilized. We’ll wrap up with strategies for releasing features to targeted groups of users.
While some of the strategies provide better capabilities than others, this is not meant to be a review or comparison, but rather a guide to the options at hand. Every situation is unique and may require a different implementation based on its constraints.
Single Server Group Deployments
In many situations you may have a set of dedicated servers running your application. This may be a traditional data center setting where procuring servers is difficult, deployment to devices in the field such as point-of-sale terminals, or any other case where you need to deploy to a fixed set of servers in place.
Highlander
The most traditional deployment pattern is the Highlander strategy. In this pattern all instances running a version of an application are upgraded to the new version at the same time. This is common for apps that don't require significant uptime, such as lower-lifecycle development servers or hobby applications.
This is a simple but high-risk strategy that will impact all users, not only in the event of a failure but as part of the deployment itself. Even with a successful deploy, the servers will need to stop taking traffic when switching to the new code.
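As a point of reference for the patterns that follow, here is a minimal sketch of a Highlander-style deploy. The server names and the `deploy_to` helper are hypothetical stand-ins for whatever your actual tooling does (copying an artifact, restarting a service, and so on).

```python
SERVERS = ["app1.example.com", "app2.example.com", "app3.example.com"]

def deploy_to(host: str, version: str) -> None:
    # Stand-in for real deploy tooling (copy artifact, restart service, etc.).
    print(f"deploying {version} to {host}")

def highlander_deploy(version: str) -> None:
    # Every server is upgraded in the same pass: simple, but every user is
    # exposed to the deployment (and to any failure) at the same time.
    for host in SERVERS:
        deploy_to(host, version)

highlander_deploy("2.0.0")
```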
Canary Deployment
A safer pattern than Highlander is the Canary deployment, which deploys to only a small portion of the available servers. This pattern allows the new code to be introduced into a live environment and monitored for any aberrant behavior. Any issues with the code or deployment are limited to a smaller set of users.
While this pattern does provide a safer option, having multiple versions running for a length of time brings its own set of challenges, ranging from operational concerns to user-facing inconsistencies.
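A canary rollout can be sketched the same way, assuming you have some way to observe the canary instances once they take live traffic. The `error_rate` function below is a hypothetical placeholder for whatever monitoring signal you actually use.

```python
import random

SERVERS = ["app1.example.com", "app2.example.com", "app3.example.com",
           "app4.example.com", "app5.example.com"]

def deploy_to(host: str, version: str) -> None:
    print(f"deploying {version} to {host}")   # stand-in for real deploy tooling

def error_rate(host: str) -> float:
    return random.random() * 0.01             # stand-in for real monitoring

def canary_deploy(version: str, canary_count: int = 1,
                  error_threshold: float = 0.05) -> None:
    canaries = SERVERS[:canary_count]
    for host in canaries:
        deploy_to(host, version)
    # Watch the canaries under live traffic before touching the rest.
    if any(error_rate(h) > error_threshold for h in canaries):
        raise RuntimeError("canary looks unhealthy; halting the rollout")
    print("canary healthy; safe to continue to the remaining servers")

canary_deploy("2.0.0")
```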
Rolling Deploy
The rolling deploy is simply the continuation of the canary deploy. In this case you would update one server after another until your whole bank of servers has been upgraded.
This is the safest pattern so far, limiting user downtime and impact, but it requires more sophisticated deployment tooling.
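The extra tooling is mostly a loop with a gate: upgrade one server, confirm it is healthy, and only then move on. A rough sketch, again with hypothetical `deploy_to` and `is_healthy` helpers standing in for real deploy and health-check machinery:

```python
import time

SERVERS = ["app1.example.com", "app2.example.com", "app3.example.com"]

def deploy_to(host: str, version: str) -> None:
    print(f"deploying {version} to {host}")   # stand-in for real deploy tooling

def is_healthy(host: str) -> bool:
    return True                               # stand-in for a real health check

def rolling_deploy(version: str) -> None:
    for host in SERVERS:
        # Take one server out of rotation, upgrade it, verify it, and only
        # then move on to the next one. A failed health check stops the roll.
        deploy_to(host, version)
        time.sleep(5)                         # give the app a moment to warm up
        if not is_healthy(host):
            raise RuntimeError(f"{host} failed its health check; halting rollout")

rolling_deploy("2.0.0")
```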
Multiple Server Group Deployments
With the adoption of virtualization and cloud computing, the need to limit ourselves to a fixed set of servers is gone. Instead we can spin up whole new sets of servers whenever we choose.
This enables a deployment best practice: separating the deployment from the release, or usage, of the new code. For example, with multiple server groups you can deploy to a new set of servers but never activate them to receive user traffic.
Blue / Green
The Blue/Green pattern (or Red/Black, depending on your camp) is the Highlander pattern for multiple server groups. In this strategy a new server group with the new version of the code is stood up with no traffic. Once all the servers are ready, all the traffic is directed to the new bank of servers.
This technique allows rapid rollback in the event of a failure, since we've removed the deployment from the equation and are only directing traffic to one version or the other.
While this is a great example of separating the deploy from traffic, and it provides rapid rollback, it is still an all-or-nothing switch.
As with the Highlander strategy, all users are impacted during the switch. Even though we don't need to deploy code at switch time, applications often have a bit of startup time where connections are established and objects are cached, which will impact users.
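Because the deploy has already happened, the "release" is nothing more than repointing the router. The sketch below assumes a hypothetical in-memory routing table; in practice this would be a load balancer, DNS, or service-mesh configuration change.

```python
# Hypothetical load balancer state: which server group receives live traffic.
server_groups = {
    "blue":  {"version": "1.0.0", "servers": ["blue1", "blue2", "blue3"]},
    "green": {"version": "2.0.0", "servers": ["green1", "green2", "green3"]},
}
live_group = "blue"

def switch_traffic(target: str) -> None:
    # The deploy already happened; releasing is just repointing the router.
    global live_group
    print(f"routing 100% of traffic from {live_group} to {target}")
    live_group = target

def rollback() -> None:
    # Rollback is the same operation in reverse: no redeploy required.
    switch_traffic("blue" if live_group == "green" else "green")

switch_traffic("green")   # release version 2.0.0
rollback()                # instantly back to 1.0.0 if something looks wrong
```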
Canary with two groups
The Canary deploy with multiple groups works very similarly to the Canary in a single group. The main difference is that we've separated the deploy from the traffic.
The most straightforward approach is to introduce a new group with one server and add the group to the load balancer. If you had three servers for version one, this would introduce a fourth server and direct a quarter of the traffic to the new instance.
This allows you to monitor the new code under live conditions before serving it to all your users.
A variation, depending on your capabilities, would be to deploy three new servers with the new version and spray a small percentage of all traffic to the new servers. This allows more fine-grained control over how many users are impacted and provides a warmup period for all the new servers.
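The traffic-spray variation boils down to a weighted routing decision. This sketch uses a hypothetical in-process weight table and Python's `random.choices` to illustrate the split; a real load balancer would expose an equivalent weighting setting.

```python
import random

# Hypothetical weighted routing table: the version 1 group keeps most traffic
# while the new group takes a small, adjustable percentage.
weights = {"v1-group": 95, "v2-group": 5}

def pick_group() -> str:
    # Weighted random choice per request.
    groups, ws = zip(*weights.items())
    return random.choices(groups, weights=ws, k=1)[0]

def shift_traffic(new_group: str, percent: int) -> None:
    old_group = "v1-group" if new_group == "v2-group" else "v2-group"
    weights[new_group] = percent
    weights[old_group] = 100 - percent

# Start with 5% on the canary group, then widen the split as confidence grows.
shift_traffic("v2-group", 5)
print(pick_group())
```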
Rolling Deploy with two groups
Again, as with the single-group patterns, the rolling deploy for multiple groups is just a continuation of the Canary deploy. We deploy our code and servers in one step, then add traffic separately.
Here, though, we have to add a new technique: as we continue to add servers to the new group, we'll need to take servers out of rotation in the old group.
This would be an ideal strategy for a CI/CD stack where robust health checking and operational monitoring allow code to automatically roll live.
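One way to picture the mechanics: behind an evenly balanced load balancer, the traffic share follows the server count, so each step brings a new-version server into rotation and retires an old-version one. The group counts and the `healthy` check below are hypothetical placeholders for real infrastructure and monitoring.

```python
import time

# Hypothetical group sizes behind an unweighted load balancer: traffic share
# follows the server count, so we grow the new group and shrink the old one.
groups = {"v1-group": 4, "v2-group": 0}

def healthy(group: str) -> bool:
    return True            # stand-in for health checks and operational monitoring

def rolling_two_group_deploy(steps: int = 4) -> None:
    for _ in range(steps):
        groups["v2-group"] += 1     # bring a new-version server into rotation
        groups["v1-group"] -= 1     # retire an old-version server
        time.sleep(5)               # let metrics settle between steps
        if not healthy("v2-group"):
            raise RuntimeError("new group unhealthy; stop and shift traffic back")
        print(f"traffic split is now {groups}")

rolling_two_group_deploy()
```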
Feature Release Strategies
Much of the focus with deployment strategies is on the act of putting code into the environment. We've talked briefly about patterns that separate the deployment from the user traffic, but those were still focused on the code.
Multiple issues can crop up when focusing only on the code. User sessions may be dropped midstream, or users may see V1 of a page on one click, then V2 on refresh, and back to V1 on yet another refresh.
In this section we’ll review strategies targeted toward the user experience.
Environment Separation
The most basic pattern for providing a consistent user experience while testing new code is to build a new environment or site. You may offer your users an option to try out the new site at http://beta.yourcompany.com. You might use this for a huge new redesign (I hope not, big bang is always bad) or as part of your regular process where every code deploy goes to beta for a time before moving to production.
The clean separation allows for simple management and clear operation.
Feature Toggles
Feature toggles are a technique where both versions of your feature are included in the same code base but are surrounded by logic to execute one or the other based on external factors such as a property value or database switch.
This is a useful technique for separating the deploy from usage in any setup: multiple server groups, a single group, or even a legacy monolith.
Ideally these are more dynamic in nature, managed by a backend datastore. Operators would toggle a feature on or off by updating the setting in the database, not by deploying code or manipulating traffic.
This also acts as a safety shutoff: if some external dependency or service provider starts to impact your site, just flick the switch and shut off the feature that depends on it.
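A minimal sketch of the idea, with a hard-coded dictionary standing in for the backend datastore and hypothetical checkout functions representing the two versions of a feature:

```python
def feature_enabled(name: str) -> bool:
    # Stand-in for a lookup against a real datastore (database row, cache key,
    # config service, etc.); operators flip the value without a deploy.
    toggles = {"new_checkout": True, "recommendations": False}
    return toggles.get(name, False)

def new_checkout_flow(cart):
    return f"new checkout for {len(cart)} items"      # version 2 of the feature

def legacy_checkout_flow(cart):
    return f"legacy checkout for {len(cart)} items"   # version 1 stays in the code base

def checkout(cart):
    # Both code paths ship together; the toggle decides which one runs.
    if feature_enabled("new_checkout"):
        return new_checkout_flow(cart)
    return legacy_checkout_flow(cart)

print(checkout(["book", "mug"]))
```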
User Targeted Testing
Feature toggles are useful, but by default they're all or nothing and don't provide the ability to test a new feature with a group of users.
Small enhancements to the Toggle pattern allow the switch to be related to users instead of the system as a whole.
For example, instead of the toggle using a database on/off value to show version A or B, you might utilize a cookie value. All users with a cookie value ending in an odd number would get one version of the feature, while those with an even cookie value would get version 2.
This technique can be as simple or as complicated as you wish. You might set a random cookie and split 50/50 as described above, or get more sophisticated and break the split down to smaller percentages. You might also begin to utilize user data such as location to target users on the east coast. You could even tap into the customer's profile to target their experience.
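One common way to implement the split is to hash the cookie into a bucket from 0 to 99 and compare it to the rollout percentage, a slight variation on the odd/even approach above that makes arbitrary percentages easy. A minimal sketch:

```python
import hashlib

def variant_for(cookie_value: str, percent_in_v2: int = 50) -> str:
    # Hash the cookie so the same user always lands in the same bucket,
    # then compare against the rollout percentage (50/50 by default).
    bucket = int(hashlib.sha256(cookie_value.encode()).hexdigest(), 16) % 100
    return "v2" if bucket < percent_in_v2 else "v1"

# The same cookie always maps to the same variant across requests.
print(variant_for("user-cookie-1234"))                     # even split
print(variant_for("user-cookie-1234", percent_in_v2=10))   # 10% rollout
```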
Entire companies and products are built around this technique but there is a lot of value in even the simplest implementation.
Conclusion
There are many techniques for deploying code into an environment. Depending on your use case one may fit better than another.
Balancing the technical complexity with the customer impact and overall business needs will ultimately drive which pattern works best for you.