Continuous Deployment on SportsCollectors.Net

4. August 2014 00:24 by Jay Grossman

I am the type of person who likes to build cool things that help solve people's problems, especially my own. With that said, I'm always looking to minimize the number of things I have to do over and over so I have more time to build cool stuff. Since I mainly create software applications, building, testing, and deploying code are necessary activities - and they can be very painful.

For far too long (2000-2012) I would code away on big new features for SportsCollectors.Net for days or longer in my bloated monolithic code base. When a feature was finally done, I would:

  • merge in my changes
  • try to test everything I could
  • hunt down root causes of unforeseen regression bugs
  • take down the site at off peak and deploy the code to the web servers
  • test everything I could in production
  • celebrate victory and pray everything works

This was not fun. I hated staying up late and taking the whole site down to deploy code. Fixing really complex regression bugs when you are tired makes things even worse.

A while back, I climbed on the devops train to clean up my act. The end goal was to implement Continuous Deployment - a series of processes designed to ensure that every change that passes required tests is deployed to production automatically. So I would need tests, continuous integration to run builds and the tests, a deployment mechanism, and something to ensure the deployment was successful - and it ALL HAS TO BE COMPLETELY AUTOMATED (or single click).

Moving to Continuous Deployment required me to make some changes in the way I think about going about designing applications, coding projects, and deploying code. 

Design Considerations for Continuous Deployment

  1. Bigger is not always better.

    Having a big monolithic app usually means more complexity, which makes for a higher risk (cost) that changes will break something. It also requires more cycle time to test apps that contain a large number of use cases. In scenarios with multiple developers, you’ll also have branching and merging pain as they try not to step on each other.

    I broke my app into logical functional areas and built some services for reusability. This allowed me to work on them individually and loosely couple them together. I can also deploy new functions ahead of when my UI will need to consume them.
  2. Database changes are backwards compatible.

    When I was delivering big features all in one event, I would assume the database changes would get deployed with the associated code. Now that I have adopted the rule that database changes (especially schema changes) must work with the most current version of the app, I very often commit those changes ahead of the supporting front end code. This gives me flexibility as priorities or designs change.

    - Don't rename columns/tables which are in use by the app - always copy the data and drop the old one once the app is no longer using it
    - Don't rewrite a table while you have an exclusive lock on it (e.g. no ALTER TABLE foos ADD COLUMN bar varchar DEFAULT 'baz' NOT NULL)
    - Don't perform expensive, synchronous actions while holding an exclusive lock (e.g. adding an index without the CONCURRENTLY flag)
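    As a sketch of the copy-instead-of-rename rule, here is what a backwards-compatible column rename can look like. The table and column names here are made up for illustration, and sqlite3 stands in for the real database:

    ```python
    # Backwards-compatible "rename": add the new column and backfill it,
    # so old code reading `usr` and new code reading `username` both keep working.
    import sqlite3

    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE trades (id INTEGER PRIMARY KEY, usr TEXT)")
    db.execute("INSERT INTO trades (usr) VALUES ('jay')")

    # Step 1 (can be deployed ahead of the app change): add the new column.
    db.execute("ALTER TABLE trades ADD COLUMN username TEXT")
    # Step 2: backfill so both app versions see consistent data.
    db.execute("UPDATE trades SET username = usr WHERE username IS NULL")
    db.commit()

    # The old `usr` column is dropped in a later migration, once nothing uses it.
    row = db.execute("SELECT usr, username FROM trades").fetchone()
    ```

    The key property is that at every step in between, the currently deployed version of the app still works.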

  3. Functional test and end point test coverage.

    Test Driven Development (TDD) is quite popular these days, with the goal of complete test coverage of all of your code. DHH from 37signals had a pretty good tirade on how unit testing can be overkill.

    I certainly do write unit tests, especially for business objects and complex logic. But I try to focus less on that kind of method-level testing and spend more of my limited time on functional tests that verify the features support the key use cases. I also build test harnesses to make sure my service end points return what external consumers would expect.
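One way to sketch such an end point harness: fetch a JSON end point and check that the fields consumers rely on are actually present. The URL, field names, and stub server below are all illustrative, not the real SportsCollectors.Net services:

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

def check_endpoint(url, required_fields):
    """Fetch a JSON end point and verify it returns the fields consumers rely on."""
    with urllib.request.urlopen(url) as resp:
        payload = json.loads(resp.read())
        ok_status = resp.status == 200
    return ok_status and all(f in payload for f in required_fields)

class StubHandler(BaseHTTPRequestHandler):
    """Stands in for a real service end point so the harness itself is runnable."""
    def do_GET(self):
        body = json.dumps({"user_id": 1, "trades": []}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)
    def log_message(self, *args):  # keep request logging quiet
        pass

server = HTTPServer(("127.0.0.1", 0), StubHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
url = "http://127.0.0.1:%d/trades" % server.server_port
has_contract = check_endpoint(url, ["user_id", "trades"])   # fields present
missing_field = check_endpoint(url, ["user_id", "shipments"])  # field absent
server.shutdown()
```

The same check runs against the test environment after every deploy, so a contract break fails the pipeline instead of an external consumer.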

Coding Considerations for Continuous Deployment

  1. Create the functional tests before implementing the code.

    I want to Fail Fast whenever possible. Finding issues later in the development cycle leads to higher risk and cost, so I want to find them early on. I also want a robust test suite to help minimize regression bugs introduced by code changes.
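    In practice that means the functional test exists (and fails) before the feature does. A toy sketch, with an entirely hypothetical trading feature:

    ```python
    # Written first, against the behaviour we want; it fails until the
    # implementation below exists.
    def test_propose_trade_creates_pending_trade():
        trade = propose_trade("jay", "bob", ["1986 Fleer Jordan"])
        assert trade["status"] == "pending"
        assert trade["cards"] == ["1986 Fleer Jordan"]

    # Just enough implementation to make the test pass; iterate from here.
    def propose_trade(sender, receiver, cards):
        return {"from": sender, "to": receiver,
                "cards": list(cards), "status": "pending"}

    test_propose_trade_creates_pending_trade()
    ```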
  2. Smaller and more frequent commits.

    I used to wait until a feature was fully complete to commit it, test it, and release it to users. Following an agile SDLC and committing more often allows me to get functionality to users and then iterate based on their feedback/actions. It also allows me to fail faster (from both testing and feature adoption perspectives).

  3. Regular use of feature flags.

    Feature flags allow developers to disable or target functionality to certain users. I use them regularly so I can assume that all commits are safe to go to production (key for continuous deployment). I also have used them to run experiments to see how a group of users interacted with changes.
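    A minimal percentage-rollout flag can be sketched like this. The flag names, percentages, and hashing scheme are all assumptions for illustration, not my actual implementation:

    ```python
    import hashlib

    # In a real system this table would live in config or the database.
    FLAGS = {
        "new_trade_page": {"enabled": True, "percent": 25},   # canary to 25% of users
        "all_users":      {"enabled": True, "percent": 100},  # fully rolled out
        "beta_search":    {"enabled": False, "percent": 0},   # dark / safe to ship
    }

    def is_enabled(flag, user_id):
        """Stable per-user decision: the same user always gets the same answer."""
        cfg = FLAGS.get(flag)
        if not cfg or not cfg["enabled"]:
            return False
        digest = hashlib.md5(("%s:%s" % (flag, user_id)).encode()).hexdigest()
        return int(digest, 16) % 100 < cfg["percent"]
    ```

    Because a disabled flag makes new code unreachable, the commit carrying it is safe to deploy immediately.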
  4. No branches.

    All commits get made to trunk, PERIOD. I'd rather use feature flags than deal with the huge time sinks that can come from merging branches and resolving conflicts.

Setting up Continuous Deployment

There were a lot of paths I could have taken to get to continuous deployment, and I toyed around with a few of them. After some trial and error, my efforts evolved into a fairly efficient flow that made sense here. It needed to be lightweight since I am a single developer on the project, but I wanted something that would still make sense if I needed others to contribute in the future.

After each commit of app code or database code, I have set up this workflow:

  1. Jenkins polls for changes and runs a build (a nant script that calls MSBuild and commits the products to an artifact repository in git)
  2. Deploy products to test environment
  3. Run unit and integration tests
  4. Deploy to one node in production
  5. Run health check tests, with auto rollback if necessary
  6. Repeat 4-5 on remaining nodes

Instead of a classic blue-green deployment pattern, where I would keep two versions of the app in production for quick switching if necessary, I prefer the canary scenario: push the updates to a portion of the traffic and watch the response (from functionality and possibly effectiveness perspectives) before sending them to the full farm.
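Steps 4-6 of the workflow above boil down to a small loop. This is only a sketch: the node names are invented and deploy/health_check/rollback stand in for whatever scripts actually do the work:

```python
def rolling_deploy(nodes, deploy, health_check, rollback):
    """Deploy one node at a time; on a failed health check, roll back and stop."""
    deployed = []
    for node in nodes:
        deploy(node)
        deployed.append(node)
        if not health_check(node):      # the canary failed
            for n in deployed:
                rollback(n)
            return False
    return True

# Dry run with stubs: node "web2" fails its health check.
log = []
ok = rolling_deploy(
    ["web1", "web2", "web3"],
    deploy=lambda n: log.append(("deploy", n)),
    health_check=lambda n: n != "web2",
    rollback=lambda n: log.append(("rollback", n)),
)
```

The important property is that "web3" is never touched: a bad build only ever reaches the canary portion of the farm.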

Today I coded a new feature iteratively, and the system deployed code through this workflow to production 15 times. It was really painless, the site had no downtime, and members seem happy with the new functionality.

Other Considerations

  • Having Jenkins poll git is not the most efficient mechanism. I could add a post-commit hook to git or use the new Jenkins CI service broker for Bitbucket repos, so Bitbucket pings my Jenkins CI server when a new commit is pushed.
  • Even though I am the only developer running a pretty straightforward architecture on a few nodes, I am considering using a configuration management tool like Puppet/Chef instead of custom scripts. I created a quick POC putting a Puppet agent on the nodes to handle the deployment mechanics. It was pretty easy to set up a template and have it monitor for changes to the products on a git branch thanks to the vcsrepo resource (the resource title below stands in for the actual deploy path):
vcsrepo { 'products':
  ensure   => latest,
  provider => git,
  source   => 'git://',
}
  • Right now I am hosting on dedicated servers. I am considering moving to AWS, and then I may refactor this whole flow to create immutable servers with Packer and bring them up with Vagrant.
  • Since SportsCollectors.Net is running on Windows, Docker is not an option. Maybe I'll take a look at getting it running on the Mono Project!

About the author

Jay Grossman

techie / entrepreneur that enjoys:
 1) building software projects/products
 2) digging for gold in data
 3) rooting for my Boston sports teams: New England Patriots, Boston Red Sox, Boston Celtics, Boston Bruins
