Tuesday, October 27, 2015

Circular Breathing Your Agile Development

Way back when I was learning to play the French horn I had to learn the concept of circular breathing.  As a wind player you sometimes have to sustain notes for extended periods of time, much longer than your lungs would typically allow.  This isn't as common for hornists as it is for, say, trombonists, didgeridoo players, or bagpipers, but I'm glad I had to pick it up, because the concept fits naturally into agile development.

Circular breathing is basically breathing while continually playing your instrument: you take air in through your nose while sustaining a note.  If you want to try it but don't have a wind instrument handy, then the next time you're at lunch try blowing through a straw into your drink while breathing in through your nose.  It's not easy, and it takes some practice.

Agile development teams typically get some time to check and adjust, but not much - maybe a day every few weeks.  Teams have to keep evolving the software products they're working on as new requirements come in, and they have to continually improve their process as they're building and delivering software.  Many business partners don't allow their teams to stop and take a breath; teams are expected to sustain or improve their development pace without pauses or breaks in delivery.  To accommodate this, teams need to learn to circular breathe their development processes.

At this point you may be saying: no way, we're supposed to get a tech debt sprint every once in a while.  Or maybe you expect your sponsor to understand that you've built buggy software on their dime and you need time to go back and fix it.  Right, that doesn't sell so well.  Even if you are able to sell it, why would you?  With the CI/CD tools available today you should be able to deliver fewer bugs and more features, with calculated tech debt ratios that balance time to market against flawless code.  Or maybe you can slice off some functionality from your legacy app and deliver it with faster, more CD-based methodologies.  If you keep a constant eye on agreed-upon metrics you should be able to check and adjust immediately instead of having to pause for a week to go back and fix any messes you made.  Here are some ways that I like to build this into my agile projects.

My favorite code quality tool is SonarQube.  If you're running SonarQube you should have a good idea of how much tech debt you're accruing while writing and delivering code.  I like to run SonarQube in the path to production so there is always a clear picture of how much debt we're running up alongside delivery.  Depending on the estimated longevity of the code we're working on, we can make some compromises with quality at times.  For example, if we're building something for an ad campaign that will last a few months, we're likely to allow more bugs than we would in key components of our e-commerce site.  If you have been able to convince your stakeholders to stop every once in a while and fix things, then running SonarQube nightly and reacting to trends works pretty well.  However, I prefer to run it on every production-ready candidate.  If issues come up that don't meet the threshold, fix them before merging or deploying your updates.  When you incorporate this into your development process, your estimates will end up including the time to fix these issues, and your velocity will reflect that quality work.  If you don't like SonarQube, then at least pick out some other static analysis tools that you can run on your code in your CI process.
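To make that threshold something the pipeline enforces rather than something you hope people check, you can gate merges on SonarQube's answer.  Below is a minimal sketch in Python that asks the SonarQube web API for a project's quality-gate status and fails the CI step when the gate is broken.  The server URL and project key are hypothetical, and I'm assuming a SonarQube version that exposes the quality-gate web service; adapt it to whatever your instance actually provides.

```python
# A minimal sketch: fail a pipeline stage when SonarQube reports that
# the latest analysis broke the project's quality gate.  Assumes a
# SonarQube server exposing the quality-gate web service; SONAR_URL
# and PROJECT_KEY are placeholders for your own setup.
import sys
import requests

SONAR_URL = "https://sonar.example.com"   # hypothetical server
PROJECT_KEY = "ecommerce-checkout"        # hypothetical project key

def quality_gate_status(base_url: str, project_key: str) -> str:
    """Ask SonarQube for the quality-gate status of a project."""
    resp = requests.get(
        f"{base_url}/api/qualitygates/project_status",
        params={"projectKey": project_key},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["projectStatus"]["status"]  # "OK" or "ERROR"

if __name__ == "__main__":
    status = quality_gate_status(SONAR_URL, PROJECT_KEY)
    print(f"Quality gate for {PROJECT_KEY}: {status}")
    if status != "OK":
        sys.exit(1)  # non-zero exit fails the CI step and blocks the merge
```

Because the check runs on every candidate build, fixing whatever it flags becomes part of the story you're already working, not a separate cleanup effort.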

As I discuss in my book, Agile Metrics in Action, there are numerous agile metrics you can track continuously to keep an eye on your team's health.  Recently I've been working a lot with teams that are adopting more automation, moving toward continuous delivery, and striving for continuous deployment.  In that context, one metric we've been very focused on is the number of deploys per environment type.  For example, a team with 10x more deploys in their test environment than in their production environment usually isn't managing their changes in a way that can be deployed without disrupting the consumer, and usually needs better tools for local testing or more sophisticated pipeline testing.  When I start to see that trend it usually means it's time to dive in and figure out what that team needs to reach their goals.  In the spirit of circular breathing, I like to work with the team to identify the root of the problem, then add a story or two every sprint to address it.  As with the previous example, these adjustments end up averaging into velocity and your continuous delivery cycle.  For this particular metric we ship data from Jenkins (which we use for building, testing, and deploying) into Elasticsearch so we can report on it through Kibana.  You can usually do the same thing with whatever data speaks to the problem you're trying to solve.
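Here's a rough sketch of what that plumbing can look like in Python: the deploy job records an event in Elasticsearch, and a terms aggregation counts deploys by environment so the test-to-production ratio can be charted in Kibana.  The host, index, and field names are illustrative assumptions, not details from any particular project.

```python
# A rough sketch of the deploys-per-environment metric: each deploy job
# records an event in Elasticsearch, and a terms aggregation counts
# deploys by environment.  ES_URL, the index, and the field names are
# hypothetical; adjust them to your own cluster and mappings.
import datetime
import requests

ES_URL = "http://elasticsearch.example.com:9200"  # hypothetical host
INDEX = "deploys"

def record_deploy(environment: str, job: str, build_number: int) -> None:
    """Index one deploy event; call this from the deploy job's last step."""
    doc = {
        "environment": environment,  # e.g. "test" or "production"
        "job": job,
        "build_number": build_number,
        "timestamp": datetime.datetime.utcnow().isoformat(),
    }
    requests.post(f"{ES_URL}/{INDEX}/_doc", json=doc, timeout=10).raise_for_status()

def deploys_by_environment() -> dict:
    """Count deploy events per environment with a terms aggregation."""
    query = {
        "size": 0,
        "aggs": {"by_env": {"terms": {"field": "environment.keyword"}}},
    }
    resp = requests.post(f"{ES_URL}/{INDEX}/_search", json=query, timeout=10)
    resp.raise_for_status()
    buckets = resp.json()["aggregations"]["by_env"]["buckets"]
    return {b["key"]: b["doc_count"] for b in buckets}

counts = deploys_by_environment()
if counts.get("production"):
    ratio = counts.get("test", 0) / counts["production"]
    print(f"test:production deploy ratio is {ratio:.1f}")  # well above 10 is a smell
```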

So the next time you're listening to your favorite didgeridoo piece or drinking through a straw, think about how you can incorporate continuous metrics into your agile development cycle for better and more consistent delivery.  If you can build these quality and process metrics into your development cycle and work through them like any other change you make to your software, you'll be able to sustain your pace and keep your sponsors happy.

Saturday, April 11, 2015

Top 5 tips for implementing maintainable and sustainable automation

Over the years I've worked on several teams that have tried to implement automation of one kind or another: automated deployments, testing, triage, you name it.  Anything that ends up being a repetitive pain is a good candidate for automation.  Here are a few tips that I find help teams focus on creating automation that is sustainable and maintainable.  If you keep them in mind as you embark, they should save you from having to trash your entire effort and start over.

Tip 1: Create a roadmap

I am still surprised by how many teams embark on the journey toward automation without a plan.  Often I find that automation happens organically; it seems to crop up around the most painful parts of the development process.  For example, a team might automate deploying to their development environment if they're doing that multiple times a day, or they might automate their smoke test suite if a team runs the same set of tests manually every time a release is ready.  This organic growth leads to an unmaintainable blob of automation that can only be tweaked by a handful of developers.

When you find a candidate for automation, sit down and plan out what your north star looks like.  Once you know where you want to go, figuring out what you need to get there becomes much easier.

Tip 2: Don’t invent your own framework

There are so many frameworks freely available that you would need a really good reason to create your own.  Teams often create frameworks with the intention of saving time or money, or of sharing work between teams.  In reality they end up creating something that works really well for them, but the time it would take other teams to ramp up on the framework usually scares them away from adopting it.

If you stick with common frameworks like Selenium, Spock, or Puppet, other developers will already be familiar with what you're using, which reduces ramp-up time and friction around adoption.
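To illustrate, here's what a tiny smoke test looks like in Selenium's Python bindings.  Nothing in it is bespoke: any developer who has touched Selenium can read, run, and extend it, which is exactly the point.  The URL and element name are placeholders for a hypothetical app under test.

```python
# A minimal smoke test in Selenium's Python bindings.  No in-house
# framework to learn: the locator API and driver lifecycle are the
# same on every team that uses Selenium.  URL and element names are
# placeholders for a hypothetical app.
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Firefox()
try:
    driver.get("https://shop.example.com")      # hypothetical app under test
    assert "Shop" in driver.title               # cheap liveness check
    search = driver.find_element(By.NAME, "q")  # standard locator API
    search.send_keys("gift card")
    search.submit()
    assert "results" in driver.current_url
finally:
    driver.quit()                               # always release the browser
```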

Tip 3: Only POC things you aren’t sure will work

Proofs of concept (POCs) should be what the name implies: proving out something that's only a concept.  Often teams will use the term "POC" when they're really just trying to learn something new that's already proven and well documented.  I once had to evaluate a team that was "POCing Cucumber."  My first question was, "What are you trying to prove?"  Their answer was that they wanted to prove they could test their web services.  For that they could have simply read the documentation; you don't have to prove that Cucumber works.

Continuing with this example, perhaps a team wants to run a POC to prove that their non-technical product owners can write scenarios that drive TDD within the development team and reduce churn in requirements definition.  In this POC, Cucumber is simply the BDD framework the product owners write their scenarios in.  We know that Cucumber works; we don't know whether asking the product team to write BDD scenarios as requirements will make the development cycle more efficient.
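To make that concrete, here's a sketch of the artifact such a POC would actually evaluate: a plain-language scenario a product owner might write, with the thin step-definition glue a developer adds underneath.  I'm using behave, a Python BDD runner in the Cucumber family, rather than Cucumber itself, and the free-shipping rule is invented purely for illustration.

```python
# features/steps/checkout_steps.py -- developer-side glue for a
# product-owner-written scenario, using the behave library (my choice
# of Python BDD runner; the post only names Cucumber).  The scenario
# itself lives in a plain-text feature file, e.g.:
#
#   Scenario: Free shipping over fifty dollars
#     Given a cart containing 60 dollars of merchandise
#     When the customer checks out
#     Then shipping is free
#
from behave import given, when, then

@given("a cart containing {amount:d} dollars of merchandise")
def step_cart(context, amount):
    context.cart_total = amount

@when("the customer checks out")
def step_checkout(context):
    # Hypothetical business rule standing in for the real app code.
    context.shipping = 0 if context.cart_total >= 50 else 5

@then("shipping is free")
def step_free_shipping(context):
    assert context.shipping == 0
```

If the product owners can keep writing scenarios like that one without developer hand-holding, the POC has proven something; if they can't, you've learned that cheaply.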

Tip 4: Don’t mix foundational automation in with your normal work stream

Foundational automation is work that isn't on the critical path for your current deliverable: build servers, pipelines, regression tests, things like that.  When you have deadlines, it's pretty typical that anything that isn't a feature won't get prioritized.  Ideally, tasks will contain space for the appropriate automation to manage its lifecycle within the context of the larger system; however, that doesn't always happen.  Creating user stories for automation tasks is a logical idea, but unless you have dedicated resources responsible for getting them over the line they will typically take a long time to complete, if they ever get completed.  If your team can't quite get over the hump because you can't carve out the time to get a solid foundation in place, here are some strategies that I've seen work:
  • Have a separate team set up the automation in partnership with the owner team.  Once everything is set up and handed off, the owner team carries the work forward.
  • Dedicate a sprint every once in a while to improving automation.  You see this in scaled agile, where teams across the organization take a sprint to catch up on tech debt or improve foundational automation.  Catalog your pain points and figure out how to measure them during your feature sprints; then, in the sprints where you tackle automation, use that data to set goals and prioritize work.

Tip 5: Don’t ask QM or Ops to do an engineer’s job

This one is going to hurt some people's feelings, but those people will have to get over it.  One of the easiest ways to fail at implementing sustainable and maintainable automation is to hand it to someone who doesn't use those terms in their daily work vocabulary.  I get frustrated with teams who migrate 3,000 test cases in Excel to 3,000 Selenium scripts over the course of a year in the name of good automation.  Realistically, by the time that work is done it's out of date and you'll have to do it over.  These teams end up finding some efficiency over purely manual work, but nowhere near the efficiency they could find if they approached their problems with an engineering perspective.

Ensuring that the team responsible for the product being delivered is invested in their automation is critical to success.  After all, that team will be living with it every day.  If you apply an engineering mindset to solving problems and write your automation with sustainability, maintainability, and a north star in mind, you are setting yourself up for success.