The ReleaseTEAM Blog: Here's what you need to know...
DevOps Metrics: Measuring Application Quality
This month we continue our look at useful metrics and how to determine if your DevOps changes are improving application quality.
Metrics to Measure Application Quality

| Metric | What It Measures |
| --- | --- |
| Automated Test Coverage | How well do tests cover the application requirements, and what percentage of those can be automated? |
| Defect Volume | How "buggy" are builds/releases? Are you releasing more quickly but spending more time fixing errors after release? |
| Mean Time to Detection | When are defects identified? |
| Mean Time to Recover | How long does it take to fix defective code or patch a released app? |
| Security Breaches | How many attempted and successful security breaches are there against your application? |
Automated Test Coverage
"Test Coverage" may include all tests of code, UI, types of supported devices, and other application requirements. Automated Test Coverage limits the metric to how well your automated tests are covering your application. One frequent refrain in DevOps is to "automate everything!" but the truth is that not all tests should be automated.
It would likely take as much effort to design and automate a test you only plan to execute once as it would to run that test manually, so such a test is not a good candidate. Many user experience tests are better performed by human testers than by software, so focus your test automation efforts on tests that run repeatedly against every build, are data-intensive, or take hours to complete.
Automated test coverage is calculated by dividing automated coverage by total test coverage. This metric can be distorted by automating dozens or hundreds of meaningless tests, so ensure your test quality is high and that you’re testing the application code, features, or requirements sufficiently. Adding extra tests to meet a metric can delay builds and increase developer and tester efforts if and when these tests return errors.
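As a minimal sketch, assuming you track counts of automated tests and total tests (rather than line- or requirement-level coverage), the ratio can be computed like this:

```python
def automated_coverage(automated_tests: int, total_tests: int) -> float:
    """Percentage of the total test suite that is automated."""
    if total_tests == 0:
        return 0.0  # avoid division by zero for an empty suite
    return 100.0 * automated_tests / total_tests

# e.g., 180 automated tests out of a 240-test suite
print(automated_coverage(180, 240))  # 75.0
```

The same formula applies if you substitute requirement-level or line-level coverage figures for raw test counts; the key is that both numerator and denominator measure the same thing.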
Defect Volume
DevOps promises a faster release cycle, but it’s a balance between faster releases and software quality. Defect Volume is the number of bugs or defects found in a release.
It’s not realistic to expect zero defects, but a larger number can indicate a problem with test quality, rushed development, or with the development scope for the time period. The earlier in the development cycle your team can detect and fix bugs, the fewer that will make it to production.
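A minimal sketch of tracking defect volume per release, using a hypothetical defect log that records the release and the phase in which each bug was found:

```python
from collections import Counter

# Hypothetical defect log: (release, phase where the bug was found)
defects = [
    ("v2.1", "unit test"), ("v2.1", "integration"), ("v2.1", "production"),
    ("v2.2", "unit test"), ("v2.2", "unit test"),
]

# Defect volume per release
volume_per_release = Counter(release for release, _ in defects)
print(volume_per_release)  # Counter({'v2.1': 3, 'v2.2': 2})
```

Grouping by the phase in which defects were found (the second field) gives a complementary view: a shrinking "production" bucket over time is a sign bugs are being caught earlier.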
Mean Time to Detection
The Mean Time to Detection (MTTD) measures how long defects go unnoticed: the average time between when a defect is introduced and when it is detected in the development, test, and release cycle. The earlier bugs are identified, the fewer issues your users will experience, and the less technical debt you add to your release.
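A hedged sketch of computing MTTD, assuming your defect tracker records when each defect was introduced (or first occurred) and when it was detected; the timestamps below are hypothetical:

```python
from datetime import datetime

# Hypothetical (introduced, detected) timestamps for each defect
defects = [
    (datetime(2023, 5, 1, 9, 0), datetime(2023, 5, 1, 17, 0)),  # caught same day: 8 h
    (datetime(2023, 5, 2, 9, 0), datetime(2023, 5, 4, 9, 0)),   # caught two days later: 48 h
]

# Mean time to detection, in hours
hours = [(found - introduced).total_seconds() / 3600 for introduced, found in defects]
mttd_hours = sum(hours) / len(hours)
print(mttd_hours)  # 28.0
```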
Mean Time to Recover
Mean Time to Recover (MTTR) measures how long it takes to recover from (fix) an incident in production. The higher your defect volume, the more likely your team will encounter bugs that are difficult to fix, and the MTTR will increase. One way to calculate it is to average the production system’s downtime over the last ten outages.
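The averaging approach above can be sketched as follows, using hypothetical outage durations:

```python
# Hypothetical downtime, in minutes, for recent production outages
outage_minutes = [12, 45, 8, 90, 30, 15, 22, 60, 10, 38]

# MTTR as the average downtime over the last ten outages
recent = outage_minutes[-10:]
mttr_minutes = sum(recent) / len(recent)
print(mttr_minutes)  # 33.0
```

A rolling window like this reflects current recovery capability better than an all-time average, which older incidents can skew.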
Security Breaches
One outcome of buggy software is frustrated users and customers who abandon your application for a more reliable competitor. A more severe consequence is that software bugs can open up your application or your users’ devices to hacking. These security breaches can cost your customers millions in downtime and lost revenue. More mature DevOps practices detect bugs earlier in the development cycle, reducing defect volume and with it the chance that any of these defects will be exploited in a security breach.
There are many tools that help DevOps teams automate their testing, reduce defect volume, and improve MTTD and MTTR. For examples, check out SmartBear, Perforce Helix Test Case Management, and Atlassian Crucible Peer Code Review.