Common measures that should drive metrics collection and measurement for an application or application development team:
- Cadence – how frequent and regular the release cycle is (see the sketch below)
  - the number of releases per year
  - the probability of keeping, rather than slipping, a scheduled periodic release
- Delivery throughput – how much content (functionality) goes out in each release (see the sketch below)
  - measures such as Jira issue counts weighted by complexity or size
- Quality – number of defects per cycle (see the sketch below)
  - defect counts from a defect management system such as ALM or Quality Center
  - change requests raised after development has commenced
- Stability – crashes, breakage, and incidents around the application (see the sketch below)
  - crashes
  - functionality not working
  - application unavailable
  - each of the above could be measured via tickets from an incident management system like ServiceNow
- Scalability – how easily the application expands and contracts based on usage (see the sketch below)
  - measure application usage variability across time periods – e.g., we planned for Rx fulfillment usage at mail-order pharmacies to double during the Thanksgiving and Christmas holidays relative to normal weeks
  - application scaling around peak usage plus a comfortable variance allowance
  - shrinkage back to non-peak capacity, using capacity-on-demand techniques to manage TCO effectively
- Usability – how well and how easily your users can access and work the application
- Business Continuity (see the sketch below)
  - ability to recover in the event of a disaster
  - time to restore service in a continuity scenario
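
The sketches that follow show one way each of these measures could be computed; all dates, counts, weights, and field names in them are illustrative rather than pulled from a real system. First, cadence as releases per year and the fraction of scheduled releases kept:

```python
from datetime import date

# Release dates for the year, e.g. exported from a release calendar
# (hypothetical data).
releases_2024 = [
    date(2024, 1, 15), date(2024, 2, 12), date(2024, 3, 18),
    date(2024, 4, 15), date(2024, 6, 17), date(2024, 7, 15),
]
scheduled_releases = 12  # e.g., a monthly release train

releases_per_year = len(releases_2024)
# Fraction of scheduled releases actually shipped, i.e. the probability
# of keeping a periodic release.
kept_ratio = releases_per_year / scheduled_releases

print(f"Releases this year: {releases_per_year}")
print(f"Scheduled releases kept: {kept_ratio:.0%}")
```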
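Next, delivery throughput as Jira counts weighted by size. The t-shirt-size weights below are an assumption; any consistent complexity or size scale would work the same way:

```python
# Assumed mapping from t-shirt size to a throughput weight.
SIZE_WEIGHTS = {"S": 1, "M": 3, "L": 5, "XL": 8}

# (issue key, size) pairs shipped in one release (hypothetical data).
release_issues = [
    ("APP-101", "S"), ("APP-102", "M"), ("APP-107", "L"),
    ("APP-110", "S"), ("APP-112", "XL"),
]

# Weighted throughput: issue count weighted by complexity/size.
weighted_throughput = sum(SIZE_WEIGHTS[size] for _, size in release_issues)
print(f"Weighted throughput for this release: {weighted_throughput} points")
```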
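Quality as defects per cycle, here also normalized by the weighted throughput above so a heavy release isn't judged against a light one. In practice the defect counts would come from a system like ALM or Quality Center; these numbers are made up:

```python
# Defects and delivered content per release cycle (hypothetical data).
cycles = {
    "2024.03": {"defects": 14, "weighted_throughput": 42},
    "2024.04": {"defects": 9,  "weighted_throughput": 35},
}

for cycle, stats in cycles.items():
    # Defect density: defects per point of delivered content.
    density = stats["defects"] / stats["weighted_throughput"]
    print(f"{cycle}: {stats['defects']} defects "
          f"({density:.2f} per delivered point)")
```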
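Stability as incident counts by category, assuming tickets have already been exported from an incident management system like ServiceNow. The field and category names below describe that assumed export, not the ServiceNow schema itself:

```python
from collections import Counter

# One dict per incident ticket (hypothetical export).
incidents = [
    {"id": "INC0001", "category": "crash"},
    {"id": "INC0002", "category": "functionality_not_working"},
    {"id": "INC0003", "category": "application_unavailable"},
    {"id": "INC0004", "category": "crash"},
]

# Tally incidents per stability category for the period.
by_category = Counter(ticket["category"] for ticket in incidents)
for category, count in by_category.items():
    print(f"{category}: {count}")
```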
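Scalability as observed usage variability, using the holiday Rx-fulfillment example; the weekly volumes and the variance allowance are made-up numbers:

```python
# Weekly order volumes across normal and peak periods (hypothetical data).
weekly_orders = {
    "normal_week_avg": 100_000,
    "thanksgiving_week": 205_000,
    "christmas_week": 198_000,
}

baseline = weekly_orders["normal_week_avg"]
peak = max(v for k, v in weekly_orders.items() if k != "normal_week_avg")
variance_allowance = 0.2  # comfortable headroom above the observed peak

print(f"Peak-to-baseline ratio: {peak / baseline:.2f}x")
print(f"Capacity to plan for: {peak * (1 + variance_allowance):,.0f} orders/week")
```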
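And business continuity as time to restore service, averaged across recovery events; the timestamps are hypothetical:

```python
from datetime import datetime

# (service lost, service restored) for each continuity event.
dr_events = [
    (datetime(2024, 2, 3, 9, 0),    datetime(2024, 2, 3, 12, 30)),
    (datetime(2024, 8, 19, 22, 15), datetime(2024, 8, 20, 1, 0)),
]

# Hours from loss of service to restoration, per event.
restore_hours = [(end - start).total_seconds() / 3600
                 for start, end in dr_events]
mean_time_to_restore = sum(restore_hours) / len(restore_hours)
print(f"Mean time to restore service: {mean_time_to_restore:.1f} hours")
```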
In my opinion, some key prerequisites that drive good metrics are:
- Good design and architecture
- Code reviews and design conformance
- Scalability isn’t an afterthought
- Usability is designed before the software is written
- Automated regression and functional testing
I have implemented versions of delivery-effectiveness measurement for my teams at both Morgan Stanley and Medco, and contrary to most practitioners’ beliefs, it’s not that hard to do. Please reach out if you want a deeper how-to discussion.