2023 Benchmarks
GOAL:
As we learn from the current survey, we will share what we find. All survey results are intended to inform our research and to provide the community with insights useful for maturing software trust. If you have a metric you would like to see us research, please fill out a Metric Research Request.
A majority of the surveyed organizations review their metrics monthly.
The following metrics are trending as the best within each pillar:
Resilience: Mean Time to Recover and % Availability (tied)
Adoption: Customer Satisfaction
Velocity: Deployment Frequency
Errors: Error Rate
Additional metrics being evaluated within this study, grouped by pillar, with an illustrative calculation sketch after each group:
Resilience:
Mean Time Between Failure (MTBF)
Mean Time to Failure (MTTF)
Mean Time to Incident (MTTI)
% Availability
Uptime
Downtime
Mean Downtime
% SLA (Service Level Agreement) met
% Transaction Failures
Outage Durations
% Response Time within X
% Securability
Exploitability
# of vulnerabilities discovered
% Risks above Threshold
Change Failure Rate
% Service Level Objectives met
99th percentile response time
Return on Resiliency Investment (RORI)
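To make a few of the resilience metrics above concrete, here is a minimal Python sketch of how Downtime, Uptime, % Availability, Mean Time to Recover, and MTBF are commonly computed over a reporting period. The incident data is hypothetical, not drawn from the survey.

    # Hypothetical incident durations over a 30-day reporting period.
    incident_durations_hours = [2.0, 0.5, 1.5]
    period_hours = 30 * 24

    downtime = sum(incident_durations_hours)         # Downtime
    uptime = period_hours - downtime                 # Uptime
    availability = uptime / period_hours * 100       # % Availability
    mttr = downtime / len(incident_durations_hours)  # Mean Time to Recover
    mtbf = uptime / len(incident_durations_hours)    # Mean Time Between Failure

    print(f"Availability {availability:.2f}%, MTTR {mttr:.2f}h, MTBF {mtbf:.0f}h")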
Adoption:
Daily Active Users (DAU)
Monthly Active Users (MAU)
Time to Value
Customer Sign-ups
Customer Effort Score
TP99 (Top Percentile 99)
Product Adoption Rate
Feature Adoption Rate
Churn Rate
Seven-day Active Users
Time to First [action]
Seat Licenses
Time to First Use
Adoption Rate
% Daily or Monthly Users
Average Time Spent
Revenue
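As a rough illustration of how several of the adoption metrics above are typically calculated, the sketch below computes % Daily or Monthly Users (DAU/MAU), Churn Rate, and Feature Adoption Rate from hypothetical counts. Definitions vary by organization, so treat these formulas as one common convention rather than the survey's own.

    # Hypothetical user and customer counts.
    dau, mau = 12_000, 48_000
    customers_start, customers_lost = 1_000, 30
    feature_users, eligible_users = 400, 1_600

    stickiness = dau / mau * 100                             # % Daily or Monthly Users
    churn_rate = customers_lost / customers_start * 100      # Churn Rate
    feature_adoption = feature_users / eligible_users * 100  # Feature Adoption Rate

    print(f"DAU/MAU {stickiness:.0f}%, churn {churn_rate:.1f}%, "
          f"feature adoption {feature_adoption:.0f}%")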
Velocity:
Time to Value
Lead Time for Changes
Flow Velocity
Developer Activity
Change Volume
Time to Open
Time to Close
Story Points Completed
Developer Satisfaction
Pull Request Size
Pull Request Frequency
Number of Sprint Issues Closed
Number of Developer Tools
Number of Governance Hours Spent
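Of the velocity metrics above, Deployment Frequency and Lead Time for Changes are the most commonly formalized. The sketch below shows one common way to compute them from commit and deploy timestamps; the timestamps and deploy count are hypothetical.

    from datetime import datetime

    # Hypothetical (committed, deployed) timestamp pairs for two changes.
    changes = [
        (datetime(2023, 5, 1, 9, 0), datetime(2023, 5, 1, 15, 0)),
        (datetime(2023, 5, 2, 10, 0), datetime(2023, 5, 3, 10, 0)),
    ]
    deploys_this_week = 14

    lead_times = [(deployed - committed).total_seconds() / 3600
                  for committed, deployed in changes]
    mean_lead_time = sum(lead_times) / len(lead_times)  # Lead Time for Changes (hours)
    deploy_frequency = deploys_this_week / 7            # Deployment Frequency (per day)

    print(f"Mean lead time {mean_lead_time:.1f}h, {deploy_frequency:.1f} deploys/day")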
Errors:
Error Budget
Error Optimization
Defect Leakage
Test Case Pass Rate
Defects Fixed Per Day
% Defect Reduction
Defect Containment Effectiveness
% Pre-release defects
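Finally, several of the error metrics above are simple ratios over defect and test counts. The sketch below, using hypothetical counts, shows one common convention for Defect Leakage, Defect Containment Effectiveness, and Test Case Pass Rate; note that leakage and containment are complements of each other.

    # Hypothetical defect and test counts.
    pre_release_defects, post_release_defects = 45, 5
    tests_run, tests_passed = 1_200, 1_164

    total_defects = pre_release_defects + post_release_defects
    defect_leakage = post_release_defects / total_defects * 100  # Defect Leakage
    containment = pre_release_defects / total_defects * 100      # Defect Containment Effectiveness
    pass_rate = tests_passed / tests_run * 100                   # Test Case Pass Rate

    print(f"Leakage {defect_leakage:.1f}%, containment {containment:.1f}%, "
          f"pass rate {pass_rate:.1f}%")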