It’s been a fun CSO tools series, so let’s close it with a bang. Number 7 is the ever-intimidating operations scorecard. I’m also happy to share that this post coincides with the release of the Third Defense Metrics Manager application. I’ll leave the marketing to the website but will include some screenshots below as an example of how to build and maintain an ops scorecard.
I’ve only seen a handful of actual working scorecards throughout my career, and each was different. The lack of standardization is acceptable at this point in the maturity of security measurement, and likely beneficial, since everyone has different motivations, audience comprehension, and access to information. As mentioned in the balanced scorecard post, the most important step is to just get started and build or advance your culture of transparent and accountable IT services.
The operational scorecard communicates trends and progress across your operational services. It’s a more tactical view that feeds the balanced scorecard. While lower-level, it still contains only business-relevant metrics that must pass a few tests. To start, each metric must:
1. have a defined and communicated goal.
2. pass the “so-what” test i.e. a non-technical person can understand what it means to the business.
Of course you also need the usual criteria: automated, repeatable, and so on. Much has been written on how to define a metric. I think Andrew Jaquith has the best book on the subject, but a book (or a blog post) can only take you so far. The key is to take action. I assume you’ve read the books and visited securitymetrics.org, so I’ll focus on areas I think are underserved. Security metrics are obviously still in their infancy: we have plenty of resources telling us what kinds of areas to measure, but little showing us how to collect and report the information.
But I don’t have enough data…
Yes you do. There’s no chicken-and-egg dilemma. The key is to start small. You only need two items to call it a scorecard. Heck, Third Defense is just adding our second application and we’re calling it a suite (there’s more on the way)! As you pick your metrics, here are some additional tips I don’t remember reading anywhere. First, your metrics should tell a story. No one will remember the numbers; they’ll remember how they felt while reading the scorecard. Work backward from the desired impression and pick metrics in the following business-relevant “story lines.” The first two are written about the most:
- Value to business: areas where your team contributed to revenue. This will be thin but keep refining till you strike a vein e.g. % of priority business initiatives with security involved at design phase (assuming your business prioritizes initiatives). If outsourcing is part of your business, vendor management and assessment metrics can go here. Don’t be tempted to create a metric to highlight a win. Save the specific anecdotes for your quarterly meetings.
- Reduce impact to business: # business impacting incidents, severity of impacting incidents, MTTR, $ reduction of fraud, % of customer turnover due to security.
- Efficiency: Take a minute to show off or at least set expectations. Readers need to know you’re valuable and thrifty. No fat cat security teams here e.g. avg days for access certification, hours to provision, % roles with automated profiles, % processes with SLAs, % processes within SLA, even % processes with defined RACIs.
- Control posture: I disagree with sources that say ops metrics e.g. % devices managed for security, don’t pass the so-what test. One of the emotions you want readers to internalize is “those IT security folks got it covered (or at least they know where they’re going).” This credibility is important as you negotiate controls, assess, and support the business on a sustained basis. If you run a tight shop, or at least a measured shop, people will be less inclined to BS you.
But I still don’t have the data…
Yes you do; try harder. Do you scan your endpoints and servers? Do you compare your scan inventory with the spreadsheet or CMDB from IT? Do you participate in incidents? Administer access? Interact with people outside IT? Of course. The key is selecting the relevant metric and communicating it properly. Heck, you could have an early metric simply measuring % of metrics with established baselines and targets! You only need a few to start.
Ok, what’s next?
Now that you’re fired up, I’ll expand on the three basic steps:
- Design Metrics: it’s an art form
- Data Entry & Management: efficient, scalable as possible
- Reporting: make sure you get the bang for your buck
Step One: Design
Isn’t it amazing we don’t have an industry-standard scorecard already? I don’t mean books and blog posts; I mean every team using an active scorecard they stand behind. Because selecting and organizing metrics is still an art, I haven’t seen a one-size-fits-all repository of metrics. Below are some of the metrics in our Metrics Manager repository. Feel free to log in (to the suite :-) for free to see how we organize, and compare notes.
A quick word about organizing metrics. I call these different “collections,” e.g. you can shuffle your metrics to line up with your favorite compliance checklist, your service catalog, your ecosystem, or whatever best tells your story. Below is a list of my favorite metrics. This view doesn’t have all the compliance associations yet; we’ll share the complete version as we pretty it up.
Aside: eventually we’ll get all our material in a bulletin board system with an open creative commons license. For now, let’s go with this: I make no promises or warranties on this list of metrics. It’s simply collected from my experience over the years. I haven’t used all these before so they may or may not be a fit for you. In general, feel free to do as you please with this blog and content. IT security processes and practices should be open source.
Rough list of metric examples (we’ll produce a more usable version as soon as we can)
Also, here’s a screen shot of the fields we capture per metric you may find helpful:
Step Two: Data Entry & Management
I’ve done this in a spreadsheet in the past, so you have no excuses. There are also tools out there (like ours), so you really have no reason to delay! I need to emphasize a tip I mentioned earlier: each metric should have a baseline and target. Without them, it’s just a statistic. When you define where you were, where you are, and where you want to go, you tell a story. (Kind of funny how much of security is just about telling great stories.) Another benefit of these three data points is the ability to calculate an expected value at any point between them, i.e. how fast you will hit the target, or whether you’re already there and just need to optimize. This can get tricky in spreadsheets as the months roll by, but we did it at Microsoft and WaMu, so I know you can do it too.
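To make the baseline/target/expected-value idea concrete, here is a minimal Python sketch. The function name, dates, and percentage values are all illustrative assumptions (not any real tool’s API); it simply interpolates linearly between the baseline date and the target date:

```python
from datetime import date

def expected_value(baseline, target, start, end, as_of):
    """Linearly interpolate the expected metric value between the
    baseline (at the start date) and the target (at the end date).
    Illustrative sketch only; all names are hypothetical."""
    if as_of <= start:
        return baseline
    if as_of >= end:
        return target
    frac = (as_of - start).days / (end - start).days
    return baseline + (target - baseline) * frac

# e.g. a metric moving from 60% to 95% over ten days, checked at the midpoint
print(expected_value(60.0, 95.0, date(2024, 1, 1), date(2024, 1, 11), date(2024, 1, 6)))  # 77.5
```

Comparing each month’s actual value against this expected value is what turns a statistic into a trend you can act on.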
Another tip is to let the metric owner drive the baseline and target definitions. It’s empowering to be able to set your own bar. It’s also motivating to achieve the selected targets. As long as targets are realistic, they make great performance review evidence.
One more note related to metric design. Many of your business-relevant metrics will be a combination of tactical metrics. For example, % accuracy of inventory is a calculated field comparing what you enumerate vs. what IT tracks. You’ll either have to build a business intelligence system, maintain many spreadsheets, or swivel-chair the numbers from tactical outputs into your metric tracking tool. I do know of one IT shop investing in a BI platform to integrate source feeds, apply business logic, and present relevant results. I wish we were all there! If you don’t have the resources for BI, check out the middle ground of summarizing tactically, then manually reporting the relevant metrics. We gotta start somewhere. This was the motivation for our Metrics Manager application. Some day the industry will have some magic API that translates raw data into executive presentations. Until then, we make it easier to transform your tactical evidence into compelling visuals that drive decisions.
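As a sketch of what one such calculated field looks like, here is how % accuracy of inventory might be derived from two tactical data sources. The helper name and sample hostnames are hypothetical; the only assumption is that both sources can be reduced to sets of asset identifiers:

```python
def inventory_accuracy(scanned, tracked):
    """% of assets found by the security scan that also appear in IT's
    inventory (CMDB or spreadsheet). Hypothetical helper for illustration;
    both arguments are sets of asset identifiers."""
    if not scanned:
        return 0.0
    return 100.0 * len(scanned & tracked) / len(scanned)

scanned = {"web01", "web02", "db01", "rogue-laptop"}  # what your scanner sees
tracked = {"web01", "web02", "db01", "mail01"}        # what IT says exists
print(round(inventory_accuracy(scanned, tracked), 1))  # 75.0
```

The swivel-chair version of this is the same comparison done by hand each month; the point is that the business-relevant number is derived, not collected directly.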
Also, set aside a place to capture notes for each metric recording period, especially for calculated fields. Dependencies and surprises happen. It doesn’t mean you’re failing, you just need to explain why you’re green one quarter and blazing red the next. No one said transparency is easy, just valuable.
Here’s a shot of how Metrics Manager tackles data entry and management. Note the ability to assign a baseline (red dot) and target (green dot). You should also allow for multiple targets to set expectations on the pace of progress. The example below shows linear expected progress. In real life this could be flat for six months and then jump up. Or your baseline could already be at your target level and you’re simply tracking progress. The key is to think of a metric target as an “acceptable risk” definition for that point in time. As you reach your targets, you can re-evaluate whether they’re optimal for the business.
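One way to model that flat-for-six-months-then-jump expectation is a step function over multiple dated targets. This is a hypothetical sketch of the idea, not how Metrics Manager actually stores its targets:

```python
from datetime import date

def expected_from_milestones(milestones, as_of):
    """Expected value given multiple targets: a date-sorted list of
    (date, value) milestones. Between milestones we hold the previous
    value flat, so progress is a step function. Illustrative only."""
    value = milestones[0][1]  # before the first milestone, use the baseline
    for when, target in milestones:
        if when <= as_of:
            value = target
        else:
            break
    return value

plan = [
    (date(2024, 1, 1), 60),    # baseline
    (date(2024, 7, 1), 60),    # flat for six months
    (date(2024, 9, 1), 80),    # jump after a hypothetical Q3 rollout
    (date(2024, 12, 31), 95),  # year-end target
]
print(expected_from_milestones(plan, date(2024, 8, 1)))   # 60
print(expected_from_milestones(plan, date(2024, 10, 1)))  # 80
```

Each milestone is effectively an “acceptable risk” definition for that point in time, which is exactly how the post suggests reading a target.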
Step Three: Reporting
All is for naught if you don’t have the eye candy to communicate your story. Yes, I feel so strongly about this that we invested six months in our first applications to make the tasks of organizing and presenting data easier and more effective. You have to draw the future picture. It’s important to define what success looks like and work toward it, otherwise you’ll never get there. Defining what success looks like is also self-fulfilling. If you write down your optimal scorecard but only have two metrics started, you have another nice story to show how additional investment in security (to collect and measure evidence) will translate into better risk decisions for the business.
I have a few goals for metrics reporting:
- Empower the metric owner to trend their individual metrics.
- Review the team’s progress as a whole with the ability to drill down where needed.
- Create an overall roll-up to show program level progress to non-technical stakeholders.
Again, you can do this in spreadsheets; it just takes a bit longer. Here are a couple of screenshots from our tools to get you started.
Group table summary and individual drill-down. Note we also track whether you’re trending toward or away from your expected progress.
The overall metric roll-up always sits on top of the table, looking down upon its members. We call ours the “Master Security Index.” It shows the average % distance between actual values and their expected-progress values, giving you a high-level view of whether security is trending toward or away from expectations. If the overall index doesn’t represent your story, you can drill down to expose the areas of concern. You can also exclude specific metrics for what-if scenarios.
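That roll-up can be sketched as a simple average of signed % distances from expected progress. The field names, sample values, and exclusion mechanism here are illustrative assumptions, not Metrics Manager’s actual schema, and this naive version assumes higher-is-better metrics:

```python
def master_index(metrics, exclude=()):
    """Average signed % distance between each metric's actual value and
    its expected-progress value. Positive means ahead of expectations
    overall. Hypothetical sketch of the roll-up idea."""
    rows = [m for m in metrics if m["name"] not in exclude]
    if not rows:
        return 0.0
    distances = [100.0 * (m["actual"] - m["expected"]) / m["expected"]
                 for m in rows]
    return sum(distances) / len(distances)

metrics = [
    {"name": "patch coverage",   "actual": 88, "expected": 80},  # 10% ahead
    {"name": "access certified", "actual": 45, "expected": 50},  # 10% behind
]
print(master_index(metrics))                                # 0.0 (they cancel out)
print(master_index(metrics, exclude={"access certified"}))  # what-if: 10.0
```

The exclusion parameter is the what-if scenario: drop a metric from the roll-up and see how the index moves.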
Wrap up
I hope this post inspired you to start or advance how you collect evidence and communicate your progress. It’s dirty work translating tactical statistics into relevant metrics, but the payback in credibility and in demonstrating the value security brings to the business is well worth it. The Ops folks have their throughput and uptime metrics; show them what security can do. Still don’t believe me? Just give it a try already. Start small with a few experiments and grow from there.
One item I forgot to mention above is the added accountability and pride your team will suffer through. You might even see all five stages of grief here… Every IT security department I’ve seen has areas they know they should be doing better, and it really hits home when you broadcast the numbers. Please don’t hide them. Celebrate them! What a great opportunity to come clean and justify why improving your posture is good for the business. If you still don’t get the resources to raise the bar, no problem: you now have another way to show what acceptable risk is for your business. I’m already sleeping better.
You’ll also be challenged with your team being “too busy” to calculate and enter their monthly metric data. Don’t be fooled: if you’re too busy to measure what you’re doing, you’re doing the wrong things. Break out the management whip if you have to. In six months, when you look back and see the trends, you’ll love the fact that you have an evidence-based story to tell. Count on it :-)