Why Better Metrics Are Hard to Get
The metrics we know we should be collecting often live outside our direct control. Incident report rates sit with the SOC. Credential hygiene data lives in IT. Policy exception data is tracked by someone in GRC who has their own priorities.
Even when we can get access to the data, it rarely arrives clean. Report rates spike after a phishing campaign and everyone assumes the programme is working, when the spike is often just campaign noise: people reporting the simulation itself rather than real threats. Password reuse data exists in theory but is never quite formatted in a way that maps to your programme goals.
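A crude but useful first pass on that noise problem is to separate reports that land inside a campaign window from the baseline. Here's a minimal sketch, assuming you can export report dates from your SOC tooling and campaign start dates from your simulation platform; the seven-day window and all the data below are invented for illustration:

```python
from datetime import date, timedelta

# Hypothetical inputs: dates of user-submitted phishing reports and the start
# dates of your simulation campaigns. Real data would come from your SOC
# ticketing system and your simulation platform's export.
report_dates = [
    date(2024, 3, 2), date(2024, 3, 4), date(2024, 3, 21),
    date(2024, 4, 11), date(2024, 6, 5),
]
campaign_starts = [date(2024, 3, 1), date(2024, 6, 3)]
NOISE_WINDOW = timedelta(days=7)  # assumption: campaign effects fade within a week

def is_campaign_noise(report_day: date) -> bool:
    """Treat a report as campaign-driven if it lands within the noise
    window after any simulation campaign start."""
    return any(start <= report_day <= start + NOISE_WINDOW
               for start in campaign_starts)

noise = [d for d in report_dates if is_campaign_noise(d)]
baseline = [d for d in report_dates if not is_campaign_noise(d)]

print(f"campaign-window reports: {len(noise)}")     # likely simulation noise
print(f"baseline reports:        {len(baseline)}")  # closer to real behaviour
```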
And underneath all of it is a political problem. If you start reporting on behavioural outcomes rather than training activities, you're taking on more accountability. A completion rate of 96% is safe. A report rate that's been flat for six months is a harder conversation.
So we default to what's easy to pull and hard to argue with. Even when we know it doesn't tell the real story.
The Phishing Rate Problem Is Worse Than It Looks
Phishing simulation data has a specific issue that doesn't get enough airtime: the metric rewards the wrong thing.
When click rates drop, we report it as a win. But click rate reduction is often a sign that people have learned to recognise your simulations, not that they've become more resilient to real phishing. Run a slightly more sophisticated template and the numbers look very different.
The metric we should care about is report rate. Not just whether someone avoided the click, but whether they flagged it. Someone who spots a suspicious email and deletes it quietly hasn't helped anyone. Someone who reports it has. That's the behaviour worth measuring, and in most programmes it's still an afterthought.
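If your simulation platform exports per-recipient results, putting both rates side by side takes a few lines. A minimal sketch with an assumed data shape; the field names are illustrative, not any vendor's actual export schema:

```python
from dataclasses import dataclass

@dataclass
class SimResult:
    user: str
    clicked: bool
    reported: bool

# Invented results for one campaign.
results = [
    SimResult("a.khan", clicked=False, reported=True),
    SimResult("b.osei", clicked=True,  reported=False),
    SimResult("c.diaz", clicked=False, reported=False),
    SimResult("d.wren", clicked=False, reported=True),
]

click_rate = sum(r.clicked for r in results) / len(results)
report_rate = sum(r.reported for r in results) / len(results)

print(f"click rate:  {click_rate:.0%}")   # the number most decks lead with
print(f"report rate: {report_rate:.0%}")  # the behaviour that actually helps
```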
Making the Shift Without Starting From Scratch
You don't need to overhaul everything at once. The most effective approach is to pick one behavioural metric you don't currently track, build the data pipeline to get it, and start reporting it alongside your existing numbers.
Report rate is usually the easiest place to start because it's directly influenced by your programme, it reflects genuine security value, and it gives you a story to tell. If you run a campaign focused on encouraging reporting and the numbers move, you have something real to point to.
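Once the pipeline exists, the trend is the deliverable. A sketch with invented per-campaign numbers, showing the kind of relative movement worth putting next to your existing completion figures:

```python
# Invented per-campaign figures; in practice these come from your simulation
# platform's export, tracked quarter by quarter.
campaigns = [
    {"name": "Q1 baseline",       "sent": 500, "reported": 45},
    {"name": "Q2 reporting push", "sent": 500, "reported": 55},
]

rates = [c["reported"] / c["sent"] for c in campaigns]
for c, rate in zip(campaigns, rates):
    print(f"{c['name']}: report rate {rate:.1%}")

# Relative change is what carries the narrative ("report rates are up X%").
change = rates[-1] / rates[0] - 1
print(f"quarter-on-quarter change: {change:+.0%}")
```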
From there, you can start building the case for access to broader data. Credential hygiene. Policy adherence. Culture survey data that's designed properly rather than appended to a compliance module. Each one adds a layer to a picture that completion rates simply cannot paint.
The shift also changes the conversation with stakeholders. Once you're reporting on outcomes rather than activities, you're talking about risk in terms leadership actually cares about. That's a different relationship with the business than one built on training dashboards.
The Narrative Matters as Much as the Numbers
Data without context is just noise. The practitioners who do this well don't just track better metrics; they build a coherent story around them.
That means knowing which direction you want each metric to move, why, and over what timeframe. It means being honest when a number is flat and having a view on what's driving it. It means connecting your programme activity to the outcomes you're seeing, so leadership can follow the thread.
A programme that can say "we focused on reporting culture in Q2, report rates are up 22%, and here's what that means for early threat detection" is a completely different proposition from one that shows a slide with 94% completion and calls it a result.
We've had the knowledge to measure this stuff properly for a long time. The frameworks exist. The conversations have been had at every conference for the past decade. At some point the gap between what we know and what we actually report stops being a resource problem and starts being a choice. If you're still leading with completion rates, it's worth asking yourself who that number is really serving.