How to Pass Every Audit - Measure
Your scope is everything when it comes to metrics. Your metrics will show that you are governing properly. A metric that sits at 100% all the time can often indicate a denominator problem rather than a perfect program, which is a big reason to have multiple measurements. In fact, I would advise defining your metrics with a threshold of tolerance. Things change: systems move, are removed, and are added, and there are processes that have to happen before a system is finalized, so your metrics should account for that. One caveat: the amount of data available to you greatly determines the accuracy of your metrics. If you cannot see where a system is within its lifecycle, your ability to add precision to your metrics is greatly diminished.
You want to define your metrics within your standards. For the most part (not universal, but close), each standard statement should have a metric tied to it. A metric does not have to stop at a single measurement, either. You will want measurements that are available to you as a team and measurements that are available to your executives and risk committees. This is not to say you should hide anything, but you should clearly define where the risk is. We will continue our MFA example.
Some measurements that you probably want to have for your MFA standard:
These are purely made-up numbers, but what you can see are a number of metrics that you can use to show audit a few things.
M0 and M1 show audit that you are watching your scope. M5 and M6 show audit that you are watching your out-of-scope population to ensure nothing in it should be in scope. M2 and M3 account for account lifecycles and give you a threshold for accuracy. M4 is the only metric that needs to be shared proactively outside of your team. Whether it makes it to board metrics or the risk committee is a leadership call, but the others are your view of your program's world, and you can use that view to zoom in on the real risk.
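The breakdown above can be sketched in code. This is a minimal, hypothetical example: the account fields, the grace-period value, and the metric names are all my assumptions for illustration, not the actual system of record or the M0–M6 definitions.

```python
# Hypothetical sketch: computing MFA coverage metrics from an account inventory.
# All field names and the grace period are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Account:
    username: str
    human: bool            # human vs. service account
    enabled: bool
    in_scope: bool         # subject to the MFA standard
    mfa_enrolled: bool
    days_since_created: int

GRACE_PERIOD_DAYS = 14     # assumed threshold of tolerance for new accounts

def mfa_metrics(accounts: list[Account]) -> dict[str, float]:
    in_scope = [a for a in accounts if a.in_scope and a.human and a.enabled]
    out_of_scope = [a for a in accounts if not a.in_scope]
    # Accounts still inside the provisioning grace period don't count against you.
    settled = [a for a in in_scope if a.days_since_created > GRACE_PERIOD_DAYS]
    noncompliant = [a for a in settled if not a.mfa_enrolled]
    return {
        "total_accounts": len(accounts),
        "in_scope": len(in_scope),
        "out_of_scope": len(out_of_scope),
        "in_grace_period": len(in_scope) - len(settled),
        "noncompliant": len(noncompliant),
        "coverage_pct": 100.0 if not settled
        else 100.0 * (len(settled) - len(noncompliant)) / len(settled),
    }
```

The grace-period filter is what keeps a healthy program from reporting a scary dip every time a batch of accounts is provisioned; the out-of-scope count is reported alongside coverage so the denominator itself stays auditable.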
You may still have some auditors who want to pick these apart. If the metrics are automated, they may ask to validate the calculations. They would do that by querying the systems of record themselves and comparing their calculations to yours. If the numbers are off, you may in turn have to audit their calculations to ensure you are each calculating correctly.
Another common validation they will use is a test of one: they pick a random account and validate that it has MFA. Two risks come with this approach. The first is that they pull a non-human or disabled account; be prepared to explain, and they will simply pick another data point. The other risk is that they pick the one account that is using interactive login and is out of compliance. What you will want here is a documented process for how you are notified when this happens, how you track the exception, your process for remediating, and your evidence for all of it. As you can see, metrics can get unwieldy quickly, but they are essential to move you from reactive auditing to continuous compliance.
If you can automate these metrics, you can give audit access to the metrics and exception tracking, and they can audit whenever they want to. Keep in mind that even with solid metrics, there will always be accounts that don't comply for legitimate reasons. That's what an exception process is for, and it's covered in the next post.