Complexity Firewall

Wed, Nov 3, 2021 4-minute read

I periodically get asked “are we secure?” or “is this system secure?”. That’s not an unusual question for a CISO, or any cyber security professional in a leadership role, to be asked. How can we answer that question? Can we answer it in a genuine and complete manner? What if we’re wrong? What if our answer turns out to be wrong later?

In a prior writing I used the term “Complexity Firewall/Filter” and I wanted to expand on it. The premise behind being a complexity filter is that it’s our job as expert practitioners to make a topic accessible to non-practitioners. We aren’t there to bombard our audience or our stakeholders with a crushing number of facts; we aren’t there to ask them to interpret the data and come to some conclusion. To do that proficiently would require that they become us. The same is true for anyone in a complex role; a CFO faces the same thing when asked “are we profitable?” or “are we a sustainable business?”.

As cyber security practitioners we often get asked “are we secure?” or “is this system secure?”. It’s a difficult question to be asked, and answering it honestly requires a lot of caveats and assumptions. When people ask a simple question, I suspect they don’t like lots of “it depends” or carve-outs in the response. Generally they’re after a yes or a no. That’s our job: to say Yes or No, not “it depends”.

To be clear, being a complexity firewall doesn’t mean dumbing things down; the people asking the question are likely smart and capable in their own domains, they just aren’t familiar with your area of expertise. We need to make our domain accessible to others; we’re the interface into that complex world in the same way a tax accountant is your interface into the complex world of filing your annual tax returns (that’s mostly a North American thing, I’ve heard). Dumbing things down might even make smart and capable people feel negatively about their interactions with you.

When talking about even a simple system there are already many assumptions in play; a single small system or application carries assumptions (many of them dependency-based) that we have to rely on when determining how secure it is. Larger systems, or a group of systems working together, have many more underlying assumptions about the state of their security. So how do we answer the question of “is this system secure?” or “are we secure?” without having to say “it depends” or point to the underlying assumptions? How do we answer these questions in a way that doesn’t leave us as cyber practitioners feeling uncomfortable about the potential of an incomplete or unintentionally wrong answer?

I think the short answer is structured analysis: I know this system is secure because I evaluated the threats and risks it is exposed to, then evaluated the controls in place and found them adequate for mitigating those risks. The important thing here is that you have a documented and repeatable structure for this assessment, one that clearly lists out the threats and risks, the controls and the gaps. If you have such a process then you can say Yes or No with integrity. You can even explain the approach you use in an accessible manner so that stakeholders have confidence in the answer; I think describing a process is easier than explaining the underlying technicalities. Processes can also be independently assessed for reasonableness by your auditor if needed, to instill a greater level of confidence. A Threat Risk Assessment is a good example of such a structured process. A penetration test is not; it is an example of asking “are there any obvious flaws?” (I’ll write about the issues with penetration tests another time).
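To make the shape of such an assessment concrete, here’s a minimal sketch of what a structured, repeatable record could look like if you captured it in code. The field names, the 1–5 severity scale, and the risk-tolerance threshold are illustrative assumptions on my part, not a prescribed Threat Risk Assessment format; the point is only that the threats, controls, gaps, and the Yes/No conclusion are written down in one consistent structure.

```python
from dataclasses import dataclass, field

# Illustrative only: these field names, the 1-5 severity scale, and the
# tolerance threshold are assumptions, not a standard TRA schema.

@dataclass
class Risk:
    description: str
    inherent_severity: int              # 1 (low) .. 5 (critical), per your own scale
    controls: list[str] = field(default_factory=list)
    residual_severity: int = 0          # severity remaining after controls

@dataclass
class Assessment:
    system: str
    risks: list[Risk]
    tolerance: int = 2                  # highest residual severity we will accept

    def gaps(self) -> list[Risk]:
        # The carve-outs live here, documented rather than implied.
        return [r for r in self.risks if r.residual_severity > self.tolerance]

    def is_secure(self) -> bool:
        # "Yes" means: every risk we identified has been driven within tolerance.
        return not self.gaps()

assessment = Assessment(
    system="customer portal",
    risks=[
        Risk("credential stuffing against the login page", 4,
             controls=["MFA", "rate limiting"], residual_severity=1),
        Risk("unpatched dependency in the payment service", 5,
             controls=[], residual_severity=5),
    ],
)

print("Secure?", "Yes" if assessment.is_secure() else "No")
for risk in assessment.gaps():
    print("Gap:", risk.description)
```

The value isn’t the code itself; it’s that the same structure gets filled out the same way for every system, so the Yes or No you give has a traceable basis.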

If your answer is challenged you can respond with “we have a clear process to assess systems and did not identify any gaps that caused risks outside of our tolerances” or “we have a clear process to assess systems and DID identify issues that need to be addressed before we can consider the system secure”. If the system changes and is later breached, your analysis still holds for the system as you evaluated it at the time. If a risk or threat arises that you didn’t evaluate, then perhaps there is a flaw in your process, but it might also be something legitimately new; we’re not oracles with the ability to see the future (although I’m a little negative on the whole idea of “the evolving threat landscape” for a truly static system - again, a future writing topic).

Our job is to give people confidence; confidence founded on facts and on repeatable processes. If we can do that in a way that makes sense to us, without overloading our stakeholders with a mountain of information, then we’re acting as a complexity firewall.