Trust Doesn't Scale
Several years ago I was working as a security consultant with a natural resources company. One of my tasks was to rewrite the company's information security policies. Among them I found this gem of a policy statement that specifically forbade downloading MP3s.
There’s an old joke that if you see a warning sign asking people not to do a certain thing, such as “don’t feed the alligators”, it’s a pretty good bet that somebody did it in the past and suffered the consequences. There was an implicit trust in that warning sign and in that no-MP3 policy; there was no fence to keep you away from the alligators, and there was nothing to stop you from downloading MP3s at that company. The reader of the sign or the policy was being asked to do the right thing, but there was no assurance that they would. Sure, the consequences of feeding an alligator incorrectly are probably bad enough to dissuade most people, but there’s always someone.
Years before my time at the natural resources company, I was lucky enough to be invited to a fireside chat over dinner with Gary McGraw and Brian Chess. At that dinner several of us were batting around different approaches to ensure that we delivered secure software. A vice president of software development announced that they had solved the issue in their organization. We were a bit incredulous, and several of us inquired as to how, to which this vice president responded “we’ve set a policy that prohibits code that isn’t secure”… there was enough wine to make that a fun conversation. Again, an example of trusting the policy readers… assuming anyone reads the policy.
Policy setting without enforcement is an act of trust. You trust the individuals reading the policy to do the right thing. Trust works until it doesn’t. If nobody is monitoring the alligators, then how do we know whether they were fed? An after-the-fact survey of everyone visiting the alligator swamp, asking whether they fed the alligators, won’t do anything to stop that one unlucky person who tries to feed the alligators and gets eaten instead. Even if nobody gets eaten, you may still learn that many people didn’t follow your policy.
The right approach to stop people feeding the alligators, either with food or with themselves, is to put up a fence, or search people for food, or post a guard on duty. Each of these controls has a different level of effectiveness and a different level of cost. Or you could just put up a sign and trust, in which case inevitably someone will feed the alligators, maybe literally.
Of course we’re not talking about real alligators, we’re talking about internet alligators and technology alligators.
Trust-based controls are appropriate in certain situations, perhaps where there’s a lot of ambiguity, or where the process you wish to safeguard is entirely people-based. Trust-based controls work when the stakes are low, like don’t download MP3s. Trust-based controls work in places where you want to safeguard yourself from some minor legal threat (again, don’t download MP3s), where you’re really trying to establish a plausible defense against future litigation.
Trust-based controls don’t work when you want certainty; for example, nobody should expose virtual machines in AWS directly to the internet. Trust-based controls don’t work when you can’t trust someone, like hackers trying to breach your security. There is an interesting intersection between the people you do trust and the people you don’t; that’s why social engineering works: it makes trust a transitive property. Hackers can access and abuse the trust you give to others. You wouldn’t trust a hacker to download and run code in your environment, but you might trust an employee to do that; so employees, and the trust you have given them, are an inviting target.
Trust-based controls fail when making the right decision relies on specialized domain knowledge: is this action safe or not because of a certain set of bits that aren’t normally visible to a user and require specialized tools to assess? If the user is unable to assess which is the right answer, then you cannot trust them to take the right action. Even with the appropriate domain knowledge, you may not know the downstream consequences of a particular decision, especially in a complex technology environment.
Fences are always better than dealing, after the fact, with an alligator that has acquired a taste for humans. So too are preventative controls: making sure a bad thing can’t happen is far more scalable, and it doesn’t rely on someone knowing the right thing and making the right decision at the right time.
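To make the fence concrete, consider the AWS example from earlier: rather than trusting every engineer to remember that virtual machines shouldn’t face the internet, a preventative guardrail can reject the configuration before it ever deploys. The sketch below is purely illustrative; the record shape and function names are hypothetical, not any real AWS API.

```python
# Hypothetical preventative guardrail: block instance configurations that
# would be directly reachable from the internet, instead of trusting each
# engineer to remember the policy (or surveying them after the fact).

def violates_public_exposure_policy(instance: dict) -> bool:
    """Return True if this (hypothetical) instance record would be
    exposed directly to the internet."""
    has_public_ip = instance.get("associate_public_ip", False)
    # Directly exposed means: a public IP plus any security group rule
    # that allows inbound traffic from anywhere.
    open_to_world = any(
        rule.get("cidr") == "0.0.0.0/0"
        for sg in instance.get("security_groups", [])
        for rule in sg.get("ingress_rules", [])
    )
    return has_public_ip and open_to_world


def enforce(instances: list) -> list:
    """The fence: return the IDs of instances that must be blocked
    from deployment. No trust, no right decision required of the user."""
    return [i["id"] for i in instances if violates_public_exposure_policy(i)]


if __name__ == "__main__":
    fleet = [
        {"id": "web-1", "associate_public_ip": True,
         "security_groups": [{"ingress_rules": [{"cidr": "0.0.0.0/0"}]}]},
        {"id": "db-1", "associate_public_ip": False,
         "security_groups": [{"ingress_rules": [{"cidr": "10.0.0.0/8"}]}]},
    ]
    print(enforce(fleet))  # only the publicly exposed instance is blocked
```

The point isn’t this particular check; it’s that the decision is made by the control, not by the person reading a policy.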
Trust might work in small organizations, maybe in startups. But trust can’t work at scale.