In security, we sometimes see trust as a failure. We especially treat the idea that people trust systems they haven't tested as a sign of weakness or stupidity. There is a ton of rhetoric this year in the US around why you shouldn't trust the presidential candidates, the police, protesters, and pretty much anyone you don't directly know well. Perhaps this is the safest approach, but I would argue that it causes us to miss some of the most amazing parts of being human.
I would argue that our society is based on the idea that we can trust people to behave in a cooperative manner a high enough percentage of the time to take reasonable risks. While we seem to take fewer chances today, only the most risk-averse around us avoid risk at all costs. We send our kids away on the bus, trusting that they will get to school and be safe there. We lock our doors, trusting that someone probably won’t pick the lock. We post on the Internet, most of the time assuming we won’t be stalked.
Sometimes we have the most amazing experiences when we decide to trust without proof. That isn't to say that bad, even horrible, things don't happen as well, but some of the best vacations, opportunities, and adventures are found when we decide to take a chance that someone isn't an axe murderer.
On the other hand, sometimes we find ourselves in a twilight zone, discovering that the world we've built around us is completely untrustworthy, and the consequences are dire. We find ourselves physically hurt, lacking the opportunities and things we need to take care of ourselves, or feeling like we must be going crazy. We may fall into the trap of being unable to trust anyone, and as such, being unable to move forward. This is particularly damaging when the betrayal comes from people close to us, the ones we let our guard down with.
For me, this is why preventing social engineering attacks programmatically is so important. Teaching people not to trust anyone ever leads to the kind of society I don’t want to be a part of. No more trick-or-treating for the kids, no more walks around the neighborhood, no more transactions with any risk involved.
The problem with social engineering today is three-fold. First, the percentage of scams or attacks versus trustworthy interactions has gone up. Most of us have only run into a street scam a few times in our lives (depending on where you live and travel), but we see email scams daily. This is due to factors such as the scalability of attacks, the increased ability to attack remotely, and the lower cost per attack. Second, the possible benefit to an attacker is higher, especially compared to the cost of performing the attack. While a physical attack may cause far more harm to the victim, the actual benefit to the attacker, in IP or money, may be higher. Third, there is a smaller chance that the perpetrator will face significant punishment for the attack.
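The three factors above can be read as terms in a simple expected-value model of an attack. This is my own illustrative sketch, not something from the post, and every number in it is made up; it just shows how low per-attempt cost, scalable benefit, and a small chance of punishment combine to make an attack rational for the attacker.

```python
def expected_payoff(p_success, benefit, cost, p_punish, penalty):
    """Expected value of one attack attempt to the attacker.

    All parameters are hypothetical: probability the attempt succeeds,
    benefit if it does, cost of making the attempt, probability of
    being punished, and the penalty if punished.
    """
    return p_success * benefit - cost - p_punish * penalty


# Email scam: nearly free to send, rarely punished, so even a tiny
# success rate leaves a positive expected payoff (values illustrative).
email_scam = expected_payoff(p_success=0.001, benefit=500.0,
                             cost=0.01, p_punish=0.00001, penalty=10_000.0)

# Street scam: each attempt costs real time and carries a meaningful
# chance of being caught, pushing the expected payoff negative.
street_scam = expected_payoff(p_success=0.05, benefit=100.0,
                              cost=20.0, p_punish=0.05, penalty=10_000.0)

print(email_scam)   # positive: attacking at scale pays
print(street_scam)  # negative: the attempt isn't worth the risk
```

Each of the three trends in the paragraph moves one of these terms in the attacker's favor, which is why the next section focuses on moving them back.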
How do we go about reducing the risk of social engineering without eroding people's desire to trust? Much like the traditional analysis of the economics of an attack, we focus on changing the factors that are driving up social engineering attacks. We find ways to reduce the likelihood that any given interaction is untrustworthy. We reduce the benefit an attacker is able to gain from any attack. Finally, we find ways of imposing negative consequences on attackers. Over the next few weeks I'll explore ways of doing each of these things.
A quick thank you to my anonymous yard penpal, White Rabbit, for inspiring this post.