Absolve Security

Trust - Down the Rabbit Hole

7/20/2016


In security, we sometimes see trust as a failure.  We especially treat the idea that people trust systems they haven’t tested as a sign of weakness or stupidity.  There is a ton of rhetoric this year in the US about why you shouldn’t trust the presidential candidates, the police, protesters, and pretty much anyone you don’t know well.  Perhaps this is the safest approach, but I would argue that it causes us to miss some of the most amazing parts of being human.

I would argue that our society is based on the idea that we can trust people to behave in a cooperative manner a high enough percentage of the time to take reasonable risks.  While we seem to take fewer chances today, only the most risk-averse around us avoid risk at all costs.  We send our kids away on the bus, trusting that they will get to school and be safe there.  We lock our doors, trusting that someone probably won’t pick the lock.  We post on the Internet, most of the time assuming we won’t be stalked. 

Sometimes we have the most amazing experiences when we decide to trust when we don’t have proof.   That isn’t to say that bad things, horrible things don’t happen as well, but some of the best vacations, opportunities and adventures are found when we decide to take a chance that someone isn’t an axe murderer. 

On the other hand, sometimes we find ourselves in a twilight zone, discovering that the world we’ve built around us is completely untrustworthy, and the consequences are dire.  We find ourselves physically hurt, lacking the opportunities and things we need to take care of ourselves, or feeling like we must be going crazy.  We may fall into the trap of being unable to trust anyone, and as such, being unable to move forward.  The harm cuts particularly deep when it comes from people close to us, with whom we let our guard down. 

For me, this is why preventing social engineering attacks programmatically is so important.  Teaching people never to trust anyone leads to the kind of society I don’t want to be a part of.  No more trick-or-treating for the kids, no more walks around the neighborhood, no more transactions with any risk involved.

The problem with social engineering today is threefold.  First, the ratio of scams or attacks versus trustworthy interactions has gone up.  Most of us have run into a street scam only a few times in our lives (depending on where you live and travel), but we see email scams daily.  This is due to factors such as the scalability of attacks, the growing ability to attack remotely, and the lowered cost of attacks.  Second, the possible benefit to an attacker is higher, especially compared to the cost of performing the attack; while a physical attack may cause far more harm to the victim, the intellectual property or money a remote attacker gains may be worth more.  Third, there is a smaller chance that the perpetrator will face significant punishment for the attack.

How do we go about reducing the risk of social engineering without eroding people’s desire to trust?  Much like the traditional analysis of the economics of an attack, we focus on changing the factors that are driving up social engineering attacks.  We find ways to reduce the likelihood that any given interaction is untrustworthy.  We reduce the benefit an attacker can gain from a successful attack.  Finally, we find ways of imposing negative consequences on attackers.  Over the next few weeks I’ll explore ways of doing each of these things.
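The economics framing above can be made concrete with a toy expected-value model. This is purely illustrative: the function name and every number in it are assumptions, not real data. It shows why cheap, scalable, rarely punished attacks are attractive, and how the three defensive levers (lower success rates, smaller payoffs, real consequences) change the math.

```python
# Toy model of the economics of a social engineering attack.
# All names and numbers are hypothetical, for illustration only.

def attacker_expected_value(benefit, cost, p_success, p_caught, penalty):
    """Rough payoff to an attacker: expected gain on success, minus the
    cost of running the attack, minus the expected punishment if caught."""
    return p_success * benefit - cost - p_caught * penalty

# A cheap, scalable email scam: tiny cost, tiny chance of punishment.
email_scam = attacker_expected_value(
    benefit=500.0, cost=1.0, p_success=0.01, p_caught=0.0001, penalty=10_000.0)

# The same scam after defenses lower the success rate and raise the
# odds of consequences for the attacker.
defended = attacker_expected_value(
    benefit=500.0, cost=1.0, p_success=0.001, p_caught=0.05, penalty=10_000.0)

print(email_scam > 0)  # True: the attack is profitable
print(defended > 0)    # False: the attack no longer pays
```

The point of the sketch is not the specific numbers but that a defender can push the expected value negative from several directions at once.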

A quick thank you to my anonymous yard penpal, White Rabbit, for inspiring this post. 


First, Do No Harm

7/16/2016

A friend sent me a link to “Healthcare workers prioritize helping people over information security (disaster ensues)” as a great example of what happens when security isn’t designed for the users of the system.  For example: “Other IT-based checks forced even-more-dangerous workarounds, like the system that wouldn't let doctors save work without ordering potentially lethal blood thinners, which they'd have to remember to log back in and cancel, or kill their patients.” 

When my girls were born very prematurely, I had the chance to spend many months experiencing how hospitals work day to day.  I can attest to how busy our doctors and nurses were, and how many details they needed to remember.  I have seen doctors rush from one wing to another to save a little girl who had turned blue (but is now a happy 8-year-old).  One of my daughter’s nurses took her own life years after we left the hospital, following a mistake she had made with a medication.  It is no wonder that medical professionals work around security when it gets in their way.  In many cases they are weighing the risk of a data leak against a medical emergency in progress. 

The best thing we as security professionals can do is find solutions that not only protect data, but also help catch medical errors and assist professionals in an emergency.  Auditing is one solid area for this.  Another is finding better biometrics that don’t require physical touch (to cut down on germs) and that have smart overrides in an emergency (allow access anyway, but call security to come verify once the crisis is under control).  The work done on Microsoft’s Kinect and Amazon’s Echo comes to mind as technology that may help move research forward in this area.

The next time you’re at the doctor, or at a hospital, ask about how the data security system is working for the people you depend on.  What do they like? What would they change?  What do they work around?  How would you make their life easier so they can concentrate on treating you or keeping you well?

Questions or comments?  [email protected]. 


Crisis Mode

7/7/2016

Security policies often contain safeguards, delays, limited access, or other controls designed to reduce the impact of risk.  However, most companies are unwilling to let these controls prevent necessary action in a crisis or critical situation.  This means that many security policies have built-in loopholes from the beginning, or that loopholes are added when responding to a critical non-security issue proves more difficult because of the controls. 

Hospitals are a classic example of this.  If a patient drops to the floor in a faint, the medical professionals nearby need to be able to access the patient’s records to find allergies, medical conditions, and medications so they can start helping quickly.  However, medical records are very sensitive.  Fortunately, there are few cases where a highly paid professional would risk their employment and reputation in order to peek at a few records.  Because of this, auditing along with smart velocity controls can reduce the risk of data leakage without compromising patient safety. 
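A velocity control of the sort described here can be sketched in a few lines. This is a hypothetical design, not any real hospital system: access is always granted so care is never blocked, every lookup lands in an audit trail, and an unusually high volume of lookups inside a time window is flagged for human review. The names and thresholds are illustrative assumptions.

```python
# Hypothetical "smart velocity control": never block access (patient
# safety first), but audit everything and flag anomalous access volume.
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 3600          # look-back window for counting accesses
MAX_RECORDS_PER_WINDOW = 20    # illustrative threshold, not a real policy

audit_log = []                              # (timestamp, user, record_id)
recent_access = defaultdict(deque)          # user -> timestamps in window

def access_record(user, record_id, now=None):
    """Grant access unconditionally, log it, and flag unusual volume."""
    now = now if now is not None else time.time()
    audit_log.append((now, user, record_id))

    window = recent_access[user]
    window.append(now)
    # Drop timestamps that have aged out of the look-back window.
    while window and now - window[0] > WINDOW_SECONDS:
        window.popleft()

    flagged = len(window) > MAX_RECORDS_PER_WINDOW
    # In practice a flag would alert the security team for follow-up;
    # it never blocks the clinician in the moment.
    return {"granted": True, "flagged": flagged}
```

The key design choice is that the control reacts after the fact (audit plus review) rather than standing between a clinician and a patient.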

How do you approach planning for urgency?  First, design security policies with the need to react to issues in mind.   Consider options such as a team of trusted people who have access to override controls, requiring more than one signoff for overrides, and creating tools to handle any common urgent issues.   Create an alerting system to make sure the security team is engaged as soon as possible.  Set up auditing so the people who are able to take high impact actions cannot turn it off. Finally, educate your PR and field staff in advance, so they can provide clarity to customers when issues do take longer to solve because you are safeguarding their data. 
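The override ideas in the paragraph above can be sketched as a minimal break-glass mechanism. All names here are assumptions for illustration: a trusted group of overriders, a two-person rule on sign-offs, an audit trail the class exposes no way to clear, and an immediate alert to security when an override succeeds.

```python
# Minimal break-glass sketch: two sign-offs from a trusted group,
# an append-only audit trail, and an alert on every approved override.
# All names and the group membership are hypothetical.

TRUSTED_OVERRIDERS = {"alice", "bob", "carol"}

class BreakGlass:
    def __init__(self):
        self._audit = []  # append-only; no method is exposed to clear it

    def request_override(self, action, approvers):
        """Approve an override only with two distinct trusted sign-offs."""
        valid = set(approvers) & TRUSTED_OVERRIDERS
        approved = len(valid) >= 2          # two-person rule
        self._audit.append((action, sorted(valid), approved))
        if approved:
            self._alert_security(action, valid)
        return approved

    def _alert_security(self, action, approvers):
        # Placeholder: in a real system this would page the on-call
        # security team so they engage as soon as possible.
        print(f"ALERT: {action} overridden by {sorted(approvers)}")
```

Note that even a denied request is audited, so attempts to abuse the override path leave evidence.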

Do you have examples of effective steps for reducing security risk during a crisis?  If so, please send them to [email protected].


    Author

Absolve Security focuses on designing systems and processes to reduce social engineering while empowering the business to focus on its goals.

