
9 biases killing your security program

I see what I want to see

We're not always as rational in our decision-making as we'd like to think. This holds for everyday choices, from what to eat for lunch, to bigger decisions such as what kind of car to buy or where to live. These cognitive biases, or deviations from rational judgement, can affect every aspect of our decision-making. It would be foolish to think such irrational thinking doesn't lead to a distorted view of cybersecurity risks, or to inaccurate judgements in defending enterprise systems. Here's a (by no means all-inclusive) list of nine such cognitive biases that security professionals should remain especially aware of.

The availability heuristic

This is a bias where we rely too heavily on whatever comes to mind first. Someone susceptible to the availability heuristic would focus their efforts on the most recent events, whether that's a worm attack (such as those of the early 2000s), ransomware (right now), botnets, denial-of-service attacks, or whatever the latest trend is. That's a great way to put out fires, but a bad way to build a sustainable risk management program.

Confirmation bias

This is the tendency to seek and interpret new information in ways that confirm our current views, while discounting data or opinions that contradict them. We see this in information security when executives believe that technology can provide most of their defenses: they look at the successes these devices have had, but perhaps ignore their shortcomings, and therefore overestimate their real effectiveness.

Information bias

With information bias, security pros make measurement or information-gathering errors. This can occur when researching threats, one's own organization, or the effectiveness of a new security control. Perhaps an executive assumes more information is always better without knowing which information is actually valuable, or confident decisions get made on incomplete information. This is also called observation bias.

The ostrich effect

We see this one a lot in everyday life: the person who can't face the pain of talking about their finances, or the parents who just won't hear the truth about their child's behavior. We see similar things in cybersecurity when software makers don't want to hear about legitimate vulnerabilities discovered by security researchers, or when enterprise executives won't deal with the results of vulnerability scans.

Pro-innovation bias

This is the view that a new technology or innovation should be adopted by everyone, just as it is. We see it in the technology field all the time: a new solution explodes onto the market and is hyped at trade shows and in the media. In security, the hype attaches to big data, threat intelligence, cloud security, or whatever the current trend may be. Beware those pushing the latest innovation; they may be so blinded by pro-innovation bias that they can't see its limitations.

Survivorship bias

Survivorship bias is a type of selection bias. We see it in many areas of life, such as when we look at successful startups and conclude that success is easy, because we see only the survivors and not the wake of failed efforts behind them. It can affect risk management decisions, too: an organization may look at other companies that haven't suffered a public and devastating breach and so minimize the chance that it could happen to them.

Zero-risk bias

We see a lot of zero-risk bias in society, especially around terrorism, crime and law enforcement, and public safety. The challenges are discussed in terms of eliminating crime and risk, which is obviously absurd: crime and risk can be managed and minimized, not eliminated. The same is true of cybersecurity. Listen to how people talk about information security: it's framed not as an acceptable reduction of risk, but as the absolute elimination of risk.
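To see why the zero-risk framing misleads, consider a rough expected-loss comparison (a minimal sketch with hypothetical probabilities and dollar figures, not data from the article): eliminating a small risk entirely can feel more satisfying than halving a large one, even when the latter saves far more.

```python
# Hypothetical numbers illustrating zero-risk bias: reducing a small risk
# to zero "feels" better than halving a large risk, even though the latter
# reduces expected loss far more.

def expected_loss(probability: float, impact: float) -> float:
    """Annualized expected loss: chance of the event times its cost."""
    return probability * impact

# Two independent risks facing a hypothetical organization.
small_risk = expected_loss(0.05, 100_000)    # 5% chance of a $100k incident
large_risk = expected_loss(0.30, 2_000_000)  # 30% chance of a $2M incident

# Option A: eliminate the small risk completely (the zero-risk framing).
reduction_a = small_risk - expected_loss(0.00, 100_000)

# Option B: cut the large risk's probability in half.
reduction_b = large_risk - expected_loss(0.15, 2_000_000)

print(f"Option A (risk to zero) saves ${reduction_a:,.0f} per year")  # $5,000
print(f"Option B (risk halved) saves ${reduction_b:,.0f} per year")   # $300,000
```

A zero-risk mindset gravitates toward Option A because it makes one threat vanish entirely, while a risk-management mindset picks whichever option reduces expected loss the most.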

The bandwagon effect

The bandwagon effect is something we see among vendors and security executives alike. One year at the major conferences big data is all the rage; the year before, it was governance, risk, and compliance dashboards. Ignore the hype and focus on the exact needs of your organization.

Automation bias

We are inundated with computer-generated dashboards and consoles, and automation bias is the tendency to trust the information these systems display, discounting other possibilities in favor of what the machine suggests. According to the Wikipedia entry on automation bias, this “bias takes the form of errors of exclusion and inclusion: an automation bias of exclusion takes place when humans rely on an automated system that does not inform them of a problem, while an error of inclusion arises when humans make choices based on incorrect suggestions relayed by automated systems.”


References: http://www.cio.com/article/3120207/security/9-biases-killing-your-security-program.html

