There are many reasons why business managers seem to ignore the risks brought forth by information security professionals. I outlined six of them in an earlier post. In this note, I’d like to add another possible explanation: the endowment effect, which affects how humans value their possessions.
Richard Thaler coined the term endowment effect to describe the tendency of individuals to value the item in their possession more highly than the same item possessed by someone else. In the words of Dan Ariely, “once we own something, its value increases in our eyes.” Dan also points out that ownership isn’t the only way to endow something with higher value:
“You can also create value by investing time and effort into something (hence why we cherish those scraggly scarves we knit ourselves) or by knowing that someone else has (gifts fall under this category).”
This propensity seems irrational, yet it has been observed in numerous experiments.
Information security professionals experience a sense of ownership of the data they safeguard. Therefore, the endowment effect might bias us toward overestimating the value of this data. Business managers are somewhat removed from the data by layers of applications and business processes and aren't affected by the bias to the same degree.
In other words, business managers might value the data less than infosec professionals do. This would contribute to the disagreement regarding the level of risk associated with the data's security.
If information security professionals are, indeed, irrationally influenced by the endowment effect, what can we do about it? Alternatively, when persuading business managers to agree with our perspective, how might we influence them to experience the endowment effect to the same extent?
Information security professionals are often frustrated when their concerns regarding vulnerabilities and associated threats appear to be ignored by the company's executives. I already discussed six reasons why business managers ignore IT security risk recommendations. I'd like to add a few more to the list, based on recent research into the links between power, prestige and decision-making.
High-Status Individuals Are More Trusting
In one study, Lount and Pettit researched how a person's social status might influence how much that person trusts others. In one of their experiments, "participants were primed to experience either high or low status and then given the opportunity to send money in a trust game." In this context, high status might be associated with the prestige of being a business executive, while the opposite extreme, low status, might be associated with being an entry-level mailroom clerk.
The participants who were assigned a high status were more trusting when sending money, hoping that the recipient would return the funds. Low-status individuals were more cautious. From this and related experiments, the researchers concluded that "having status alters how we perceive others' intentions," making us more likely to believe "that others have positive intentions toward us." They also pointed out that:
“The possession of status can fundamentally alter our expectations of peoples’ motives toward us, and in turn, influence our initial trust in others.”
People with prestigious positions, such as executive managers, might be more trusting of others and, therefore, might be willing to accept more risks.
Power Leads to Overconfidence
In another study, Fast, Sivanathan, Mayer and Galinsky explored the links between an individual's perception of power and self-confidence. Their research found that people who believed themselves to be powerful were more certain of the accuracy of their beliefs and opinions. They confirmed that "power increases overconfidence in the accuracy of one's thoughts and beliefs." This matters in organizations because many "high-impact decisions are based on perceived precision of relevant knowledge."
The effect of this phenomenon is magnified because not only does a subjective sense of power cause people to become overconfident in their knowledge, but "overconfident people tend to acquire roles that afford power."
Prestige, Power and Decisions About Risk
My perspective on these findings through the lens of information security and related risks is as follows:
So, there you have it: a few more reasons why executives are more prone to accept risks, in addition to the six explanations I offered earlier. You might also like to know that choice fatigue contributes to the willingness to accept risks and that sleep deprivation contributes to risk-taking behavior. We just cannot help it—it's in our nature.
There are several reasons why business managers ignore IT risk recommendations from information security professionals. One of these is the perception that acting upon the advice is too costly or not practical. You can tackle this issue by presenting several alternative ways of mitigating the risk, giving the business manager an alternative to simply accepting the risk.
IT Security Risk Mitigation Alternatives
When information security professionals identify IT risks, they tend to think of the most reliable way of addressing the problem. That can be expensive. If a business manager believes the cost of fixing the issue outweighs the benefits, he or she will probably maintain the status quo by accepting the risk.
In anticipation of this, prepare several risk-mitigation options. For example, the most reliable way to deal with a vulnerability in a web application is probably:
Implementing these steps can be expensive and may feel overwhelming to a business manager. In this situation, consider discussing an alternative that may be less costly, though perhaps less reliable in the long term: implementing a virtual patch using a Web Application Firewall (WAF). This might buy the organization some time to budget for and implement the more reliable solution (code fix and SDL).
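As a rough sketch of what such a virtual patch might look like, here is a hypothetical ModSecurity rule pair. The URL, parameter name, and rule ID are illustrative assumptions, not a real deployment; an actual rule would be tailored to the specific vulnerability:

```apache
# Hypothetical virtual patch for a SQL injection flaw in the "id"
# parameter of /account/view. Until the application code is fixed,
# block any request where "id" is not purely numeric.
SecRule REQUEST_URI "@beginsWith /account/view" \
    "id:100001,phase:2,deny,status:403,log,\
    msg:'Virtual patch: non-numeric id parameter',chain"
    SecRule ARGS:id "!@rx ^[0-9]+$"
```

Because the rules are chained, the deny action fires only when both conditions match, so requests with legitimate numeric IDs pass through untouched.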
Best Alternative to a Negotiated Agreement
Treating the risk discussion as a negotiation, the information security professional might be more effective at persuading the business manager to agree to the more reliable mitigation approach. One aspect of negotiations that might help is the Best Alternative to a Negotiated Agreement (BATNA)—a concept discussed in the book Getting to Yes by Roger Fisher and William Ury.
BATNA is the course of action that a party in negotiations can take if an agreement is not reached. According to Fisher and Ury, knowing your BATNA can protect you from “accepting terms that are too unfavorable and from rejecting terms it would be in your interest to accept.”
Dr. David Venter points out that when determining his or her BATNA, the negotiator should:
Information security professionals can consider their favorite way of dealing with the IT risk as their preferred outcome of negotiations. At the same time, they should understand their BATNA—the next best way of handling the security issue. This approach might provide them with the ammunition to be more persuasive in risk discussions with business managers.
Information security professionals get frustrated when their concerns are seemingly dismissed by business managers who accept the risk instead of approving the proposed remediation strategy. There are many reasons why infosec personnel’s IT security risk recommendations may not be accepted, including:
As information security professionals, we can do a lot better at presenting IT security risk recommendations in a more practical, business-relevant and persuasive manner. To improve, we need to first understand why our advice appears to be ignored. The list of reasons that I presented above isn’t complete, but it might be a good starting point.
For a follow-up to this post, see The Endowment Effect in Information Security.
David Hoelzer’s post How to Present Audit Findings Effectively emphasized the need to frame security discussions by referring to the organization’s internal “currency” that’s not necessarily financial. After all, information security is usually a means of accomplishing some goal. The extent to which security contributes towards or detracts from that goal might be described using some form of currency. I’d like to build upon this idea and possibly take it in a slightly different direction.
Organizational Internal Currency
As David points out, “putting audit reports and risk assessments in terms of dollars and cents is the most motivating context for management” in most organizations. He also explains that money isn’t the only internal currency you can refer to.
For instance, you might be able to engage your audience by framing the discussion in terms such as:
In theory, risks related to these factors can ultimately be described in terms of financial expenses. However, sometimes when aiming to frame security discussions in financial terms, people make up numbers or use meaningless calculations.
You might not have enough data for monetary computations and might be tempted to make hopeful, but possibly incorrect assumptions. Rather than give up and begin talking about security as if its importance is widely acknowledged, consider other forms of internal currency that might resonate with your audience.
Individual Internal Currency
I’d like to take a somewhat Machiavellian perspective on this matter, quite possibly deviating from the road map charted in David’s post. (So don’t blame him if the following rubs you the wrong way.)
Remember that companies don’t make decisions; the individuals working for them do. As a result, consider which form of internal currency is most relevant to the person with whom you’re interacting. Though the person operates within a company that pursues certain, usually financial, goals, he or she might have more immediate concerns related to avoiding:
Keep these subjective concerns in mind when preparing to discuss your information security findings, recommendations or requests.
The goal of accounting for internal currency isn’t to distort findings or manipulate the organization or the person into making bad decisions. Rather, it’s a technique that helps capture the attention of the audience in the context within which the security program exists. Your discussion still needs to be based on accurate observations, factual information and, whenever possible, empirical data.
In a perfect world, we’d have all the data we need to calculate the best outcome congruent with the organization’s strategic goals. In the meantime, recognize that internal currency can take forms other than money and might differ across individuals within the company.
For more thoughts along these lines, take a look at my article Situational Awareness for Information Security Professionals.
I’ve been thinking about the role that deception can play in information security. Honeypots present an example of how data defenders can mislead or slow down attackers. Similarly, attackers can deceive defenders. For example, Nmap can spoof source IP addresses of network-scanning packets, so defenders have a hard time determining the true origin of the probes.
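As a minimal illustration of the Nmap capability mentioned above, the sketch below assembles a decoy-scan command line. The target and decoy addresses are placeholders from documentation IP ranges, and the command is printed rather than executed, since scanning hosts without permission is illegal:

```shell
# -sS requests a SYN scan; -D supplies decoy source addresses, and "ME"
# marks where the scanner's real address sits within the decoy list.
# 192.0.2.10 is a placeholder target from the RFC 5737 documentation range.
scan_cmd="nmap -sS -D 203.0.113.4,203.0.113.7,ME 192.0.2.10"
printf '%s\n' "$scan_cmd"   # print the command instead of running the scan
```

To the target, the probes appear to originate from every address in the decoy list, which makes singling out the real source of the scan much harder.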
I stumbled upon a fascinating paper by Donald J. Bacon titled Second World War Deception: Lessons Learned for Today’s Joint Planner (PDF). Among its many examples, the paper mentions two types of deception operations employed by Allied Forces to confuse and misdirect the German military during World War II:
The Allies were able to use deception to gain “surprise for offensive operations and to provide increased security for forces by masking military objectives, planning, preparations, and operations.”
The paper also mentions that the Soviet military’s deception efforts during World War II were used primarily to “conceal large troop movements and concentrations to attain surprise for offenses.” The Soviets’ greatest victories in the war can be traced to their success at fully integrating deception “into their operational planning and execution. The result was the Germans often knew only the frontline Soviet troop dispositions—everything behind the front line was a ‘blur.’”
The paper quotes historian Charles Cruickshank, highlighting a critical aspect of successful deception:
“The perfect deception plan is like a jigsaw puzzle. Pieces of the information are allowed to reach the enemy in such a way as to convince him that he has discovered them by accident.”
One of my takeaways from this paper is that deception efforts cannot be one-off projects. To be successful, they have to be centrally managed and woven into the fabric of operations. This is why the use of deception might be practical only for mature and well-funded parties.
To what extent can deception be used as part of offensive and defensive information security operations? The efforts by the Honeynet Project have been instrumental at helping the infosec industry figure this out. I suspect there’s much more for us to learn about using deception to detect and resist network-based attacks.
Information security professionals are often in the position where they need to influence colleagues, vendors, partners and customers. Yet, they don’t always have the formal power to effect the desired changes. To improve the odds that people will do as you ask, you must be skilled not only at crafting the content of your message, but also at presenting it.
Here are my favorite books that discuss the principles of influence, negotiation and decision-making:
Here are a few of my earlier tips on crafting a convincing message to express yourself properly:
Have tips and recommendations of your own? Please share in the comments below.