There are many reasons why business managers seem to ignore the risks brought forth by information security professionals. I outlined six of them in an earlier post. In this note, I’d like to add another possible explanation: the endowment effect, which affects how humans value their possessions.
Richard Thaler coined the term endowment effect to describe the tendency of individuals to value the item in their possession more highly than the same item possessed by someone else. In the words of Dan Ariely, “once we own something, its value increases in our eyes.” Dan also points out that ownership isn’t the only way to endow something with higher value:
“You can also create value by investing time and effort into something (hence why we cherish those scraggly scarves we knit ourselves) or by knowing that someone else has (gifts fall under this category).”
This propensity seems irrational, yet it has been observed in numerous experiments.
Information security professionals experience a sense of ownership of the data they safeguard. Therefore, the endowment effect might bias us towards overestimating the value of this data. Business managers, somewhat removed from the data by layers of applications and business processes, aren’t affected by the bias to the same degree.
In other words, business managers might value the data less than infosec professionals do. This difference would contribute to the disagreement regarding the level of risk associated with the data’s security.
If information security professionals are, indeed, irrationally influenced by the endowment effect, what can we do about it? Alternatively, when persuading business managers to agree with our perspective, how might we influence them to experience the endowment effect to the same extent?
Common wisdom suggests that anxious individuals are better at spotting danger than those with more mellow personalities. However, research by Tahl Frenkel and Yair Bar-Haim indicates that the opposite may be true: People with nonanxious personalities might be more skilled at spotting the early signs of trouble. This finding could highlight the type of people best suited for information security jobs.
Spotting Fearful Faces on Photographs
Some of the individuals selected for the study exhibited anxious personality traits on the State-Trait Anxiety Inventory scale, while others were nonanxious. The participants were shown photographs of a face that displayed progressively greater degrees of fearfulness. The researchers measured how early in the progression the participants could detect fear in the photos.
“As expected anxious participants needed significantly less stimulus fear intensity for conscious fear detection,” the researchers discovered. However, only nonanxious participants began exhibiting early signs of fear detection before consciously recognizing fear in the photograph. The March 2012 issue of Scientific American Mind clarified:
“The brains of anxious subjects barely responded to the images until the frightened face had reached a certain obvious threshold, at which point their brains leapt into action as though caught off guard. Meanwhile nonanxious respondents showed increasing brain activity earlier in the exercise, which built up subtly with each increasingly fearful face.”
The researchers concluded that anxious people might lack the ability to detect threats in a granular manner and “therefore might face threats with no prior warning signal—further contributing to their already heightened anxiety level and perhaps associated with their enhanced baseline threat vigilance.”
Early Detection of Threats in Information Security
If there is a stereotype of an information security professional, it is sure to include anxious characteristics, such as concern about threats, distrust and perhaps a degree of paranoia. These traits help us recognize the signs of danger, build defenses in anticipation of risks and respond when the defenses fail.
Yet, those professionals who are calm and nonanxious might be better at spotting early warning signs of an intrusion before it escalates into a major breach. This skill is similar to the ability to detect the subtle signs of fear when looking at a photograph of a face. If this is true, then I wonder whether such individuals trust their instincts and have the time to begin investigating the potential problem early enough.
If this is interesting to you, see my earlier post Are Mistrustful Individuals Better at Information Security?
Information security professionals are often frustrated when their concerns regarding vulnerabilities and associated threats appear to be ignored by the company’s executives. I already discussed 6 reasons why business managers ignore IT security risk recommendations. I’d like to add a few more to the list, based on recent research into the links between power, prestige and decision-making.
High-Status Individuals Are More Trusting
In one study, Lount and Pettit researched how a person’s social status might influence the extent to which that person trusts others. In one of their experiments, “participants were primed to experience either high or low status and then given the opportunity to send money in a trust game.” In this context, high status might be associated with the prestige of being a business executive, while the other extreme, low status, might be associated with an entry-level mail room clerk.
The participants who were assigned a high status were more trusting when sending money, hoping that the recipient would return the funds. Low-status individuals were more cautious. The researchers concluded from this and related experiments that “having status alters how we perceive others’ intentions,” leading us to believe “that others have positive intentions toward us.” They also pointed out that:
“The possession of status can fundamentally alter our expectations of peoples’ motives toward us, and in turn, influence our initial trust in others.”
People with prestigious positions, such as executive managers, might be more trusting of others and, therefore, might be willing to accept more risks.
Power Leads to Overconfidence
In another study, Fast, Sivanathan, Mayer and Galinsky explored the links between an individual’s perception of power and self-confidence. Their research found that people who believed themselves to be powerful experienced more certainty in the accuracy of their beliefs and opinions. They confirmed that “power increases overconfidence in the accuracy of one’s thoughts and beliefs.” This matters in organizations because many “high-impact decisions are based on perceived precision of relevant knowledge.”
The effect of this phenomenon is magnified because not only does the subjective sense of power cause people to become overconfident in their knowledge, but also “overconfident people tend to acquire roles that afford power.”
Prestige, Power and Decisions About Risk
My perspective on these findings through the lens of information security and related risks is as follows:
So, there you have it: a few more reasons why executives are more prone to accept risks, in addition to the 6 explanations I offered earlier. You might also like to know that choice fatigue contributes to the willingness to accept risks and that sleep deprivation contributes to risk-taking behavior. We just cannot help it—it’s in our nature.
When information security controls consistently protect networks, systems or applications, there is the risk that the defenses they provide might be taken for granted. The executives might wonder, “We haven’t had any breaches recently. Why do we need a CISO?” Home users might muse, “Why do I need an antivirus tool if I’ve been malware-free for months?”
How might we preclude information security successes from leading organizations and individuals towards complacency? How can we make sure that the safeguards continue to be valued by their beneficiaries?
One way to mitigate the risk that the security measures will be taken for granted is to collect meaningful metrics that show that the safeguards are active and provide value. Determining what metrics to collect and how to do it is hard, and a good way to start learning about this topic might be Andrew Jaquith’s iconic book Security Metrics: Replacing Fear, Uncertainty, and Doubt.
The makers of computer security products also need to remind users that the product is active and providing value. The challenge is to do this without annoying the user with frequent and irrelevant prompts. Some of the tools I’ve seen accomplish this by presenting the user with periodic activity reports, summarizing the number of blocked intrusion or infection attempts. Though the numbers in such reports are rarely meaningful to people, they act as reminders that the system is being protected.
Both metrics and activity reports are an opportunity not only to show that safeguards are in place, but also to help the audience judge whether the controls need tuning. This might also be a chance to educate the audience on how to improve security posture even further.
For instance, an antivirus tool might report on the number of times the user clicked links embedded in email messages and explain the risks of this behavior. This would allow the product to not only remind the user that the protection is active, but also provide additional value through education. A CISO might show an increased percentage in security patch coverage in a given department, using it as an opportunity to gain support for expanding the program to other groups.
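As a minimal sketch of the patch-coverage metric mentioned above (the inventory data and field names here are hypothetical, invented purely for illustration), a CISO’s report might boil down to a per-department percentage like this:

```python
# Hypothetical host inventory; in practice this would come from an
# asset-management or vulnerability-scanning system.
hosts = [
    {"dept": "finance", "patched": True},
    {"dept": "finance", "patched": True},
    {"dept": "finance", "patched": False},
    {"dept": "sales", "patched": True},
    {"dept": "sales", "patched": False},
]

def patch_coverage(hosts):
    """Return the percentage of patched hosts per department."""
    totals, patched = {}, {}
    for h in hosts:
        totals[h["dept"]] = totals.get(h["dept"], 0) + 1
        if h["patched"]:
            patched[h["dept"]] = patched.get(h["dept"], 0) + 1
    return {d: round(100 * patched.get(d, 0) / totals[d], 1) for d in totals}

print(patch_coverage(hosts))  # e.g., {'finance': 66.7, 'sales': 50.0}
```

Tracking this number over time, rather than reporting it once, is what turns it into a reminder that the safeguard is active and improving.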
Information security can fail in many ways, despite the best plans and intentions. That’s why we must design the security program with failure in mind. You should be able to detect suspicious activities before they escalate into major incidents. And when the data breach does occur, the security architecture should limit the incident’s scope, dampening the spread of malware or the attacker’s influence to give you time to respond.
We can learn how to account for the eventual failure of controls from the more established industries, such as boat-building. Consider the following excerpt from a stability certificate issued by the U.S. Coast Guard Marine Safety Center to a passenger boat:
The boat was tested to stay afloat even when one of its compartments is flooded. The authorities expect the vessel to be designed with failure in mind, recognizing that even when it is built and operated perfectly, an unexpected event may cause a hull breach.
Designing an information security architecture with failure in mind involves accounting for possible issues at network, system and application levels. For instance, accomplishing this at the network level might involve deploying multiple firewalls to segment the environment into several tiers according to risk.
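To make the tiered-segmentation idea concrete, here is a simplified sketch; the tier names and the allowed-flow table are illustrative assumptions, not a prescribed design. The point is that traffic must be explicitly permitted between adjacent tiers, so a compromise in one tier doesn’t grant free movement everywhere:

```python
# Illustrative model of a network segmented into risk tiers.
# Only explicitly allowed tier-to-tier flows are permitted.
ALLOWED_FLOWS = {
    ("internet", "dmz"),          # inbound traffic terminates in the DMZ
    ("dmz", "application"),       # DMZ servers talk to the app tier
    ("application", "database"),  # only the app tier reaches the database
}

def is_allowed(src_tier, dst_tier):
    """Check whether traffic from src_tier to dst_tier is permitted."""
    return (src_tier, dst_tier) in ALLOWED_FLOWS

print(is_allowed("internet", "dmz"))       # True
print(is_allowed("internet", "database"))  # False: no direct path exists
```

Even if the DMZ falls to an attacker, the database tier remains reachable only through the application tier, buying defenders time to detect and respond.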
In the words of Denis Waitley, “expect the best, plan for the worst, and prepare to be surprised.”
“A ritual is a set of actions, performed mainly for their symbolic value,” according to Wikipedia. But rituals are more than that, because of the role they play in society and the attention to detail required to perform them. Ritualistic behavior brings a sense of control to otherwise stressful situations. In many ways, information security practices are also rituals that make us feel in control, though without always addressing the risks.
A paper by Boyer and Lienard examined ritual behavior in obsessive and normal individuals. They pointed out that many of the rituals in which people engage dictate specific rules, combinations of action and compulsion. They also explained that:
“The thoughts that prompt rituals revolve around a limited number of themes, such as contagion and contamination, aggression, and safety from intrusion… Ritualized behaviors also include many recurrent themes, such as washing, cleansing, ordering and securing one’s environments, or avoiding particular places.”
Concerns related to information security seem to fit well into these themes, as would the actions we take to ameliorate the situation.
In a paper on behavioral practices associated with threat detection, Eilam, Izhar and Mort suggested that ritual-like behavior is a salient characteristic of precaution in humans. Following explicit instructions during a ritual “confers a sense of controllability and predictability.” Furthermore,
“Since uncontrollability and unpredictability are major stressors, a repeated and precise performance of the same acts can generate a sense of controllability and a consequent reduction in fear from the abstract threat.”
The information security rituals that infosec practitioners follow typically take the form of best practices, which are also codified as frameworks and standards. Though some of these practices are based on experience, metrics and data, many are collections of steps that we follow out of habit. Like most rituals, following these practices demands painstaking attention to details, rules and actions. Doing so makes us feel in control.
Rituals seem to reduce stress because, according to Boyer and Lienard, compulsory action sequences overload working memory. This “might make it more difficult for intrusive thoughts to become conscious.” In the context of information security, our rituals might relieve stress and offer an illusion of control without actually addressing the risks.
— Lenny Zeltser
Some information security professionals have expressed concerns regarding the state of the infosec industry, suggesting that many of our practices don’t work, that our interactions resemble speaking into an echo chamber, and that we spend too much time philosophizing about pointless topics. One way to make sense of what’s going on is to consider whether the industry’s dynamics resemble those of a herd—we might find that not all is bad, though there is room for worry.
A Benefit of a Herd: Increased Vigilance
In a paper on behavioral practices associated with threat detection, Eilam, Izhar and Mort highlight a survival benefit to animals that live as a herd or flock: “the larger the group, the lower the level of individual vigilance and the greater the sum of collective vigilance.” Individual animals can spend more time eating rather than watching out for predators, even though the collective as a whole is safer.
In fact, not all members of the herd are equally attentive, which is one of the advantages of this social grouping. The researchers clarified that the individuals at the perimeter of herds are more vigilant than those in the center. As a result, “higher vigilance by certain individuals enables other individuals to reduce their vigilance” among social animals.
Dare I draw a parallel to participants of the information security industry? Through on-line and in-person interactions we exhibit social herd-like characteristics. Through work, research and writing, some of us pay closer attention to threats, vulnerabilities and risks: this allows others to remain less vigilant without compromising the security of the collective. This seems like a good thing, even if some of the individuals discuss topics without immediate practical applicability.
A Downside of a Herd: Anxiety is Contagious
Eilam, Izhar and Mort point out that there is a characteristic of herds that might counterbalance the benefit of increased collective vigilance: Their research showed that vigilance, and thus anxiety, among social animals is contagious:
“Being among a group of vigilant, watchful and worried conspecifics might exert a contagious effect and, in consequence, other individuals may also become vigilant, watchful and worried.”
This is why the “echo chamber” syndrome is undesirable: By rehashing the same topics among the same groups of individuals, we are infecting each other with anxiety that might be disproportionate to the actual risks. This seems like a bad thing. To address this issue, we should probably:
Being part of the herd has its benefits, though it’s not foolproof. May we graze long and prosper.
There are differences between anxiety and fear in the way people experience and react to these conditions. We might find that addressing the fear of information security events is more practical than handling anxiety. It’s worth considering how we might shift people from the state of anxiety to that of fear, so we can ultimately address their concerns.
Anxiety vs. Fear
Lang, Davis and Ohman examined physiological effects of fear and anxiety on humans, finding a number of distinctions. The authors also provided an explanation of how fear is different from anxiety:
“Fear is generally held to be a reaction to an explicit threatening stimulus, with escape or avoidance the outcome of increased cue proximity. Anxiety is usually considered a more general state of distress, more long lasting, prompted by less explicit or more generalized cues.”
In the context of information security, experiencing fear implies being concerned with a specific event, such as a particular threat agent compromising a specific set of data. We have a way of dealing with such concerns through threat modeling, which involves a structured approach to identifying and rating threats.
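One structured way to rate identified threats is to score each one against a fixed set of factors; the sketch below uses the DREAD factors (Damage, Reproducibility, Exploitability, Affected users, Discoverability), though the specific threats and scores are invented for illustration and any rating scheme would do:

```python
# Hypothetical DREAD-style threat ratings; each factor is scored 1-10.
# The threats and numbers below are made up for illustration.
threats = {
    "SQL injection in login form": {
        "damage": 9, "reproducibility": 8, "exploitability": 7,
        "affected_users": 9, "discoverability": 6,
    },
    "Stolen laptop with cached data": {
        "damage": 6, "reproducibility": 3, "exploitability": 4,
        "affected_users": 4, "discoverability": 5,
    },
}

def rate(scores):
    """Average the factor scores into a single risk rating."""
    return sum(scores.values()) / len(scores)

# Rank threats so the most severe concerns are addressed first.
for name, scores in sorted(threats.items(), key=lambda t: rate(t[1]), reverse=True):
    print(f"{rate(scores):.1f}  {name}")
```

The value of the exercise is less in the exact numbers than in converting a vague worry into a ranked list of specific threats that can each be mitigated or accepted.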
Dealing with anxiety related to information security is much harder. Anxiety is a reaction to abstract concerns that are harder to identify and, as a result, harder to address. For instance, a person might be anxious about the company experiencing a data breach.
Handling Anxiety is Harder than Dealing with Fear
Eilam, Izhar and Mort published a paper on behavioral practices associated with threat detection. They wrote:
“An animal that is anxious about the possibility of a nearby predator, might then come face to face with a predator, which will convert the anxiety into a real and perceptible danger that produces a fear response. Alternatively, the anxious animal might not encounter a predator, and the question is then one of when it will calm down and become less anxious. Relief from a state of anxiety is subjective and thus varies among individuals.”
Marketing and persuasion practices in information security often incorporate the principles of Fear, Uncertainty and Doubt (FUD). FUD isn’t always bad, as Dave Shackleford suggested. The problem is that much of FUD is designed to induce anxiety, rather than fear. There are several categories of FUD tactics, as Mike Rothman outlined; some of them can incorporate fear and thus persuade people to pay attention to relevant information security threats.
Focus on Fear, Rather Than Anxiety
By focusing on specific threats tied to actual events and meaningful risks, we can focus people’s attention on specific “predators.” We can then consider how to alleviate the associated fears through risk management. Breach and threat reports put out by security vendors (e.g., Verizon Data Breach Investigations Report) make this possible. In contrast, if we focus on inducing the state of anxiety, we might succeed at selling some security products, but we won’t actually address risks in a meaningful manner. It’s possible to react to fear. It’s hard to leave the state of anxiety.