Cognitive Biases: What They Are and How They Affect You


Cognitive biases are systematic flaws in the way people process information and make decisions. These biases affect our thought process in every area of life, from how we form our memories, to how we shape our beliefs, and to how we connect with other people.

Because cognitive biases have such a pervasive influence, it’s important to understand them. In the following article, you will learn more about cognitive biases, understand why they affect us, see what types of them exist, and find out what you can do in order to mitigate them successfully.

 

What are cognitive biases

A cognitive bias is a systematic pattern of deviation from rationality, which occurs due to the way that our cognitive system works. Accordingly, cognitive biases cause us to be irrational in the way we search for, evaluate, interpret, judge, use, and remember information.

The patterns of irrationality that are caused by cognitive biases vary in terms of how they affect us and in terms of their scope. Accordingly, cognitive biases can lead to minor issues, such as forgetting a small detail from a past event, or they can lead to more serious issues, such as choosing to reject life-saving treatment due to a misguided belief in a pseudoscientific course of treatment.

 

Examples of cognitive biases

One example of a well-known cognitive bias is the halo effect, which is a cognitive bias that causes our impression of someone in one area to influence our opinion of that person in other areas. This means that, for example, if we think that someone is physically attractive, we tend to believe that they are also more knowledgeable and have a better personality compared to what we would think if they were unattractive.

Another example of a cognitive bias is the illusion of control, which is a cognitive bias that causes us to overestimate our ability to control the outcome of situations where chance plays a greater role than skill. This means that, for example, this bias could cause us to undertake a risky business venture, even if that venture is likely to fail regardless of how much effort we put into it.

Overall, cognitive biases can affect us in many areas of life, from how we interpret new information, to how we remember past events, and to how we perceive other people. The two biases listed here represent two simple examples of cognitive biases, and throughout the article, you will see examples of various other biases that affect us.

 

Who experiences cognitive biases

Cognitive biases are generally viewed as a shared universal trait that all humans experience. This is because biases occur due to the way that our basic cognitive system works, meaning that anyone can experience cognitive biases, including entrepreneurs, medical doctors, and even psychologists.

Furthermore, cognitive biases have been observed not only in humans but also in animals, such as bees, pigs, and dogs. However, the type of biases that these animals experience varies significantly across species, due to the innate differences in cognitive functions.

It’s important to note that various background factors, such as age and personality type, can make people more predisposed to certain cognitive biases. However, the relationship between these background factors and the occurrence of cognitive biases is highly complex, and every person experiences some cognitive biases, to some degree.

 

Why we have cognitive biases: bounded rationality and our cognitive systems

Bounded rationality is the idea that our decision-making ability is constrained by the limitations of our cognitive systems, and depends on the type of information that we have to process as well as on the amount of time that we have in order to process it. This means that when we try to solve a problem, we often end up reaching a solution that is different from the one we would reach if our cognitive systems were perfect, or from the one that we would reach if we had more time to consider the problem.

Under this framework, people are seen as boundedly rational and as satisficing. This means that we make decisions that are not necessarily optimal, since instead of looking for the best solution that is available for a certain problem, we generally tend to look for a solution that is perceived as good enough given the circumstances.

A common way in which we do this is by using heuristics, which are mental shortcuts that help us quickly reach a ‘good enough’ solution to our problems. For example, if we are asked how many apples there are in a certain basket, then instead of counting each apple individually, we might decide to estimate this amount by counting the number of apples in one section of the basket, and then multiplying that amount by the number of sections.
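As a rough illustration of this heuristic, the trade-off between speed and accuracy can be sketched in a few lines of code (the basket and its numbers here are entirely hypothetical):

```python
import random

random.seed(42)  # fixed seed so the hypothetical basket is repeatable

# A basket divided into 4 sections, each holding somewhere between 10 and 14 apples.
section_counts = [random.randint(10, 14) for _ in range(4)]

exact = sum(section_counts)                         # count every apple: slow but exact
estimate = section_counts[0] * len(section_counts)  # heuristic: count one section, multiply

print(exact, estimate)  # the estimate lands near, but not necessarily on, the exact count
```

The heuristic inspects only one section instead of the whole basket, which is why it is fast, and also why it can miss the true count when the sections differ in size.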

Overall, this means that cognitive biases occur when we select a sub-optimal solution to a problem, due to the way that our cognitive systems work. There are two main cognitive systems at play here:

  • System 1- this cognitive system is responsible for our intuitive processing. It is fast, automatic, and effortless. Accordingly, processes in this system run in parallel, meaning that it’s possible to engage this system on multiple fronts simultaneously. This system can be strongly influenced by emotional considerations. An example of a situation where this system is engaged is when we feel pleased because someone laughed at a joke that we told.
  • System 2- this cognitive system is responsible for our conscious reasoning. It is slow, controlled, and effortful. Accordingly, processes in this system run in a serial way, meaning that this system can only focus on one thing at a time. This system is emotionally neutral, and therefore is not influenced by emotional considerations. An example of a situation where this system is engaged is when we try to solve a complex math problem in our head.

Based on this, one way in which the use of our cognitive systems can cause us to experience a cognitive bias is when we rely on our intuition (System 1) in order to make a decision that normally requires complex reasoning skills (System 2).

For example, in one experiment, students at Princeton University were asked a simple brain teaser:

“A bat and a ball cost $1.10 in total. The bat costs $1 more than the ball. How much does the ball cost?”

Almost anyone who hears this question feels an initial tendency to answer “10 cents”, because the total sum of $1.10 separates naturally into $1 and 10 cents, so that the answer “10 cents” sounds about right in terms of magnitude.

The problem is that over half the people in the experiment ended up sticking with this initial estimate, leading them to answer this question incorrectly, since the right answer is that the bat costs $1.05, while the ball costs $0.05.
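The arithmetic behind the teaser is easy to verify once written out; here is a brief sketch (worked in cents so the numbers stay exact):

```python
# Bat-and-ball teaser, in cents: bat + ball = 110, and bat = ball + 100.
TOTAL = 110  # bat + ball cost $1.10
DIFF = 100   # the bat costs $1.00 more than the ball

# Intuitive (System 1) guess: the ball costs 10 cents.
guess = 10
assert guess + (guess + DIFF) == 120  # totals $1.20, not $1.10 -- the guess fails

# Deliberate (System 2) solution: solve ball + (ball + DIFF) = TOTAL.
ball = (TOTAL - DIFF) // 2  # 5 cents
bat = ball + DIFF           # 105 cents
assert ball + bat == TOTAL
print(ball, bat)  # 5 105
```

Checking the intuitive guess against the stated total is exactly the monitoring step that many people skip.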

Essentially, what happens here is that most people make an intuitive assessment of the situation using System 1, which gives them a close but incorrect answer to the question.

Some people then use System 2 in order to reassess this initial solution, which allows them to realize that it’s incorrect, and which consequently leads them to calculate the correct solution to this question. However, other people fail to engage System 2 properly or at all, which means that they end up relying on the incorrect initial assessment that they got from System 1.

Of course, this does not mean that the intuitive assessments provided by System 1 are inherently faulty. Rather, our intuition can be a powerful and accurate tool, and there are many situations where our intuition provides us with the right solution to problems that we encounter. Instead, the issue here is that our intuitive system (System 1) usually requires monitoring by our conscious reasoning system (System 2), in order to identify and correct any errors that occur due to faulty intuitions.

Overall, based on this framework, cognitive biases occur primarily due to a failure in one of three cognitive mechanisms:

  • Failure of System 1 to generate a correct intuitive impression. This means that System 1 gives us a quick but incorrect solution to our problem.
  • Failure of System 2 to monitor impressions generated by System 1. This means that System 2 fails to notice and correct faulty impressions which are generated by System 1.
  • Failure of System 2 to reason correctly. This means that System 2 gives us an incorrect solution to our problem, due to a failure of our conscious reasoning process.

Note that which system is engaged in which task depends on a person’s skill level, and on the circumstances under which they perform the task in question.

For example, skilled drivers are generally able to talk while driving, which indicates that for them, driving is intuitive, since it’s difficult to perform two System 2 processes simultaneously, as we saw above. However, even the most experienced drivers will generally struggle to talk if they’re driving in rough conditions, which require their full attention.

 

Types of cognitive biases

There are two main criteria that you can use in order to categorize the different types of cognitive biases:

  • Area of cognition- different biases affect different domains of our thinking. For example, some biases affect the way we recall information, while other biases affect our perception of ourselves, and yet other biases affect our decision-making process.
  • Cause of bias- different biases occur due to different reasons. For example, some biases occur due to our brain’s limited information-storing capacity, while other biases occur due to our attempts to feel good about our decisions, and yet other biases occur due to our susceptibility to social pressure.

Note that earlier, we saw that cognitive biases occur due to the failure of System 1 and System 2 to function properly. However, when it comes to categorizing cognitive biases, ‘cause of bias’ more frequently refers to the overall mechanism underlying the bias. This is because this type of categorization more accurately captures the nature of the different cognitive biases, and because people vary in terms of which System they use to run which cognitive process.

Next, we will see examples of the different types of cognitive biases, based on the area of cognition where the bias occurs, and based on the cause of the bias.

 

Area of cognition

Different biases affect different areas of our cognition. Based on this criterion, some of the most common types of biases include:

  • Information biases- these are biases that affect the way in which we acquire and process information. For example, the overkill effect is a cognitive bias that causes people to reject explanations that they perceive as too complex. This occurs because people generally prefer information that is easy for them to process from a cognitive perspective.
  • Belief biases- these are biases that affect the way in which we form our beliefs. For example, the backfire effect is a cognitive bias that causes people to strengthen their support of their preexisting beliefs, when they encounter evidence that shows that those beliefs are wrong. This occurs because when people argue strongly against unwelcome information, they end up with more arguments that support their original stance.
  • Decision making biases- these are biases that affect the way we make decisions. For example, the bandwagon effect is a cognitive bias that causes people to do something because they believe that other people are doing the same thing. This occurs because people feel a need to conform and act in accordance with others, and because people often rely on other people’s judgment when deciding how to act.
  • Calculation biases- these are biases that affect the way in which we calculate things such as probabilities or values. For example, the gambler’s fallacy is a cognitive bias that causes people to mistakenly believe that if something happens more frequently than normal during a given time period, then it will happen less frequently in the future. This occurs because people believe that a short sequence of random independent events should be representative of a longer sequence of such events.
  • Memory biases- these are biases that affect the way our memory works. For example, rosy retrospection is a cognitive bias that causes people to remember past events as being more positive than they were in reality. This occurs because when people experience a certain event, they tend to have both positive and negative thoughts, but as time passes they are more likely to forget their negative thoughts than their positive ones.
  • Social biases- these are biases that affect our social perception and behavior. For example, the spotlight effect is a cognitive bias that causes people to think that they are being observed and noticed by others more than they actually are. This occurs because people naturally see everything from their own point of view, so they struggle to accurately judge how they look through other people’s eyes.
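The independence claim behind the gambler’s fallacy can be checked with a quick simulation (a hypothetical sketch; the streak length and number of flips are arbitrary choices):

```python
import random

random.seed(1)

# Simulate a fair coin, then measure the chance of heads right after a run of 3 heads.
flips = [random.random() < 0.5 for _ in range(100_000)]

after_streak = [flips[i] for i in range(3, len(flips))
                if flips[i - 3] and flips[i - 2] and flips[i - 1]]

p = sum(after_streak) / len(after_streak)
print(round(p, 2))  # stays near 0.5: a streak of heads doesn't make tails "due"
```

Each flip is independent, so conditioning on a preceding streak leaves the probability unchanged, which is precisely what the gambler’s fallacy gets wrong.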

Note that many of these areas of cognition are interrelated, meaning that some biases can affect our cognition in several domains. For example, the bandwagon effect, which as we saw above causes people to conform to the attitudes of others, affects not only the way we make decisions, but also the way in which we form our beliefs, and the way in which we shape our social behavior.

Another bias that can affect several areas of our cognition is the confirmation bias, which is a cognitive bias that causes people to search for, favor, interpret, and recall information in a way that confirms their preexisting beliefs. This bias can, for example, affect the way we acquire new information, as well as the way we remember old information, and the way we evaluate different choices.

 

Cause of bias

Biases can occur due to various reasons, with the most common causes being:

  • Heuristics- these are biases that are caused by our use of mental shortcuts. For example, the representativeness heuristic is a heuristic that causes people to focus on how representative someone or something is of a particular class, when assessing the likelihood that they belong to that class. This occurs because focusing on representativeness allows people to quickly assess the situation and reach a roughly correct conclusion in many cases, without them having to deal with complex background information.
  • Limited cognitive capacity- these are biases that are caused by our limited cognitive capacity. For example, the Google effect is a cognitive bias that causes people to forget information that they believe can be easily found online. This occurs because people sometimes prefer to rely on their ability to remember how to find information, rather than on their ability to remember the information itself.
  • Noisy information processing- these are biases that are caused by the effects of various background factors on the way we process information. For example, the humor effect is a cognitive bias that causes people to remember information better when that information is perceived as humorous. This occurs because humorous information benefits from increased attention and improved encoding, compared to non-humorous information.
  • Emotional motivation- these are biases that are caused by various emotional considerations. For example, the Ben Franklin effect is a cognitive bias that causes people to like someone more after they do that person a favor. This occurs because people want to avoid cognitive dissonance, which could arise as a result of behaving in a favorable way towards someone that they either dislike or don’t like enough.
  • Social influence- these are biases that are caused by the influence of various social factors. For example, the outgroup homogeneity bias is a cognitive bias that causes people to view members of outside groups as being more similar to each other than members of groups that they are a part of. This occurs because people tend to allocate more of their attention to members of their own group, since interactions with those people are generally perceived as more important.

Note that some biases can occur due to a combination of reasons. For example, many heuristics-based biases occur due to our limited cognitive capacity, and many biases that have an emotional motivation are also affected by social factors.

 

Hot vs. cold biases

Another criterion which is sometimes used in order to categorize cognitive biases is the distinction between hot biases and cold biases:

  • Hot biases are biases which are motivated by emotional considerations, such as our desire to have a positive self-image, or our need to feel that we made a choice that is valid from a moral perspective. For example, the self-serving bias is a cognitive bias that causes people to ascribe their successes to their own efforts and abilities, while at the same time ascribing their failures to external causes. This bias occurs due to people’s need to enhance their self-esteem, and is strongly influenced by various emotional considerations.
  • Cold biases are biases which occur due to emotionally-neutral processes, such as our intent to make an optimal choice, or our intent to make a decision quickly. For example, the telescoping effect is a cognitive bias that causes people to perceive past events as being more recent than they are, and recent events as being more remote. This bias occurs due to the way our memory works, and isn’t generally affected by any emotional considerations.

As with the other criteria that are used to categorize cognitive biases, the hot/cold distinction can sometimes be difficult to apply, particularly for biases that are affected by emotional considerations only to a small degree. Nevertheless, the hot/cold distinction can be valuable in many cases, and can be used in conjunction with other criteria in order to categorize cognitive biases and understand why they occur.

 

Cognitive debiasing

The problem with cognitive biases and the importance of debiasing

As we saw so far, cognitive biases can be problematic, because they distort our thinking and cause us to make sub-optimal decisions.

For example, the ostrich effect is a cognitive bias that causes people to avoid situations where they might encounter information that they don’t want to deal with. This is an issue, since this bias can cause us to avoid acquiring important information, such as information on how to deal with a medical condition that we have, or information on the performance of one of our investments. Being able to mitigate the ostrich effect is therefore beneficial, since it could prompt you to acquire useful information that you would otherwise avoid.

Accordingly, learning how to debias yourself successfully can be beneficial in many situations, by helping you think more clearly, and by helping you make better decisions.

Furthermore, debiasing techniques can also be beneficial when you apply them to other people, since they can help you communicate more effectively, and encourage others to be more rational.

For example, by knowing how to mitigate the backfire effect, which as we saw above causes people to strengthen their support of their original stance in the face of evidence that they are wrong, you can improve your ability to persuade people when presenting them with evidence that they don’t want to hear.

 

How to debias

As we saw earlier, cognitive biases occur as a result of three possible issues with our cognitive systems:

  • Failure of System 1 (our intuitive system) to generate correct intuitions.
  • Failure of System 2 (our reasoning system) to monitor and correct System 1.
  • Failure of System 2 to carry out a proper reasoning process.

Based on this, there are three main things that you can do in order to mitigate cognitive biases:

  • Help System 1 generate better intuitions. To do this, you need to train System 1 to instinctively reach optimal solutions. You can accomplish this by practicing the formation of intuitive impressions, while providing this System with feedback which helps it learn how to improve.
  • Help System 2 monitor System 1 better. To do this, you need to help System 2 identify cases where System 1 leads to sub-optimal solutions, so that System 2 can handle them properly. You can accomplish this by implementing relevant metacognitive strategies, which prompt System 2 to monitor System 1 better.
  • Help System 2 form better judgments. To do this, you need to improve your conscious reasoning process, in order to ensure that System 2 reaches optimal solutions. You can accomplish this by implementing relevant metacognitive strategies, which promote a more effective reasoning process.

Note that in this context, metacognitive strategies are strategies that you can apply in order to regulate your cognition.

Different metacognitive strategies will be applicable in different scenarios. These strategies can be fairly simple and universal, such as increasing your awareness of the bias in question, or they can be more specific, such as creating psychological distance in order to mitigate the egocentric bias, which is the tendency to anchor other people’s viewpoint to your own.

With enough practice, the application of metacognitive strategies can become intuitive. However, their application can be effective even at an early stage, where you have to consciously remind yourself to use them. This stands in contrast with training System 1 to form better intuitions, which generally requires a significant amount of practice in order to reach a meaningful improvement in performance.

Overall, this section describes the basic idea behind cognitive debiasing. Since the concept of debiasing warrants a comprehensive discussion by itself, the current article doesn’t expand on it further. If you want to learn more about the debiasing process and about how to debias effectively, read this in-depth guide on the topic.

 

A few notes on cognitive biases and our cognitive systems

Cognitive biases aren’t necessarily bad

So far, we saw that cognitive biases can cause a variety of issues. However, while cognitive biases cause us to think in an irrational way, it is incorrect to say that cognitive biases always affect us in a negative manner. Rather, cognitive biases can sometimes influence our thought process in a positive way, that helps us make optimal decisions.

For example, the pessimism bias is a cognitive bias that causes people to overestimate the likelihood that negative things will happen to them. This bias can have a negative impact in some cases, such as when it causes people to avoid trying to cope with a difficult situation, by leading them to assume that they will inevitably fail, regardless of their efforts.

However, the pessimism bias can also serve as an adaptive coping mechanism in some cases, such as when it is used as a defensive strategy that encourages people to think through risky future situations. In such cases, this cognitive bias prompts people to prepare for the future, by encouraging them to think about all the possible obstacles that they might encounter, as well as of ways to overcome those obstacles.

Furthermore, many biases are seen as beneficial heuristics, which, as we saw earlier, are mental shortcuts that involve the use of efficient rules in order to simplify complex problems and make decisions quickly, at the potential cost of missing the best possible solution to the problem.

Accordingly, from an evolutionary perspective cognitive biases are sometimes seen as design features rather than design flaws, meaning that they are viewed as adaptive behavior that can be beneficial in many cases.

Overall, the important thing to remember is that cognitive biases can sometimes aid your thought process, and enable you to make decisions in an optimal way, even if they distort your view of the situation. At the same time, however, it’s still important to be aware of them, so that you can evaluate their effect, and determine whether or not you will benefit from reducing their influence.

 

The role of perception

Our perception system, which is responsible for our ability to perceive input, is considered to be distinct from our two other cognitive systems (System 1 and System 2).

There are some similarities between our perception system and System 1, since they both consist primarily of automatic subconscious processes that run in parallel to each other. However, our perception system is different from System 1 in some ways, due to the fact that it’s:

  • Relatively neutral, as opposed to System 1, which is influenced by various emotional and social considerations.
  • Evoked by direct stimulus only, as opposed to System 1, which can be evoked by things such as language or thought.
  • Limited to perceptual representations, as opposed to System 1, which can generate abstract representations.
  • Limited to stimuli that we encounter in the present, as opposed to System 1, which can also deal with information that relates to the past or the future.

Essentially, our perceptual system provides us with information that is fed into System 1, in order to form intuitive impressions. Then, these impressions undergo deliberate operations of reasoning by System 2, in order to form conscious judgments.

 

The history of cognitive biases

Cognitive biases have affected humans and other animals from an early stage of our development, and scientists have long known that people sometimes think in an irrational way.

However, the idea that irrational decisions occur as a result of systematic biases in our cognitive systems was made prominent in the early 1970s by two researchers, Amos Tversky and Daniel Kahneman, in a series of papers on the topic.

The earlier papers in the series served primarily to introduce the concept of heuristics under the framework described by Tversky and Kahneman, while their 1974 paper became the best-known article on the topic, and introduced cognitive biases as we know them today.

Kahneman later went on to win the 2002 Nobel Prize in Economics for his joint work with Tversky, who died in 1996. (Note: the Nobel Prize is not awarded posthumously).

 

The difference between cognitive biases and logical fallacies

While cognitive biases and logical fallacies are similar, they represent two distinctly different things:

  • Cognitive biases are systematic errors in cognitive processing (a psychological concept).
  • Logical fallacies are flawed patterns of argumentation (a philosophical concept).

This means that cognitive biases occur at a more basic level of thinking, and can therefore lead to the use of various logical fallacies.

For example, consider the appeal to nature, which is a logical fallacy where something is assumed to either be good because it’s considered ‘natural’, or bad because it’s considered ‘unnatural’. This fallacy could potentially be rooted in some cognitive bias that causes people to instinctively prefer things that they perceive as ‘natural’, and to reject things that they perceive as ‘unnatural’.

However, this doesn’t mean that the use of a logical fallacy is always rooted in some cognitive bias.

For example, consider the strawman fallacy, which is a logical fallacy where someone presents a distorted version of their opponent’s argument, in order to make it easier for them to attack. While someone might use a strawman argument because they misunderstand their opponent’s argument due to some cognitive bias, it’s also entirely possible to use such arguments consciously, while being aware that you are doing so, and without being driven to it by any bias.

Overall, the main distinction between cognitive biases and logical fallacies is that biases are a psychological concept, while fallacies are a philosophical concept. Biases can sometimes prompt the use of certain fallacies, but the two aren’t always related, and the use of fallacies can occur without a cognitive bias being involved.

 

How to learn more about cognitive biases

If you are interested in the topic of cognitive biases, and would like to learn more about them and about the way people think and make decisions, here are a few recommended books that you should look at:

  • Thinking, Fast and Slow (by Daniel Kahneman)- this is the foremost book to read if you want to understand how our cognitive systems work and why we have cognitive biases.
  • Predictably Irrational (by Dan Ariely)- this book will help you understand the systematic patterns of irrationality that people display when they make decisions.
  • The Art of Thinking Clearly (by Rolf Dobelli)- this book will help you learn about common biases that we encounter in our everyday life.

 

Summary and conclusions

  • Cognitive biases are systematic patterns of deviation from rationality, that cause us to be irrational in the way that we search for, evaluate, interpret, judge, use, and remember information.
  • We experience cognitive biases when we fail to correct faulty intuitive impressions, or when we fail to conduct a valid reasoning process. Both these issues occur as a result of the way that our cognitive systems work.
  • Cognitive biases can be categorized based on the area of cognition in which they affect us (e.g. decision-making or memory), and based on their cause (e.g. social influence or limited cognitive capacity). Biases can also be categorized based on a hot/cold distinction, with hot biases being motivated by emotional considerations and cold biases being driven by emotionally-neutral processes.
  • Cognitive biases can affect us negatively in many areas of life, when they cause us to find sub-optimal solutions to our problems. However, biases can sometimes be beneficial, such as when they help us find quick solutions to our problems.
  • It’s possible to debias yourself and others successfully through the use of metacognitive strategies, which can help you conduct a valid reasoning process. Furthermore, you can reduce the number of biases that you experience by training yourself to form better intuitive impressions.

 

Overall, the topic of cognitive biases is fascinating, and there is a lot that you can learn about it. Accordingly, if you want to read more about cognitive biases, three recommended books are “Thinking, Fast and Slow”, “Predictably Irrational”, and “The Art of Thinking Clearly”.

 


Hanlon’s Razor: Why You Shouldn’t Start By Assuming the Worst


Hanlon’s razor is the adage that you should “never attribute to malice that which is adequately explained by stupidity.” Essentially, this means that when someone does something that affects you in a negative way, you should avoid assuming that they acted out of an intentional desire to cause harm, as long as there is a different plausible explanation for their behavior.

Applying this principle can be beneficial in a wide range of situations. In the following article, you will learn more about Hanlon’s razor, and about how you can implement it in various areas of life.

 

What is Hanlon’s razor

The basic formulation of Hanlon’s razor is:

“Never attribute to malice that which is adequately explained by stupidity.”

As such, Hanlon’s razor is a philosophical razor, meaning that it’s a simple guiding principle that helps you select the most likely explanation for a phenomenon. Specifically, Hanlon’s razor encourages you not to start out by assuming that a certain action occurred due to someone’s ill intentions, if it’s possible that it occurred due to stupidity instead.

Hanlon’s razor is a valuable tool that can help you deal with various everyday problems, such as having someone miss an appointment with you or not respond to an email. This is because Hanlon’s razor can help you figure out why people do the negative things that they do, while also helping you avoid the unnecessary anger and stress which are associated with immediately assuming that people had bad intentions.

Note that there are two important caveats that must be mentioned with regards to Hanlon’s razor:

  • Hanlon’s razor doesn’t have to do with whether a certain action was justified or not. That is, the use of Hanlon’s razor doesn’t imply that a certain action is acceptable just because it happened as a result of stupidity instead of malice. Rather, Hanlon’s razor is simply used in order to help you find the most likely explanation for an action, after which you can decide how to judge that action and how to respond accordingly.
  • Hanlon’s razor doesn’t imply that actions never occur due to malice. Rather, it states that in general, negative outcomes are more likely to occur as a result of stupidity rather than malice, and that it’s more beneficial for you to assume that stupidity was the cause of such outcomes, at least initially.

Overall, Hanlon’s razor is meant to serve as a simple rule of thumb that gives you a good starting point when you’re trying to figure out the cause of something bad that happened.

In the following sections, you will learn how you can benefit from implementing Hanlon’s razor, and how you can utilize it as effectively as possible.

 

How understanding Hanlon’s razor can benefit you

Using Hanlon’s razor can be beneficial, for two main reasons:

  • Hanlon’s razor can help you find the most logical explanation for various events. This is because, in general, it’s more likely that people will do something out of a lack of awareness, than out of an intentional desire to cause harm.
  • It’s generally preferable for you to start by assuming a reason other than malice for negative events. In general, assuming malice as a cause of a negative event will cause you to experience more anger and stress than assuming other reasons. Therefore, you can generally benefit from not assuming the worst from the start, in terms of your emotional wellbeing and productivity.

Essentially, using Hanlon’s razor can help you quickly assess situations that you are in, and can help you deal with those situations in a better way.

In addition, from a philosophical perspective, using Hanlon’s razor can be seen as “doing the right thing”, since it relates to the principle of charity, which represents the idea that you should start by assuming the best possible interpretation of other people’s statements and actions.

Adopting this stance is also beneficial for non-philosophical reasons, since giving people the benefit of the doubt at first can help you communicate with them in a more productive manner, which makes them more likely to cooperate with you in the future. This is especially important in relationships, both personal and professional, where assuming that the other person acted out of malice can be detrimental if you end up being wrong.

Finally, another valuable benefit of using Hanlon’s razor is that, in some cases, it could prompt you to take action that you otherwise wouldn’t.

For example, consider a situation where someone is doing something that bothers you, such as a situation where your next door neighbor is making a lot of noise. Instinctively, you might start out by assuming that they are aware that what they are doing is bothering you, and that they just don’t care, which is a mindset that causes you to believe that you shouldn’t bother asking them to stop.

However, by implementing Hanlon’s razor, you could realize that they are doing this not because they don’t care about bothering you, but because they’re simply unaware that what they’re doing is bothering you. This could encourage you to take positive action, such as asking them to stop, which you might not have done otherwise.

Overall, implementing Hanlon’s razor offers various benefits, including helping you feel less stressed out, and helping you communicate better with others. Next, you will see how you can implement Hanlon’s razor, in order to benefit from it as much as possible.

 

How to implement Hanlon’s razor

So far, we’ve seen what Hanlon’s razor is, and how you can benefit from using it. Fortunately, implementing Hanlon’s razor in your everyday life is relatively simple, which is why it’s such a helpful principle to remember.

Essentially, Hanlon’s razor can be implemented any time you find yourself trying to find a reason for why someone did something that ended up having negative consequences, by negating your initial assumption that their actions occurred due to malice.

In the following sections, you will see a few specific guidelines that will help you implement Hanlon’s razor effectively, by expanding its scope, by accounting for the egocentric bias, and by learning how to assess the situation when deciding whether or not to use Hanlon’s razor in the first place.

 

Expanding Hanlon’s razor

While the original formulation of Hanlon’s razor is useful, it can be improved by modifying it a bit, to reach the following formulation:

“Never attribute to negative reasons that which is adequately explained by other causes.”

This formulation involves two important modifications from the original one:

  • “Malice” is replaced by “negative reasons”.
  • “Stupidity” is replaced by “other causes”.

This is important, because focusing only on malice and stupidity limits the meaning of this adage in a problematic way, since people can do things which end up having bad outcomes for others, even if they are not driven by malice or stupidity.

For example, if you’ve applied to a job and haven’t heard back after a few days, you might prematurely assume that it’s because the person in charge of the job thinks that you’re not good enough.

Here, the original formulation of Hanlon’s razor isn’t applicable, since you’re unlikely to attribute that person’s behavior to malice in the first place. Furthermore, you’re also unlikely to attribute their behavior to stupidity, since it’s more likely that there is an alternative explanation, such as the fact that they’re still processing applications from candidates.

By expanding Hanlon’s razor to account for negative reasons other than malice and for alternative causes other than stupidity, you could realize that applying Hanlon’s razor can be beneficial, by helping you not immediately assume that the reason that you haven’t heard back is that you’re not good enough.

Moreover, by expanding Hanlon’s razor this way, you are more likely to find the true cause of the other person’s action.

For example, if you send someone an email with a question, and they don’t reply back after a few days, you might assume that it’s because they’re actively ignoring you. If the only alternative explanation to this is that they’re stupid, you’re going to be less likely to apply Hanlon’s razor in this case, and less likely to find the true cause for their behavior.

By expanding Hanlon’s razor to account for alternative causes, you could realize that they might not have replied because they forgot, because they’re busy at the moment, or because they’re trying to find the relevant information that they need in order to reply.

Overall, there are many alternative explanations for behaviors which can affect you in a negative way, beyond malice and stupidity. These can range from negative things, such as ignorance, carelessness, and incompetence, to more reasonable things, such as the fact that the other person needs more time in order to deal with the issue at hand.

Therefore, by expanding the scope of Hanlon’s razor in order to take alternative causes into account, you can benefit from using this principle in a far wider range of situations, and from being more likely to find the true cause behind other people’s actions.

 

Accounting for the egocentric bias

The egocentric bias is the tendency to rely on our own perspective when we interpret other people’s actions. This means that if, for example, you are an expert in a certain topic or emotionally invested in something, you might be more predisposed to believe that someone who does something negative is doing it intentionally.

For example, when you are an expert at a certain skill and see a novice doing something that they shouldn’t, it can sometimes be natural for you to assume that they did it intentionally, because for you, it’s obvious that what they did was wrong, and the egocentric bias makes it difficult to see things from their perspective.

In order to overcome the egocentric bias and make it easier for yourself to implement Hanlon’s razor in such situations, you can use self-distancing techniques, which involve trying to put yourself in the other person’s shoes, so that you can see things from their perspective.

This is important to do when you’re implementing Hanlon’s razor, because it can help you identify cases where you’re likely to incorrectly assume that someone did something on purpose, when that wasn’t the case.

 

Exceptions to Hanlon’s razor

While Hanlon’s razor is a good rule of thumb, it should be viewed as a guiding principle, rather than as an absolute truth. This is because there are, in reality, some situations where a negative outcome should be attributed to malice, rather than to stupidity, ignorance, or any other cause.

This means that even though you should strive to give people the benefit of the doubt where possible, implementing Hanlon’s razor shouldn’t cause you to be naive or unprepared.

As such, when deciding when to implement Hanlon’s razor, you should take the following factors into consideration:

  • How likely it is that an action occurred due to reasons other than malice. The more likely it is that whatever happened did not occur due to malice, the more predisposed you should be to giving the other person the benefit of the doubt. When trying to assess this likelihood, you can take the person’s past actions into account, as well as their general personality, their abilities, and what they stand to gain from acting maliciously.
  • What are the potential costs associated with incorrectly assuming malice. The more costly it will be for you to incorrectly or prematurely assume malice, the more predisposed you should be to assuming that whatever happened had happened due to a reason other than malice.
  • What are the costs associated with incorrectly assuming reasons other than malice. The more costly it will be for you to mistakenly assume that someone acted for reasons other than malice, the more cautious you should be when implementing Hanlon’s razor.

Accordingly, there are situations where you might choose not to use Hanlon’s razor, because the likelihood of the other person acting maliciously is so high, or because there is a high cost to incorrectly assuming that their actions did not occur due to malice. In such cases, it can be beneficial to start off by assuming malice after all, and to then only accept an alternative explanation after you have sufficient evidence indicating otherwise.
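For readers who like to think of this weighing in decision-theoretic terms, the factors above can be sketched as a simple expected-cost comparison. The following is only an illustrative sketch, not a prescription; the probabilities and costs are hypothetical, and in practice these quantities are rough subjective judgments rather than precise numbers:

```python
# Illustrative sketch: pick an initial assumption by comparing the
# expected cost of each way of being wrong.
def initial_assumption(p_malice, cost_wrongly_assuming_malice, cost_wrongly_assuming_benign):
    # If we start by assuming malice, we're wrong with probability (1 - p_malice).
    expected_cost_of_assuming_malice = (1 - p_malice) * cost_wrongly_assuming_malice
    # If we start by assuming a benign cause, we're wrong with probability p_malice.
    expected_cost_of_assuming_benign = p_malice * cost_wrongly_assuming_benign
    if expected_cost_of_assuming_benign > expected_cost_of_assuming_malice:
        return "assume malice (but stay open to other explanations)"
    return "assume a non-malicious cause (Hanlon's razor)"

# A noisy neighbor: malice is unlikely, and wrongly accusing them is costly.
print(initial_assumption(0.05, cost_wrongly_assuming_malice=10, cost_wrongly_assuming_benign=1))
# A known bad actor with much to gain: the balance can flip.
print(initial_assumption(0.9, cost_wrongly_assuming_malice=1, cost_wrongly_assuming_benign=10))
```

The point of the sketch is simply that Hanlon’s razor wins whenever malice is unlikely or a false accusation is costly, which is most everyday situations.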

Such situations can be described using the concept of “guilty until proven innocent”, which is the opposite of the concept proposed by Hanlon’s razor, which can be described as “innocent until proven guilty”.

Keep in mind that in some cases, it can be beneficial to use a hybrid approach. This can involve, for example, assuming a non-malicious explanation for people’s actions, while at the same time preparing to act if the malicious explanation turns out to be true (i.e. “assume the best but prepare for the worst”).

 

The history of Hanlon’s razor

In general, Hanlon’s razor in its current form is attributed to Robert J. Hanlon, who purportedly submitted it to a 1980 book containing a compilation of various jokes related to Murphy’s law (which is the adage that “anything that can go wrong will go wrong”). However, since the underlying principle behind Hanlon’s razor has been mentioned in different formulations throughout history, it’s difficult to attribute it to a specific person.

For example, an early version of this adage appears in the novel The Sorrows of Young Werther, where Goethe famously wrote:

“Misunderstandings and neglect occasion more mischief in the world than even malice and wickedness. At all events, the two latter are of less frequent occurrence.”

Author Robert Heinlein also mentioned this concept in his novel Logic of Empire, when one character tells another:

“You have attributed conditions to villainy that simply result from stupidity.”

A similar notion was described by Bernard Ingham, who served as Margaret Thatcher’s chief press secretary while she was Prime Minister of the UK, and who said that:

“Many journalists have fallen for the conspiracy theory of government. I do assure you that they would produce more accurate work if they adhered to the cock-up theory.”

Beyond the different formulations of Hanlon’s razor itself, there are also some related principles which have been mentioned. For example, in the novel Time Enough for Love by Robert Heinlein, an important principle is mentioned, which should be considered when taking Hanlon’s razor into account:

“Never underestimate the power of human stupidity.”

Another relevant principle which is commonly mentioned in conjunction with Hanlon’s razor is Grey’s law, which has an unclear source, and which states that:

“Any sufficiently advanced incompetence is indistinguishable from malice.”

Finally, author Douglas Hubbard wrote a corollary to Hanlon’s razor, which states that:

“Never attribute to malice or stupidity that which can be explained by moderately rational individuals following incentives in a complex system of interactions.”

– The Failure of Risk Management: Why It’s Broken and How to Fix It

In saying this, Hubbard’s goal was to emphasize the fact that:

“People behaving with no central coordination and acting in their own best interest can still create results that appear to some to be clear proof of conspiracy or a plague of ignorance.”

Overall, it’s difficult to be certain about the origin of the underlying idea behind Hanlon’s razor, since it has been proposed in various formulations throughout history.

Nevertheless, since this principle represents a valuable guideline when it comes to decision making, the important thing is to understand the basic rationale behind it, so that you can implement it effectively.

 

Summary and conclusions

  • Hanlon’s razor is the adage that you should “never attribute to malice that which is adequately explained by stupidity”. Essentially, it serves as a guiding principle that encourages you to not start out by assuming that a certain action occurred due to someone’s bad intentions, if it’s possible that it occurred due to stupidity instead.
  • Hanlon’s razor can be improved, by modifying it to say that you should “never attribute to negative reasons that which is adequately explained by other causes“. This is important, because focusing only on malice and stupidity unnecessarily limits the scope of this adage.
  • Using Hanlon’s razor can be beneficial, since people often tend to assume that a bad outcome that they experienced occurred due to some negative reason, even when that isn’t the case. This can be detrimental, when it causes you to feel unnecessary stress, and when it hinders your ability to take action and to communicate with others.
  • When implementing Hanlon’s razor, it’s important to take into account your egocentric bias, which is the tendency to view things from your own perspective when interpreting other people’s actions. This is because the egocentric bias makes us more likely to overestimate the likelihood that someone knows that what they did affected us badly, in cases where the negative impact of their actions is not necessarily obvious to them.
  • When deciding whether or not to implement Hanlon’s razor, you should consider the likelihood that the other person acted maliciously, and weigh the potential cost of incorrectly assuming that they acted out of malice, as well as the cost of incorrectly assuming that they acted due to a different reason.

 


Loaded Questions: What They Are and How to Counter Them

Loaded Question

 

A loaded question is a question that contains an unverified assumption that the person being asked the question might disagree with. This type of question puts the person who is being questioned in a disadvantageous position, since the assumption in the question could reflect badly on them or make them feel forced to pick an answer that they would not pick otherwise.

This rhetorical technique is frequently used in arguments and debates, so it’s important to understand it. In the following article, you will learn more about what loaded questions are, why they are fallacious, and how you can counter them successfully.

 

What is a loaded question

A loaded question is a trick question which presupposes some unverified piece of information that the person being questioned might disagree with. Essentially, this kind of question contains a trap, which is used in order to attack the person who is being asked the question, and which compromises their ability to reply in the way that they would normally prefer.

Consider the following classic (but crass) example of a loaded question:

“Have you stopped beating your wife?”

This question is considered to be a loaded question due to its presupposition, which is the implicit background assumption that this question contains, and specifically the assumption that the person who is being questioned has been beating his wife. Thus, even though this sentence is phrased as a question, it also contains an implicit statement about the person being asked the question.

In this case, the loaded question pushes the respondent to give a yes/no answer. However, regardless of which of these options the respondent chooses, they will appear to agree with the question’s underlying presupposition:

  • If the respondent says “yes”, then he appears to confirm that he has beaten his wife in the past, but has since stopped.
  • If the respondent says “no”, then he appears to confirm that he has beaten his wife in the past, and is still doing so in the present.

Essentially, even if the respondent has never engaged in such behavior, his intuition will often cause him to reply either “yes” or “no”, which appears to implicate him as a wife beater.

Both these replies can be intuitive, because this is the type of answer that usually applies to this type of question, and because both answers can make sense if he never beat his wife in the first place. That is, someone might intuitively reply “yes” if he’s trying to convey the fact that he is not beating his wife, or “no” if he is trying to convey the fact that he has never beaten her in the first place.

As such, loaded questions represent a type of informal logical fallacy, since there is an issue with the premise of such questions, and specifically with the information that they presuppose. This information manifests in the form of an implicit assumption, which is integrated into the question in a way that prompts the person being questioned to reply in a way that doesn’t allow them to contradict that assumption.

This rhetorical technique plays a role in various scenarios, both in the personal as well as in the political landscape. For example, in gotcha journalism, loaded questions are frequently used by reporters in order to interview people in a way that causes them to unintentionally make negative statements that are damaging to their reputation or credibility.

Note: the use of loaded questions is referred to by various names, including the loaded question fallacy, the complex question fallacy, the fallacy of many questions, the fallacy of presupposition, and plurium interrogationum.

 

Examples of loaded questions

Below are various examples of different types of loaded questions, all of which presuppose something that the respondent might disagree with.

“Do you actually support that lazy president of ours?”

This question presupposes that the president is lazy. Accordingly, if the respondent supports the president and replies “yes”, then their answer will inadvertently suggest that they think the president is lazy.

“Do you think that we should convict this criminal?”

This question presupposes that the person being discussed is a criminal. Accordingly, if the respondent believes that that person is innocent and replies “no” in order to show that they don’t think a conviction is necessary, then their answer will inadvertently suggest that they believe that person is a criminal.

“Are you one of those hateful people that doesn’t believe in creationism?”

This question is framed so that if the respondent doesn’t believe in creationism and replies “yes” in order to show that, then their answer will inadvertently suggest that they believe themself to be hateful.

“Have you accepted the fact that most environmental studies don’t support global warming?”

This question presupposes that most environmental studies don’t support global warming. If the respondent says “no”, because they know that this is wrong, then their answer inadvertently suggests that they agree with this presupposition, meaning that they believe that most studies don’t support global warming.

“Are you saying that you support the new bill just to annoy me, or are you seriously stupid enough to believe in it?”

This question is framed in a way that prompts the respondent to disagree with one of the two clauses in the statement (most commonly the second one), which inadvertently suggests that they agree with the other clause. In this case, if the respondent says “no” in order to show that they disagree with the idea that they support the bill because they are “stupid enough to believe in it”, then their answer implies that they support the bill just to annoy the other person.

“Are you naive enough to believe the mainstream media, or do you just not care about finding out the truth?”

This question is similar to the previous one, since it is framed in a way that prompts the respondent to disagree with one clause in particular, which leads to the implicit suggestion that they agree with the other clause, despite the fact that an agreement with either clause reflects badly on them.

“Can you meet to discuss this tomorrow, or are you too busy slacking off?”

This question also uses the double-clause technique we saw above. In this case, the loaded question is used in order to pressure the other person into accepting a certain proposal, because if they simply say “no” without expanding on their answer, then they appear to inadvertently confirm the alternative explanation for their refusal, which is generally framed as negative.

Note that the examples that we saw so far mostly prompt the respondent to give a yes/no answer. However, loaded questions don’t necessarily have to fit this format. Consider the following example:

“When did you stop stealing from your partner?”

Similarly to the loaded questions which prompt a yes/no answer, this type of question presupposes something that the respondent might disagree with (in this case, that he stole from his partner).

However, such loaded questions are less common, because it’s less intuitive to answer them in a way that incriminates yourself, since the answers that they prompt are more open-ended.

Another example of such an open-ended loaded question is the following:

“Why is X so much better than Y?”

This question presupposes that X is better than Y, in an implicit manner which makes it difficult for the respondent to disagree.

“Why do you hate X?”

This open-ended question presupposes that the person being asked the question hates X. As in the previous examples, while the respondent is technically free to reject this premise, the format of the question prompts them to answer it in a way that confirms it, even if this isn’t what they would normally choose to do.

 

How to counter a loaded question

In order to reply to a loaded question in a way that negates it, you first need to recognize that a loaded question is being asked. You can recognize this type of question, as we saw above, by noticing that the question presupposes something that you disagree with, which usually has a negative implication for you, or which limits your range of possible answers.

Once you recognize that you are being asked a loaded question, there are several options that you can choose from when it comes to picking your response:

  • Reply in a way that rejects the presupposition. To do this, answer the question in a way that is different from what your questioner is expecting, so that you can explicitly reject the implicit assumption that you disagree with. For example, if you’re asked “did you stop cheating on all of your tests?”, then instead of answering using a yes/no statement, reply by saying “I have never cheated on any of my tests”.
  • Point out the fallacious reasoning. To do this, you should explicitly point out the issue with the question that is being asked, by showing that it contains an inappropriate presupposition, and is therefore a loaded question. You can follow up on this by also answering in a way that rejects the presupposition, as we saw above, or by asking the other person to justify the way that they phrased their question.  For example, if you’re asked “when will you stop cheating your way through your degree?” you can reply by saying “it’s wrong of you to assume that I’m cheating my way through my degree, since I never cheated on any of my exams. Why are you accusing me of doing this in the first place?”.
  • Refuse to answer the question or simply ignore it. In some cases, you might decide that the best course of action is to either explicitly refuse to answer the loaded question, or to ignore it and continue the discussion without acknowledging it. However, note that in general, refusing to answer the question will work best if you point out the fallacious reasoning first. Otherwise, you will likely be accused of dodging the question.

When countering loaded questions, it’s important to remember that people don’t always use this type of question on purpose; sometimes, they ask loaded questions without realizing that they are doing so.

This is important to take into account when responding to such questions, since replying in a way that doesn’t accuse the other person of asking a loaded question intentionally can often lead to a more productive dialogue, especially if their fallacious reasoning was indeed unintentional.

 

How to avoid using loaded questions yourself

It’s possible that you’re using loaded questions without being aware that you are doing it. In order to solve this issue, the first thing that you need to do is recognize that you’re about to ask someone a loaded question. You can do this by following the criteria that we saw earlier, and checking whether a question that you are about to ask presupposes something that your respondent might not agree with.

Once you successfully recognize that you are about to ask a loaded question, you can modify your question in order to avoid using problematic phrasing. Specifically, in order to solve the issues associated with loaded questions, you need to break them apart into a series of questions, with the goal of first confirming that your presupposition is true, before moving on to ask the main question that you are seeking the answer to.

For example, instead of asking the following loaded question:

Why did you stop watching a lot of TV shows?

You can first ask the following question, which confirms that your initial presupposition is true:

Did you use to watch a lot of TV shows?

Then, if the respondent confirms this initial presupposition, you can move on to confirm the second presupposition in the question:

Did you stop watching a lot of TV shows?

Finally, if the respondent confirms this second presupposition, you can move on to ask the main question that you are interested in:

Why did you stop watching a lot of TV shows?

Combining these questions together yields the following deconstructed question:

Did you use to watch a lot of TV shows, and if so, then did you also stop watching a lot of TV shows, and if so, then why did you stop watching a lot of TV shows?

While this example doesn’t necessarily sound very natural, it illustrates the underlying concept behind deconstructing your loaded questions in order to make them valid.

In practice, the main thing to remember is to not ask questions that contain implicit assumptions that the person you are talking to might disagree with. Instead, you should separate such questions into a series of questions, in order to ensure that the other person agrees with your assumption in the first place. This will help you avoid fallacious reasoning in your questions, and will improve your communication with others.
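For programmatically minded readers, the deconstruction process above can be sketched as a simple chain of checks, where the main question is only asked once every presupposition has been confirmed. This is purely an illustrative analogy; the function names and question strings below are invented for the example:

```python
def ask_deconstructed(presupposition_checks, main_question, get_answer):
    """Confirm each presupposition in order; only return the main question
    once every presupposition has been confirmed by the respondent."""
    for question in presupposition_checks:
        if not get_answer(question):
            # A presupposition was rejected, so asking the main
            # question would make it a loaded question.
            return None
    return main_question

# Hypothetical answers from the respondent, in the order the checks are asked.
answers = {
    "Did you use to watch a lot of TV shows?": True,
    "Did you stop watching a lot of TV shows?": False,  # presupposition rejected here
}
result = ask_deconstructed(
    list(answers),
    "Why did you stop watching a lot of TV shows?",
    answers.get,
)
print(result)  # None: the main question rests on a rejected presupposition
```

The structure mirrors the deconstructed question from the example: each check gates the next one, so the main question is never asked on top of an assumption that the respondent has rejected.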

 

Loaded questions aren’t always fallacious

It’s important to point out that loaded questions aren’t inherently fallacious. Rather, they are fallacious only if there is some issue with the presupposition that they contain.

This is because if the presupposition that a question contains is valid, meaning that all the people involved in the discussion agree with it, then the question is generally not considered fallacious. For example, consider the following question:

“What movie do you want to watch tonight?”

This is a loaded question, in the sense that it presupposes that the person being asked the question wants to watch a movie with the person who is asking the question.

If the respondent isn’t interested in watching a movie, then the use of this question is considered fallacious, since it assumes that they do, and pressures them into replying in a way that confirms this assumption.

However, in a situation where both people accept the presupposition, meaning that they are both interested in watching a movie, the use of this question is generally not considered fallacious, but is rather viewed as an example of effective communication.

 

Summary and conclusions

  • A loaded question is a question that presupposes some unverified piece of information that the person being asked the question might disagree with. This type of question often pressures the respondent to reply in a way that they would normally prefer not to, and which reflects negatively on them.
  • For example, the question “have you stopped hitting your dog?” is loaded, because it presupposes that you have been hitting your dog. This kind of question prompts a yes/no answer, with the problem being that both answers appear to implicitly confirm that you have been hitting your dog, even if your intention is to convey that you have never done that.
  • When it comes to countering a loaded question, you should respond in a way which explicitly rejects the implicit presupposition that you disagree with. You can also point out the fallacious reasoning, in order to highlight the issue with the question being asked.
  • You can choose to refuse to answer a loaded question, or to ignore it entirely, which might be the preferable option in some cases. However, if you do this, it is generally preferable to point out the issue with the question first, in order to avoid looking like you are dodging the question for some other reason.
  • You should pay attention to the questions that you ask others, in order to ensure that you are not asking any loaded questions yourself. If you are about to ask a loaded question, you should instead deconstruct it into a series of questions, so that you can first confirm any presuppositions that you might have, before moving on to ask the main question that you are interested in.