The belief bias is a cognitive bias that causes people to over-rely on preexisting beliefs and knowledge when evaluating the conclusions of an argument, instead of properly considering the argument’s content and structure. Accordingly, the belief bias means that people often accept arguments that align with their preexisting beliefs, even if those arguments are weak, invalid, or unsound, and that people often reject arguments that contradict their preexisting beliefs, even if those arguments are strong and logically sound.
For example, the belief bias can cause someone to accept the argument “all flowers have petals, and roses have petals, therefore roses are flowers”, because they know that roses are flowers, even though the argument itself is logically unsound, since its conclusion does not follow from its premises, and since the premise that “all flowers have petals” is wrong. This issue is highlighted when this argument is contrasted with less believable arguments that have a similar structure, such as “all birds have wings, and planes have wings, therefore planes are birds”.
The belief bias can influence people’s thinking in various situations, so it’s important to understand it. As such, in the following article you will learn more about the belief bias, and see what you can do to reduce its influence on yourself and on others.
Examples of the belief bias
An example of the belief bias is that someone might think that the argument “all fish can swim, and salmon can swim, therefore salmon are fish” is logically sound, because its conclusion aligns with their preexisting beliefs (that salmon are a type of fish), even though this argument is actually logically unsound (specifically, its conclusion doesn’t follow from its premises, since just because salmon can swim, that doesn’t mean that they’re a type of fish).
This example, like many other examples of the belief bias, occurs in the context of syllogistic reasoning, where a syllogism is a type of argument in which a conclusion is drawn from two premises, which contain two unique terms and a single shared one. For instance, another example of the belief bias in a syllogism is the following:
Premise 1: All birds can fly.
Premise 2: Pigeons can fly.
Conclusion: Pigeons are birds.
People might think that this argument is logically sound, if they know that pigeons are birds. However, this argument is actually logically unsound—its conclusion doesn’t follow from its premises, since the fact that both birds and pigeons can fly doesn’t necessarily mean that pigeons are birds (for example, other types of animals, such as insects, can also fly). Furthermore, the first premise of this argument is wrong, since not all birds can fly (for example, ostriches, kiwis, and penguins are all flightless birds).
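The invalidity of this syllogism can be made concrete with a counterexample search: an argument is invalid if some possible case makes both premises true while the conclusion is false. The following is a minimal illustrative sketch (the function and property names are my own, not from any reasoning literature) that brute-forces the two relevant properties of an entity:

```python
# Illustrative sketch: testing the pigeon syllogism by searching for a
# counterexample. A syllogism is invalid if some case makes both
# premises true but the conclusion false.
from itertools import product

def find_counterexample():
    # Each case assigns an entity two properties: can it fly, is it a bird.
    for can_fly, is_bird in product([True, False], repeat=2):
        premise1 = (not is_bird) or can_fly  # "all birds can fly" (bird implies flight)
        premise2 = can_fly                   # "this entity can fly"
        conclusion = is_bird                 # "this entity is a bird"
        if premise1 and premise2 and not conclusion:
            return {"can_fly": can_fly, "is_bird": is_bird}
    return None

print(find_counterexample())  # → {'can_fly': True, 'is_bird': False}
```

The counterexample found is a flying non-bird (such as an insect or a plane), which satisfies both premises while falsifying the conclusion, so the conclusion cannot follow from the premises alone.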
In addition to syllogisms, which are used primarily to test formal reasoning, evidence of the belief bias also appears in studies of informal reasoning, such as when people are asked to evaluate the strength of arguments in cases where logical validity or soundness doesn’t necessarily play a role. This includes, for example, studies on how people solve deductive-reasoning problems, evaluate general arguments, judge the extremeness of statements, make social attributions about actions, and account for the law of large numbers when making generalizations about groups of people.
Examples of the belief bias in such contexts include the following:
- People are more likely to question the experimental design of a study if they don’t believe its conclusions.
- People are more likely to reject overgeneralizations that are based on religion and social class if those overgeneralizations contradict their existing beliefs.
- After the outcome of a presidential election is known, people are more likely to try to falsify claims that are inconsistent with the actual outcome of the election than claims that are inconsistent with their political ideology.
- When different people are shown the same type of evidence about politics, they are more likely to accept conclusions that support their political beliefs.
Note: a closely related phenomenon is the confirmation bias, which is a cognitive bias that causes people to search for, favor, interpret, and recall information in a way that confirms their preexisting beliefs.
Causes of the belief bias
There is no single, agreed-upon explanation for why people experience the belief bias, especially given that people may experience it for different reasons under different circumstances and when it comes to different types of information. As one meta-analysis on the topic states:
“…several theories have been proposed to describe how exactly beliefs interact with reasoning processes…
For example, according to the selective scrutiny account… individuals uncritically accept arguments with a believable conclusion, but reason more thoroughly when conclusions are unbelievable. In contrast, proponents of a misinterpreted necessity account… argue that believability only plays a role after individuals have reached conclusions that are consistent with, but not necessitated by, the premises…
Alternatively, mental-model theory… proposes that individuals evaluate syllogisms by generating mental representations that incorporate the premises. When the conclusion is consistent with one of these representations, the syllogism tends to be perceived as valid. However, when the conclusion is seen as unbelievable, the individual is assumed to engage in the creation of alternative mental representations that attempt to refute the conclusion (i.e., counterexamples). Only when a model is found wherein the (unbelievable) conclusion is consistent with these alternative representations, is the syllogism perceived to be valid.
Another account, transitive-chain theory… proposes that reasoners encode set-subset relations between the terms of the syllogism inspired by the order in which said terms are encountered when reading the syllogism. These mental representations are then combined according to a set of matching rules with different degrees of exhaustiveness. The theory predicts that unbelievable contents add an additional burden to this information processing, leading to worse performance compared to syllogisms with believable contents.
Yet another account, selective processing theory… proposes that individuals use a conclusion-to-premises reasoning strategy. Participants are assumed to first evaluate the believability of the conclusion, after which they conduct a search for additional evidence. Believable conclusions trigger a search for confirmatory evidence, whereas unbelievable conclusions induce a disconfirmatory search. For valid problems the conclusion is consistent with all possible representations of the premises, so believability will not have a large effect on reasoning. By contrast, for indeterminately invalid problems a representation which is inconsistent with the premises can typically be found with a disconfirmatory search, leading to increased logical reasoning accuracy for unbelievable problems…
This brief description does not exhaust the many theoretical accounts proposed in the literature, each of them postulating distinct relationships between reasoning processes and prior beliefs… However, irrespective of the precise interplay between beliefs and reasoning processes, a constant feature of these theories is that the ability to discriminate between logically valid and invalid syllogisms is predicted to be higher when conclusions are unbelievable (although the opposite prediction has also been made by transitive-chain theory). In sum, virtually all theories propose that beliefs have some effect on reasoning ability, the latter having been operationalized in terms of the ability to discriminate between valid and invalid syllogisms.”
— From “Characterizing belief bias in syllogistic reasoning: A hierarchical Bayesian meta-analysis of ROC data” (Trippas et al., 2018)
In addition to these theories, various other theories have been proposed to explain the belief bias. For example, one explanation under signal detection theory is that “belief bias primarily reflects changes in response bias: people require less evidence to endorse a syllogism as valid when it has a believable conclusion”, which means that “people set more lenient decision criteria for believable than for unbelievable arguments”.
Furthermore, the belief bias is often explained through the dual-process models of reasoning:
“According to these models, two types of cognitive processes underlie human reasoning. Nonanalytic processes are rapid, parallel, and automatic in their operation and are thought to include retrieval of beliefs and prior knowledge. Analytic processes permit abstract thinking, but they operate more slowly, are effortful, and impose demands on working memory and other fluid capacities… The two processes usually work together, but in some situations, they come into conflict. Dual-process models attribute belief bias to the dominance of belief-based processing over analytic processing.”
In this context, the fast and automatic system is known as System 1, while the slow and analytic system is known as System 2. However, there is variability even within these models. For example:
“According to the [default-interventionist account of belief bias], belief bias occurs because a fast, belief-based evaluation of the conclusion pre-empts a working-memory demanding logical analysis. In contrast, according to the [parallel-processing model] both belief-based and logic-based responding occur in parallel.”
— From “When fast logic meets slow belief: Evidence for a parallel-processing model of belief bias” (Trippas et al., 2017)
Factors affecting the belief bias
The belief bias is a complex phenomenon, which manifests in different ways and to different degrees in different situations. Accordingly, various factors, such as age, religious beliefs, working memory, and general cognitive ability, can all affect the likelihood that people will experience the belief bias, as well as the way and degree to which they will do so.
In addition, the nature of arguments can also affect the likelihood that people will experience the belief bias. This includes, for example, whether an argument is emotionally charged, and whether the reasoning involved is difficult for people to understand.
Most notably, two important aspects of arguments that affect people’s belief bias, especially in the context of syllogistic reasoning, are the validity of an argument and the believability of its conclusion. Based on these criteria, there are four types of syllogisms:
- Believable and valid.
- Unbelievable and invalid.
- Unbelievable but valid.
- Believable but invalid.
When it comes to these factors, the consistency/inconsistency between the validity of an argument and its believability (sometimes referred to as congruence/incongruence) can also influence people’s belief bias. Specifically, research suggests that people are most likely to experience the belief bias when the believability and validity of an argument are inconsistent, meaning that the argument is either unbelievable but valid, or believable but invalid.
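The four syllogism types and the congruence distinction above can be summarized with a tiny illustrative classifier (the function and labels are my own shorthand, not terminology from the research literature):

```python
# Illustrative sketch: classifying syllogisms by believability and validity,
# and flagging the incongruent cases where research suggests the belief
# bias is strongest.
def classify(believable: bool, valid: bool) -> str:
    label = ("believable" if believable else "unbelievable") + \
            " and " + ("valid" if valid else "invalid")
    # Congruent: believability and validity agree; incongruent: they conflict.
    if believable == valid:
        return label + ": congruent"
    return label + ": incongruent (belief bias most likely)"

for believable in (True, False):
    for valid in (True, False):
        print(classify(believable, valid))
```

For instance, a believable-but-invalid syllogism (like the pigeon example earlier) is incongruent, which is exactly the case where people tend to wrongly accept the argument.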
Furthermore, when it comes to the structure of arguments, a related bias that can influence the belief bias is the figural bias, which is the tendency to be influenced by the order in which information is presented in the premises of an argument, when trying to solve a syllogistic reasoning problem.
Finally, the way in which people interact with information can also affect the likelihood that they will experience the belief bias. For example, one study found that people are more likely to display the belief bias when producing a conclusion for an argument based on its premises, than when evaluating an argument’s existing conclusion.
Positive and negative belief biases
A distinction is sometimes drawn between positive and negative belief biases:
- A positive belief bias involves increased acceptance of believable conclusions. Accordingly, the positive belief bias causes people to accept arguments that are logically unsound and conclusions that are false, when they align with people’s preexisting beliefs.
- A negative belief bias involves increased rejection of unbelievable conclusions. Accordingly, the negative belief bias causes people to reject arguments that are sound and conclusions that are true, when they contradict people’s preexisting beliefs.
How to reduce the belief bias in others
To reduce the belief bias in others, there are several things that you can do, either after it has already influenced them, or in advance (if you think that it could influence them later):
- Explain what the belief bias is and how it affects people, while potentially using relevant examples to illustrate this phenomenon.
- Ask the person if they think they might be experiencing the belief bias, and if not, then why.
- Ask the person whether their reasoning could be influenced by their preexisting beliefs, and if not, then why.
- Encourage the person to slow down their reasoning process, so that they have time to properly think through the information.
- Ask the person to explain their reasoning in a clear and explicit manner.
- Ask the person to consider alternatives, such as the possibility that the argument that they thought was logically unsound is actually sound (or that the argument that they thought was logically sound is actually unsound).
- Ask the person questions that guide their reasoning, such as “does the conclusion of this argument necessarily follow from its premises?” or “can you infer the conclusion of this argument from its premises, in a manner that is logically valid?”.
- Point out specific issues in the other person’s reasoning, or ask them to explain those issues.
- Use, or encourage the other person to use, general debiasing techniques, such as creating favorable conditions for judgment and decision-making (for example, by discussing the topic under consideration somewhere where they’re not exposed to things that could remind them of problematic preexisting beliefs).
However, keep in mind that while these techniques can potentially reduce the belief bias to some degree in some situations, there are many situations where they might be partially or entirely ineffective, meaning that people’s belief bias will persist despite the use of these techniques, at least to some degree.
Overall, to reduce the belief bias in others, you can use various debiasing techniques, such as explaining what this bias is, encouraging the other person to slow down their reasoning and make it explicit, asking them if they might be experiencing this bias, and pointing out or asking about specific issues with their reasoning.
How to avoid the belief bias yourself
To avoid the belief bias yourself, you can use techniques similar to those you would use to help others avoid it. Specifically, you can:
- Understand what this bias is, and how it can affect you.
- Figure out when and how this bias is likely to affect you.
- Keep this bias in mind in relevant situations, and when necessary, ask yourself whether it’s potentially influencing your thinking (i.e., whether your preexisting beliefs could be causing you to improperly assess a certain piece of reasoning).
- Slow down your reasoning, so you have enough time to properly think through all the relevant information.
- Deconstruct your reasoning and make it explicit, while making sure to clearly justify any claims that you make (e.g., by clearly explaining why a certain conclusion can be inferred from the premises of a certain argument).
- Ask yourself relevant guiding questions, such as “does the conclusion of this argument necessarily follow from the given premises?”.
- Use general debiasing techniques, such as creating favorable conditions for reasoning.
However, as in the case of reducing the belief bias in others, keep in mind that you won’t necessarily be able to fully avoid the belief bias in your reasoning. The degree to which you will succeed at avoiding it depends on various factors, such as the nature of the argument that you’re assessing, and the circumstances in which you’re doing so.
Summary and conclusions
- The belief bias is a cognitive bias that causes people to over-rely on preexisting beliefs and knowledge when evaluating the conclusions of an argument, instead of properly considering the argument’s content and structure.
- The belief bias means that people often accept arguments that align with their preexisting beliefs, even if those arguments are weak, invalid, or unsound, and that people often reject arguments that contradict their preexisting beliefs, even if those arguments are strong and logically sound.
- For example, the belief bias can cause someone to accept the argument “all flowers have petals, and roses have petals, therefore roses are flowers”, because they know that roses are flowers, even though the argument itself is logically unsound, since its conclusion does not follow from its premises, and since the premise that “all flowers have petals” is wrong.
- Various factors, such as the structure of an argument and the abilities of the person who is assessing it, can determine whether and how someone will display the belief bias in a certain situation, as well as how they will respond to debiasing attempts.
- You can use various techniques to reduce the belief bias, such as explaining what this bias is, slowing down the reasoning process and making it explicit, asking whether the belief bias could be influencing the reasoning in question, pointing out specific errors in reasoning, and implementing general debiasing techniques, such as creating favorable conditions for reasoning.