The Burden of Proof: Why People Must Support Their Arguments

The burden of proof is the obligation to provide sufficient supporting evidence for any arguments that you make.

Accordingly, the burden of proof fallacy occurs when someone claims that they don’t have a burden of proof with regards to their own arguments, or when someone attempts to shift their own burden of proof to someone else.

Since the burden of proof is a concept that is crucial when it comes to conducting any kind of valid discourse, it’s important to understand it.

As such, in the following article, you will first learn more about the burden of proof in general, and about when exactly people have a burden of proof with regards to statements that they make. Then, you will learn about the burden of proof fallacy, see some examples of when it occurs, and understand what you can do in order to counter people who use it.


The burden of proof

What is the burden of proof

The burden of proof (Latin: onus probandi) is the obligation to provide sufficient supporting evidence for any arguments that you make.

The burden of proof is one of the most important guiding principles which are used in order to help people resolve disputes. Specifically, each person in a debate has a burden of proof with regards to their own claims, so that if they want their claims to be accepted by others, they must provide proof which supports those claims, either as part of their original argument, or in response to their opponent’s questions.


Examples of the burden of proof

Examples of situations where someone has a burden of proof include the following:

  • If someone claims in a debate that their theory can explain a certain natural phenomenon, then they have to provide evidence which supports this claim.
  • If someone is suing someone else for damages, then the person who is suing has the responsibility to prove that the other person is responsible for the damage that they experienced in some way.
  • If a company claims that the medication they developed is safe and effective, then they have to prove it using clinical data.


Who has a burden of proof

In general, a person has a burden of proof with regards to any arguments that they make, which means that they have to provide sufficient evidence in order to support those arguments.

When it comes to discussions, each side has to provide proof which supports the arguments that they make, and the other parties involved in the discussion may ask critical questions about those arguments, and ask their opponent to provide more supporting evidence. This means that the burden of proof can continuously shift between the discussants, so that different people are expected to provide supporting evidence at different stages of the discussion.

However, not every statement that a person makes carries with it a burden of proof. In general, there are three main types of statements, some of which require a burden of proof, and some of which do not:

  • An assertion carries a burden of proof. If you clearly assert that a certain statement is true, then you have a burden of proof to demonstrate that this is indeed the case, meaning that you must either prove that your assertion is true, or retract it.
  • A presumption carries a conditional burden of proof. If you ask someone to presume that a certain statement is true, then you don’t necessarily have a burden of proof with regards to that presumption, unless your opponent questions it, at which point you must either prove that your presumption is true or retract it, as is the case with an assertion.
  • An assumption carries no burden of proof. If you simply assume that a certain statement could be true, but don’t claim that this is necessarily the case, then you have no burden of proof. However, note that if you rely on your assumptions too much in the argument, or if your assumptions are perceived as too unlikely, they will end up weakening your argument, especially if they are challenged by your opponent.

Of course, note that all this is also applicable in situations where you assert, presume, or assume that something is false, rather than true.

Finally, keep in mind that in many cases, a person might make an assertion or a presumption that is entirely unsupported, and then refuse to retract it, even when prompted to do so by the other discussants.

In theory, such statements should be ignored from a logical perspective, since the person who made them failed to provide the necessary proof which is required in order to support them. This notion is exemplified in Hitchens’s razor, which was formulated by author and journalist Christopher Hitchens, and which states that:

What can be asserted without evidence can be dismissed without evidence.

In practice however, situations where someone fails to provide necessary proof are complicated, and it’s often up to the other party or arbitrators to determine how to proceed. We will soon see some suggestions on how to deal with such cases, in the section on the burden of proof fallacy.


Special cases with regards to the burden of proof

According to some philosophical views, certain assertions and presumptions are privileged, meaning that they are strongly supported and well-established enough that the person who believes them needs to defend them only in the face of what is perceived as a valid challenge.

This means, for example, that if a person supports the prevailing status quo in some field, which has already been proven to be true in the past by others, then that person needs to reply to challenges against their view only in cases where the challenger has first fulfilled their own burden of proof to a reasonable degree.

For instance, a scientist might refuse to engage someone who claims that the Earth is flat, unless that person can provide some compelling evidence in order to challenge the widespread scientific consensus that the Earth is a sphere.

Of course, this raises the issue of determining what constitutes a privileged proposition, and in some cases, claiming that a certain stance is privileged carries with it a burden of proof in itself. For example, in the case of arguing against those who believe in a flat Earth, one way to demonstrate the consensus regarding the shape of the Earth is to prove that many different scientific authorities agree that the Earth is in fact spherical, without getting into more details beyond that.

The concept of privileged propositions plays an important role in cases where people attempt to bombard someone else with disingenuous questions, in an attempt to undermine their stance. Though asking questions is, in general, a valid thing to do in a debate, it can become problematic when one side is attacking the other with a non-stop stream of questions, since there is usually asymmetry between the ease of asking questions compared to the relative difficulty of answering them.

For example, consider the case of the Gish gallop, which is a debate technique where someone attempts to overwhelm their opponent by mentioning as many arguments as possible, with no regard to the quality of these arguments. In some cases, the person using the Gish gallop might bombard their opponent with questions, in order to force them to defend their stance, by placing most of the burden of proof in the discussion on them.

In such cases, it can be valuable to shift the burden of proof back to your opponent in order to get them to at least justify their line of questioning before answering those questions. However, it’s important to not simply do this in order to evade your own burden of proof, but rather in order to ensure that the questions which are asked contribute to the discourse, rather than hinder it.


The burden of proof in law

The burden of proof plays an especially important role in the legal context, where it is assigned by the judge to different parties in a trial, and where a party’s ability to successfully meet this burden is determined by the judge and jury, as are the consequences of success or failure.

In this context, there is often a distinction between the burden of production, which is the burden of presenting necessary evidence to the judge or jury, and the burden of persuasion, which is the burden of convincing the judge or jury that your argument is true, in light of the evidence that you presented.

The nature of the evidence that people must produce and the degree to which they must persuade the judge and jury vary in different situations.

For example, in criminal cases a prosecutor might have to prove their accusations “beyond a reasonable doubt”, whereas in civil cases the plaintiff might have a more lenient burden of proof, such as the standard of “preponderance of the evidence”, which requires showing that your claim is more likely true than not, in light of the evidence presented.

In addition, there are often differences between different trials in terms of which party has a burden of proof regarding which statements, and there can also be a different burden of proof requirement for one party, such as the prosecution, compared to other parties, such as the defense.


The burden of proof fallacy

What is the burden of proof fallacy

The burden of proof fallacy is a logical fallacy which occurs when someone claims that they don’t have a burden of proof with regards to their own arguments, or when someone attempts to shift their own burden of proof to someone else.

For example, if someone believes in ghosts, the burden of proof rests on them when it comes to showing that ghosts exist. Shifting the burden of proof to others, by claiming that no one has disproved the existence of ghosts, is fallacious, as is asking someone else to prove that ghosts don’t exist.


The burden of proof fallacy and falsifiability

People who use the burden of proof fallacy often make claims that are not falsifiable, meaning that they can’t be proven wrong, and then ask others to disprove those claims, a task which is inherently impossible.

A common example which is used to illustrate this concept is called Russell’s teapot, as it was proposed by philosopher Bertrand Russell:

Many orthodox people speak as though it were the business of sceptics to disprove received dogmas rather than of dogmatists to prove them. This is, of course, a mistake. If I were to suggest that between the Earth and Mars there is a china teapot revolving about the sun in an elliptical orbit, nobody would be able to disprove my assertion provided I were careful to add that the teapot is too small to be revealed even by our most powerful telescopes. But if I were to go on to say that, since my assertion cannot be disproved, it is intolerable presumption on the part of human reason to doubt it, I should rightly be thought to be talking nonsense.

— From “Is There a God?” (1952), by Bertrand Russell, as cited in “The Collected Papers of Bertrand Russell”, Volume 11 (1997)

In short, Russell’s argument is that if he were to suggest that there is a teapot revolving around the sun, nobody would be able to disprove this claim, provided he were careful to add that the teapot is too small to be revealed by our most powerful telescopes, or by any other means of observation that we might have.

Essentially, Russell’s point is that a claim should not be taken as true simply because others are unable to disprove it, especially in cases where the claim is phrased in a way that makes the task of disproving it impossible.

The important thing to remember with regards to the connection between the burden of proof and falsifiability is that when a person makes a certain claim, the burden of proof lies with them to prove it. They cannot transfer this burden of proof to others by saying that their claim is right unless proven otherwise, especially in cases where their claim is impossible to disprove in the first place.


Examples of the burden of proof fallacy

There are many different ways in which people can use the burden of proof fallacy. For example:

Marketer: our new diet pills are guaranteed to help you lose weight.

Interviewer: are they safe though?

Marketer: do you have any reason to think that they’re not?

The problem here is that the burden of proof lies with the marketer to prove that their pills are safe, even if the interviewer doesn’t have specific reasons to think that they’re not.

Another example of the burden of proof fallacy is the following:

Bob: climate change is a hoax, you know.

Alice: really? Where’s the proof of that?

Bob: I read it on a website.

In this example, Bob deflects his burden of proof by attributing his statement to a secondary source. While it can sometimes be reasonable to report what a secondary source says on a certain topic, people often use this technique in an attempt to evade their burden of proof, by simply attributing their statements to a different source, without providing any supporting evidence for their statement.

Furthermore, people often combine this technique with the use of vague and ambiguous language, in an attempt to further avoid responsibility for their burden of proof, as in the following example:

Alice: they say that vaccines are bad for you.

Moreover, this technique for avoiding the burden of proof is sometimes combined with denial of commitment, where the speaker attributes the argument that they are making to someone else, while explicitly avoiding taking responsibility for that argument. For example:

Alice: Jenny says that vaccines are bad for you, and that you shouldn’t vaccinate your kids.

Bob: is there any empirical proof that supports this?

Alice: I’m just telling you what Jenny says.

Here, Alice doesn’t take direct responsibility for her statements, but the way in which she phrases them shows implicit support for this incorrect stance. By doing this, Alice conveys her stance on the subject indirectly, without committing to it in a way that would require her to provide proof of her implicit assertions.

While it can be reasonable to make a statement that relies on the authority of others in some cases, this should be done carefully, and in a manner which avoids relying on fallacious reasoning.

Specifically, it can be reasonable to use someone else’s opinion as supporting evidence in order to satisfy your own burden of proof, as long as you acknowledge what you are doing and justify why their opinion on the topic is relevant, while also taking responsibility for the assertions that you made.

Overall, the burden of proof fallacy can appear in various forms, and can be combined with different strategies, such as attributing your statements to a secondary source, or denying direct responsibility for your statements. The one thing that all forms of the burden of proof fallacy have in common is that the person using this fallacy is either avoiding their own burden of proof, or is attempting to shift their burden of proof to someone else.


How to counter the burden of proof fallacy

As we saw above, there are various ways in which people might use the burden of proof fallacy. When you recognize that this has happened, there are three main things that you can do in response:

  • Call out the use of the fallacy. You can do this by demonstrating that your opponent has made certain claims which have a burden of proof, and that they have failed to provide the necessary evidence which is needed in order to support those claims. Furthermore, if your opponent attempts to shift the burden of proof to someone else, you can point this out too.
  • Shift the burden of proof back to your opponent. After demonstrating that your opponent has failed to provide the proof which is needed in order to support their claims, you can explicitly ask them to provide the necessary proof, or to retract those claims.
  • If possible, provide counter-proof yourself. Though the burden of proof may rest with your opponent, in some cases it can be beneficial to simply provide proof which supports your own claims and which contradicts your opponent’s claims, even if they have failed to support those claims in the first place. However, remember that a failure to provide counter-proof on your part does not necessarily constitute evidence which could be used to support their stance, especially if their assertions are phrased in a manner which makes them impossible to disprove.

Note that, as we saw earlier, in proper discourse people have a burden of proof with regards to the claims that they make, and they must either defend those claims or retract them, especially when challenged by their opponent.

However, most discourse is not conducted in a proper manner, and there are many cases where people will assert things without providing any supporting proof, and then continue to stand by their claims even when called out on their fallacious reasoning, and even when presented with evidence which shows that they are wrong.

When this happens, it is generally preferable to try to apply the principle of charity before addressing your opponent’s arguments, and to assume, at least initially, that their use of the burden of proof fallacy is unintentional, since doing so can lead to a more productive dialogue.

Nevertheless, there are cases where it is unnecessary to give people the benefit of the doubt, such as when the other person is clearly using this form of fallacious reasoning intentionally, despite knowing that what they are doing is wrong.

When this happens, and especially when it’s clear that the person you are talking to won’t accept that they’re wrong regardless of what you might say, you should reassess the situation, in order to decide whether you should continue the discussion.

In some cases, you might still want to continue the discussion. For example, this might happen if there is an audience watching the discussion, who might be persuaded by your arguments. However, in cases where there is nothing to gain from discussing the topic, you might benefit more from simply disengaging, since the argument won’t lead anywhere anyway.


Summary and conclusions

  • The burden of proof is the obligation that people have to provide sufficient supporting evidence for any arguments that they make, either as part of their original statements, or in response to their opponent’s questions.
  • Accordingly, the burden of proof fallacy occurs when someone claims that they don’t have a burden of proof with regards to their own arguments, or when someone attempts to shift their own burden of proof to someone else.
  • For example, if someone claims that ghosts exist, they have the responsibility to provide evidence which shows that this is true; they cannot simply claim that they are right unless someone else can disprove their theory.
  • People often attempt to evade the burden of proof by making claims that can’t be disproved, or by shifting the responsibility for their claims to a secondary and often vague source, while denying a personal commitment to those claims.
  • To counter the burden of proof fallacy, you should call out the use of the fallacy, shift the burden of proof back to your opponent, and if possible, provide counter-proof which contradicts your opponent, though remember that your inability to provide such proof doesn’t necessarily constitute evidence that your opponent is right.


Sturgeon’s Law: Why Ninety Percent of Everything Is Crap

Sturgeon’s law is the adage that “ninety percent of everything is crap”. This represents the belief that in general, most of the work which is produced in any given field is of low quality.

This principle is a beneficial rule of thumb, since keeping it in mind can help improve the way in which you assess and consume information, as well as the way in which you decide what projects to work on. As such, in the following article, you will learn more about Sturgeon’s law, see examples of it in various domains, and understand how you can apply it as a helpful guiding principle in many areas of life.


What is Sturgeon’s law

Sturgeon’s law is the adage that “ninety percent of everything is crap”. This represents the belief that the majority of works which are produced in any given field, whether they’re in the form of books, films, articles, or anything else, are of low quality.


Examples of Sturgeon’s law

There are examples of Sturgeon’s law in various fields. For example:

  • The majority of new books are of low quality, meaning that they aren’t worth reading.
  • The majority of new TV shows are of low quality, meaning that they aren’t worth watching.
  • The majority of new products are of low quality, meaning that they aren’t worth buying.

Of course, though Sturgeon’s law claims that “90% of everything is crap”, it’s difficult to quantify the exact portion of works which are of low quality. This is because, in many cases, it’s difficult to conclusively determine what constitutes “low quality” and what doesn’t.

This issue is especially prevalent in cases where the perceived quality of a given work strongly depends on the perspective of the person assessing it. For example, this means that even though there are many books that almost everyone would agree aren’t worth reading, there are quite a few books that some people will think are great while others will think they are terrible, simply because the latter group consists of people who aren’t interested in the topic that those books cover.

This is important to remember when considering the applications of Sturgeon’s law, since in many cases, the definition of what is good and what isn’t good is arbitrary, and depends on people’s personal preferences.


How to apply Sturgeon’s law

So far, we have seen what Sturgeon’s law is, together with some examples of how it applies in real life. Next, you will see some helpful advice on how you can implement Sturgeon’s law as a consumer, as a critic, and as a creator, followed by some general guidelines on how to use Sturgeon’s law as effectively as possible.


Applying Sturgeon’s law as a consumer

Applying Sturgeon’s law as a consumer is beneficial, because it can help you be more efficient in terms of how you spend your time, effort, and money when it comes to things such as consuming information or buying products.

In general, a good way to implement Sturgeon’s law is to think of your decision regarding what content to consume or what product to buy as a zero-sum game, where resources that you dedicate to something of subpar quality could instead be directly dedicated to something more worthwhile.

Essentially, this means that every moment you waste filling your mind with low-quality information is a moment you could instead spend on something more useful or enjoyable. Furthermore, low-quality information is often going to continue to take up valuable mental space in the long term, and could even end up displacing higher-quality information.

If you think about it, you will notice that there are many methods which we already use in an attempt to avoid the lower-quality 90% of the material out there. For example, before buying a book, we will often check the reviews that it has, in order to decide whether it suits our interests, and whether other people believe that it’s worth reading.

Search engines, such as Google, are another tool that helps us find high-quality, relevant information that we need. This is crucial, since the internet contains so much information that searches often yield anything from dozens to billions of search results, which we wouldn’t be able to sort through or find on our own.

Note that the nature of the information that you want to consume will vary based on the situation that you’re in. For example, if you just spent a long day at work, you might want to unwind by watching a silly comedy show, and that’s perfectly acceptable. Just because it’s not serious, that doesn’t mean that it’s not high quality and worthwhile.

In short, the main takeaway with regards to applying Sturgeon’s law as a consumer is this: most of what’s out there is low quality, whether we’re talking about books, films, TV shows, articles, products, or apps. Don’t waste resources, such as your time and money, on low-quality stuff. Instead, learn to identify the high-quality things, and focus on them as much as possible.


Applying Sturgeon’s law as a critic

Sturgeon’s law can also be beneficial to keep in mind when you’re playing the role of a critic, since it can help you remember to not waste your time criticizing things that are so subpar that they shouldn’t be taken seriously.

For example, in the book “Intuition Pumps and Other Tools for Thinking”, philosopher Daniel Dennett points to Sturgeon’s law as a reminder to not waste your time looking at the worst works published in a given field, when you can instead spend your time looking at the best, most exemplary works.

Furthermore, applying Sturgeon’s law as a critic can also be valuable when it comes to assessing things in a fair manner. This is because it’s important to keep in mind that a large portion of the works in a given field are likely to be low quality regardless of what the field is, so you shouldn’t judge a field based only on the fact that it has some subpar works, just as you shouldn’t focus solely on the weaker aspects of something when criticizing it.

Accordingly, the application of Sturgeon’s law in this context can be viewed as an extension of the principle of charity, which entails that you should focus on the stronger aspects of a claim when criticizing it.

In short, the main takeaway with regards to applying Sturgeon’s law as a critic is this: it’s easy to focus on the low-quality aspects of something when criticizing it, but this is often a waste of your time, and is unlikely to lead to a productive dialogue. Therefore, whenever it’s reasonable to do so, try to focus on the stronger aspects of whatever it is you are criticizing, rather than on its weaker aspects.


Applying Sturgeon’s law as a creator

It can also be beneficial to apply Sturgeon’s law if you’re someone who creates content or products, since it can help you figure out what kind of things to focus on in your work.

First, by assessing the current market and by seeing how your own work ranks in comparison to existing products, you can determine whether or not what you’re creating is worth your time, and whether it’s worth making it public. In many cases, unless you can create work that is in the top 10%, there is no point in publishing it, since you’ll just be contributing to the noise, by adding to the bottom 90% of works that most people ignore.

For example, when it comes to online articles, the vast majority of them aren’t read by more than a handful of people, because most of them just regurgitate the same small amount of often-repeated content that other posts on the topic contain. Accordingly, as we saw in the section on applying Sturgeon’s law as a consumer, if you search online for a topic, you will often encounter anything from hundreds to millions of results, but almost none of them are going to get any traffic aside from the few that are featured on the first page of the search engine.

There are, of course, exceptions to this, and there can be valid reasons to publish your work even if it won’t be in the top 10% of works that are created in your field. For example, if you feel that publishing your work, even if it’s not great, can be a learning experience for you that will help you improve your abilities in the long term, then you should likely publish it anyway.

In addition to this, another way in which understanding Sturgeon’s law can benefit you as a creator is by helping you realize that only a small minority of the work that you produce is going to have a real impact, and that that’s okay.

Finally, understanding Sturgeon’s law can also be beneficial when it comes to your learning, since it can help you set realistic goals for yourself.

For example, when it comes to improving your writing, Sturgeon’s law can help you understand that a lot of the work that you do is likely going to be low quality, especially at first. As such, when you’re comparing yourself to others, you shouldn’t just compare yourself to the top 10%, since they’re not necessarily representative of the market as a whole (though in general, if you’re going to be successful, they will be your main competition).

In short, the main takeaway with regards to applying Sturgeon’s law as a creator is this: since most of the works and products in the market are low quality, you should focus on creating something that stands out, and you shouldn’t publish something that just contributes to the general noise. As such, you should generally focus on displaying only your best work, unless you’re publishing something as part of your learning experience, in which case it’s entirely acceptable to produce work that isn’t at the top of the market.


General guidelines for applying Sturgeon’s law

As with any other philosophical principle, Sturgeon’s law should always be applied with some common sense. Most importantly, you should recognize the fact that Sturgeon’s law is simply a helpful rule of thumb, and should be treated as such, rather than as an absolute truth.

The portion of works which is worth your time might sometimes be 10%, but often it will end up being a nearby figure, such as 5% or 20%. As such, instead of trying to quantify Sturgeon’s law using exact metrics, you should instead focus on the underlying concept behind it, which is that only a small minority of the works in any given field are going to be of high quality.

In addition, keep in mind that the best works in a certain field aren’t necessarily randomly distributed between different creators. For example, in the case of literature, if only 10% of new books are worth reading, it’s likely that a significant portion of them are written by the same few authors.

Furthermore, keep in mind that based on Sturgeon’s law, you can expect 10% of the works to be good. However, out of that 10%, only a small portion will be truly great. The more you can focus on the great works, both in terms of the material that you consume and in terms of the work that you produce, the less you will waste your time on unimportant things.

Finally, when using Sturgeon’s law, you should also make sure to not blindly adopt it as a defense for a certain field, or for a certain collection of works. While Sturgeon’s law can be relevant in some cases, you should always mention it in a way that explains why it’s relevant, to avoid relying on fallacious reasoning.

Note: if you found the concept of Sturgeon’s law to be beneficial, you should read about the Pareto principle, which is a similar concept that denotes that a small portion of causes are responsible for a large portion of outcomes.


The origin of Sturgeon’s law

Sturgeon’s law was originally called Sturgeon’s revelation, and was proposed by American author Theodore Sturgeon in the 1950s, as part of his defense of the quality of science fiction books which were being published.

Specifically, according to the Oxford English Dictionary, Sturgeon’s law was first mentioned publicly around 1951, at a lecture at New York University, and was later popularized in 1953, at the WorldCon science-fiction convention. It was mentioned in writing for the first time in 1957, as part of the following quote:

“On that hangs Sturgeon’s revelation. It came to him that SF is indeed ninety-percent crud, but that also—Eureka!—ninety-percent of everything is crud. All things—cars, books, cheeses, hairstyles, people and pins are, to the expert and discerning eye, crud, except for the acceptable tithe which we each happen to like.”

— From issue #49 of Venture Science Fiction

Essentially, Sturgeon’s goal in formulating this aphorism was to point out that even if the majority of science fiction writing is bad, that isn’t necessarily something that should be used to attack science fiction as a genre, since the majority of works and products in any field are also bad.

As such, Sturgeon proclaimed that science-fiction writing isn’t significantly different from other forms of writing, despite the fact that critics often judged it more harshly. That is, even though it’s true that most science-fiction writing is bad, this isn’t unexpected, since most of the writing which is published in any genre is bad.

Note: though Sturgeon’s law is most commonly quoted as “90% of everything is crap”, it is often also referenced in various similar forms, such as “90% of everything is crud” or “95% of everything is crap”.


Summary and conclusions

  • Sturgeon’s law is the adage that “ninety percent of everything is crap”, which represents the belief that the majority of works which are produced in any given field are of low quality.
  • Sturgeon’s law extends to various fields and types of works, including everything from books and films to cars and video games.
  • Applying Sturgeon’s law can be beneficial when you play the role of a consumer or a critic, since it can help ensure that you spend your resources, in terms of things such as time, money, and effort, on the works and products that will benefit you the most.
  • Applying Sturgeon’s law can also be beneficial if you’re a creator, since it can help you figure out what to focus on in your work, and since it can help you form a realistic assessment of the market.
  • Two beneficial things to remember when applying Sturgeon’s law are that the distribution of high-quality to low-quality works isn’t necessarily going to be 90%-10%, and that out of the minority of the works that are high quality, there is still a difference between works that are simply good, and those that are great.


Nudge: How Small Manipulations Can Make a Big Difference



A nudge is a modification that is made to an environment in order to alter people’s behavior in a predictable way, without forbidding any options or changing their incentives on a significant level.

For example, if a school has the goal of reducing the amount of soda that students drink, then placing water bottles instead of soda cans near the register in the cafeteria counts as a nudge, while banning soda outright does not.

Nudges, in various forms, are widely used in our society, and can be a highly effective tool when implemented properly. In the following article, you will learn more about nudges, understand how they work, and see what you can do in order to implement them yourself as effectively as possible.


What is a nudge

A nudge is a simple modification that is made to an environment in order to alter people’s behavior in a predictable way, without forbidding any options or changing their incentives on a significant level. As such, nudges are seen as low-cost, behaviorally-informed, choice-preserving solutions to various personal and societal issues.

This means that nudges are generally easy to implement, highly effective, and allow people to make their own choices, which has led to their widespread adoption in both the private and the public sectors, in fields such as finance, health, and education.

Note: though the concept of nudging has appeared in various forms in psychological research throughout history, nudge theory was brought into prominence by behavioral economist Richard Thaler (who later went on to win the Nobel Prize for his work) and legal scholar Cass Sunstein, in their 2008 book “Nudge: Improving Decisions About Health, Wealth, and Happiness”.


Examples of nudges

There are many examples of how nudges can be used in various areas of life:

  • Nudges can help people take better care of their health. For example, sending people a simple reminder to schedule a dental check-up doubled the rate of people who signed up for an appointment.
  • Nudges can get people to make better financial decisions. For example, sending students a few personalized text messages helped many of them remember to refile their application for student aid.
  • Nudges can prompt people to make choices that benefit others. For example, using an opt-out system for organ donations, where people are automatically registered as organ donors unless they choose otherwise, instead of an opt-in system, where people have to actively register to become donors, significantly increased the number of people that are registered as organ donors.
  • Nudges can encourage people to pay better attention to their privacy. For example, reminding social-media users of who will see the content that they want to upload helped those users make better decisions regarding what content to post and where to post it.
  • Nudges can get people to take better care of the environment. For example, providing feedback to households about their and their neighbors’ electricity usage led people to reduce their energy consumption.

Furthermore, though many of these examples focus on how nudges can be beneficial in the short-term, research suggests that the impact of nudges can persist long after people encounter the nudge itself. For example, one study showed that the effects of nudges on people’s choice of a pension plan persisted for two decades after the original nudge had affected their decision, suggesting that simple nudges can influence people’s lives in the long term.


How nudges work

Nudges affect people’s choice architecture, which is the environment where people make decisions, by encouraging them to make a choice that is perceived as beneficial by the person who implemented the nudge.

To understand why nudges work, it’s necessary to first understand how people think and make decisions. In the past, economic and psychological models of human behavior viewed humans as homo economicus, meaning that these models saw people as perfectly-rational and self-interested agents, who always pursue their goals in an optimal manner.

Nudge theory acknowledges the fact that this model is wrong, since people often act in a way that is irrational, sub-optimal, and not necessarily focused on their own self-interest. Accordingly, nudge theory relies on a newer model of human cognition, in which people have two cognitive systems:

  • System 1: a fast and automatic cognitive system, which is responsible for intuitive processing.
  • System 2: a slow and deliberate cognitive system, which is responsible for conscious reasoning.

Based on this model, people often make decisions that do not align with their goals or with their self-interest, either because they rely on System 1 when they shouldn’t, and therefore end up making intuitive decisions that are bad for them, or because they engage System 2 but fail to conduct a valid reasoning process, which also causes them to make sub-optimal decisions.

For example, when people encounter a situation that is too complex for them because it offers too many options to choose from, they might intuitively choose to stick with the default option, which saves them from having to fully process all the available options. In this case, the issue is that people rely on System 1 to make a decision that generally necessitates the use of System 2.

Alternatively, when people have too many options to choose from, they might also struggle to choose the best option for them, even if they do engage System 2. In this case, when there is too much information for people to analyze properly or when the information that people have to analyze is too complex, they might end up making a sub-optimal decision, even when they truly try to figure out which option will be best for them.

Based on this model of cognition, nudges work by affecting people’s cognitive systems, and they can either target System 1, when they aim to influence people’s intuitive processing, or they can target System 2, when they aim to influence people’s deliberate reasoning.

For example, altering the default option by giving people water with their meal unless they ask for a soda is a nudge that generally targets System 1, since people’s decision regarding which drink to get is fairly intuitive. On the other hand, changing the way in which complex statistical information is presented in order to make it easier to understand is a nudge that generally targets System 2, since it affects the way in which people conduct their conscious reasoning.

An example of a prevalent form of irrational behavior that nudges can help prevent is people’s value-action gap, which is the discrepancy between the values that people claim to hold and the actions that they actually take.

For example, researchers who interviewed people in several countries found that between 50% and 90% of people said that they favor energy from renewable resources, but only around 3% actually bothered to use it. That is, despite the fact that the majority of people set clean-energy usage as a goal for themselves, few of them actually took the steps needed in order to pursue this goal.

Here, a simple nudge, in the form of changing people’s default contract to one that involves using renewable energy, was able to increase the number of people who use clean energy by approximately 45%, a success that has been replicated in other studies on the topic. In this case, the nudge likely targeted people’s intuitive system (System 1), since many people will decide to simply stick with a default option instead of going through the trouble of picking something else.

Overall, nudges work because people are often irrational in the way they make decisions, which causes them to make decisions that are bad for them. This occurs when people rely on their intuition in cases where they shouldn’t, and when they fail to conduct a valid reasoning process.

Accordingly, nudges can either help people make better intuitive decisions, by targeting their intuitive system (System 1), or they can help them make better deliberate decisions, by targeting their conscious reasoning system (System 2). The specific mechanisms that nudges use in order to affect these systems vary across situations, but they will generally either encourage people to make a better intuitive decision, to use their conscious reasoning instead of their intuition where necessary, or to conduct a more effective conscious reasoning process.


How to nudge

So far, we have seen what nudges are and how they work. Next, you will see what types of nudges you can use, followed by a few beneficial guidelines that will help you implement nudges as effectively as possible.


Types of nudges you can use

There are various types of nudges that you can use in order to encourage people to make better decisions. Though nudges can take many forms, the most common types of nudges that people usually implement are the following:

  • Changing the default option. People will often stick with the default option due to inertia, so by changing this option to the optimal one you increase the chance that people will stick with the optimal choice. For example, in the context of organ donations, switching to a default opt-out system, where people are presumed to be organ donors unless specified otherwise, led to double the rate of registered donors compared to an opt-in default, where people have to actively register in order to become a donor.
  • Informing people. Informing people of the benefits of a certain action can increase the likelihood that people will perform it. For example, giving people a simple flyer with information about their employer’s retirement savings plan led people to contribute more to that plan, which allowed them to take advantage of the plan’s substantial tax advantages and of their employer’s matching contributions.
  • Reminding people of the situation. Reminding people about the situation at hand can often encourage them to make better decisions. For example, reminding doctors about the problem of antibiotic resistance in society reduced the amount of unnecessary antibiotics that they prescribed to patients.
  • Reminding people to do something. Reminding people of something that they need to do can prompt them to take action. For example, sending people a reminder that they need to schedule a doctor’s appointment got more people to do so when necessary.
  • Getting people to slow down. Encouraging people to slow down their decision-making process can often help them make better decisions. For example, encouraging people to wait a bit between the moment when they first hit the ‘submit’ button on social media posts and the moment that the post is actually submitted led people to edit and cancel posts that they would have otherwise regretted making public.


Guidelines for good nudges

When designing a nudge, you should first ask yourself what problem you are trying to solve, and how nudges can help you solve it, since identifying these two things is the key to creating an effective nudge.

In addition, there are a few guidelines that you should keep in mind when designing the nudge that you want to use:

  • The nudge should fit your intended audience. For example, you will likely want to use a different nudge if you’re targeting college students than you would if you were targeting seniors in a retirement home, even if your goal is to get both groups to make a similar decision. This is an important factor to consider, since different nudges work differently on different people, as the efficacy of nudges can be influenced by factors such as political views and social class.
  • Different nudges work in different situations. In addition to accounting for who your intended audience is, you should also make sure that the type of nudge that you intend to use is appropriate given the situation in which you intend to use it. Essentially, this means that context plays a big role in how effective a nudge is, and just because a certain nudge worked well in one case doesn’t mean that it will also be effective in other situations.
  • Smaller nudges can sometimes be more effective than bigger ones. For example, changing the default option by only a little can often be more effective than changing it drastically, since a major change might make people feel antagonized, which could cause them to overcome the inertia of sticking with the default option. At the same time, however, it’s important not to fall into the trap of making nudges so minor that they are rendered ineffective.
  • Nudges don’t have to be hidden. Nudges can often work even when people are aware of them. You can decide on how subtle to make your nudge based on what you’re trying to accomplish, as well as on your intended audience and on the context in which the nudge is applied.

Furthermore, there are a few additional guidelines that you should keep in mind with regards to how you present your nudges to others:

  • People should perceive the nudge as beneficial. To ensure that people accept the nudge and to reduce the chance of backlash, you should make sure that all the relevant stakeholders feel that the nudge benefits them in some way. Remember that, in general, the more effective the nudge is, the more likely people are to accept it.
  • People should feel that the nudge respects their ability to choose. The nudge shouldn’t be viewed as infringing on people’s right to choose, or on their autonomy. Remember that for an intervention to count as a nudge, it must respect people’s ability to choose freely.
  • People generally prefer nudges that target their conscious reasoning. There is generally more support for nudges that appeal to people’s conscious thinking, compared to nudges that target their intuition. However, there is some variation with regards to this, and an effective nudge that appeals to people’s intuition can also be well-received under the right circumstances.

In addition, after you set up the nudge, make sure to look at the data and check whether it works.

To do this, you need to start with a clearly defined idea of what metrics you are trying to improve, while also being aware of any potential issues that may arise, and of any other secondary factors which might be influenced by your nudging. As you analyze the data, ask yourself how you can improve the nudge, and be ready to notice any changes that you might not have expected.
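As a minimal sketch of this kind of evaluation, the hypothetical example below compares sign-up rates between a control group and a nudged group using a two-proportion z-test. All of the numbers, as well as the `nudge_effect` function name, are illustrative assumptions for the sake of the example, not data from the studies mentioned above.

```python
import math

def nudge_effect(control_successes, control_total, nudged_successes, nudged_total):
    """Compare sign-up rates between a control group and a nudged group
    using a two-proportion z-test (normal approximation)."""
    p1 = control_successes / control_total
    p2 = nudged_successes / nudged_total
    # Pooled proportion under the null hypothesis of no difference
    pooled = (control_successes + nudged_successes) / (control_total + nudged_total)
    se = math.sqrt(pooled * (1 - pooled) * (1 / control_total + 1 / nudged_total))
    z = (p2 - p1) / se
    return p1, p2, z

# Hypothetical data: 120 of 1,000 control users scheduled a check-up,
# versus 180 of 1,000 users who received a reminder nudge.
p1, p2, z = nudge_effect(120, 1000, 180, 1000)
print(f"control rate: {p1:.1%}, nudged rate: {p2:.1%}, z = {z:.2f}")
```

Under the normal approximation, a z-score above roughly 1.96 suggests that the difference in rates is statistically significant at the 5% level; in this hypothetical example z ≈ 3.76, so the reminder nudge appears effective.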

Finally, when it comes to implementing nudges, remember that nudges often represent a solution that is only partial. Specifically, though nudges are often effective and easy to implement, they generally can’t completely solve the issues that they are meant to address, and they should therefore be viewed as a way to supplement other efforts which are meant to address those issues.


Remove negative nudges

Negative nudges are mechanisms which are usually created unintentionally, and which cause people to make decisions that are bad for them, by altering their decision-making environment.

There is no inherent difference between ‘positive nudges’ and ‘negative nudges’ in terms of how the nudges work, and the only difference between the two is that positive nudges are nudges that prompt people to make “good” decisions, while negative nudges are nudges that prompt people to make “bad” decisions.

Accordingly, negative nudges usually consist either of barriers which make it harder for people to take action that’s good for them, or of mechanisms which discourage them from doing so.

Some examples of negative nudges are the following:

  • Bad default option. For example, if the default status of organ donors is ‘opt-in’, meaning that people have to actively register in order to become organ donors, that means that many people who would be willing to donate their organs aren’t going to be listed as organ donors.
  • Easy-to-select bad option. For example, if unhealthy foods are easier to reach in the cafeteria, this encourages people to pick them, despite the fact that they’re unhealthy.
  • Hard-to-select good option. For example, if registering for a good retirement plan is difficult or requires a large number of inconvenient steps, fewer people are going to do it, despite the fact that it’s beneficial.

As such, it’s important to identify negative nudges, and remove them where possible.

Note that, in some cases, removing a negative nudge entails inserting a positive nudge in its place. This is what happens, for example, when you replace a negative default option with a positive one.

However, that’s not always the case, and it’s possible to remove a negative nudge without necessarily replacing it with a positive one. This is what happens, for example, when you remove a negative default option and, instead of replacing it with another default, make users pick an option themselves.


Attitudes toward nudging

As we have seen, many governments and organizations are happy to use nudges on a large scale, due to their effectiveness, ease of implementation, and choice-preserving nature. Furthermore, there is compelling evidence that the citizens of many countries support the use of nudges in general, though there is, of course, some variation between countries and between individuals with regards to this.

In this context, nudging is often viewed as a form of libertarian paternalism, which represents the idea that it’s acceptable and even desirable to manipulate people when that manipulation prompts them to make choices that are believed to be better for themselves or for society, as long as they still have the full freedom to make whatever choice they want.

However, there have also been some criticisms of the idea of nudging, and some have called for the outright rejection of the use of nudges. The ethics of nudging is still being discussed, and strong arguments have been made both in favor of and against nudging.

The arguments in favor of nudging revolve around the factors that we saw so far, namely around the fact that nudges can prompt people to make better decisions, while preserving their ability to choose freely.

On the other hand, many of the arguments against nudging revolve around the belief that manipulation of people’s choice is unethical even if it preserves their freedom of choice, and around the belief that nudges prompt people to make decisions that they might not agree are beneficial for them.

Furthermore, there are criticisms regarding how nudging is used in fields such as marketing, where nudges are implemented with the intent of getting people to buy certain products, rather than with the intent of helping them make decisions that are better for them.

Overall, though nudging is generally perceived positively by the population, especially when implemented correctly, there are still those who reject it on an ethical basis, at least in some contexts.

Nevertheless, one thing that people on both sides of the debate can agree on is that if nudges are used, they should be implemented with caution, especially when they are implemented by policymakers on a large scale. Furthermore, when nudges are used in such cases, there should be some form of oversight which ensures that nudges are implemented in a way that respects people’s choices, and gets them to make decisions that are truly better for them.


Summary and conclusions

  • A nudge is a modification that is made to an environment in order to alter people’s behavior in a predictable way, without forbidding any options or changing their incentives on a significant level.
  • Nudges are viewed as effective, low-cost, choice-preserving solutions to various problems, which has led to their adoption in a variety of fields, including healthcare, education, and finance.
  • Examples of nudges include sending people a reminder that they need to schedule a doctor’s appointment, providing people with information regarding how much electricity they use compared to their neighbors, and reminding people what sort of audience is going to see the posts that they upload to social media.
  • The main types of nudges that you can use involve changing the default option, informing people about a certain issue, reminding people about a certain issue that they are already aware of, reminding people to take a certain action, and getting people to slow down their reasoning process.
  • Good nudges should fit their target audience and the situation in which they are applied, and should be viewed by people as being beneficial and as respecting their freedom of choice.


If you found this article interesting, and if you want to learn more about nudges and about how you can implement them, you should read the highly acclaimed book on the topic “Nudge: Improving Decisions About Health, Wealth, and Happiness”.