The Masked-Man Fallacy: Twisting Arguments Through Invalid Substitutions

The Masked-Man Fallacy

 

The masked-man fallacy is a logical fallacy in which substituting one entity for another, identical entity leads to a flaw in the logic of an argument. A classic example of this fallacy is the following:

Premise 1: I know who my father is.

Premise 2: I don’t know who the masked man is.

Fallacious conclusion: The masked man is not my father.

Intuitively, we know that this conclusion is fallacious, since we cannot be certain, based on these premises alone, that the masked man is not my father. The following article will show you the logical flaw in this type of reasoning, and explain the role that this fallacy plays in everyday arguments.

 

Understanding the masked-man fallacy

The masked-man fallacy is a formal logical fallacy, meaning that the flaw lies in the structure of the argument itself. Specifically, the problem is that substituting one entity for an identical one can change the truth value of a statement when that statement describes our knowledge of the world, rather than the world itself.

Simply put, this means that even if character A and character B are the same person, the fact that we know different things about each character means that we cannot freely substitute one for the other. Consider the following example:

Premise 1: Bruce Wayne is Batman.

Premise 2: Batman saved Gotham.

Conclusion: Bruce Wayne saved Gotham.

Since Bruce Wayne is in fact Batman, it is true that if Batman saved Gotham, then we can say that Bruce Wayne saved Gotham. Here, since we are talking about facts, we can substitute these two characters freely.

However, the same doesn’t hold if we’re talking about people’s knowledge of these facts. Consider the following example:

Premise 1: The citizens of Gotham know that Batman saved their city.

Premise 2: Bruce Wayne is Batman.

Fallacious conclusion: The citizens of Gotham know that Bruce Wayne saved their city.

The conclusion in this example is obviously invalid, since the citizens of Gotham don’t necessarily know that Bruce Wayne is Batman, and therefore don’t necessarily know that he saved their city. Even though Bruce Wayne and Batman are the same person from a logical perspective, the fact that people know something about one of them doesn’t mean that they know the same thing about the other.

Note: This fallacy is based on the principle of the indiscernibility of identicals (often referred to as Leibniz’s law). Essentially, this principle states that if two entities are identical, then they share all of the same properties, which is what normally licenses substituting one for the other. However, what people know or believe about an entity is not considered a property of that entity in this sense. Therefore, the fact that Bruce Wayne and Batman are identical (from a logical perspective) doesn’t mean that people’s knowledge of them is also identical.
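To make the distinction concrete, here is a minimal formalization of the two Batman arguments; the notation is introduced here purely for illustration, with b standing for Batman, w for Bruce Wayne, and K_c(...) for “the citizens of Gotham know that...”:

\[
b = w,\quad \mathit{Saved}(b) \;\vdash\; \mathit{Saved}(w)
\]

\[
b = w,\quad K_c\bigl(\mathit{Saved}(b)\bigr) \;\nvdash\; K_c\bigl(\mathit{Saved}(w)\bigr)
\]

The first substitution is valid, because the statement is purely about the world. The second fails, because the knowledge operator K_c creates what logicians call an intensional context, and substituting identical entities inside such a context is not guaranteed to preserve truth.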

 

The masked-man fallacy in arguments

This fallacy appears in arguments whenever an invalid substitution occurs, as we saw in the examples above. That is, it occurs when someone takes for granted that because we know something about X, we must also know the same thing about Y, where X and Y are different representations of the same thing.

In arguments and debates, this fallacy is often used in conjunction with a strawman argument to attack people, by claiming that their support for one aspect of a person or an idea means that they also support a different aspect of that person or idea.

Consider the following theoretical example:

Alice: I support the plan to reduce federal healthcare coverage.

Bob: So you’re saying you support a plan for killing thousands of poor citizens.

Bob is twisting Alice’s stance in order to make it easier to attack, by using an invalid substitution. Specifically, even if Bob is correct, and the plan that Alice supports (reducing healthcare coverage) would indeed have the outcome that he describes (killing thousands), that does not mean that this outcome is what Alice supports in her mind.

This distinction is crucial: what people know about something is distinct from that thing’s actual properties. Even if the plan has the outcome that Bob predicts, it is incorrect to assume that Alice supports this aspect of it. If Bob wants to avoid using this fallacy, he can make a similar, but slightly different argument against Alice’s stance:

Alice: I support the plan to reduce federal healthcare coverage.

Bob: The problem is that this plan will lead to the deaths of thousands of poor citizens.

Here, Bob’s argument against the plan itself is similar, but he doesn’t use fallacious reasoning to suggest that Alice directly supports the plan to kill thousands of people. Instead, he argues directly against the plan that Alice supports, without assuming that she supports the aspects of the plan that he finds problematic.

 

Countering the masked-man fallacy

The initial strategy for countering this sort of argument is pointing out the flaw in your opponent’s logic. You do this by showing why your opponent’s reasoning is fallacious: just because you support X does not mean that you also support Y.

However, while doing this is valid from a logical perspective, the problem is that it doesn’t help you argue against whatever point your opponent is trying to make against your stance. Furthermore, if you focus too much on the gap in their reasoning, there is a risk that you will appear to implicitly agree with their overall point, which is whatever negative thing they are saying about your stance.

Therefore, you need to make sure that after briefly highlighting the gap in your opponent’s logic, you move on to defending your main stance. For example:

Alice: I support the plan to reduce federal healthcare coverage.

Bob: So you’re saying you support a plan for killing thousands of poor citizens.

Alice: I am definitely not saying that. When did you hear me say that I support a plan for killing people? Or did you just make that up?

Bob: Well, you might not think that, but this will be the outcome of the plan to reduce federal healthcare coverage.

Alice: I disagree with what you’re saying. Recent statistics show…

Here, Alice first points out the flaw in Bob’s reasoning, while simultaneously putting him on the defensive. Then, after Bob is forced to adjust his original, fallacious argument, Alice moves on to argue against his main attack on her stance.

 

Using the masked-man fallacy

You can use the masked-man fallacy yourself to attack your opponent’s arguments. You do this by substituting one aspect of their stance for a different aspect of that stance that is more difficult for them to defend. For example, as we saw above, if they say that they support a plan for reducing healthcare coverage, you can attack their stance by saying that they support a plan for killing poor citizens.

The difference between doing this and using a logically sound argument is that you attack the person’s beliefs directly, rather than attacking the stance for which they express support. The advantage of using this fallacy is that it can make for a more powerful and personal argument that puts the other person on the defensive.

There are, of course, also disadvantages to using this fallacy. The first is that using it leads to a flaw in your reasoning, which you might want to avoid. However, using this fallacy doesn’t weaken your argument too much, since even if your opponent does counter it successfully, you will generally just go back to discussing the original point.

The second and bigger disadvantage of using this fallacy is that by making your attack more personal, you will usually push the person you are arguing with into a defensive mode, in which they automatically reject your arguments. This could cause them to support their original stance more strongly, even if you manage to make a good point overall. Because of this, if you actually want to get your point across, be wary of using this type of fallacy in discourse.

 

Summary and conclusions

  • The masked-man fallacy is a logical fallacy where substituting one entity for an identical one leads to a flaw in reasoning, as a result of the different knowledge that we have about each entity.
  • For example, if Bruce Wayne is Batman, and the citizens of Gotham know that Batman saved their city, it would be incorrect to assume that the citizens of Gotham know that Bruce Wayne saved their city.
  • In everyday arguments, this fallacy appears when people assume that because someone supports one aspect of an idea (e.g. reducing healthcare coverage), they must also support other aspects of that idea (e.g. killing poor people). Even if both descriptions of the idea are accurate, it is incorrect to assume that someone who supports one of them also knows about and supports the others.
  • You can counter this fallacy by briefly pointing out the flaw in your opponent’s reasoning and putting them on the defensive, before moving on to attack the main point in their argument.
  • You can also use this fallacy yourself, in order to make your arguments against an opponent’s stance more personal and powerful, which will put your opponent on the defensive.

 


The Protégé Effect: How You Can Learn by Teaching Others

The Protégé Effect: How You Can Learn Better by Teaching Others

 

The protégé effect is a psychological phenomenon whereby we learn information better when we teach it, or intend to teach it, to others than when we learn it just for ourselves. The following article will show you how this effect works, and how you can take advantage of it in order to learn more effectively.

 

How the protégé effect works

The protégé effect helps us learn better through several mechanisms:

  • Expecting to teach others encourages the use of effective learning strategies, such as structuring the material and seeking out key pieces of information.
  • Expecting to teach also encourages metacognitive processing, which makes us more aware of our learning process, and helps us improve the way we learn.
  • Learning the material with the intent of teaching often involves the use of additional techniques beyond those used when learning for yourself, which leads to a more comprehensive understanding of the material.
  • Expecting to teach also increases the motivation to learn, meaning that students generally make a greater effort to learn for those that they will teach than they do for themselves.

Taken together, this all means that students who learn material with the intention of teaching it later perform better when tested on that material than those who learn it just for themselves.

These benefits are not limited to an academic setting; studies show that preparing to teach can also improve motor learning and enhance information processing when learning a physical task, such as how to play a sport.

Furthermore, there are additional benefits to teaching others, such as improved communication skills, increased confidence, and improved leadership ability. Most importantly, studies show that peer-teaching is also highly helpful to the students being taught, since they often learn better when their teacher is someone who is close to them in terms of social and cognitive distance.

Note: interestingly, some studies suggest that the fact that older siblings tend to have a higher IQ than their younger siblings can be attributed to the fact that the older siblings act as tutors in the family, at an age when they experience significant cognitive development. This is an example of the powerful potential of the protégé effect, and of its long-term benefits.

 

How to take advantage of the protégé effect

You can benefit from the protégé effect using several techniques:

  • Learn the material as if you are going to teach it to others: this involves getting to a level where you understand the material well enough that you would feel comfortable teaching it to someone else, and answering any questions that they might have on the topic. Try to envision what it would be like if you had to teach the material later, and study accordingly.
  • Pretend to explain the material to someone: you will benefit more from the protégé effect if you at least act as if you are teaching the material to someone. Do it aloud, and pretend that you’re talking to a real person; the more you feel like you’re engaging in the act of teaching, the more you will benefit from the protégé effect.
  • Actually teach the material to other people: you can either do this in a one-on-one setting, or in a group. At the end of the day, knowing that you are actually going to teach someone will give you the greatest boost to motivation, and will help you benefit the most from the experience of teaching.

You can use any combination of these techniques that you want. However, before deciding what to do, make sure to take into account the time and effort cost associated with each technique, and weigh it against the potential benefits.

Specifically, teaching someone in reality will allow you to benefit the most from the protégé effect, but it isn’t always convenient to do this. On the other hand, learning the material with the intention of teaching it, and pretending to teach someone, are both slightly less effective methods, but they might offer a better ratio of learning benefits to the effort that they require.

At the end of the day, the right choice depends on your personal situation, preferences, and goals. Ask yourself which method you think will help you achieve the best results, and use that. Feel free to adjust whenever you feel it’s necessary.

 

Summary and conclusions

  • The protégé effect is a psychological phenomenon whereby we learn information better when we teach it, or intend to teach it, to others than when we learn it just for ourselves.
  • This effect improves our ability to learn, by encouraging the use of effective learning strategies, and by increasing our motivation to study the material.
  • Teaching also offers additional benefits, such as improved communication skills, increased confidence, and improved leadership ability.
  • To take advantage of this effect, you can either learn the material as if you were going to teach it, pretend to teach it to someone, or teach it to other people in reality.
  • The more you feel like you are engaging in the act of teaching, the more you will benefit from the protégé effect. However, when choosing which technique to use, take into account how much effort it will require to set it up, and decide what to do based on your personal preferences and goals.

 


Authority Bias: the Milgram Obedience Experiment

Authority Bias: the Milgram Experiment and the Dangers of Blind Obedience

 

The authority bias is the tendency to obey the orders of an authority figure, even when you strongly believe that there is something wrong with those orders. Experiments show that this tendency is much more powerful than most people assume, and in the following article you will learn how this bias affects people, and how you can fight against its dangerous influence.

 

The Milgram obedience experiment

The first and most infamous study of the authority bias was conducted in 1961 by Stanley Milgram, a psychologist at Yale University, in what became known as the “Milgram experiment”.

The goal of this experiment, which was inspired in part by the events of the Holocaust, was to see whether people are willing to follow orders from an authority figure, when those orders violate their moral beliefs.

The procedure for the experiment was simple yet chilling:

  • There were three individuals involved: the first was the Experimenter, who served as the authority figure running the experiment. The second was the Teacher, who was the subject of the experiment. The third was the Learner, who pretended to be another subject in the experiment, but who was in fact an actor.
  • The experiment started with the subject and the actor each drawing a slip of paper, to determine whether they would play the role of the teacher or the learner. However, in reality both slips said “Teacher”, and the actor would lie and say that his said “Learner”, thus guaranteeing that the subject would always play the role of the teacher.
  • After assigning roles, the teacher and the learner were taken into an adjacent room, and the learner was strapped into what looked like an electric chair, with an electrode attached to his wrist. The subject was told that the straps were there to prevent excessive movement while the learner was being shocked, though in reality the goal was to make it impossible for the learner to escape the situation himself.
  • The subject of the experiment was then shown how to operate an authentic-looking shock generator, with 30 switches going from 15 volts up to 450 volts in 15-volt increments. These switches were labeled with verbal designations ranging from “Slight Shock” up to “Danger: Severe Shock”, with the last two switches past that simply marked as “XXX”. Before starting, the subject was given a sample 45-volt shock, in order to convince him that the shock generator was real, and to demonstrate the pain of being shocked by it, though in reality no shocks were delivered in the experiment beyond this one.
  • The learning task itself was relatively simple: the teacher, who was the subject of the experiment, read a list of word-pairs to the learner, who was the actor strapped into the electric chair. Then, the teacher read aloud the first word out of each pair, and the learner had to pick one of four options, using a signal box, in order to indicate the second word in the pair.
  • The subject was told to administer an electric shock each time the learner picked the wrong answer. Furthermore, he was told to increase the intensity of the shock each time this happened, by moving to the next switch in the generator, and to announce the shock level aloud each time, in order to ensure that he remained cognizant of this increase.
  • The subject was told that once he finished going through the list, he needed to start over again, and continue administering shocks until the learner managed to remember all the pairs correctly.

In practice, this is what happened once the experiment started:

…no vocal response or other sign of protest is heard from the learner until Shock Level 300 is reached. When the 300-volt shock is administered, the learner pounds on the wall of the room in which he is bound to the electric chair. The pounding can be heard by the subject. From this point on, the learner’s answers no longer appear on the four-way panel. At this juncture, subjects ordinarily turn to the experimenter for guidance. The experimenter instructs the subject to treat the absence of a response as a wrong answer, and to shock the subject according to the usual schedule. He advises the subjects to allow 10 seconds before considering no response as a wrong answer, and to increase the shock level one step each time the learner fails to respond correctly. The learner’s pounding is repeated after the 315-volt shock is administered; afterwards he is not heard from, nor do his answers reappear on the four-way signal box.

At any point during the experiment, the subject could indicate that they wished to stop. Any time this happened, the experimenter would tell the subject the following things, in order, using a firm but polite tone:

Please continue.

The experiment requires that you continue.

It is absolutely essential that you continue.

You have no other choice, you must go on.

If, after saying all four lines, the subject still refused to carry on with the experiment, the experiment was stopped.

 

Results of the experiment

Before starting the experiment, Milgram ran a short poll, asking people what proportion of the subjects they believed would be willing to go up to the highest shock level. On average, people thought that only approximately 1 in 100 subjects would be willing to do so.

In reality, out of the 40 subjects in the study, 26 obeyed the experimenter’s orders to the end, and continued punishing the victim until they reached the highest level possible on the shock generator, at which point the experiment was stopped by the experimenter.
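Putting the predicted and observed figures from above side by side makes the gap explicit:

\[
\text{predicted obedience} \approx \frac{1}{100} = 1\%, \qquad \text{observed obedience} = \frac{26}{40} = 65\%, \qquad \text{observed defiance} = \frac{14}{40} = 35\%
\]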

Out of the 14 subjects who defied the experimenter, every single one was willing to go above the shock level labeled “Very Strong Shock”, contrary to prior expectations. In addition, only 5 of the 14 stopped at the 300-volt level, which is the point at which the victim first starts banging on the wall after getting shocked.

Furthermore, after the experiment was over, subjects were asked to rate how painful they thought the last shocks they administered were to the learner, on a scale from 1 (“not painful at all”) to 14 (“extremely painful”). The most common response was 14, and the mean response was 13.4, indicating that subjects genuinely believed that they were causing extreme pain to the learner, even as they continued administering shocks.

This is not to say that subjects were comfortable shocking the victim. In fact, nearly all of them appeared to be under extreme stress:

Subjects were observed to sweat, tremble, stutter, bite their lips, groan, and dig their fingernails into their flesh. One sign of tension was the regular occurrence of nervous laughing fits. Fourteen of the 40 subjects showed definite signs of nervous laughter and smiling. The laughter seemed entirely out of place, even bizarre. Full-blown, uncontrollable seizures were observed for 3 subjects. On one occasion we observed a seizure so violently convulsive that it was necessary to call a halt to the experiment. In the post-experimental interviews subjects took pains to point out that they were not sadistic types, and that the laughter did not mean they enjoyed shocking the victim

The conflict between what subjects’ conscience told them and what they ended up doing is striking, because it shows us that they obeyed the experimenter’s orders not because they enjoyed them, but because they could not bring themselves to disobey:

I observed a mature and initially poised businessman enter the laboratory smiling and confident. Within 20 minutes he was reduced to a twitching, stuttering wreck, who was rapidly approaching a point of nervous collapse. He constantly pulled on his earlobe, and twisted his hands. At one point he pushed his fist into his forehead and muttered: “Oh God, let’s stop it.” And yet he continued to respond to every word of the experimenter, and obeyed to the end.

Even the people who defied the experimenter’s orders were often apologetic for doing so. One subject said:

He’s banging in there. I’m gonna chicken out. I’d like to continue, but I can’t do that to a man. . . . I’m sorry I can’t do that to a man. I’ll hurt his heart. You take your check. . . . No really, I couldn’t do it.

Overall, the interesting thing is that both those who defied the experimenter and those who obeyed him to the end knew that continuing to administer shocks was the wrong thing to do. But while some of them kept going, others decided to stop:

I think he’s trying to communicate, he’s knocking. . . . Well it’s not fair to shock the guy . . . these are terrific volts. I don’t think this is very humane. . . . Oh, I can’t go on with this; no, this isn’t right. It’s a hell of an experiment. The guy is suffering in there. No, I don’t want to go on. This is crazy. [Subject refused to administer more shocks.]

This shows that it’s not just about having a conscience which tells you right from wrong. Rather, it’s also about having the willingness to act, and to do the right thing when you know you should do it.

 

Replications and variations

You might assume that this all happened because the researchers selected a certain type of person for the experiment. In reality, however, the subjects came from a wide range of backgrounds. They were between the ages of 20 and 50, representing occupations such as salesman, engineer, teacher, and laborer, and ranging in educational level from someone who had not finished elementary school, to those who had doctorates and other professional degrees.

Furthermore, these results were replicated by other researchers. Their studies examined various populations, including people from completely different cultures than the one in the original study, as well as children as young as 6. In every case, the researchers found a similar pattern of behavior.

Interestingly, Milgram himself conducted a number of follow-up experiments, with different variations of the original setup, which led to different rates of disobedience:

  • Ensuring that the subject could hear the victim screaming in agony and begging to be released barely had an effect on defiance rates, which only went up from 34% to 38%.
  • Placing the subject in the same room as the victim brought up the defiance rate to 60%.
  • Having the subject force the victim’s hand onto the shock plate while electrocuting him brought the defiance rate further up, but still only to 70%.
  • Removing the experimenter from the room where the subject was, and having him give instructions by telephone, brought up the defiance rate to 78%, even though the subject could only hear the banging on the wall in this condition. Interestingly, some subjects in this case also administered weaker shocks than they were supposed to, and lied to the experimenter about doing so.

All these studies also tried to answer the question of who is likely to obey, and who is likely to be defiant. However, while we know that various personality traits can affect this choice, the way in which they do so remains relatively unclear. The only thing we know for certain is just how willing most people are to follow orders which are given by an authority figure, even when they know that these orders are wrong.

 

How to mitigate the authority bias

These experiments illustrate the dangerous power of using authority to convince others to do your bidding, which can work even when the authority is mostly implied, and there is no punishment for disobedience.

While the experiments themselves were very dramatic, in our everyday life the influence of the authority bias is less notable, though it still plays an important role in the way we make decisions. One study, for example, found that people are more likely to discriminate against minorities in hiring situations, if they receive justification for doing so from an authority figure.

The main problem is that most people immediately assume that “that’s not something I would do”. While this may be true, it is difficult to know what you would do in reality, until you’re actually in such a situation. We saw this in the experiments above, where there was a huge difference between the number of people who were predicted to obey, and the number of people who obeyed in reality.

Therefore, you need to be aware of this bias, and of the fact that you might feel compelled to obey authority figures, even when you know you shouldn’t. When you find yourself in such a scenario, where there is a conflict between what you’re told to do, and what your logic and conscience tell you is right to do, remember that all the people in the experiment knew that what they were doing was wrong. The difference between those who obeyed their orders and those who defied them rested on each person’s ability to take action and stop the experiment. If you are afraid to act, ask yourself why you’re afraid, and what’s stopping you from taking action.

Furthermore, keep in mind the following techniques, which can help you take action and defy orders, in cases where you otherwise wouldn’t:

  • As we saw earlier, people were much more likely to be defiant when the experimenter wasn’t in the same room as them. This indicates that creating distance between yourself and the authority figure can help you take a stand when necessary. The farther you are from them, either physically or emotionally, the easier you will find it to defy them.
  • Convincing yourself that the authority figure giving the orders is illegitimate also increases the likelihood of disobeying them. As such, the less legitimate you believe the authority figure is, the more likely you will be to defy them when necessary. You can convince yourself of their illegitimacy by asking yourself things such as what power they hold over you in reality, or who gave them their authority in the first place.
  • As we saw in Milgram’s follow-up experiments, when the subjects were in the same room as the victim, they were much more likely to disobey the order to electrocute him. This indicates that reducing the physical and emotional distance between yourself and the victim can help you be more defiant when necessary. You can reduce your emotional distance to the victim by trying to put yourself in their shoes and imagine how they feel, or by imagining how you would feel about your actions if the victim were someone you are close to. In the Milgram experiment, many of the subjects continued to administer shocks while looking away, in an attempt to dissociate themselves from the victim, who they knew was suffering. Don’t let that happen to you.

 

Summary and conclusions

  • The authority bias is a tendency to obey the orders of an authority figure, even when you believe that there is something wrong with those orders.
  • The goal of the Milgram experiment was to see whether people would obey the order to electrocute someone, even if that person was begging them to stop.
  • Though the people Milgram polled beforehand predicted that only about 1 in 100 subjects would be willing to obey to the end, in reality the majority of the subjects continued shocking the victim until the experimenter stopped the experiment at the highest shock level. These people did so even though they believed that they were inflicting great pain on their victim, and even though they thought that what they were doing was wrong, which led them to experience great distress throughout the process.
  • These findings were replicated by various researchers, who performed their experiments on people from widely-different backgrounds.
  • This bias can affect people in their everyday life, albeit in a less dramatic form. You can mitigate its influence by increasing the distance between yourself and the authority figure, by convincing yourself that the authority figure is illegitimate, and by reducing the distance between yourself and the victims of your actions.

 

If you found this experiment interesting, and want to learn more about the authority bias and its implications, take a look at Milgram’s highly praised book “Obedience to Authority: An Experimental View”.