Critical Thinker Academy: Learn to Think Like a Philosopher

How to improve your grades, advance in your job and expand your mind -- by learning how to think for yourself!

$35
Take This Course
  • Lectures: 157
  • Video: 14.5 hours
  • Skill Level: All Levels
  • Languages: English, captions
  • Includes: Lifetime access
    30-day money-back guarantee
    Available on iOS and Android
    Certificate of Completion


About This Course

Published 11/2013 · English · Closed captions available

Course Description

For long-term success in school, business and life, learning HOW to think is far more important than learning WHAT to think.

Yet rather than making these skills the core of any education worthy of a rational human being, we have relegated the teaching of logic, argument analysis and critical reasoning to specialty courses in universities that reach too few students, too late in their education.

In this course I share my growing understanding of these topics, with a focus on what is practically important and useful for developing as independent critical thinkers.

Currently the course contains over 135 videos totaling over 14 hours of viewing time!

Each LECTURE has a COMPLETE TRANSCRIPT in the lecture description tab!

Each SECTION has a downloadable PDF ebook that contains full transcripts for the lectures in that section, in a format that is convenient for printing or viewing offline, for a total of 600 pages!

Among the topics you will learn:

  • why critical thinking is important
  • the difference between logic and argumentation
  • what makes an argument good or bad
  • the importance of background knowledge for critical thinking
  • techniques of argument analysis and reconstruction
  • what our growing understanding of the human mind tells us about how we actually form beliefs and make decisions
  • how to write a good argumentative essay
  • how to cite sources and avoid plagiarism in your writing

and much more! This content is drawn from a variety of teaching resources I've developed over the past few years, including a video podcast.

It's important for you to know that I am continuing to add videos and course modules on a regular basis. This course will continue to grow and grow -- I have a LOT OF GROUND that I want to cover! This is ONLY THE BEGINNING!

What are the requirements?

  • An interest in improving one's critical thinking skills. That's it!

What am I going to get from this course?

  • introduce students to fundamental concepts of critical thinking (logic, argument analysis, rhetoric, reasoning with probabilities, the importance of background knowledge, etc.)
  • explore the importance of critical thinking for personal development, participation in democratic society, and the pursuit of wisdom
  • explore the role that critical thinking principles play in good essay writing
  • and much more!

What is the target audience?

  • anyone who thinks that critical thinking is important and would like to learn more about it
  • anyone who is required to think and write argumentatively
  • anyone interested in the psychology of belief and judgment
  • anyone interested in philosophy and who would like to learn more about philosophical ideas and methods
  • anyone taking a logic or philosophy class who would like to brush up on their basic logic and argumentation skills



Curriculum

Section 1: Introductions
  • What People Are Saying ... (Preview, 15 pages)
  • How to Use The Course Materials (Preview, Article)

Section 2: Why Critical Thinking is Important
  • Why Critical Thinking is Important - PDF Ebook (Article)
07:08

Transcript

To begin, I think a martial arts analogy is useful here. Why do people sign up for martial arts classes? Well, some do it just for the exercise, some enjoy the sporty aspects, but obviously a lot of people take martial arts because they want to be able to defend themselves against physical attacks ... they do it to learn "self-defense".

Why do we feel the need to learn self-defense? Because we know that, no matter how kind-hearted or cautious we are, the world is a big place with lots of different kinds of people in it, and we might find ourselves in situations where we're confronted by people who aren't as kind-hearted or as cautious as we are, and where violence is a real possibility. And in those situations we want to be able to protect ourselves and avoid getting hurt.

The situation is exactly the same when it comes to critical thinking, but people don't often think of it in this way. In this case we're not talking about people wanting to do us physical harm. We're talking about people wanting to influence our beliefs, and our values, and our actions. We're talking about people with a vested interest in getting us to believe what they want us to believe, to value what they want us to value, and to do what they want us to do.

Who are we talking about? We all have an interest in exerting our influence on the world. Parents have a special interest in exerting influence on their children, peers have influence over peers … there's no escaping it, it's part of being human.

But in thinking about the self-defense analogy, I'm thinking more about powerful social institutions, like political parties and advertising companies, who you can think of as being in the "influence business", whose job it is to get you to think and do what another person or group or institution wants you to think and do, and who have enormous resources and expertise at their disposal to be effective at their job.

What's crucially important to understand about these kinds of institutions is that, as institutions, they don't have any interest in your personal well-being, for your sake. They care about your well-being only to the extent that it affects their goals and interests.

In the case of politics the goal is to acquire enough support to gain and maintain political power, right? In the case of advertising the goal is to sell you a product or service. In both cases, your needs and interests only matter to them insofar as you're instrumental to meeting these goals.

Now don't get me wrong. There certainly are people in politics and people in business and advertising who are good-hearted and genuinely want to serve your interests. My point is just that the institutions themselves don't and can't care about you in this way, they can't know or care about what matters to you as an individual, what fulfills you, what's meaningful to you, what you value.

So what we have are these powerful institutions, that have a logic and dynamic of their own, that are in the "influence business". They survive not by threat of physical violence, but by getting us to believe and value what they want us to believe and value, so that we then behave in ways that conform to and reinforce their goals.

But of course they're not alone. They're in competition with other powerful institutions, with similar incentives and resources, to gain influence over our beliefs and values.

So we find ourselves constantly bombarded by influence messages and being pulled in different directions, and being asked to take sides on ideological issues: Republican or Democrat, conservative or liberal, Protestant or Catholic, religious or non-religious, Coke or Pepsi, Mac or PC, and on and on and on.

This is the world we live in, where it's not an exaggeration to say that powerful social forces are engaged in a pitched battle for influence over our beliefs and values.

But wait, we're not done, it gets better!

The art of influence may be as old as human society, but over the past hundred years, and especially over the past thirty years, there has also arisen a science of influence that has dramatically increased the power of institutions to gain control and influence over our minds and actions.

I'm talking about cognitive and behavioral and social psychology, I'm talking about behavioral economics, I'm talking about a host of scientific specializations dedicated to understanding, predicting and influencing human behavior.

It's important for all of us to understand that governments, businesses and political parties hire PhDs in these fields to help them craft their influence strategies, or they outsource it to third-party firms who specialize in this kind of strategic consulting.
And here's the other important thing to understand. Influence strategies can often be very effective in getting you to believe or do something, but your beliefs may still be completely unjustified from a rational standpoint, and your actions may have nothing to do with your own true rational self-interest.

Think of the techniques used by effective salespeople, think of the rhetoric of charismatic leaders, think of the crude appeals to emotion and the manufacture of discontent that are the bread and butter of advertising. All very effective, and all of it as likely as not to run counter to your most genuine needs and interests.

This is the environment that we find ourselves in. Now, for most of us it doesn't strike us in quite this way. I think this is because we're so socialized to it, it's ubiquitous. But it's there, and acknowledging it is important if our goal is to become independent thinkers who can legitimately claim responsibility and ownership of our beliefs and values.

So, the first reason I'm offering for caring about critical thinking, is self-defense — self-defense against the sophisticated manipulations, the bad arguments and the non-arguments that are the weapons of choice in the battle for influence.

What am I saying here? I'm saying that a good education in principles of critical thinking can help to sensitize us to the presence of these weapons, and immunize us, to a certain extent, from their effects, by learning to discriminate between good and bad reasons for belief and action.

But this isn't the end of the road. This is just the first step in becoming an independent critical thinker.

Our goal isn't just to detect manipulative rhetoric and fallacious reasoning whenever we encounter it. That's important, that's vital, but it's not our ultimate goal.

Our ultimate goal is to be able to construct good reasons for the positive beliefs we hold. We want to be able to justify and claim ownership of the worldview that guides our understanding of the world and our interactions with other people and that informs our choices.

05:48

Transcript

In part 1 I talked about critical thinking in terms of self-defense, as a means of protecting ourselves from the false rhetoric and bad arguments that are often used by people and institutions to get us to believe and do things that aren’t really in our best interest. This is the situation when tools of persuasion are directed at us, when we’re on the receiving end of an argument from a friend, or an advertising pitch, or a sermon, or a political speech, or a newspaper editorial, or whatever, and we have to distinguish between good reasons to believe something and bad reasons to believe something.

Now I want to talk about the flip side of the situation, when you’re the one doing the persuading, you’re the one in the position of having to give the argument or write the pitch or deliver the sermon or write the editorial.

What I want to claim is simple: if someone is well-versed in the elements of critical thinking then they’re more likely to be effective persuaders in situations like these. This is what I mean by “empowerment” — we’re empowered by our ability to organize our thoughts in a logical way, and to craft an argument that gives our audience the strongest reasons possible to accept our conclusion.

But as good critical thinkers we’re also going to be empowered by our understanding of human psychology and the psychology of influence and persuasion, so that when we give this argument, we’re in a position to maximize its chances of being heard and acknowledged and responded to.

Now, not everyone associates critical thinking with these kinds of positive qualities, and especially this last bit about the psychology of persuasion, or at least it’s not the first thing that comes to mind when they think of critical thinking. There are a couple of reasons for this.

One reason is that more often people associate critical thinking with the “self-defense” aspects that we talked about above.

Another reason (and one that I think is more interesting to talk about) is a view that many people have — that critical thinking is, and should be, about good logic and good argumentation, and that’s it. They associate the psychology of belief and persuasion with the techniques of manipulative rhetoric that we’re warning people against. From this point of view, logic and argumentation are “white magic”, they’re what the good guys use. And rhetorical techniques and psychological strategies are “black magic” or “dark magic”, they’re what the bad guys use — these are the tools used by advertisers and politicians and spin doctors to manipulate information and control public opinion.

I understand this point of view, and I agree that it’s important to clearly distinguish good argumentation from persuasive rhetoric. They’re not the same thing.

However, I think it’s a mistake to think that the theory and techniques of persuasive rhetoric do not belong, do not have a proper place, in the critical thinker’s toolkit. They do. But they’re just like any tool, they can be used for good ends or for bad ends. What ends you use them for is up to you, the tools themselves aren’t to blame for the evil uses to which they’re put.

But more importantly, the psychological dimension is unavoidable. Every argument you give in the real world, to a real audience, is defined in part by the social and psychological context within which the argument is given. And in that context, the persuasive power of the argument is determined by an array of factors. The logic matters, but the logic needs to reflect the argumentative context of what is at issue, who you’re trying to persuade, what they bring to the table in terms of background beliefs and values, why they care about the issue, what’s at stake in the argument, and so on.

And these are psychological and social factors. If you don’t understand these factors, if you can’t get inside the head of your intended audience, if you can’t see the issue through their eyes, and tap into what they care about, what motivates them, then you’ll never be able to persuade them to accept your point of view.

Good argumentation isn’t just about good logic, it’s also about good psychology.

Now, just to make the point a different way, think about what goes into crafting a great argumentative essay and delivering it as a speech. We outline the argument in a kind of shorthand, focusing on the key premises, identifying background assumptions that might be contentious, anticipating possible objections, coming up with replies to those objections, until we’ve got the logical structure of what we want to say laid out.

Then we’ve got to think about HOW we want to present this. We’ve got to make choices about wording, about the order of presentation, about pacing, about vocabulary, about sentence and paragraph structure, and so on.

Then we’ve got to think about delivery (this is a speech after all) and we’ve got to think about speaking technique and all those factors that go into public speaking, and all the while we’re motivated by the goal of making this the most effective and persuasive argument we can.

Now, in this context, it seems a mistake to think about this thought process in terms of white magic and black magic, right? Thinking about rhetorical technique and the psychology of persuasion is an inseparable part of the crafting of a great persuasive argument.

So, my view is that a good training in critical thinking needs to pay attention not only to logic and argumentation in the abstract, but also to logic and argumentation in real-world contexts, where rhetorical choices and psychological factors inevitably come into play when you’re engaged in argumentation with real people.

It follows, then, that if you’re well-versed in all these aspects of critical thinking, then you’re going to be in a better position to have your voice heard, to be effective in the role of influencer.

10:40

Transcript

In the last two lectures we’ve been talking about the value of critical thinking in terms of rational self-interest. On the one hand, it can help to protect us from bad arguments and manipulative rhetoric, and in so doing give us a space to think and deliberate about what we really believe and care about. That’s critical thinking functioning as a kind of self-defense. It can help us with our “defensive game”.

We also talked about how it can help us with our “offensive game”, and by that I mean that it can help us to be more articulate and effective spokespeople for our own goals and values; it can give us a voice and strength of influence that we might not otherwise have. That’s what I’m calling critical thinking in the service of personal empowerment.

Now, this is all well and good, but the value of critical thinking doesn’t stop with individual self-interest. If we look past our noses we see that we’re not isolated islands, that we actually live in community with other people, and these communities form a society with institutions and governments that are designed to serve the needs of the people.

Or at least, that’s the way it’s supposed to work in so-called “liberal democratic societies”. What I want to talk about on this episode is the role of critical thinking specifically in liberal democratic societies, and what duties we as citizens have to try to be critical and independent thinkers.


What I Mean By “Liberal Democracy”

As a point of clarification, when I say “liberal democratic society” I’m not using the term “liberal” in the sense in which the term is commonly used in political discussions, where it’s contrasted with “conservative”, like when we say that Michael Moore is a liberal and Rush Limbaugh is a conservative; or pro-choice activists are liberal and pro-life activists are conservative; or support for universal health care is liberal and opposition to it is conservative. That’s not what the term “liberal” means in this context.

When we call a society a “liberal democratic” society, we’re using the older sense of the term, where “liberal” refers to liberty or freedom, and “democratic” refers to rule or governance by the people. So we’re talking about forms of government where political power is vested, ideally, in the people, and they get to elect officials whose job it is to represent their interests and make policies and laws that serve those interests. These are democratic societies, rule by the people.

What makes them liberal democracies is a specific conception of what the role of the state is with respect to its citizens. And part of this role is captured by what is sometimes called the doctrine of liberal neutrality. What does this mean? “Liberal neutrality” is the view that individual citizens should be free to pursue their own conception of the good life, their own conception of what is ultimately good and valuable and important. That is, the state doesn’t impose any particular conception of the good life on its citizens, it’s “neutral” on this question. That’s what “neutrality” means here.

So what’s the job of the state? The job of the state is to ensure that you have as much freedom as possible to pursue your own good, your own happiness, consistent with everyone else having the same freedom to pursue their own good. That’s the “liberal” part, the “freedom” part.

Now, when it’s phrased like this, liberalism sounds more like classical 18th and 19th century liberalism, which is closer to libertarianism than to modern 20th and 21st century liberalism. That’s partly right. We’re talking about liberalism as it was understood by people like Thomas Hobbes (1588-1679) and John Locke (1632-1704) and Thomas Jefferson (1743-1826) and John Stuart Mill (1806-1873), which has a libertarian slant to it.

But it’s not entirely right. This concept of “liberal neutrality” is also consistent with the “welfare liberalism” of someone like John Rawls (1921-2002) and contemporary “progressive liberals”, who argue that justice requires a certain amount of redistribution of wealth from the rich to the poor. What’s at issue between welfare liberals and libertarians is just what is required for individuals to actually have and be able to exercise their basic rights and freedoms. There’s a real disagreement here, but it’s a disagreement about what liberal freedom demands and how it can best be achieved, not about the value of freedom per se, and not about the importance of liberal neutrality; on these issues, they’re in agreement, they’re all variations on a theme within the family of liberal democratic political philosophies.


Critical Thinking in Liberal Democracies

So, getting back to critical thinking, there are two questions I want to ask:

  1. what role does critical thinking play in maintaining liberal democracies?
  2. do citizens of liberal democracies have a civic duty to cultivate their critical thinking faculties?

I’ll give you my answers up front: the answer to 1 is “a BIG ONE”, and the answer to 2 is “YES”. To my mind this is obviously true, but I know it’s not obvious to a lot of people (certainly not to my students), so here goes.

Ideally, democracy is government by the people, for the people. We, the people, are charged with electing officials to represent our needs and interests, and if we don’t like how they govern, we have the power and the responsibility to vote them out of office. As citizens we’re not normally involved in drafting laws and policies, but we’re responsible for assessing the laws and policies that our elected and appointed officials propose and vote on. And doing this well requires both an informed citizenry and a citizenry capable of critically assessing arguments, pro and con, that pertain to the laws and policies in question.

So that’s one obvious reason why critical thinking is important. It’s important because it’s necessary to participate fully in the democratic process.

But I think it goes even deeper than that. There’s always been a kind of tension between democratic rule and classical liberalism. Liberalism emphasizes respect for individual rights and liberties, freedom of thought and expression, and so on. But democracy carries with it the potential for mob rule, the tyranny of the majority or the privileged classes, and the state-sanctioned oppression of minority classes. Think of the status of women, or people of color, or indigenous peoples, or gays and lesbians, in ostensibly democratic countries over the past 150 years.

Now, when we think about the women’s rights movement, or the civil rights movement, or other liberation movements, I want us to consider the importance of independent critical thought in making those movements possible.

When it was illegal for women and people of color to vote and participate as equal citizens in government and society, the cultural norms and social pressures of the time made it easier to follow along and not question the rationale for those norms.

But think now about what it would mean to ask, at that time: how can this situation be rationally justified? What are the arguments for this position? Are they good arguments? Are all the premises plausible? Does the conclusion follow? Do they rely on background assumptions that can be challenged?

In asking these questions we’re just doing elementary argument analysis, but in this context, these are subversive, dangerous questions — they threaten an entire social order.

So it’s not surprising that in social groups that are deeply invested in perpetuating an oppressive social order, this kind of critical inquiry isn’t going to be valued. It’s going to be controlled or suppressed if it’s seen as asking people to question the foundations of the established social order. George Orwell wrote about this at length in his essays and novels. Socrates was put to death for asking questions that challenged the worldview of the established authorities of the day.

Now, my claim is not that teaching critical thinking will magically wipe away oppression and human injustice. Of course there’s no guarantee that simply raising these questions is going to cause the scales to fall from people’s eyes and lead them to see the error of their ways.

My claim is simply that it’s harder for oppressive policies and beliefs to gain a foothold in a democratic society that openly supports the value of critical thinking and Socratic inquiry. Not that it’s impossible, just that it’s harder.

So, to the extent that we care about freedom and equality and justice and the ideals of liberal democratic governance, I think we should also care about critical thinking and the values of Socratic inquiry. I think they go hand-in-hand, you can’t value one without valuing the other.

Now, I also think it’s obvious that we in fact live in a very non-ideal world, where principles of liberal democratic governance can be undermined by internal corruption, excessive nationalism, corporate lobbying, geo-political threats, globalization, war, resource scarcity, you name it.

But I think these challenges just make the case for critical thinking even stronger. If liberal democracy survives and flourishes through this century, it will be because we somehow managed to foster and exercise our capacity for critical and creative thinking in tackling these problems.


Do We Have a Duty to Cultivate Our Critical Thinking Skills?

We’ve been talking about the importance of critical thinking for liberal democracy. The second question I wanted to ask was, in light of all this, do we as citizens have a duty to cultivate our rational, critical thinking faculties?

I hope it’s clear now why I think the answer is “yes”. It’s not a legal duty of course, we can’t force anyone to study logic and argumentation, any more than we can force people to vote on election day. But it’s plausible to think that we still have some kind of moral duty to vote; the system just doesn’t work properly unless enough of us participate. I think the same is true with critical thinking skills. The system just doesn’t work properly unless enough of us take up the challenge of staying informed and thinking critically about the issues that matter to us.

Now, the distressing thing to me is that despite all this, we don’t (generally) teach critical thinking in the public school system. Even here in the US, the home of liberal arts education that stresses general knowledge and thinking skills, only a tiny fraction of students are ever exposed to basic principles of logic and argument analysis, beyond the elementary theorem-proving you might do in a geometry class. You’ll find very little, if any, discussion of the distinction between good and bad arguments in the public school curriculum, from kindergarten to 12th grade.

It’s a bit better in college and university, where you’ll actually find dedicated critical thinking courses, but it’s still the case that nationally only a fraction of students are required to take such courses.

And the trends in higher education aren’t very positive in this regard. Globally, we’re seeing less and less support for the liberal arts and humanities, and more and more support for business, management and applied science and technology. The trend is toward education with economic impacts in mind, rather than human development in mind. I see this in my own university in the United States, but there’s evidence for a global shift in educational philosophy, in Europe, India and other parts of the world, that is marginalizing the humanities even more than in the US*.

I’m worried that these trends may have the unfortunate side-effect of actually stifling the development of important critical thinking skills. I think this is bad for business in the long run, but I’m even more worried that it’s bad for liberal democracy in the long run.


__________________
* For more on these issues, see Martha Nussbaum, Not For Profit: Why Democracy Needs the Humanities (2010, Oxford University Press).

09:16

Transcript

In this lecture I want to talk about the importance of critical thinking for philosophical thinking and the pursuit of “philosophical wisdom”. What I’m going to say here is probably obvious to anyone who’s studied philosophy for any length of time, but for those who haven’t, I hope this will be helpful in understanding what the study of philosophy is all about.

A Definition of “Philosophy”

When you look at the word “philosophy”, you see that it has two Greek roots — “philo”, which means love, and “sophia”, which means wisdom. So philosophy is, literally, the love of wisdom, and a philosopher is a lover of wisdom.

Now, what do we mean by “wisdom”? It’s a term that doesn’t have a precise meaning, so for the sake of discussion I’m just going to stipulate a meaning that suits our purposes.

Wisdom, I think we can all agree, involves knowledge. But we want to distinguish the knowledge that the wise person has from mere information, or knowledge of empirical facts.

Here’s my slogan definition: to have wisdom is to have Knowledge of the True and the Good, and to act wisely is to act on the basis of this knowledge.

This isn’t the only way to define wisdom, but it’s helpful for us because it captures something that’s distinctive about philosophical wisdom, the wisdom associated with philosophical insight or understanding.

Let’s unpack this slogan a bit further. Wisdom involves knowledge of the True. This expression, “the True”, with a capital T, is just a shorthand for reality, the way things actually are, behind the scenes.

Wisdom also involves knowledge of the Good, and by that I mean knowledge of what has intrinsic value, and is therefore worth pursuing for its own sake. I’m actually using this as an umbrella term for anything having to do with values and ultimate goals.

So, putting this all together, the wise person has (i) knowledge of reality, the way things really are, and (ii) knowledge of what is ultimately good and valuable, and (iii) is willing and able to act and make decisions in accordance with this knowledge.

The philosopher is someone who loves and pursues this kind of wisdom. But we can’t be done yet. Because if we just leave it at this, then there are lots of people who would qualify as philosophers who we wouldn’t normally want to place in this category. Philosophy, as a discipline, is devoted to the pursuit of wisdom, but it’s not the only game in town. There are other professions, other disciplines, other intellectual traditions, that are also in the “wisdom game”, who claim to have wisdom, to have a deep knowledge of the True and the Good.

Other Wisdom Traditions

I’m thinking here specifically of two traditions.

One is the tradition of Western religion that focuses on REVELATION as the primary means of acquiring wisdom. In this tradition — and here I’m trying to be specific, in the tradition of REVEALED religion — wisdom is given to human beings, revealed to us, through divine acts, like the divinely inspired writing of texts, like the books of the Old and New Testaments, or the Koran; or sometimes through visions or personal communication with a divine presence of some kind. Either way, human beings aren’t responsible for this wisdom, human beings didn’t work it out for themselves by deliberation and reason — it’s delivered to us.

In this context, we wouldn’t describe this wisdom as philosophical wisdom, as a product of philosophical reflection. We should distinguish this sort of revealed wisdom from the wisdom that might be obtained through philosophical inquiry.

The other wisdom tradition that I’m thinking of is the one we associate with various forms of MYSTICISM. Now, I’m not comfortable generalizing and lumping all forms of mysticism together, but for our purposes I sort of have to. All I’m talking about is those traditions where the goal is some form of enlightenment, an awareness of deep metaphysical truths, and this enlightenment is experienced as a form of direct intuition or insight, a conscious awareness of a deep, ultimate, transcendent reality; and where the methods for attaining this state of awareness don’t involve, primarily, rational argumentation, but rather various forms of disciplined practice, like meditation, or other ways of getting your mind to transcend the rational ego, so that your mind becomes suitably receptive to this arational (not irrational) awareness.

The wisdom that comes out of this tradition, I want to say, should be distinguished from the wisdom that comes from philosophical reflection.

Philosophical Wisdom

So, what exactly is the secret sauce that makes for philosophical wisdom? In a nutshell, the secret sauce is RATIONAL ARGUMENTATION.

In philosophy, the primary means by which wisdom is acquired is through rational argumentation, which is an inherently public and social activity. Someone offers reasons for holding a belief, these reasons are presented in some public way, through speech or writing, so that anyone could, in principle, examine them and assess them, objections are raised, replies are offered, and so on, and the dialectic evolves over time.

This is the tradition that begins with the pre-Socratic Greek philosophers; this is the tradition that flourishes with Socrates, Plato and Aristotle; this is the tradition that continues into the Medieval era with Augustine and Anselm and Aquinas and many others; this is the tradition associated with the writings of Descartes, Leibniz, Spinoza, Locke, Berkeley, Hume, Kant, Hegel, Nietzsche, Russell, Wittgenstein, and so on, up to the present day. There may not be much that all these philosophers agree upon, but argumentation is central to their methodology. Logic and argumentation are central among the tools that philosophers use to construct and evaluate their views.

So, having said all that, let’s get back to our main question, which is “What is the relationship of critical thinking to the pursuit of philosophical wisdom?”.

The answer, of course, is that it’s CENTRAL to the pursuit of philosophical wisdom.

Logic and argumentation are key elements of critical thinking. It’s like asking “what is the role of mathematics in the practice of physics?” The answer is “everything”, it’s foundational! Mathematics is the language through which the claims of physical theories are expressed. Logic and argumentation and critical thinking are to philosophy what mathematics is to physics. If you’re ignorant of the former then you simply can’t understand what’s going on in the latter, much less contribute to it.

So that’s my final word on the importance of critical thinking. If you have any interest in reading or understanding what philosophers have written and said about the big questions, then you’ll need to acquire some basic critical thinking skills. And if you want to think critically and independently about the big questions for yourself, it’s unavoidable.

Disclaimers

Now, before I wrap up this discussion I want to offer a few disclaimers and qualifications, since I know if I leave it here I’ll get jumped on.

1. Nothing that I’ve said here implies sharp boundaries between philosophy, religion and mysticism.

In each of these you’ll find overlap and interpenetration. There are philosophical traditions in every branch of Western religion, Judaism, Christianity and Islam, that focus on argumentation. There are philosophical traditions in every branch of Eastern religion. There are also mystical traditions within every branch of Western religion. And there are plenty of philosophers within the Western tradition who have been influenced by various forms of revealed religion or mysticism. Nothing that I’ve said here implies anything to the contrary.

2. Nothing I’ve said here implies that the philosophical approach to wisdom is superior to the religious or the mystical, or that the philosophical approach is incompatible with revealed religion or mysticism.

That may or may not be true, but I’m just saying that nothing I’ve said here entails a position on this one way or another.

3. Nothing that I’ve said here implies any particular philosophical position on any particular question (as far as I can tell).

In particular, it doesn’t rule out various forms of skepticism. It may be that the rational pursuit of wisdom leads us to conclude that our knowledge of the True and the Good is extremely limited, that we can’t know the True and the Good with any certainty. But that too would be a kind of wisdom, wouldn’t it — the wisdom of knowing that you don’t know, and maybe can’t know? This kind of thinking has a long and venerable pedigree in philosophy too. But there’s no need to prejudge such questions.

4. Nothing I’ve said here implies that all philosophers have the same views about the role and importance of logic and argumentation in philosophy.

There are some philosophical traditions that want to problematize logic itself, and some that view the goal of philosophy as something quite different from the search for wisdom in any traditional sense of that word. But I don’t think the existence of these sorts of philosophical traditions undermines much of what I’ve said here. Philosophers who hold these critical positions typically don’t go around saying that they came to them through some mystical insight or by the revealed word of God. They hold these positions because they’ve thought about them, they’ve subjected them to rational scrutiny, and they try to convince others through rational discourse of one form or another. In this respect we’re all playing on the same field, if not always following the same rules.

Section 3: The Five Pillars of Critical Thinking
  • The Five Pillars of Critical Thinking - PDF Ebook (Article)
10:10

Transcript

The title of this episode is “The Five Essential Components of Critical Thinking”. This episode is actually kind of important because it’s going to function as a sort of roadmap for organizing future podcast episodes. Just about everything I have to say about critical thinking falls into one or another of these five component areas.

So in this episode I want to make an initial pass over the five areas, and then begin a second pass that unpacks things a bit further in each area. This second pass is going to take a couple of episodes, so I hope you’ll bear with me.

Okay, here’s the first pass.

The first component that we need to understand to be effective critical thinkers is, not surprisingly, LOGIC.

The second component is ARGUMENTATION. Sometimes the term “logic” is used broadly to include what I’m calling “argumentation”, but it’s more helpful to distinguish the two -- logic and argumentation are not the same thing. We’ll talk more about this distinction in this episode.

The third component that we as critical thinkers need to understand is RHETORIC. This might surprise some of my fellow philosophers who associate rhetoric with the art of persuasion without any special regard for good reasoning or the truth, but trust me, it belongs here.

The fourth essential component of critical thinking is BACKGROUND KNOWLEDGE. One of the dirty secrets of critical thinking instruction is that all the logic and argumentation skills in the world will not make up for ignorance -- if you don’t know what you’re talking about, your arguments are going to suck, no matter how good your skills at logical analysis.

And finally, the fifth essential component of critical thinking is the cultivation of a certain set of ATTITUDES and VALUES -- attitudes toward yourself and toward other people, attitudes toward uncertainty and doubt, attitudes toward the value of truth and knowledge, and so on. Critical thinking textbooks don’t talk about this very much, but it’s an absolutely essential component of critical thinking.

Okay, that’s a first pass through the five areas or components of critical thinking. My claim is that you can’t really be an effective critical thinker unless you commit to working on all five areas, because they’re mutually dependent on one another to work properly. It will take me several episodes to really make this case, but I’m going to put it out there anyway so it can jostle around in the back of your mind.

Logic and Argumentation

Okay, let’s start back at the top and dig a little deeper into the first two areas. We talked about logic and argumentation, but what exactly is logic, and how does it differ from argumentation?

Well, one way to think of this is to treat argumentation as the broader category and logic as an important component of argumentation. The central organizing question for argumentation is, what does it mean to have good reasons to believe something? It’s in the theory of argumentation that we try to answer this question. It’s here where we define what an argument is and isn’t, and try to come up with standards or norms for distinguishing good arguments from bad arguments.

Now, when we frame it this way, the theory of argumentation is actually a branch of philosophy -- it’s a subfield of the broader discipline that philosophers call “epistemology”, which is the philosophical study of knowledge, and it intersects with another subfield of epistemology, which is the theory of rationality, of what it means to think and act rationally.

Now, as you might expect, there’s more than one way to tackle these questions, and what you’ll find if you survey the literature are theories of argumentation, and theories of rationality.

But there’s general agreement that if an argument is to provide good reasons for an audience to accept its conclusion, then it has to satisfy at least the following two conditions:

1. the premises must all be plausible to the intended audience

2. the conclusion must follow from the premises

The first condition has to do with whether the audience thinks the premises are true or not. The second condition has to do with the logical relationships between the premises and the conclusion.

Logic: The Science of What Follows from What

This second condition is what LOGIC is all about. It’s the discipline that deals with the question of what follows from what.

Let me give you an example. If I believe that “All swans are white”, and my buddy Jack tells me he owns a swan, then I’m logically committed to the belief that Jack’s swan is also white. If I were to deny this, then I’d be contradicting myself, since if Jack’s swan isn’t white, then it can’t be true that ALL swans are white. This is an example of logical inconsistency -- a set of claims is logically inconsistent if they can’t all be true at the same time.

The concepts that we just used here -- the concepts of logical entailment, and of consistency and contradiction -- these concepts are defined within the field of logic.

Now, an important feature of logical relationships is that they don’t depend on the specific meanings of the terms involved. I can rephrase the example we just gave without referring to swans. If I believe that “All X are Y”, and my buddy Jack tells me he owns an X, then I’m logically committed to the belief that Jack’s X is also Y. And this will be true no matter what we substitute for X and Y.

Logical relations hold even when the claims involved are false. If I believed that “All rabbits speak Italian”, and if Jack said he owned a rabbit, then I’d be committed to the belief that Jack’s rabbit spoke Italian, and if I denied it, I’d still be contradicting myself.

These examples show that logical properties are really FORMAL properties, and when we’re doing logical analysis of an argument, all we’re doing is investigating the formal or structural properties of the relationship between the premises and the conclusion, we’re not interested in the content of what’s being asserted, or whether the claims involved are even true or not.
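To put the bare pattern on the table (this is just a summary of the two examples above, using the X/Y placeholders):

1. All X are Y.
2. Jack's pet is an X.
Therefore, Jack's pet is Y.

And the matching inconsistent set is: All X are Y; Jack's pet is an X; Jack's pet is not Y. Whatever we substitute for X and Y, those three claims can't all be true at the same time.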

An Example of Logical Analysis

Logical analysis can be a surprisingly powerful and versatile tool in argumentation. It’s effective because it trades on our natural desire to avoid contradictions in our reasoning, and it’s at the root of some standard argumentative techniques, like “proof by contradiction” or “reductio ad absurdum”, which is Latin for “to reduce to the absurd”.

Example: My friend Steven tells me that gay couples shouldn’t have a legal right to marry because, in his view, this legal right is grounded on what he calls the “proper function of the institution of marriage”, which is to provide the healthiest and most nurturing environment for the raising of children, and more specifically, the biological offspring of the parents, since the biological family unit is the core of our social system. So, because same sex couples can’t have biological offspring, their union can’t fulfill the proper function of marriage, and therefore they shouldn’t have a legal right to marry.

Now, looking at this argument from a purely logical standpoint, we can say two things. First, the conclusion does follow from those premises. If we grant all of Steven’s assumptions (and there are many) it does follow that gay couples should not have the legal right to marry. This is a logical property of the argument.

And notice that it has nothing to do with whether the assumptions are true or even plausible; when we’re doing logical analysis, we’re only interested in what follows if we grant the assumptions.

Note also that in saying that the conclusion follows from the premises, we’re not saying that the conclusion is true, or that the argument is a good one overall. It has good logic, but we haven’t said anything about the plausibility of the premises yet.

Now, the second observation we can make about the logic is that the scope of this argument is actually very broad, much broader than Steven seems to realize. Why? Because from the very same assumptions it follows not only that gay couples shouldn’t have the right to marry, but also that a good number of heterosexual couples shouldn’t have the right to marry either -- most obviously, couples who are sterile or who are too old to have children. Why? Because without the possibility of having biological children, their unions can’t fulfill the “proper function of marriage” either.

Now, in pointing this out, we’re not challenging any of Steven’s assumptions, we’re just drawing out the logical consequences of those assumptions. Left as it is, Steven’s argument entails certain conclusions that aren’t very attractive -- Steven is unlikely to accept the conclusion that we should deny sterile couples the legal right to marry. But if he’s going to reject this conclusion then he’s forced as a matter of brute logic to reconsider his assumptions; otherwise he’s guilty of inconsistency -- not all of his beliefs can be true at the same time.

And here’s an important point: logic can tell us whether a set of beliefs is consistent or inconsistent, but by itself it can’t tell us which belief to modify in order to remove the inconsistency. That’s a choice that Steven has to make, and he can have reasons for preferring one way over another, but which choice he makes won’t be dictated by logic alone.
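To make the structure easy to see, here's one way to lay out the skeleton of Steven's argument (my reconstruction; the wording of the premises is mine, not his):

1. The legal right to marry is grounded in the proper function of marriage.
2. The proper function of marriage is to provide the healthiest, most nurturing environment for raising the biological offspring of the couple.
3. Same-sex couples can't have biological offspring together.
Therefore, same-sex couples shouldn't have the legal right to marry.

Run the same premises with "sterile couples" or "couples too old to have children" in place of "same-sex couples" and you get the parallel conclusion that Steven presumably wants to reject; that's the source of the reductio pressure described above.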

Argument Analysis: More Than Just Logic

Now, when we move to the level of argument analysis, we’re not just analyzing the logic of the argument, we’re also assessing the truth or falsity of the premises themselves. We’re asking, are these assumptions plausible? Or are they contentious? Do they need supporting evidence or argumentation to back them up? We’re also asking whether the argument ignores certain facts or evidence that would count against it.

So, it’s at the level of argument analysis that we would ask questions like, are we willing to grant that the legal right to marry should be grounded in a couple’s capacity to have biological children? If this doesn’t strike us as obviously true, what arguments could be given for it? How are legal rights normally grounded in our legal system? Should we distinguish between legal rights and moral rights in this case? And so on.

It’s also at this level that we might want to consider the most popular counter-arguments in support of same-sex marriage, and see how the issues raised in those arguments might bear on the reasoning that Steven is using here.


So, to sum up, one way in which argument analysis differs from purely logical analysis is that argument analysis is also concerned with assessing the truth or falsity of the premises. In purely logical analysis we bracket this issue and just ask ourselves whether, if all the premises were true, would the conclusion follow?

15:16

Transcript

The main goal of this lecture is to help you better understand what’s important about logic from a critical thinking standpoint.

Logic is a science, it’s a technical discipline that you can study for its own sake, but most of what professional logicians study is irrelevant for critical thinking purposes. I want to help you figure out what’s important about logic, what parts are important to learn and what parts you can safely leave to the specialists.

i. What You Learn in a Typical Formal (Symbolic) Logic Course

Let me start by describing the sort of thing you’ll be taught in an introductory course in formal or symbolic logic. These are normally taught by faculty in philosophy departments. At most universities a student majoring in philosophy has to take a course like this, and in most MA and PhD programs students have to pass an exam on formal logic to get their degree. But you’ll also find courses like these taught in computer science departments, and sometimes in linguistics departments if there’s an emphasis in the program on formal linguistics, and in most mathematics departments there’s at least one course in mathematical logic or foundations of mathematics that covers more intermediate and advanced topics.

So what are you taught in these classes?

Well, in your first class you’ll be told that logic is the discipline that studies principles of correct reasoning, rules for determining what follows from what. More specifically, logic is concerned with providing symbolic models of correct reasoning.

What makes this kind of analysis possible is that reasoning is something that we do in language and through language. In language we can distinguish between syntax and semantics. Syntax is about the rules and relationships that underwrite the grammar of a language, how the symbols of a language combine to make meaningful, grammatically well-formed statements. Semantics is about how those symbols acquire meaning in the first place, and how the meanings of statements relate to the meanings of the component parts of a statement, the basic symbolic vocabulary.

So, in language we can distinguish the form of a linguistic assertion from the content of that assertion. This form-content distinction is essential, because logical inference is about form, it’s not about content.

One of the first things you do in a logic class is learn how to rewrite statements and arguments in symbolic form, so that you can then evaluate the reasoning on a purely formal level. How you do this depends on the kind of arguments you want to study. What you learn in a first logic course is that there’s more than one system of logic, there are multiple systems, and each of these systems captures different aspects of the structure of natural language and argumentation in natural language.

ii. Categorical Logic

For example, you’ll learn that the first formal system of logic was developed by Aristotle, and it’s known as categorical logic, or the logic of Aristotelian syllogisms. Categorical statements are statements of the form “All A are B”, “Some A are B”, “No A are B”, “All A are not-B”, and so on. The As and the Bs refer to categories of things, so when I say “All whales are mammals”, this statement expresses a relationship between the category of things that are whales, and the category of things that are mammals; namely, it asserts that the category of mammals contains, as a subset, the category of whales.

You’ll then be taught a set of techniques for diagramming and evaluating arguments that use premises like this, usually a modification of the Venn diagram method to model the relationships between categories, and you can then see at a glance whether a categorical argument uses a valid or an invalid logical form.
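Here's a quick worked example (my example, but it instantiates one of Aristotle's valid forms):

1. All whales are mammals.
2. All mammals are animals.
Therefore, all whales are animals.

The form is “All A are B; all B are C; therefore all A are C”, and it's valid no matter what categories we substitute for A, B and C. The diagram method makes this visible at a glance: if the region for A sits inside the region for B, and the region for B sits inside the region for C, then A has to sit inside C.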

iii. Propositional Logic

Now, an important limitation of Aristotle’s system is that it doesn’t really deal with compound statements like

“I’ll have the chicken salad or the lasagna for lunch”,

or

“If Jack finishes his homework then he can go to the movies”.

Here we have statements that are made up of two smaller component statements (“A or B”, and “If A then B”, respectively). The truth or falsity of the whole compound statement depends on the truth or falsity of the component statements that make it up.

It was the Stoic philosophers in the third century BC who first worked out a system for reasoning with compound statements like this. This was the start of what we now call propositional logic, or sentential logic, and it’s called this because the smallest unit of logical analysis is the whole statement, and we use letters to symbolize whole statements. So the statement “Jack finishes his homework” could be symbolized with the letter “H”, and “Jack can go to the movies” could be symbolized with the letter “M”, and we could represent the conditional claim as “If H then M”.

So in propositional logic, or sentential logic, we have a set of rules for evaluating what are called “truth-functional” arguments, arguments where the inferences turn on the truth-functional structure of the premises. I know that sounds very abstract, but the ideas are simple enough. They’re what lie behind simple arguments like this:

“Either the brother is the murderer, or the girlfriend is the murderer. But the girlfriend was out of the country at the time the crime was committed. So it follows that the brother must be the murderer.”

This argument has the simple form “Either A or B, not-B, therefore A”, which is a valid argument form in propositional logic:

1. A or B
2. not-B
Therefore, A

It’s the basic argument form used in any kind of detective or forensic or diagnostic work, where we use evidence to eliminate alternative possibilities.
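If you'd like to see concretely what it means for this form to be valid purely in virtue of its structure, here's a tiny brute-force check, written in Python purely for illustration (nothing in the course assumes any programming). It enumerates all four assignments of truth values to A and B and confirms that no assignment makes both premises true while making the conclusion false:

from itertools import product

# An argument form is valid just in case there is no row of the
# truth table where every premise is true and the conclusion is false.
def is_valid(premises, conclusion, num_vars):
    for row in product([True, False], repeat=num_vars):
        if all(p(*row) for p in premises) and not conclusion(*row):
            return False  # found a counterexample row
    return True

# The form "Either A or B; not-B; therefore A":
premises = [lambda a, b: a or b,  # premise 1: A or B
            lambda a, b: not b]   # premise 2: not-B
conclusion = lambda a, b: a       # conclusion: A

print(is_valid(premises, conclusion, num_vars=2))  # prints: True

Notice that the check never asks what A and B mean, or whether they're actually true; it only looks at the pattern of truth values. That's the formal character of logical validity again.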

iv. Predicate Logic

The next thing you’ll learn in a symbolic logic course is what’s called predicate logic. Predicate logic is related to categorical logic in that it too tries to unpack and symbolize the internal structure of propositions, in this case the relationship between subject terms and predicate terms.

So in predicate logic you might represent a claim like

“All whales are mammals.”

as

“for all x, if x is a whale then x is a mammal”

and you’d use letters to symbolize the predicate expressions “ x is a whale” and “x is a mammal”. Then you’ll learn a bunch of proof techniques for demonstrating whether arguments symbolized in this way are valid or invalid.
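In the standard notation you'd meet in a textbook (conventions vary a bit from book to book), the whale claim comes out as:

∀x (W(x) → M(x))

where “∀x” reads “for all x”, “W(x)” abbreviates “x is a whale”, and “M(x)” abbreviates “x is a mammal”.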

And that about wraps up most first courses in symbolic logic. You’ve learned three logical systems, Aristotelian categorical logic, modern propositional logic, and modern predicate logic. And you’ve learned how to symbolize fragments of natural language in each of these formal languages, rewrite arguments in symbolic form, and then evaluate the logical validity of those arguments using a variety of formal techniques.

You’ll also be told that this is just the tip of the iceberg. There are many different formal systems in logic that are used to analyze different fragments of natural language. For example, we often talk about certain statements being necessarily true, or possibly true, and the logic that governs inferences that deal with these concepts is called “modal logic”, and we can go on and on -- there are “temporal” logics, “relevance” logics, “deontic” logics, and so on. These are the sorts of systems that professional logicians study.

v. How Much Formal Logic Do Critical Thinkers Need to Know?

It’s not hard to see how the study of formal languages and logical systems might be of intrinsic philosophical interest, and it’s not hard to see how it might be of practical interest to people working in computer science or artificial intelligence or formal linguistics or the foundations of mathematics. But what does any of this have to do with critical thinking?

Will learning how to symbolize sentences and prove theorems in any of these logical systems help you to become a better critical thinker? Will it help you do better at detecting bad arguments when you encounter them, and constructing good arguments of your own? Will it help you be a more persuasive speaker and writer?

My answer, you shouldn't be surprised to hear, is both “yes” and “no”.

Why no? Here are two reasons.

One reason has to do with the fact that we as human beings are generally very bad at transferring skills that we learn in classroom exercises to applied contexts outside the classroom. This is true across a wide range of disciplines, and it’s been studied by learning psychologists for many years. You can teach a physics student how to solve problems using Newton’s laws of motion, but outside the classroom, when asked to reason about a real physical situation on the fly, they’ll often default to the physical intuitions about motion, force and inertia that they had prior to taking the class, which are much closer, it turns out, to Aristotle’s physics than to Newton’s physics. Most people with training in statistics don’t seem to fare much better than the rest of us at real-world tasks requiring reasoning about probabilities and uncertainty. And there are similar results in biology, economics, history, and so on. So this gives us some reason to question just how effective taking a single course on formal logic will be in improving critical thinking skills.

But a more specific reason for skepticism has to do with the fact that most of the skills you learn in a formal logic class just aren’t relevant to real world critical thinking contexts. For example, in propositional logic you learn how to use the method of “truth-tables” to prove that an argument form is valid or invalid, but that involves actually constructing a table and looking at all the possible combinations of truth-values for all the different component propositions. What you’re doing there is learning an algorithmic procedure for testing validity. Now, it’s theoretically very interesting that you can do this, but it’s not something you will ever find yourself doing in the real world. And there are tons of examples like this.
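If you’re curious what that algorithmic procedure amounts to, here is a minimal sketch in Python (the representation is my own illustration, not anything from a particular textbook). It brute-forces every row of the truth table and looks for a row where all the premises are true and the conclusion is false:

from itertools import product

def is_valid(premises, conclusion, variables):
    # An argument form is valid if no assignment of truth values
    # makes every premise true while the conclusion is false.
    for values in product([True, False], repeat=len(variables)):
        row = dict(zip(variables, values))
        if all(p(row) for p in premises) and not conclusion(row):
            return False  # found a counterexample row
    return True

# Disjunctive syllogism: "Either A or B; not-B; therefore A" -- valid
print(is_valid([lambda r: r["A"] or r["B"], lambda r: not r["B"]],
               lambda r: r["A"], ["A", "B"]))  # prints True

# Affirming the consequent: "If A then B; B; therefore A" -- invalid
print(is_valid([lambda r: (not r["A"]) or r["B"], lambda r: r["B"]],
               lambda r: r["A"], ["A", "B"]))  # prints False

Notice how mechanical this is. That’s exactly the point: the procedure is theoretically interesting and trivially automated, but it’s not something you’d work through by hand in an everyday dispute.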

So these are some reasons to question whether a background in formal logic, by itself, is going to do much to improve your critical thinking skills. And I’m not the only one saying this. Here’s a quote from philosopher John Heil, who is the author of a textbook on formal logic called First-Order Logic: A Concise Introduction:

I have thus far omitted mention of one reason widely cited for taking up the study of logic. By working at logic, we might expect to enhance our reasoning skills, thereby improving our performance on cognitive tasks generally. I have not emphasized this supposed benefit, however, because I am skeptical that there is much to it. The empirical evidence casts doubts on the notion that training in logic leads to improvement in ordinary reasoning tasks of the sort we encounter outside the classroom. (Heil, p. 4)

That’s pretty sobering coming from the author of a textbook on logic. I understand his point. But I’m not quite as pessimistic as this. I think that some training in logic is essential for effective critical thinking. The important question is, which parts of logic are important for critical thinking, and which parts aren’t, and why?

vi. Five Benefits of Studying Formal Logic

To answer this I’m going to list five real-world benefits, five ways that the study of logic can really improve your critical thinking skills.

1. You Learn Clarity and Precision in the Use of Language

Taking sentences in ordinary language and symbolizing them in logic might seem like a pointless exercise, but one thing this exercise does is make you aware that language does have a logical structure, and that seemingly minor changes in this structure can have a dramatic impact on the meaning of what you’re saying. Studying formal logic (and this is universally true among the philosophers I talked to about this) makes you appreciate and value clarity and precision in your use of language. In argumentation you want to have control over what you’re saying, you want to say exactly what you mean and mean exactly what you say, and a background in logic really can help you develop this awareness. I have students all the time who are surprised when I point out on a quiz or a test or an essay that what they meant to say is not what they in fact said, and the issue turns on the fact that, for example, they mistakenly think that “If A then B” means the same thing as “If B then A”. A little formal logic can really help to sensitize you to these kinds of nuances in meaning.

2. You Learn Fundamental Concepts That are Important for Argument Analysis

A background in logic gives you a basic vocabulary for talking and thinking about arguments. When you study logic you learn a bunch of concepts that are central to argument analysis -- concepts like the difference between a valid argument and an invalid argument, a strong argument and weak argument, a sound argument and an unsound argument, a deductive argument and an inductive argument, and so on. With this vocabulary in hand you can ask questions that you could never ask before, you can devise strategies for refutation that you would never have considered before, and you can communicate this understanding to your audience in a more effective way.

3. You Will Master the Logical Properties of the Most Widely Used Argument Forms

In everyday reasoning contexts we generally use only a handful of really simple argument forms, and these are commonly encountered in any study of categorical and propositional logic. Argument forms like “If A then B, A, therefore B”, or “Either A or B, not-B, therefore A”, or “All A are B; x is an A; therefore, x is a B”. If you master the logical properties of those common argument forms, you have most of the tools you’ll ever need for doing logical analysis of real-world arguments. And you can teach these to twelve-year-olds, they’re not hard to learn.

4. You Will Master the Most Common Logical Fallacies

Once you’ve mastered this small handful of basic argument forms, then you’re in a good position to begin studying fallacies of argumentation, which no one disputes is important for critical thinking. Some of these fallacies are reducible to logical fallacies, and some aren’t. Now you’ll be in a position to understand the difference, which is important for getting a handle on the literature on fallacies.

5. You Will Understand The Fundamental Role that Consistency and Contradiction Play in Argumentation

I consider this the most important benefit of learning logic for critical thinking purposes. The central concepts of logic are the concepts of consistency and contradiction, and these two concepts are at the heart of the relationship between logic and the psychology of persuasion.

If we are made consciously aware that we hold inconsistent beliefs — beliefs that all together entail a logical contradiction — then our natural response, for one reason or another, is to recoil. We see that they can’t all be true at the same time, and we feel compelled to look for a way to restore the consistency of our beliefs, by rejecting or modifying one or more of them. Logical persuasion relies on the fact that people do, in fact, internalize this principle of non-contradiction. If we didn’t, if we encountered people who didn’t, who were generally happy to indulge in self-contradiction when it was pointed out to them, then it would be hard to see how they could be moved by rational argumentation of any kind.

What the study of logic gives us is both a sensitivity to this fact, and a set of analytical tools that can help to reveal contradictory beliefs when they’re there, by demonstrating to people the logical consequences of their own beliefs, consequences that they may not be aware of, but that can be shown by following a chain of reasoning.
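The same brute-force idea from the earlier sketch can be turned around to expose inconsistency. Here is a minimal sketch in Python (again, my own illustrative representation, not a standard library): a set of beliefs is consistent only if at least one assignment of truth values makes them all true at once:

from itertools import product

def consistent(beliefs, variables):
    # A belief set is consistent if some assignment of truth values
    # makes every belief true simultaneously.
    return any(all(b(dict(zip(variables, values))) for b in beliefs)
               for values in product([True, False], repeat=len(variables)))

# "If A then B", "A", and "not B" cannot all be true together:
beliefs = [lambda r: (not r["A"]) or r["B"],  # If A then B
           lambda r: r["A"],                  # A
           lambda r: not r["B"]]              # not B
print(consistent(beliefs, ["A", "B"]))        # prints False: a contradiction

No assignment makes all three beliefs true at once, and demonstrating that to someone is precisely what forces a choice about which belief to give up.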

Whenever I think of this aspect of logic my mind leaps to a quote from the Greek mathematician and scientist Archimedes, who said “Give me a lever long enough, and a fulcrum on which to place it, and I shall move the world.” I think of logic in the same way, it provides the lever and the fulcrum that can move people from even the most stubborn position.

Now, I’m not an idealist about this, we shouldn’t expect people to give up their most cherished beliefs just because they’re shown to be inconsistent with other things they believe. They can restore consistency by making changes elsewhere in their belief system, and that’s normally what they’ll do. But the fact that they felt compelled to make a change at all is the point that I’m getting at here.

So, what’s the conclusion of all this, what’s the take-away message? The take-away message is that if you want to develop as an independent critical thinker, you shouldn’t ignore logic, you should devote some time to studying elementary logic. But when you study logic your focus should be on understanding basic logical concepts, developing what I call “logical literacy”, and mastering the small handful of basic argument forms that occur over and over again in real-world reasoning contexts. Just about everything else you’ll learn in a formal logic course won’t be of much direct use to you for critical thinking purposes.

At the Critical Thinker Academy I have two tutorial courses called “Basic Concepts in Propositional Logic” and “Common Valid and Invalid Argument Forms”, which introduce what I think are the important logical concepts, based on the criteria I just gave here. I don’t cover proof methods or derivations; I focus on developing logical vocabulary and providing the foundation for understanding the handful of argument forms that we generally see over and over in argument analysis. Anyone taking a symbolic logic course will find this material very incomplete, but that’s on purpose: I’ve intentionally selected the bits that I think are actually helpful from a critical thinking standpoint and ignored the rest.


Transcript

Just to recap, I’m working down my list of the five essential components of critical thinking: logic, argumentation, rhetoric, background knowledge, and character. My claim is that effective critical thinking requires that we develop some understanding and skills in all five components, because they’re all mutually dependent — for any of them to work right, they need to work together as a team, so if you’re weak in one area it hurts the whole team, not just that one area.

In this lecture I want to look at the relationship between argumentation and rhetoric, the second and third components on our list.

i. Two Traditions of Argument Theory

What’s really interesting about argumentation is that two very different disciplines want to claim argumentation as part of their respective fields. Philosophers study logic and argumentation; they view it as one of the central tools of their discipline. I teach logic and argumentation in a philosophy department. But when I look outside my office window to the English department, they’ve got folks studying argumentation as well. They teach it in basic composition classes, and they have specialists in Speech Communication who study argumentation as a form of persuasive speech. Yet philosophy and English, or philosophy and speech communication, are very different fields. So what’s going on here?

There’s a way to tell this story that focuses on the long history of dispute between philosophers and rhetoricians over the proper use of argumentation in persuasive speech. On one side you have figures like Plato who, in some of his dialogues, seems to want to draw a sharp distinction between philosophical argumentation and the art of persuasion per se, and who has a fairly negative view of people who teach the arts of persuasion without any regard for truth or wisdom.

But then on the other side you have figures like Aristotle, Plato’s most famous student, who writes books on logic and philosophical argumentation AND books on rhetoric and persuasive speech, and who thinks that there’s actually a close relationship between the two, that in some sense they presuppose or are dependent on one another.

The discipline of philosophy has, over most of its history, sided with Plato (it seems to me) but I’m on Aristotle’s side on this one.

But I don’t want to tell that long story. Instead I want to give you a simple way of thinking about what rhetoric is in relation to argumentation, and why both the rhetorical tradition and the philosophical tradition are indispensable for the kind of argumentation that critical thinkers should care about.

ii. Argumentation: Persuasion for Good Reasons

We’ll start with a simple question. What is rhetoric? Rhetoric is the art of persuasive speech. When you study rhetoric you’re studying how language — in the form of speeches, writing, books, pictures, movies, any medium where symbolic communication of some kind is taking place — how any of these forms of discourse manage to be persuasive, how they influence the minds and actions of people.

If you’re very skilled at rhetoric, then you’re someone who knows HOW to use language to effectively persuade and influence. You understand your audience, and how to put them in a psychological and emotional state that is receptive and open to your message. You understand the speaking situation that you’re in and what types of speech are most effective in that situation. You know how to deploy arguments to maximize their effectiveness, and you know when to switch gears and use other means of persuasion, like appeals to emotion or tradition, or whatever might be required to achieve your desired goal in that particular situation.

Studying rhetoric can help you develop these skills.

The problem, from a philosophical or a critical thinking standpoint, is that this set of skills can be used for good or for evil. They can be used to pursue knowledge and wisdom through reason, or they can be used to suppress reason by appealing to emotion or tradition or group-think in a way that bypasses critical reflection.

Now, this is where a certain conception of argumentation rooted in the philosophical tradition steps in. Yes, argumentation is a part of rhetoric -- argumentation uses language to persuade. But the proper goal of argumentation from this perspective isn’t JUST to persuade. The proper goal of argumentation is to persuade FOR GOOD REASONS.

Underline this in your mental notes. Not every argument that succeeds at persuading its audience is a GOOD argument. On the contrary, people are OFTEN persuaded by very BAD arguments, in the sense that the argument did not give the audience GOOD REASONS to accept the conclusion. This is the whole motivation for the study of fallacies ― fallacies are argument types that are recognized to be bad, but that are often persuasive nonetheless.

Philosophy needs a distinction between merely persuasive arguments and genuinely good arguments, and critical thinking does too, for the same reason. Our goal isn’t just to persuade. Our goal is to understand an issue from all sides with clarity, to understand the strengths and weaknesses of different positions on the issue, and eventually to get to the TRUTH of the matter, or as close as we can come to the truth. Our goal is not to win arguments. Our goal is to understand ourselves and the world better.

So, the image of argumentation that I want to leave you with is this. Imagine philosophy and rhetoric not as suspicious enemies, but rather as respectful partners, or even loving parents. And out of their blessed union comes argumentation, the child of philosophy and rhetoric.

From rhetoric we get a proper and realistic understanding of how argument can motivate, influence and persuade. From philosophy we get an understanding of what it means to persuade for good reasons, and what the proper goals of argumentation are, if we seek truth and understanding, and not just winning.

Unfortunately there aren’t many critical thinking or argumentation texts that really embrace both perspectives. In the standard critical thinking texts written by philosophers trained in logic you won’t see much discussion of rhetorical theory or rhetorical strategy. And in your standard rhetoric texts you won’t see much discussion of theories of rationality and normative standards for good argumentation. You do see some discussion of fallacies, but they’re usually not analyzed in the same way you would in a philosophy text.

So, if you want to get educated on argumentation and effective persuasion, drawing on both the philosophical and the rhetorical traditions, you really need to diversify your sources.


Transcript

In this lecture I want to talk about the fourth item on our list of the five essential components of critical thinking, five areas of study or personal development that you need to pay attention to if you really want to develop as an independent critical thinker.

Just to review, our list includes logic, argumentation, rhetoric, background knowledge, and finally a certain set of attitudes and values. Here we’re going to talk about the importance of background knowledge to critical thinking.

I titled this episode “Critical Thinking’s Dirty Secret”, but what I really mean is Critical Thinking instruction’s dirty secret. For anyone who teaches critical thinking, or for the industries devoted to cranking out textbooks on critical thinking, a guiding premise of the whole enterprise is that critical thinking skills can actually be taught, and the crude version of this view is that if students can master some formal and informal logic and some fallacies, they’ll be better critical thinkers.

The dirty secret of critical thinking instruction, which everyone knows if they’ve done it for a while, is that while logic and argument analysis are necessary components of effective critical thinking, they aren’t sufficient, not by a long shot.

What’s missing is the importance of background knowledge. Background knowledge informs critical thinking at multiple levels, and in my view it’s among the most important components of critical thinking. But you can’t teach background knowledge in a one-semester critical thinking course. Or at least, you’re very limited in what you can teach.

That’s the dirty secret that most textbooks avoid talking about. The most important component of critical thinking can’t be taught, at least not in the way you can teach, say, formal logic and fallacies. Background knowledge comes from learning and living in the world and paying attention to what’s going on. Mastering this component of critical thinking requires a dedication to life-long learning, a genuine openness to different points of view, and a certain humility in the face of all that you don’t know. This isn’t a set of skills you can master with worksheets and worked examples. This is a philosophy, this is a lifestyle choice. Textbooks don’t talk about this. Or at least not as much as they should.

There are at least two importantly different types of background knowledge that are relevant to critical thinking, and they each deserve attention. In this lecture I want to focus on background knowledge involved in evaluating arguments on a specific subject matter. Next lecture I’m going to talk about the kind of attitude you need to have if you want to really understand the background on all sides of an issue.

Background Knowledge of Subject Matter

Okay, let’s remind ourselves of the basic elements of argument analysis. You’re given an argument ― it’s a set of premises from which a conclusion is supposed to follow.

i. Argument Analysis: Does the Conclusion Follow?

Argument analysis involves two distinct steps. The first is evaluating the logic of the argument. When you’re evaluating logic you’re asking yourself, does the conclusion follow from these premises? The key thing to remember about this step is that it’s an exercise in hypothetical reasoning. You assume, for the sake of argument, that the premises are all true, and you ask yourself, if the premises are true, how likely is it that the conclusion is also true? This is a question that you can answer using basic logical techniques ― you can teach this. Students can learn to distinguish good logic from bad logic with textbook exercises, and they can learn to extract the logical structure of real-world arguments and evaluate the logic. If the logic is bad then the argument is bad. But if the logic is good, then we move to the second step in argument analysis.

ii. Argument Analysis: Do We Have Good Reason to Accept the Premises?

The second step is where you ask the question, but are the premises in fact true? Do I actually have good reasons to accept them? If we, or the intended audience of the argument, don’t have good reasons to accept them, then we have reason to reject the argument, even if the logic is good, because we’ll judge that it rests on faulty or contentious or implausible premises.

Now here’s the point I want to emphasize: This step, where you assess the truth or falsity of the premises, is not something you can teach in a logic or critical thinking class. And the reason why is obvious. This is a question of background knowledge. Premises are judged plausible or implausible relative to your background knowledge, what you bring to the table in terms of your knowledge of the subject matter. If you’re ignorant about the subject matter then you’ll make lousy judgments, it’s that simple.

iii. An Example: The Mars Hoax

Let me give you an example. We had a friend over to our house and she mentioned that she’d received an email about a spectacular appearance of Mars in the night sky that we should be on the lookout for some time in August. The claim was that Mars was going to be so unusually close to the earth at that time that it would appear as large as the Moon to the naked eye. It would look like the earth had two moons.

Now, my “bs” detector went off right away. I just knew this was false, there’s no way that Mars is ever going to look as big as the Moon. Unless it’s knocked out of its orbit, it’s astronomically impossible. But our friend wasn’t an amateur astronomer, it didn’t raise any red flags for her, she didn’t have any reason to doubt it. In this case, her lack of background knowledge made her susceptible to a hoax, and no amount of training in formal logic would have made her less susceptible to this hoax.

By the way, if you’re interested (and I hadn’t heard of this before this incident) if you google “mars hoax” you’ll see that this has been circulating for quite a while.

iv. The Moral: All the Logic in the World Won’t Make Up for Ignorance

The moral of this story is simple. If you want to become an independent critical thinker about a particular subject matter, you need to learn something about that subject matter. If you want to argue about Obama’s health care plan you need to learn something about the plan and the economics and politics and ethics of health care. If you want to argue about how to address global climate change you need to learn something about the scientific and economic and political and ethical issues that bear on this question. And similarly for issues in history, religion, culture, media, science, philosophy ― you’ve got to learn something about the subject matter. All the logic in the world won’t make up for ignorance.

Now, I know this is easier said than done, but there’s no avoiding it, no escaping it. Critical thinking demands a commitment to life-long learning. This is part of what it means to take responsibility for your own beliefs.


Transcript

Last lecture we talked about the importance of background knowledge for critical thinking. There are different types of background knowledge that are relevant to critical thinking in different ways. For example, let’s say the issue I want to learn more about is whether there is any scientific support for ESP, extrasensory perception. My friend says there is, I’m not so sure, I want to know the scoop on this.

Well, one kind of background knowledge that I need to understand before I can properly evaluate this issue is disciplinary or subject-matter knowledge -- in this case I would need to learn something about the history of parapsychological research, the classic case-studies, the methodologies that are used to test paranormal claims, and so on. A trip to a good library can help out with this.

Another kind of background knowledge that is relevant is the intellectual history of the issue, and by that I mean an understanding of the different arguments and positions that have historically been influential in the debate. This has to include both the pro and con arguments, the views of both the defenders and the skeptics. Textbooks aren’t always good at giving this, you might need to look for more specialized sources for this kind of information.

And a third kind of background knowledge that is relevant (for critical thinking in general) is a kind of meta-knowledge. This is knowledge of how human beings acquire knowledge at all, of our strengths and limitations as critical reasoners, of how our minds work, of our cognitive biases and intellectual habits. This is background knowledge related to the psychology of belief and judgment. In this case it would be relevant in helping us understand, for example, why belief in ESP is so widespread among the general public, but compelling scientific evidence for it is so much harder to come by.

In this lecture I want to explore in more detail what it means to acquire a good background in the second kind of knowledge, a good background in the intellectual history of a debate, a good familiarity with the pro and con sides of an argument.

i. Becoming Educated on an Issue

How should a person who is dedicated to critical thinking go about becoming educated about the different sides of an issue?

I want to argue that this task is in fact very demanding, much more so than we normally realize. I want to argue that this requires a certain kind of psychological orientation, that it requires an attitude of openness to different points of view that can be very challenging to sustain, but one that we absolutely have to cultivate if we’re really serious about critical thinking.

Part of becoming educated on an issue involves learning how other people view that issue, and coming to some understanding of that point of view, even if it’s not your own.

Let’s say the issue is the environmental impact of human population growth. More people means more mouths to feed and more consumption of resources, and most environmentalists will tell you that there are natural limits to resource use, and when you hit those limits the results are bad ― you get pollution, habitat destruction, species extinction, and degradation of the resource base that leads to poverty, political instability, and so on ― bad stuff. So, one position is that population growth aggravates all these problems and therefore we should be supporting policies that curb population growth.

Now, in reading up on this question you might be surprised to come across people who think that restricting population growth is entirely the wrong way to address these environmental problems. You’ll find people ― smart people, educated people, who publish articles and books on this topic ― who think that, in reality there are no natural limits to resource use; that the earth can sustain an increasing human population into the foreseeable future, and that, contrary to the popular consensus among environmentally minded people, human beings are actually on the cusp of an age of unprecedented prosperity and improvement in the human condition.

This couldn’t be farther from the mainstream environmental position. And it may not be your position, your first reaction might be to dismiss it. But if you want to claim that you’re critically informed about an issue, and you come across a position that may be diametrically opposed to yours, but that informed, intelligent people find compelling, then the intellectually responsible thing to do, I think, is to spend at least some time trying to understand why those people find it compelling.

ii. The Easier Way: Looking for Weaknesses in the Opposition Viewpoint

Now, how should we acquire this understanding? Well, there’s an easier way, and there’s a harder way.

The easier way is to set out looking for reasons why their position is faulty or mistaken, from your point of view. This is easier because it requires less intellectual work on your part. If the goal is just to identify areas of weakness in a position, frankly, that’s not that hard to do. Any interesting position is going to have weaknesses somewhere; even your own cherished views have weaknesses which a critic, if they wanted to, could exploit in a debate.

Now, the danger with this approach is that it’s driven more by the desire to refute a position than to actually understand it. By looking at a position through such a selective prism you’re more likely to misrepresent it and deceive yourself into thinking you’ve understood it when you really haven’t. You also run the risk of missing the odd kernel of truth or good idea that might be there, waiting to be acknowledged.

The underlying problem with this kind of oppositional investigative strategy is that it’s driven by the wrong kind of motive. It’s driven by the desire to win an argument. But that’s not the goal of critical thinking. The goal of critical thinking is to take intellectual ownership of your beliefs and come to a closer understanding of the truth of the matter.

Now, if that’s our goal, then there’s a better way to try and understand positions that differ from our own. It’s a more demanding way, but it’s better.

iii. The Harder Way: Embracing the Perspective of the Opposing Viewpoint

The better way to do this is through an exercise in authentic role-playing. And this is where this lecture gets its title, “what critical thinkers can learn from actors”. Actors need to understand the background and the mindset and the motivations of the characters they play if they’re to play them with authenticity and integrity. Good actors need to be able to slip into the skin of a character and view the world through that character’s eyes, even if those eyes are very different from their own. They need to cultivate the ability to empty themselves, to forget who they are, temporarily, so that another persona can live through them.

As a critical thinker, you need to cultivate a very similar set of skills. Say you want to understand an argument or a position that someone else strongly believes. Then you need to try, as best you can, to put yourself in that person’s head-space. You need to identify the beliefs and values and background assumptions that really motivate their position. You need to dwell in that position for a while. And then, from that position, you need to be able to faithfully reconstruct the reasoning that leads to their conclusions.

The test of your understanding, the real test, is to be confident that you could explain the reasoning of the other position to a proponent of that position, to their complete satisfaction. If I’m a religious believer arguing with an atheist, and I want to engage them in a productive discussion, I want to be able to sit across from them and articulate the rationale for their atheist position, and their objections to theism, to their satisfaction. You want them to be able to say, afterward, “yes, you understand my position; that’s exactly why I hold the beliefs I do”. That’s a position of true understanding, and that’s also, as it turns out, the position with the greatest potential for rational dialogue and persuasion.

So, background knowledge requires not just that we understand the foundations of our own position. It also requires understanding the foundations of positions different from our own. And to achieve this, we need to cultivate a willingness and an ability to see the world through other people’s eyes.

Now, I’m the first to admit that this kind of understanding can be difficult to achieve. We don’t always find it easy to put ourselves in other people’s shoes, especially the shoes of people we disagree with. And it’s harder to do the more important the issue is to us. We naturally feel protective of the beliefs that we regard as central to our identity, and dwelling for any length of time in the mind and point of view of someone who doesn’t share those beliefs can be challenging on many levels. But it’s the kind of understanding that we have to strive for, if we’re really serious about critical thinking.

This issue is also a good illustration of a point I’ve made in previous episodes but hadn’t yet discussed much. The fifth item on my list of five essential components of critical thinking is “Character”, or “Attitudes and Values”, and this is a good example of the sort of thing I’m referring to when I talk about attitudes and values. In this case, in order to ensure that we have a proper understanding of all sides of an issue, we have to value truth and understanding above winning, and we have to adopt a certain attitude toward alternative viewpoints and the people who hold them.

And this attitude is more than just respect. Respect might be a necessary attitude to have if we want to really understand a position, but it’s definitely not sufficient. For genuine understanding we need something more intimate, a kind of openness and an authentic intellectual curiosity about how other people view the world. I don’t have a good single word to describe this virtue, but I hope I’ve conveyed a sense of what’s involved in it and why it’s important for critical thinking.

Section 4: About Your Instructor and the Critical Thinker Academy

Section 5: Cognitive Biases and Critical Thinking

Transcript

1. Cognitive Biases: What They Are and Why They’re Important

Everyone agrees that logic and argumentation are important for critical thinking. One of the things that I try to emphasize in my “five pillars” model of critical thinking is that background knowledge is also very important.

There are different types of background knowledge that are relevant to critical thinking in different ways. One of the most important types of background knowledge is knowledge of how our minds actually work — how human beings actually think and reason, how we actually form beliefs, how we actually make decisions.

There are a lot of different scientific fields that study how our minds actually work. These include behavioral psychology, social psychology, cognitive psychology, cognitive neuroscience, and other fields. Over the past forty years we’ve learned an awful lot about human reasoning and decision making.

A lot of this research was stimulated by the work of two important researchers, Daniel Kahneman and Amos Tversky, going back to the early 1970s. They laid the foundations for what is now called the “biases and heuristics” tradition in psychology.

i. Normative vs. Descriptive Theories of Human Reasoning

To get a feel for the importance of this research, let’s back up a bit. When studying human reasoning you can ask two sorts of question. One is a purely descriptive question — how DO human beings IN FACT reason? The other is a prescriptive or normative question — how SHOULD human beings reason? What’s the difference between good reasoning and bad reasoning?

When we study logic and argumentation, we’re learning a set of rules and concepts that attempt to answer this second question — how we should reason. These are elements of a broader normative theory of rationality, of what it means to reason well. Actually, what we have is not one big theory of rationality, but a bunch of more narrowly focused theories that target reasoning in specific domains or under specific conditions.

So for example when we’re interested in truth-preserving inferences, we appeal to deductive logic and interpret the rules of deductive logic as norms for reasoning in this area.

When we’re interested in reasoning about chance and uncertainty, we appeal to probability theory and statistics to give us norms for correct reasoning in this area.

If we’re talking about making choices that will maximize certain goals under conditions of uncertainty, then we appeal to formal decision theory. If we add the situation where other actors are making choices that can affect the outcomes of our decisions, then we’re moving into what’s called game theory.
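As a tiny worked example of the decision-theoretic norm, here is a minimal sketch in Python. The scenario, the probabilities, and the utility numbers are all made up for illustration; the norm itself just says to compute each option’s expected utility and choose the highest:

def expected_utility(outcomes):
    # Sum of probability * utility over an option's possible outcomes.
    return sum(p * u for p, u in outcomes)

# Hypothetical choice under uncertainty: carry an umbrella or not,
# with a 30% chance of rain. Each option lists (probability, utility) pairs.
umbrella    = [(0.3, 8), (0.7, 6)]   # dry if it rains, mild hassle if not
no_umbrella = [(0.3, 0), (0.7, 10)]  # soaked if it rains, unencumbered if not

print(expected_utility(umbrella))     # 0.3*8 + 0.7*6 = 6.6
print(expected_utility(no_umbrella))  # 0.3*0 + 0.7*10 = 7.0
# The norm prescribes the option with the higher expected utility.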

ii. The Gap Between How We SHOULD Reason and How We DO Reason

So, over time, we’ve developed a number of different theories of rationality that give us norms for correct reasoning in different domains.

This is great of course, these are very powerful and useful tools.

Now, when it comes to the study of how human reasoning actually works: before Kahneman and Tversky’s work in the 1970s, there was a widely shared view that, more often than not, the mind, or the brain, processes information in ways that mimic the formal models familiar from our normative theories of reasoning, from formal logic, probability theory and decision theory.

What Kahneman and Tversky showed is that, more often than not, this is NOT the way our minds work — they showed that there’s a gap between how our normative theories say we should reason, and how we in fact reason.

This gap can manifest itself in different ways, and there’s no one single explanation for it. One reason, for example, is that in real-world situations, the reasoning processes prescribed by our normative theories of rationality can be computationally very intensive. Our brains would need to process an awful lot of information to implement our best normative theories of reasoning. But that kind of information-processing takes time, and in the real world we often need to make decisions much more quickly, sometimes in milliseconds. You can imagine this time pressure being even more intense if you think about the situations facing our Homo sapiens ancestors; if there’s a big animal charging you and you wait too long to figure out what to do, you’re dead.

iii. Biases and Heuristics (Rules of Thumb)

So, the speculation is that our brains have evolved various short-cut mechanisms for making decisions, especially when the problems we’re facing are complex, we have incomplete information, and there’s risk involved. In these situations we sample the information available to us, we focus on just those bits that are most relevant to our decision task, and we make a decision based on a rule of thumb, or a shortcut, that does the job.

These rules of thumb are the “heuristics” in the “biases and heuristics” literature.

Two important things to note: One is that we’re usually not consciously aware of the heuristics that we’re using, or the information that we’re focusing on. Most of this is going on below the surface.

The second thing to note is these heuristics aren’t designed to give us the best solutions to our decision problems, all things considered. What they’re designed to do is give us solutions that are “good enough” for our immediate purposes.

But “good enough” might mean “good enough in our ancestral environments where these cognitive mechanisms evolved”. In contexts that are more removed from those ancestral environments, we can end up making systematically bad choices or errors in reasoning, because we’re automatically, subconsciously invoking the heuristic in a situation where that heuristic isn’t necessarily the best rule to follow.

So, the term “bias” in this context refers to this systematic gap between how we’re actually disposed to behave or reason, and how we ought to behave or reason, by the standards of some normative theory of reasoning or decision making. The “heuristic” is the rule of thumb that we’re using to make the decision or the judgment; the “bias” is the predictable effect of using that rule of thumb in situations where it doesn’t give an optimal result.


iv. An Example: The Anchoring Effect

This is all pretty general, so let me give you an example of a cognitive bias and its related heuristic. This is known as the anchoring heuristic, or the anchoring effect.

Kahneman and Tversky did a famous experiment in the early 1970s where they asked a group of subjects to estimate the percentage of countries in Africa that are members of the United Nations. Of course most aren’t going to know this, for most of us this is just going to be a guess.

But for one group of subjects, they asked the question “Is this percentage more or less than 10%?”. For another group of subjects, they asked the question “Is it more or less than 65%?”

The average of the answers of the two groups differed significantly. In the first group, the average answer was 25%. In the second group, the average answer was 45%. The second group estimated higher than the first group.

Why? Well, this is what seems to be going on. If subjects are exposed to a higher number, their estimates are “anchored” to that number. Give them a high number, they estimate higher; give them a lower number, they estimate lower.

So, the idea behind this anchoring heuristic is that when people are asked to estimate a probability or an uncertain number, rather than try to perform a complex calculation in their heads, they start with an implicitly suggested reference point — the anchor — and make adjustments from that reference point to reach their estimate. This is a shortcut, it’s a rule of thumb.

Now, you might think in this case, it’s not just the number, it’s the way the question is phrased that biases the estimates. The subjects are assuming that the researchers know the answer and that the reference number is therefore related in some way to the actual answer. But researchers have redone this experiment many times in different ways.

In one version, for example, the subjects are asked the same question, to estimate the percentage of African nations in the UN, but before they answer, the researcher spins a roulette wheel in front of the group, waits for it to land on a number so they can all see it, and then asks them whether the percentage of African nations is larger or smaller than the number on the wheel.

The results are the same. If the number is high, people estimate high; if the number is low, people estimate low. And in this case the subjects couldn’t possibly assume that the number on the roulette wheel had any relation to the actual percentage of African nations in the UN. But their estimates were anchored to this number anyway.

Results like these have proven to be really important for understanding how human beings process information and make judgments on the basis of information. The anchoring effect shows up in strategic negotiation behavior, consumer shopping behavior, in the behavior of stock and real estate markets — it shows up everywhere, it’s a very widespread and robust effect.

Note, also, that this behavior is, by the standards of our normative theories of correct reasoning, systematically irrational.

v. Why This is Important

So this is an example of a cognitive bias. Now, this would be interesting but not deeply significant if the anchoring effect were the only cognitive bias that we’ve discovered. But if you go to the Wikipedia entry under “list of cognitive biases” you’ll find a page that lists just over a hundred of these biases, and the list is not exhaustive. I encourage everyone to check it out.

So what’s the upshot of all this for us as critical thinkers?

At the very least, we all need to acquire a certain level of cognitive bias “literacy”. We don’t need to become experts, but we should all be able to recognize the most important and most discussed cognitive biases. We should all know what “confirmation bias” is, what “base-rate bias” is, what the “gambler’s fallacy” is, and so on. These are just as important as understanding the standard logical fallacies.

Why? Because as critical thinkers, we need to be aware of the processes that influence our judgments, especially if those processes systematically bias us in ways that make us prone to error and bad decisions.

Also, we want to be on the lookout for conscious manipulation and exploitation of these biases by people who are in the influence business, whose job it is to make people think and act in ways that further their interests, rather than your interests. We know that marketing firms and political campaigns hire experts in these areas to help them craft their messages.

vi. A Hypothetical Example: How to Use the Anchoring Effect to Manipulate Public Opinion

Let me give you a hypothetical example (though I know some people who’d say it’s not hypothetical). Let’s say you’re a media adviser to a government that has just conducted a major military strike on a foreign country, and there were civilian casualties resulting from the strike.

Now, if the number of civilians killed is high, then that’s bad for the government, it’ll be harder to maintain popular support for this action. Let’s say our intelligence indicates that the number of casualties is in the thousands. This is not a good number, it’s going to be hard to sell this action if that’s the number that everyone reads in the news the next day.

So, as an adviser to this government, what do you recommend? I’ll tell you what I’d do, if all I cared about was furthering the government’s interests.

I’d say “Mr. President,” (or whoever is in charge) “we need to issue a statement before the press gets a hold of this. In this statement, we need to say that the number of estimated casualties resulting from this strike is low, maybe a hundred or two hundred at the most.”

Now, why would I advise this? Because I know about the anchoring effect. I know that the public’s estimate of the real number of casualties is going to be anchored to the first number they’re exposed to, and if that number is low, they’ll estimate low. And even if data eventually comes out with numbers that are higher, the public’s estimates will still be lower than they would be if we didn’t get in there first and feed them that low number.

That’s what I would do, if all I cared about was manipulating public opinion.

Now, this is a hypothetical example, but trust me when I tell you that decisions like these are made every day, on the advice of professionals who are experts in the psychological literature.

So there’s a broader issue at stake. This is the kind of background knowledge that is important if our ultimate goal is to be able to claim ownership and responsibility for our own beliefs and values.


Transcript

Cognitive Biases and the Authority of Science

In this lecture I want to look at another reason for learning about cognitive biases. This reason has to do with a more theoretical question: What is science? And more specifically: What is the essence of modern scientific methodology?

This is an old question. It’s a philosophical question, it’s a question about the basis and rationale of the scientific method.

It’s also a contentious question. People disagree about what science is and what its methods are. If you’re familiar with the philosophy of science literature, or the science studies literature, then you know that there’s a lot of debate about these questions. And there’s debate about a whole family of related questions, like “Is science objective?”, and “What reason do we have to think that science gives us a more accurate picture of the world than any other worldview or belief system?” and “Is there even any such thing as THE scientific method?”.

Now, I don’t presume to settle these big questions here. At some point in the future I’ll do a whole course in the Critical Thinker Academy on philosophy of science issues, but my goals for this lecture are much less ambitious.

What I want to do is focus on that last question, “is there such a thing as THE scientific method”? And I want to talk about the relevance of cognitive biases for this question, and what this means for the authority of science.


i. Is There a Scientific Method?

Is there such a thing as the scientific method? Scientists seem to think so, or least, science textbooks seem to think so. If you open a high school or university science textbook you’ll often see a first chapter that discusses the nature of science and scientific reasoning. You might even see little flowcharts that are supposed to encapsulate the scientific method. Just google “the scientific method” and look at the results under “images”, you’ll see what I mean.

But if you look at the history and philosophy of science literature, and if you look across the full range of natural and social sciences, the picture is very different. What you see is a variety of scientific methods used within specific fields, but no single method that is common to all the sciences. And you also see (not always but often) a considerable gap between the idealized textbook descriptions of scientific methodology and the actual processes, the actual histories that lead up to the adoption or rejection of a theory within a scientific community.

These observations lead some people to talk about “the scientific method” as a kind of institutionalized myth, a story we tell ourselves about how science works, but that doesn’t really correspond to the reality. And it leads some to challenge the objectivity and authority of science, to challenge the presumption, so common in the developed West, that scientific knowledge is superior to other forms of knowledge, that the modern scientific worldview gives us our best, truest description of the workings of the natural world.

For those who want to defend the superiority of scientific knowledge, the philosophical challenge is how to argue for this view without defaulting to the naive, mythic view of science and scientific reasoning that has been so thoroughly debunked by scholars.

I admit that as a philosopher of science I’m in this camp, and this is a challenge for me. I cringe when I hear pro-science types make broad pronouncements about how science works. “Science is about the search for universal laws!” “Science is about causal explanations!” “Science is about testing hypotheses that make falsifiable predictions!” You can find exceptions to all of these without looking very hard.

Yet I still think that there’s something special about scientific knowledge, something distinctive that gives it a privileged status. I don’t think this is a trivial position to defend, I think the criticisms of the traditional view of science need to be taken seriously.

What I do think — and this gets us to the main point of this lecture, finally — what I do think is that an understanding of cognitive biases can give us a useful perspective on this question, and can help us think about why science is important and why scientific methods are what they are.

ii. Science: A Tool for Reducing the Systematic Errors Caused By Cognitive Biases

Okay, with all that build-up you’d expect that what I have to say about science and cognitive biases is going to be really deep and sophisticated, but it’s not, it’s really pretty simple. It can be summarized in two points:

  1. Human beings are prone to biases that lead to error.
  2. Scientific methodology aims to neutralize the effects of these biases, and thereby reduce error.

Let’s look at these two points in order.

First, human beings are prone to biases that lead to error. What sorts of biases? There are lots, but I’m thinking of two kinds of biases in particular.

(1) We are pattern recognition machines

The first is related to the fact that we’re pattern recognition machines. Human beings see patterns everywhere. We’re bombarded by sensory data and our natural disposition is to seek out patterns in the data and attribute meaning to those patterns.

An obvious example of this is our tendency to see faces in anything that remotely resembles the configuration of features on a human face. An electrical outlet looks like a face, the front grill of a car looks like a face, the knots of a tree look like a face.

Now, it’s not surprising that we see faces everywhere. Face recognition is important for social primates like us, it’s very important that we learn to read the facial cues of other people to know whether they’re friendly or a threat, whether they’re content or displeased, and so on. So we’ve evolved this very specialized and sensitive mechanism for facial recognition, and as a result we’re basically hard-wired for it.

But this means that we’re prone to a certain kind of error as well, where we see faces in inanimate things that don’t really have faces.

This is just one kind of pattern recognition mechanism working in our brains, there are lots of different kinds working all the time, looking for patterns that might be meaningful, and imposing meaning on those patterns if they trigger the right cues. Michael Shermer calls this feature of our mental functioning patternicity.

Now, in general, our ability to identify patterns is an extraordinarily valuable tool. It’s essential for survival, for extracting meaningful, relevant information from our physical environment and from our social environment.

But there’s a price we pay for this amazing ability — sometimes we attribute meaning to patterns which they don’t have, and sometimes we see patterns in what is actually patternless noise. And it can be hard to not see these patterns when we expect to see them; our expectation, or the subconscious workings of our brain, can sometimes impose a meaningful pattern on data, or on sensory experience, even when there’s nothing there.

Psychologists have a bunch of names for this phenomenon. For visual images, like seeing faces in clouds, or the Virgin Mary on a slice of burnt toast, it’s called pareidolia. The general phenomenon is called apophenia, which is defined as the experience of seeing meaningful patterns or connections in random or meaningless data.

Here’s a fun example.

Have you ever heard of backmasking? This is when you play an audio recording, usually a song, backwards, and you hear what sounds like meaningful speech.

When you do this deliberately to an audio recording it’s called “backmasking” or “backward masking”. It was controversial in the 1980s when a number of Christian groups in the United States claimed that heavy metal rock bands were intentionally putting Satanic messages into their songs which people could hear if they played the songs backwards.

What’s interesting about backmasking is that people will often hear meaningful words or phrases in songs that are played backward, even when they were never intentionally put there. And the effect is amplified when you’re given some cues about what to look for in the audio.

The very best place on the internet to experience this effect is Jeff Milner’s Backmasking site, at jeffmilner.com/backmasking.

On the video and audio versions of this lecture I play a sample from Jeff’s site to illustrate the effect, but I can’t do that in a document. You can check Jeff’s site for yourself, but what I did is play a sample from the Britney Spears song “Hit Me Baby One More Time”, first forwards and then backwards. The backwards version sounds like incoherent noise to most ears. But when you prime the listener’s expectations by showing them this set of lyrics:

“Sleep with me, I’m not too young.”

… and play the backwards sample again, it is impossible NOT to hear those words. It’s a striking effect (again, I urge you to go check it out).

Now trust me, Britney Spears didn’t intentionally backmask those lyrics. This is meaningless noise being interpreted as meaningful information, and we’re all prone to it, and no amount of mental effort can make you not hear those lyrics.

So, what does this have to do with science? My general point is that we’re hardwired to see patterns in data and in observable phenomena. Sometimes these patterns are meaningful and point to underlying regularities. If they’re stable, repeatable regularities, then we’ve got the basis for a predictively useful scientific generalization. Our knowledge of the basic laws of nature begins with identifying these basic patterns in observable data.

But sometimes these patterns are not meaningful; sometimes they’re just noise that we mistake for meaningful patterns. This is an error, but it can be hard to see the error, or correct for it, especially if we’re really invested in the reality of those patterns.

One thing that scientific methodology offers us is a set of tools for distinguishing meaningful from meaningless patterns. These tools come in different forms, but the obvious ones are the standard protocols for controlled experimental studies, and the statistical analysis of data from these studies, the stuff you would learn in a good “research methods” class in a science department. I’m not going to elaborate on those methods here, but ideally that’s one of the things you end up learning in those courses: how to distinguish meaningful from meaningless patterns.

So that’s the first kind of error that we’re prone to.

(2) We suck at weighing evidence

The second kind of error that we’re prone to has to do with how human beings are naturally disposed to weigh evidence, and more specifically how we weigh evidence as it bears on the truth or falsity of a hypothesis.

The general error is this: if we think the hypothesis is true, or would like it to be true, then we tend to remember or focus only on the evidence that would count in favor of the hypothesis, and ignore or dismiss evidence that would count against it.

The result is that we build up a set of data that is lopsided in favor of the hypothesis, a set that is biased toward confirmation. That’s why this phenomenon is called confirmation bias, and there’s a huge psychological literature on it.

Actually, “confirmation bias” can refer to a cluster of related phenomena. You can have:

  1. Biased search for information: This is where people test hypotheses in a one-sided way, by only searching for evidence that is consistent with the hypothesis that they happen to hold. They ask, “what would you expect to see if this hypothesis was true?”, and look for evidence to support this prediction, rather than ask “what would you expect to see if this hypothesis was false?”, and look for evidence that would falsify the hypothesis.
  2. Biased interpretation of evidence: This is where you give two groups the SAME information, the SAME evidence, but they interpret the evidence differently depending on their prior beliefs. So if you strongly believe some hypothesis, then you’ll be inclined to think, for example, that studies that support that hypothesis are well-conducted and convincing, but for studies that don’t support your hypothesis, you’ll be inclined to think that they’re not well-conducted, and not convincing. One way to think of this is that people set higher standards of evidence for hypotheses that go against their current expectations, and lower standards of evidence for hypotheses that support their expectations.
  3. Biased memory: Even if someone has gone ahead and tried to collect evidence in a neutral, unbiased way, they may still remember it selectively, recalling more of the confirming evidence than the disconfirming evidence, which skews the evidence in favor of their expectations.

I want to emphasize that these biasing effects I’m describing are well documented in the cognitive science literature; it’s part of the cognitive biases literature that goes back several decades now, and it’s an ongoing area of research.

(3) Science is what we do to keep us from lying to ourselves

Okay, so what’s the upshot of this for our understanding of science? The upshot is that, left to our own devices, human beings are prone to error in the weighing of evidence. What’s the error? The error is thinking that we’re making a judgment based on a complete body of evidence, when the body of evidence we’re considering has actually been filtered and skewed by confirmation bias. And that’s going to lead to error in judgments about how well supported a hypothesis actually is.

And this is where scientific methodology comes in. The physicist Richard Feynman once said that “science is what we do to keep us from lying to ourselves”, and I think there’s a lot of truth to this. One of the functions of scientific methodology is to neutralize the effects of confirmation bias by forcing us to search for and weigh a complete body of evidence, one that includes not only confirming evidence, but also disconfirming evidence, evidence that would count against the hypothesis in question.


iii. Example: When is A correlated with B?

Let me give you a simple example to illustrate. Let’s say you walk into a health care clinic and you see a flyer for a psychotherapist’s practice. The flyer says that Dr. Jones has a 90% success rate in treating mental health problems. You read a little more closely, and it says that of all the patients he sees who come to his clinic complaining of psychological or mental health problems, 90% of them report an improvement in their condition within two weeks of beginning treatment with Dr. Jones.

Now, for the sake of argument, let’s assume this figure is accurate. If 100 people walk through his door suffering from some mental health problem, then if you survey them two weeks after beginning treatments with Dr. Jones, roughly 90 of them will say that their condition has improved.

Here’s the first question: Does this evidence support the conclusion that Dr. Jones’s treatment is causally responsible for the improvement in their condition?

I know from experience that if I ask my science students this question, about one third will say that it does support this causal hypothesis, and about two thirds will say “no”, it doesn’t, because they’ve been told many times in their psychology classes that correlation does not imply causation. Maybe there’s some other factor involved that’s responsible for the correlation, maybe it’s the placebo effect, whatever. I think a survey of the general public would give a much stronger result, with a lot more people thinking that this evidence supports the claim that Dr. Jones’ treatments are causally responsible for the improvement in condition.

Now, what if I ask this question? Let’s grant that the correlation doesn’t necessarily imply causation. But does the evidence given even support the weaker claim of a correlation? Does the fact that 90% of patients report improvement of condition support the claim that there’s at least a correlation between Dr. Jones’ treatments and the improvement in condition? And by correlation I just mean that if you go to Dr. Jones’ clinic with a mental health problem, you’re statistically more likely to report improvement in your condition than if you didn’t seek out treatment.

If I ask this question to my science students, almost all of them will say that the evidence supports some kind of correlation. When I ask them how strong a correlation, more than half will say that it’s 90% or close to 90% — that you’re 90% more likely to report an improvement in your condition if you go see Dr. Jones.

Now, for those of you playing along at home, what do you think the answer is? Would you be surprised to hear that this evidence doesn’t even support a claim of correlation? In fact, it gives us no reason to think there’s any kind of correlation whatsoever, much less a 90% correlation!

Why is this? It’s because we’re only looking at confirming evidence; we have no information about potentially disconfirming evidence. To establish a correlation between Dr. Jones’ treatments and improvement in condition, we would need to compare two numbers: the probability that a person will improve if they seek out treatment with Dr. Jones, and the probability that they will improve anyway, on their own, without seeking treatment from Dr. Jones.

If it turned out, for example, that 90% of people will report improvement in their mental health problems within two weeks anyway, without seeking treatment, then your odds of improving are the same whether you see Dr. Jones or not, and the correlation is zero; there’s no positive correlation at all.

Or maybe there’s an 80% rate of improvement without treatment, so if the rate of improvement is 90%, then at best this would support a weak positive correlation of 10%, which is still very different from a 90% correlation.
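If it helps to see the arithmetic laid out, here’s a minimal sketch of the comparison that actually matters, written in Python with hypothetical numbers (the flyer gives us only the first one; the second is exactly the missing piece of evidence):

    # Hypothetical sketch: the flyer reports only the first rate.
    p_improve_with_treatment = 0.90     # the figure from Dr. Jones' flyer
    p_improve_without_treatment = 0.80  # assumed base rate; the flyer is silent on this

    # A positive correlation exists only if treatment raises the probability
    # of improvement above the base rate of spontaneous improvement.
    effect = p_improve_with_treatment - p_improve_without_treatment

    if effect > 0:
        print(f"Treatment is associated with a {effect:.0%} higher rate of improvement.")
    elif effect == 0:
        print("Zero correlation: you improve at the same rate either way.")
    else:
        print("Negative correlation: you were better off without the treatment.")

With an assumed base rate of 80%, the effect is a modest 10 percentage points; set the base rate to 90% and it vanishes entirely.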

So, this evidence in Dr. Jones’ flyer, even if it’s all 100% accurate, not only does it not support the hypothesis that his treatments are the cause of the improvement in his patients’ condition, it doesn’t even support a correlation of any kind between his treatments and improvement in his patients’ condition. There just isn’t enough information to justify these claims. But almost none of my science students, when they’re given this hypothetical case, will see this; most will say that the evidence supports a very strong correlation. This is because they’re looking at an incomplete body of evidence and drawing a hasty inference, but they don’t realize the evidence is incomplete until it’s pointed out to them.

Now, this is a very simple example, but it helps to illustrate the general take-away point, which is that on our own we suck at weighing evidence. And that highlights the function and the value of statistical methods and scientific protocols — they force us to seek out and take into consideration types of evidence that are relevant to the truth of the hypothesis, but that we would otherwise ignore, or dismiss, or downplay, due to confirmation bias.

Now, I’ve been focusing on confirmation bias here, because I think it’s particularly important, but of course it’s not the only source of bias in science. There are lots of them, but they all highlight the same general point.

In sciences that deal with living subjects, for example, there are biases that can result from expectation effects, like the placebo effect, where the mere expectation of a treatment will elicit a physiological response from a test subject. You can correct for this bias by using blind, randomized control groups, where subjects don’t know whether they’re receiving a treatment or just a placebo, so you can estimate the size of the placebo effect and use that to estimate the effect size due to the treatment itself.

And there are also experimenter effects, where the experimenter’s knowledge and expectations can unintentionally influence how subjects respond and how observations are interpreted. You neutralize these kinds of biases by using double-blind studies, where the researchers who directly interact with subjects and do data analysis don’t know whether a given subject is in the test group or the control group.
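To make the logic of these corrections concrete, here’s a small sketch, again in Python and again with invented numbers, of how a blinded, placebo-controlled comparison separates the placebo response from the effect of the treatment itself:

    # Invented trial data: subjects randomly assigned to treatment or placebo,
    # with neither the subjects nor the experimenters who interact with them
    # knowing who got which (double-blind).
    treatment_group = {"n": 200, "improved": 130}
    placebo_group = {"n": 200, "improved": 90}

    rate_treatment = treatment_group["improved"] / treatment_group["n"]  # 0.65
    rate_placebo = placebo_group["improved"] / placebo_group["n"]        # 0.45

    # The placebo arm estimates how much improvement expectation alone produces;
    # the difference estimates the effect attributable to the treatment itself.
    effect_beyond_placebo = rate_treatment - rate_placebo

    print(f"Placebo response rate: {rate_placebo:.0%}")
    print(f"Estimated effect beyond placebo: {effect_beyond_placebo:.0%}")

A real analysis would also ask whether a 20-point difference like this could plausibly be due to chance; that’s where the statistical tests from a research methods course come in.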

The point is that all these protocols are in place because human beings are prone to error, and we need these protocols to neutralize or correct for these errors.


iv. What Makes Science Special

I’d like to wrap up this lecture with a final comment on the bigger question that motivated this topic in the first place. The question was whether there’s a way of defending the superiority of science as a source of knowledge about the world, without resorting to a mythic view of how science works.

I think the answer is “yes”, but it’s a qualified “yes”. I think it’s clear that if we didn’t follow these scientific protocols, our knowledge of the world would be less reliable than it is, and it’s clear why this is so when you think of these protocols as methods for neutralizing the effects of cognitive biases.

But saying this doesn’t mean that scientists always follow these protocols. Science is a complex social practice, and there are lots of things that can interfere with or prevent these protocols from being properly implemented. The highest quality studies are often the most expensive studies to conduct, so funding can be a limiting factor. The highest quality studies might also take many years, maybe even decades to conduct, so time constraints can be another factor.

And in some fields proper controlled studies might just be impossible to conduct. In genetics, for example, there are experimental ways of measuring the heritability of a trait, which is the percentage of the variation in the trait that can be accounted for by genetic variation in the population. You can do these controlled experiments to directly measure heritability on fruit flies, but you can’t do them on human populations because they would be unethical to conduct. So for humans we’re forced to rely on more indirect methods of estimating heritability that are more limited and more vulnerable to biases.
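As a rough illustration of what’s being estimated, here’s a sketch with made-up variance numbers (and a simplifying assumption that there’s no gene-environment covariance or interaction); heritability is just a ratio of variances:

    # Hypothetical variance components for some trait in some population.
    var_genetic = 12.0      # variation attributable to genetic differences
    var_environment = 28.0  # variation attributable to environmental differences

    # Simplifying assumption: total variation splits cleanly into these two parts.
    var_phenotype = var_genetic + var_environment

    heritability = var_genetic / var_phenotype
    print(f"Heritability: {heritability:.0%}")  # 30% of trait variation is genetic

The hard part, and the part that is straightforward with fruit flies but ethically impossible with humans, is experimentally controlling genes and environments so that these variance components can be measured directly.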

So I admit that in some ways, this discussion of scientific methods still has an air of mythology about it, in the sense that it sets up an ideal that may never be perfectly realized in practice. But on the other hand, we still retain a notion of what makes science special; namely, that it’s an institutionalized social practice that is committed to these ideals, that strives to reduce the distorting effects of biases when it can. And in this respect it’s distinctive; there’s no other social institution that functions quite the same way.

When I talk about science in this way, my students sometimes think that I’m defending a mythic view of science as “value-free”, as though science, when it’s properly conducted, doesn’t make any subjective value judgments, or is free of value biases. But that doesn’t follow. What this view entails is that scientific methodology is committed to the elimination of certain kinds of biases; namely, those that are conducive to error in the identification of meaningful patterns and in the weighing of evidence. But that’s only part of what science is about. Science is very much a value-laden enterprise, and good science is just as value-laden as bad science. What distinguishes good from bad science isn’t the absence of value judgments, it’s the kind of value judgments that are in play.


v. Implications for the Philosophy of Science?

The other comment I want to make is that accepting this view of the scientific process still leaves open most of the big philosophical questions about the nature of science.

For example, I might be what philosophers of science call a scientific realist, which means that I think that one of the proper goals of a scientific theory is to describe the world beyond what we can directly measure and observe, and that sometimes science is successful in doing this.

You, on the other hand, might be what philosophers of science call an instrumentalist about theories: you think that science doesn’t aim to describe the world beyond the realm of observable phenomena at all. You think that scientific theories are just instruments for helping us organize and predict observable phenomena and the results of experiments, and that’s how they should be judged; and we should treat the theoretical parts of our theories as nothing more than useful fictions, or at least be agnostic about their truth.

This debate between scientific realists and instrumentalists is one of those big philosophical questions about science. Some version of it goes back to the Greeks. My point is that either view is compatible with thinking of scientific methodology as a set of tools for neutralizing cognitive biases. I’m not saying that both are equally plausible views, all I’m saying is that they’re equally compatible with everything I’ve said about scientific methodology in this podcast.

vi. The Take-Away Message

So, the take-away message of this position is really fairly limited. It doesn’t imply much for the big philosophical questions about the nature of science. But it does suggest a certain kind of attitude toward science, and the authority of science.

It suggests, first of all, that people should be very cautious about relying on their intuitions in judging a scientific issue. Our intuitions are just not reliable. One kid developing autism after a vaccination does not imply that the vaccine was the cause of the autism. But we all know that; the sample size is too small. What about 6000 kids? Our intuition tells us that if 6000 kids develop autism after being vaccinated, that’s at least evidence for a strong correlation, right?

WRONG. It’s evidence, but it’s lopsided; it’s an incomplete body of data. Think of the Dr. Jones example.

When you actually look at a more complete body of data, including background rates of autism, the evidence is clear: there is no statistically significant correlation between vaccination and the development of autism. The number of reported cases of autism has certainly shot up over the past thirty years, but part of this is attributable to changes in diagnostic practices; how much of an increase there’s been in the actual prevalence of the condition is still unclear, but there’s no evidence that it’s linked to vaccinations. Multiple studies from different scientific bodies agree on this conclusion.
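Here’s the same point as the Dr. Jones example, in a toy sketch. Every number below is invented purely to illustrate the logic: a raw count of cases, however large, means nothing until it’s converted into a rate and compared against the background rate:

    # All numbers are hypothetical, for illustration only.
    vaccinated = {"n": 1_000_000, "autism_cases": 6000}
    unvaccinated = {"n": 500_000, "autism_cases": 3000}

    rate_vaccinated = vaccinated["autism_cases"] / vaccinated["n"]
    rate_unvaccinated = unvaccinated["autism_cases"] / unvaccinated["n"]

    # 6000 cases sounds alarming on its own, but with these numbers the
    # rates are identical (0.6% in both groups): no correlation at all.
    print(f"Rate among vaccinated:   {rate_vaccinated:.2%}")
    print(f"Rate among unvaccinated: {rate_unvaccinated:.2%}")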

Now, I know that a lot of people in the anti-vaccine movement resist this conclusion, and there are a lot of conflicted parents who see this as a tug-of-war between equal sides, an anti-vaccine side and a pro-vaccine side. But the take-away message of this lecture is that the sides are not equal, and we shouldn’t view them as equal. Human beings, left to their own devices, will see correlations where there aren’t any, and attribute meaning to correlations that are actually meaningless. The more invested you are in the outcome, the more likely it is that you’ll be led into error. Only a proper scientific study can resolve the issue, and when multiple studies converge on the same conclusion, then the rational thing to do, provisionally, is to accept the scientific consensus.

We may all have strong intuitions the other way, but once we realize how prone to error we really are, and how scientific methods are designed precisely to avoid those errors, then I think we do have a compelling argument for accepting the authority of science, especially when there’s a consensus on the issue among the relevant experts in the mainstream scientific community.


Confirmation Bias and the Evolution of Reason

In this lecture I’m going to follow up on our previous discussion of confirmation bias by looking at an interesting attempt to explain confirmation bias drawn from the field of evolutionary psychology. In the commentary on this theory I hit on a bunch of topics, including the relationship between rhetoric and argumentation, the nature of biological functions, and an important virtue that I call epistemic humility.

i. Cognitive Bias Research: Phenomena vs Theory

Last lecture I talked a lot about confirmation bias, this tendency we have to filter and interpret evidence in ways that reinforce our beliefs and expectations. And I argued that one way to think about science and scientific methodology is as a set of procedures that function to neutralize the distorting effects of confirmation bias (among other cognitive biases) by forcing us to seek out and weigh even the evidence that might count against our beliefs and expectations.

There are two sides to cognitive bias research. There’s the science that describes the effects of these biases on our judgment and behavior, and there’s the science that tries to explain why we behave in this way, that tries to uncover the psychological or neurological or social mechanisms that generate the behavior.

Everyone agrees that confirmation bias is a very real phenomenon; the descriptive part is well established. But not everyone agrees on the explanation for why we’re so prone to this bias, what mechanisms are at work to generate it. And when there’s disagreement at this level, there can be disagreement about how best to counteract the effects of confirmation bias. So as critical thinkers we should be interested in these debates, for this reason, and because they’re relevant on a deeper level to how we should think of ourselves as rational beings.

In this lecture I want to talk about an interesting approach to these questions that has generated some recent buzz. It comes out of the field of evolutionary psychology, which is a branch of psychology that tries to explain and predict our cognitive functioning by showing how various aspects of this functioning can be viewed as evolutionary adaptations. This approach views these cognitive processes as products of natural selection, meaning that they were selected for because they helped our evolutionary ancestors survive and reproduce. What I’m going to be talking about is a view of human reason from this evolutionary perspective that offers an interesting explanation for the phenomenon of confirmation bias.


ii. An Evolutionary Explanation of Confirmation Bias

So from this perspective, we can ask the broad question, why did human reason evolve? And by “reason” here I mean a very specific ability, namely, the ability to generate and evaluate arguments, to follow a chain of inferences, and to construct and evaluate chains of inferences that lead to specific conclusions. Human rationality can be defined much more broadly than this, but we’re focusing on this specific component of rationality for the time being.

Now, if we assume that this ability to construct and evaluate arguments is an evolutionary adaptation of some kind, the question then becomes, what is this ability an adaptation FOR?

The Simple and Obvious Story

Well, here’s one simple and obvious way to think about it. Our ability to construct and evaluate arguments evolved because it has survival value, and it has survival value because it helps us to arrive at truer beliefs about the world, and to make better decisions that further our goals. This ability to reason is a general purpose tool for constructing more accurate representations of the world and making more useful and effective decisions. We assume that, in general, ancestral humans who were better able to reason in this way had a survival advantage over those who weren’t. So we expect that a higher percentage of individuals with this trait will survive and reproduce, and over time the trait will come to dominate the population. That’s why the trait evolved and persists in human populations.

Confirmation Bias: A Problem for the Simple and Obvious Story

Now, if we accept this simple story, then we have an immediate problem. The problem is that when we look at the psychological literature, we see that human beings are often very bad at following and evaluating arguments, and they’re often very bad at making decisions. This is the take-home message of a good deal of the cognitive biases research that’s been conducted over the past 40 years!

To take the obvious example, human beings are systematically prone to confirmation bias. Confirmation bias leads us to disproportionately accept arguments that support our beliefs and reject arguments that challenge our beliefs, and this leads to errors in judgment; we think our beliefs are more justified than they really are.

Now, from this simple evolutionary stance, the existence of confirmation bias is a bit of a puzzle. If reason evolved to improve the quality of our individual beliefs and decisions, then what explains the persistence of confirmation bias, and other cognitive biases that undermine the quality of our individual beliefs and decisions?

The Argumentative Theory of Reason

The new approach to these questions that is getting some recent attention tries to resolve this puzzle about confirmation bias. It’s known as the argumentative theory of reason, and it claims that the central adaptive function of human reasoning is to generate and evaluate arguments within a social setting, to generate arguments that will convince others of your point of view, and to develop critical faculties for evaluating the arguments of others.

Now this might not seem like a radical hypothesis, but I want you to note the contrast between this view and the previous one we just described. The previous view was that the central adaptive function of human reason was to generate more accurate beliefs and make better decisions for individuals. In other words, the function is to improve the fit between individual beliefs and the world, resulting in a survival advantage to the individual.

The argumentative theory of reason rejects this view, or at least wants to seriously modify it. It says that human reason evolved to serve social functions within human social groups.

What are these functions?

Well, imagine two ancestral humans who are trying to work together, to collaborate to find food, defend against aggressors, raise children, and so on. This all works fine when both parties agree on what they want and how to achieve it. But if a disagreement arises, then their ability to work together is compromised. They need to be able to resolve their disagreement to get back on track.

Now let’s imagine that this pair of ancestral humans lacks the ability to articulate the reasons for their respective views, or the ability to evaluate the reasons of the other. They’re stuck, they can’t resolve their disagreement, and because of this, their collaboration will probably fail and their partnership will dissolve.

Now imagine another pair of ancestral humans in the same situation who have the ability to articulate and evaluate their reasons to one another. They have the potential to resolve their disagreement through mutual persuasion. This pair is more likely to survive as a pair, and to reap the benefits of collaboration.

And for this reason, this pair will likely out-compete groups or individuals who lack the ability to argue with one another. And according to this theory, that’s the primary reason why the ability to reason evolved in human populations — to serve the needs of collaboration within social settings, not to improve the quality of individual beliefs or to track the truth about the world.


How the Argumentative Theory Explains Confirmation Bias

The argumentative theory of reason is the product of two French researchers, the well-known anthropologist and cognitive psychologist Dan Sperber and his former student Hugo Mercier.

One of the reasons they offer in support of their theory is that it helps to explain the existence of confirmation bias.

How does it do this? Let’s walk through this, it’s kind of interesting.

In a social setting where there are lots of different individuals with different beliefs and values, everyone is required to play two roles at different times: the role of the convincer — the one who is giving the argument that is intended to persuade — and the role of the convincee — the one who is the recipient of the argument and the intended object of persuasion.

Now, if your goal as a convincer is to use reasons to persuade others, then a bias toward confirming arguments and confirming evidence is going to serve you well. As a convincer your goal isn’t the impartial weighing of evidence to get at the truth, it’s to assemble reasons and evidence that will do the job of persuading others to accept your conclusion.

Things are different when you’re the convincee, the one who is the object of persuasion. In this context you can imagine two extreme cases for how you should handle these attempts to persuade you. On the one hand, you could decide to accept everything that other people tell you. But that’s not going to serve your needs very well — you’ll be pulled in different directions, you won’t have stable beliefs, and you won’t be effective at asserting your own point of view.

On the other hand, you could decide to reject everything that other people tell you that doesn’t conform to your beliefs. This is the ultra-dogmatic position: you stick to your guns come what may. This has some obvious advantages. You’ll have a stable belief system, you’ll attract collaborators who think the way you do, and so on.

But it’s still not ideal, because the ultra-dogmatic stance runs the risk of rejecting arguments with true conclusions that would actually improve your condition if you were to accept them.

A better compromise position is one where there’s a default dogmatism, where your initial reaction is to resist arguments that challenge your beliefs, but this default dogmatism is tempered by a willingness and ability to evaluate arguments on their merits, and thereby make yourself open to rational persuasion.

From an evolutionary standpoint, this compromise position, which you might call “moderate dogmatism”, seems to offer the maximum benefits for individuals and for groups.

Now when you combine the optimal strategy of the convincer, which is biased toward arguments and evidence that support your beliefs, and the optimal strategy of the convincee, which is biased against arguments and evidence that challenge your beliefs, you end up with an overall strategy that looks an awful lot like the confirmation bias that psychologists have been documenting for decades.

And this is what Sperber and Mercier are saying. When we think of reason as serving the goals of social persuasion, confirmation bias shouldn’t be viewed as a deviation from the proper function of human reason. Rather, it’s actually a constitutive part of this proper function, this is what it evolved FOR. To use a computer software analogy, confirmation bias isn’t a “bug”, it’s a “feature”.


Consequences of This View

Now, I don’t know if this view is right. But I do find it provocative and worth thinking about.

I mentioned earlier that different views on confirmation bias can result in different views of how best to neutralize or counter-act it. Sperber and Mercier argue that their view has some obvious consequences along these lines.

For one, their view implies that the worst case scenario is individuals reasoning alone in isolation. Under these conditions we’re most prone to the distorting effects of confirmation bias.

A much better situation is when individuals reason in groups about a particular issue. This way everyone can play both the role of the convincer and the convincee, and we can take advantage of our natural ability to evaluate the quality of other people’s arguments, and others can evaluate the quality of our arguments. We should expect that reasoning in groups like this will result in higher quality judgments than reasoning in isolation, and lots of studies on collective reasoning do bear this out.

But of course not all groups are equal. If a group is very homogeneous, with lots of shared beliefs and values and background assumptions, then the benefits of group reasoning are more limited because there are collective confirmation biases that won’t be challenged.

So, if we’re looking to maximize the quality of our judgments and our decisions, the better situation is when the groups are not so homogeneous, when there’s a genuine diversity of opinion represented within the group, and you can expect that at least some people will start off disagreeing with you. Under these conditions, the benefits of group reasoning and group argumentation are greatest — the result will be judgments that are least likely to be distorted by confirmation bias.

This is one of the conclusions that Sperber and Mercier arrive at. I don’t think you need to accept the whole evolutionary framework to understand the benefits of group argumentation, but it is an interesting explanation for why the quality of argumentation varies in this way, from lowest involving individuals in isolation to highest involving heterogeneous groups.

Take-Away Messages

So, what are the take-away messages of this story for us as critical thinkers? I have three in mind:

1. It can be very enlightening to take an evolutionary perspective on the origins and nature of human reason, but we need to be careful about how to do this.

Almost everyone thinks that evolutionary biology can and should shed light on psychology. But we need to be careful about this. “Evolutionary psychology” in the form that Sperber and Mercier practice it has its critics. Telling a compelling story about the possible evolutionary advantages of a trait isn’t enough to establish that the trait really is an adaptation that was selected for because of those advantages. Evolutionary views need to be defended against non-evolutionary alternatives, and specific evolutionary stories need to be defended against alternative evolutionary stories. But this is just to say that we need to be careful about the science. For now I’m just going to mention this and put aside the question of the status of the science.

2. Some may see troubling (skeptical) implications in Sperber and Mercier’s theory. I don’t think these skeptical worries are warranted, but the shift in perspective does have implications worth thinking about.

What are these troubling skeptical implications? Well, remember that the theory involves a shift from thinking of the primary function of human reason as improving the quality of our individual beliefs and decisions, to thinking of the primary function as persuasion within human social groups.

One possible worry is that this shift threatens to collapse the distinction between argumentation and rhetoric that optimists about reason would like to maintain. Rhetoric is about persuasion, broadly construed. Argumentation is traditionally viewed as either distinct from rhetoric, or as a special branch of rhetoric that focuses on persuasion for “good reasons”. It’s the “good reasons” part that distinguishes argumentation from mere persuasion, and that connects it to the broader concepts of truth and rational justified belief that we emphasize in Western science and philosophy.

So, some might conclude that if Sperber and Mercier are right that the primary evolutionary function of human reason is social persuasion, then it turns out that argumentation really is just about persuasion, and not about truth and developing rationally justified beliefs. Some might think that it warrants a form of skepticism about our ability to improve the quality of our beliefs and decisions through reason and argument.

Now, my reading of Sperber and Mercier’s theory is that it doesn’t support this kind of skepticism about human reason. I’ll offer two reasons why.

First, the story that Sperber and Mercier give about the evolutionary function of reason explicitly builds in a capacity for critically assessing the quality of arguments. But this capacity is grounded in our roles as convincees, not as convincers. From the perspective of someone on the receiving end of an argument, the capacity to distinguish good from bad arguments is genuinely adaptive, and their theory predicts that human reason retains this capacity. What Sperber and Mercier add is that human reason is also adapted for social persuasion, and the goals of social persuasion are sometimes in conflict with the goals of truth-finding and belief-justification. So it’s a messier, less unified view of human reason that they’re advocating, but it’s not inconsistent with the traditional view of reason as a means of distinguishing good from bad reasons for belief.

The second point I want to raise has to do with this talk of evolutionary functions. Let’s say, for the sake of argument, that human reason evolved primarily for social persuasion, that our ability to construct and follow arguments was selected for the advantages it afforded in terms of social persuasion and social collaboration. In other words, we’re granting that it didn’t evolve for the purpose of improving the quality of our individual beliefs and decisions. Sperber and Mercier don’t actually go this far, but let’s assume it for the sake of argument.

Now, does it follow, if this is the case, that human reason does not today have the function of improving the quality of individual beliefs and decisions? It’s tempting to say “yes” given the way I’ve set it up, but the answer is “no”, this doesn’t follow.

To think this way is to make a basic mistake about the nature of biological functions. It’s the fallacy of assuming that, if a trait evolved for a given function, then it can only serve the function that it evolved for.

In biology there are lots of examples of traits that originally evolved for a given function, but that later were used for other purposes and became part of the natural function of the trait. The classic example is bird feathers. Almost everyone agrees that feathers originally evolved to aid in heat regulation, not for flight — it was only later that they were co-opted and shaped by natural selection for flight. They were also co-opted for display and selection of mates, so that’s three functions right there.

So, even if we granted that human reason evolved initially as a tool for social persuasion in the service of collaboration, it doesn’t follow that our capacity for reason couldn’t have been co-opted for other purposes and shaped by natural selection for other functions, like improving the reliability and accuracy of our beliefs about the world, or for other functions entirely.

Okay, so what’s my third take-away message?

3. Background knowledge like this helps us develop strategies and attitudes that can improve the quality of our judgments and decisions.

This message is one that I’ve been trying to emphasize in these lectures, and it has to do with the importance of background knowledge for critical thinking. In previous lectures I’ve distinguished three different kinds of background knowledge that are relevant for critical thinking.

  1. Background knowledge related to subject-matter, the topic under discussion.
  2. Background knowledge related to the intellectual history of debate over a particular issue, the history of argumentation that has shaped the way people view the issue.
  3. Background knowledge related to how the human mind works, how we actually form beliefs and make decisions.

The views on the evolution of human reasoning that we’ve been talking about here — along with the two previous lectures on cognitive biases — fit into this third category of background knowledge.

From my perspective, it’s clear that the more we understand how our minds actually work, the greater is our ability to bring critical thought to bear on our judgments and decisions.

The irony, however, is that sometimes what we learn is discouraging. We learn that we’re less rational than we think we are, we’re more prone to errors and over-confidence than we think we are.

But my point is that this is still useful information, because it can help us develop strategies for managing our irrationality in a rational way. And it can help us identify the sorts of psychological attitudes and dispositions that are most conducive to critical thinking.


Epistemic Humility

For example, if we know that we’re prone to confirmation bias, but that this bias can be neutralized by following certain scientific protocols, or by reasoning together in diverse groups, then this can lead to strategies for effectively managing and reducing the effects of this bias.

Also, if we know that we’re prone to confirmation bias, and we know that we’re also prone to overconfidence, then it helps us to identify certain attitudes, or virtues, that should be cultivated to help avoid the effects of these biases.

One of these attitudes is what I like to call epistemic humility. “Epistemic” is a philosopher’s term that means “pertaining to knowledge”, so in this respect I’m talking about humility regarding the status of our knowledge and our capacity to reason well.

Now, this isn’t the same as skepticism about knowledge — to be epistemically humble isn’t necessarily to doubt our knowledge, or to deny the possibility of knowledge. It’s rather to adopt an epistemic stance that is appropriate to, and that acknowledges, our situation as fallible, limited beings that are prone to overconfidence and error.

The degree to which we’re prone to error will vary from context to context. The key idea here is that the quality of our judgments is highest when our epistemic stance — the attitude we take toward our own status and capacities as knowers — properly matches the epistemic environment in which we find ourselves. For example, if we’re reasoning all by ourselves, this is a different epistemic environment than if we’re reasoning with a diverse group of people. Given what we know about reasoning, it’s appropriate to adopt a greater degree of epistemic humility when we’re reasoning by ourselves than when we’re reasoning with a diverse group.

And sometimes the appropriate stance is simply to not trust our own judgments at all. That’s an extreme form of humility, but in the right circumstances it can be the most rational stance to take.

A classic example of this kind of rational humility can be found in the Greek story of Odysseus and the Sirens. The Sirens were mythic female creatures whose beautiful songs lured sailors to leap into the water or to wreck their ships on the shores of the Sirens’ island.

Odysseus was very curious to know what the Sirens’ song sounded like, but he understood that he might not be able to resist it. So he did something very clever: he had all his sailors plug their ears with beeswax and tie him to the mast. He ordered his men to leave him tied tightly to the mast, no matter how much he might beg to be untied.

When he heard the Sirens’ beautiful song he was overcome by it, and he desperately wanted to jump into the sea to join them; just as he had predicted, he ordered the sailors to untie him. But they refused, following his earlier orders. So, as a result of his strong sense of rational humility regarding his own capacity to resist persuasion, Odysseus was able to experience the Sirens’ song and come out unscathed, where other men with less humility were lured to their deaths.

For critical thinkers the moral of this parable is clear. Though it may seem counter-intuitive, by accepting and even embracing our limitations and failings as cognitive agents, rather than denying them or struggling against them, it’s possible to improve the quality of our judgments and make more rational decisions than we would otherwise. But to pull this off we need to cultivate the right kind of epistemic virtues that are informed by the right kind of background knowledge, and through knowledge and experience, learn to develop the appropriate judgment about the right level of epistemic humility to adopt in any particular circumstance.

Section 6: Special Topics
Critical Thinking About Conspiracies

1. What is a Conspiracy Theory?

What is a conspiracy theory?

In a nutshell, a conspiracy theory is a theory that purports to explain an historical event or a series of historical events by appealing to a conspiracy of some kind.

So, what's a conspiracy?

Minimally, to have a conspiracy you need a group of people (you can't conspire all by yourself) and that group of people makes plans and acts in secrecy, hidden from public view. Secrecy is an essential feature of conspiracies.

By this definition, there doesn't have to be anything sinister about an arrangement to make it a conspiracy. On this definition, when we plan a surprise birthday party for a friend, we're engaged in a conspiracy. When we work at keeping the "magic of Santa Claus" alive for our kids, we're actively promoting and maintaining a conspiracy.

But in the sense in which the term "conspiracy theory" is normally used there's an assumption that the conspirators have sinister or at least objectionable aims of some kind — to engineer events and control public perception, to acquire money and power and influence, and so on. It's probably a mistake to build this into the definition of a conspiracy, but in the cases we'll be looking at, the label "conspiracy" usually comes with a normative judgment that there's something objectionable about the aims of the conspirators.

Also, a conspiracy explanation of some historical event is usually contrasted with other explanations that have "official status" at the time and place in question. For example, the official explanation of the collapse of the Twin Towers of the World Trade Center on 9/11 is that it was the result of a surprise attack involving two hijacked planes that were flown into the towers. 9/11 conspiracy theories challenge this official story. The most popular versions claim that the US government knew about the planned attacks and let them happen, and that the government may even have helped engineer the collapse by rigging the buildings with explosives to ensure that they fell when the planes hit them. So you've got the official explanation, and you've got an alternative explanation which involves a conspiracy among various parties, including the US government, to stage the event and a conspiracy to cover up the government's involvement in the event. This gives a nice example of the kinds of theories that we're talking about.

Another good example is the moon landing conspiracy theories, which claim that the first moon landings were faked. The idea is that during the 1960s, the US was in a race with the Soviet Union to develop space technology and be the first to land on the moon, but the moon landing turned out to be impossible, it was technologically too challenging. The political cost of this failure was perceived as too high, so the government, or NASA, or both, decided to fake the 1969 moon landing on a sound stage. The rocket was launched, but it re-entered the atmosphere soon after the launch and was ditched in the ocean. The actual moon landing pictures and video were faked in a studio.

The argument that conspiracy theorists make is that there are lots of anomalies in the photographs and videos that are hard to explain if they were actually taken on the moon where there is no atmosphere and only one dominant light source, but could easily be explained if they'd been shot on a sound stage. So once again we've got an official version of the story, and we've got an alternative version of the story that invokes a conspiracy.

2. Narrow vs Global (or Grand) Conspiracy Theories

Conspiracy theories come in different scales, depending on the size and extent of the conspiracy, both in space and in time. The moon landing hoax conspiracy is fairly narrow in scope, since it's targeted at a specific historical event, but the conspiracy to cover up the hoax would necessarily have a long reach in time, getting on now close to 45 years since the event in question. But this is still small compared to what are sometimes called "global" or "grand" conspiracy theories.

Global conspiracies involve conspiracies among agents and institutions that stretch across the globe and that have influence in almost all aspects of modern life — governments, media, multi-national corporations, religious institutions, and so on. In terms of time scales, they can extend over decades, or centuries, or even millennia.

Among global conspiracy theories, the most prominent are so-called "New World Order" theories. Now, this term doesn't pick out a single theory, it's really an umbrella term for a bunch of different theories that share a set of core beliefs. The basic core belief is that there exists a secretive power elite that is conspiring to eventually rule the world through a single world government that will replace sovereign nation-states. This group operates in secret but has enormous influence through a wide network of public internationalist organizations that themselves have ties to the higher level organization.

Among conspiracy theorists the list of front groups and institutions changes depending on who you ask, but the names you commonly hear include the US Federal Reserve, the League of Nations, the International Monetary Fund, the World Bank, the UN, the World Health Organization, the European Union, the World Trade Organization, and so on. The agendas of these international institutions are established in secret meetings of elite figures and international power-brokers associated with groups like the Bilderberg Group, the Council on Foreign Relations, the Trilateral Commission, and so on.

Because these institutions have so much power and influence, New World Order theorists are disposed to see almost all of the major geopolitical events of the 20th and 21st century — including all the major wars — as the product of an orchestrated plan to eventually bring about a single world government that, while publicly espousing ideals of international citizenship and global community, would in fact exert a totalitarian control that strips away individual liberties.

These kinds of New World Order theories are popular among certain libertarian and conservative Christian groups, but they’re also increasingly popular among New Age thinkers on both the far right and the far left. These groups vary in their views of the actual identities and ultimate motivations of the secret elite, but structurally they have a lot in common.

3. Why Conspiracy Theories are Interesting (from a critical thinking perspective)

This gives us some idea of what we’re talking about when we talk about conspiracy theories. I think you’ll agree that the topic is fascinating in its own right, but why is this topic interesting from a critical thinking perspective?

I think there are some obvious reasons why they’re interesting, and some not-so-obvious reasons.

(1) Should We Believe Them?

Belief in conspiracy theories of one form or another is very widespread among the general public, and has become increasingly so over the past 20 years. Surveys indicate, for example, that a sizeable minority of the general population believes that George Bush and/or the CIA knew about the Sept 11 attacks in advance but intentionally did nothing to prevent them, while a smaller (but still not insignificant) proportion believes the US government actually had a hand in orchestrating the attacks, with the express intent of using the attacks as a pretext for launching a global war on terror and justifying military incursions into the Middle East.

So the widespread belief in conspiracies of one form or another is an interesting phenomenon all by itself. A natural question to ask is, are there good reasons to believe any of these conspiracy theories? What attitude should we, as critical thinkers, have toward these theories?

(2) How to Justify the Default Skepticism of Most Academics

Among philosophers and indeed, I think, most mainstream academics, the most common attitude is a default skepticism about conspiracy theories of this sort. Among professional skeptics who advocate for scientific rationality it’s mostly taken for granted that belief in conspiracies is unjustified and irrational. Not all conspiracies, obviously, but the big conspiracies like the moon hoax, or the intentional demolition of the Twin Towers by the US government, or global conspiracy theories that posit that a small, secret elite is engineering world events — these are viewed by most academics as fundamentally implausible, if not outright delusional.

Now if this is right, then a question to ask is, how exactly is this default skepticism justified? Are there good reasons to adopt this view?

(3) The Relationship Between Conspiracy Theorists and Critical Thinking

There’s another reason why conspiracy theories are interesting from a critical thinking perspective. It has to do with the attitude that conspiracy theorists have toward critical thinking itself.

It turns out that many people who endorse conspiracy theories are also passionate advocates for critical thinking education. They promote the teaching of logic and argumentation skills, they want us to understand informal fallacies and the difference between justified and unjustified beliefs. I know this because I receive emails from people, on both the right and the left, who say they’re admirers of the podcast, they love what I’m doing, they want to spread the word about the importance of critical thinking, and who follow up with “have you ever looked at this website? I think you might find it interesting”, and the website they refer me to is associated with some socio-political movement that is predicated on a conspiracy theory of some kind.

For some of these groups, the more global and all-encompassing the posited conspiracy is, the more importance they place on logic and critical thinking skills. What’s fascinating (and ironic) is that this runs exactly counter to what, for lack of a better term I’ll call the “establishment” view of conspiracies and critical thinking, which is that the more global and all-encompassing the conspiracy is, the less plausible it is, and the more irrational it is to believe it.

Let me just pause and elaborate a bit about this last point, because I think I understand what’s going on in the minds of conspiracy theorists, and I think it’s interesting.

For a conspiracy theorist, to identify a conspiracy you need to be able to see beyond and behind the official story, which by hypothesis will be a misrepresentation of reality, a tissue of facts, half-truths and outright deceptions that is designed to hide the truth. Seeing past the lies and deceptions requires the ability to identify inconsistencies in the official story, follow sometimes long chains of inferences, and identify fallacies and techniques of intentional deception. Hence, identifying a conspiracy, seeing it for what it is, is an act of successful critical thinking. That’s the way conspiracy theorists see it.

And consequently the more broad and encompassing the conspiracy, the greater the challenge for critical thinking, requiring even more vigilance and more rigor in our thinking.

Now, what makes the identification of conspiracies even harder still (and I’ve learned that this is a widely shared view, especially among New World Order theorists) is that one of the techniques the powerful elite have used to maintain control is a deliberate program of “dumbing down” the general public, suppressing the development of critical thinking capacities among the unwashed masses.

They’ll point to the distractions of Hollywood, or professional sports, or music and video games as tools for keeping us distracted and entertained. And they’ll point to the systematic elimination of critical thinking education from the public school curriculum.

For example, classical liberal arts education used to be structured around three core disciplines that constituted the “trivium”, which included grammar, logic and rhetoric. This core was part of basic instruction in how to think, how to learn and how to communicate one’s understanding of a given subject matter. This material isn’t taught in public schools anymore, and they’ll ask, why not?

Here’s why not (they’ll say): The purposes of public schooling have been subordinated to the purposes of the elite ruling classes, which require citizens who are conformist, obedient, dependent on external guidance and direction, intellectually incurious, and easily controlled. We can’t have the masses going around thinking for themselves and criticizing the status quo, that’s not part of the elite game plan.

So, critical thinking education for the general public becomes important (again, on this view) because its suppression has been instrumental in maintaining and perpetuating the conspiracy.

It follows, then, that if the goal is to unmask the conspiracy and free people’s minds to see reality as it truly is, then we need to promote critical thinking skills, and make critical thinking resources more available to the general public. And that’s why so many conspiracy theory proponents are also strong advocates of critical thinking education.

You can see why someone in my position would find this view both attractive and ironic at the same time. It’s a very romantic vision of the critical thinker as an anti-establishment culture warrior, whose special knowledge and skills can help us to see behind the veil of appearances, free our minds and begin to live a life outside the “Matrix”, as it were. I love this! It makes me feel like Morpheus.

And yet at the same time, my skeptic friends view these sorts of grand conspiracy theories as classic examples of irrational and misguided belief. They see a call for critical thinking education from these people as being as bizarrely ironic as astrologers or flat-earthers advocating for better science education in the public schools!

Well, that wraps up my introduction to this topic. Next we’ll take a closer look at some of the more common arguments for default skepticism about conspiracy theories.


The Argument for Default Skepticism About Global Conspiracies

1. Default Skepticism

Let’s remind ourselves where we left off. I was saying that among so-called “professional skeptics” — figures associated with the skeptical movement that encourages scientific rationality and secular values, especially in the public square — there’s an attitude toward conspiracy hypotheses that is widely shared, that I called “default skepticism”.

“Default skepticism” about conspiracy theories is the view that there’s something about conspiracy hypotheses that justifies a certain default skeptical stance toward them, and the stance can be summed up like this: the grander the conspiracy that is required, the less likely it is that it’s true.

“Grandness” isn’t well-defined, but it’s some function of the size and scope and depth of the conspiracy: how many people are involved and are required or forced to keep quiet, how long they have to keep quiet, and how many layers of social and institutional structure are implicated.

So, a conspiracy among, say, the members of a political party to leak negative information about an opposition candidate might involve only a small number of people at a particular level of organization. That’s a conspiracy, but it’s not a particularly grand conspiracy. By contrast, a conspiracy among pharmaceutical companies to hide the cure for cancer for the past twenty-five years; or a conspiracy among government and law enforcement officials to hide the existence of a crashed alien spacecraft in Area 51 for the past 60 years — these are much grander theories. Default skepticism would tell us that these grander theories are intrinsically less plausible, less likely to be true, than the smaller scale theories.

In some ways you can view this stance as a particular instance of a slogan that Carl Sagan popularized: “extraordinary claims require extraordinary evidence”. Default skepticism about conspiracy hypotheses is motivated by the belief that there’s something extraordinary about these hypotheses, that scales with the grandness of the conspiracy.

But of course it’s not an argument to just say that conspiracy hypotheses are by their nature implausible. We want to understand what it is about conspiracy hypotheses that would justify this judgment, and in particular, the judgment that the grander the hypothesis, the more extraordinary the claim is and consequently the less likely it is to be true. What reasons do skeptics give to support this conclusion?

I’m going to try and unpack this one aspect of the skeptical stance, try to identify what it is that they take to be extraordinary about conspiracy hypotheses.

2. What is Implausible About Grand Conspiracies

The aspect of conspiracy theories that I want to highlight is best illustrated by talking through an example that most people are already inclined to disbelieve: the moon landing hoax theory. Jesse Ventura is generally sympathetic to conspiracy theories and covered the topic on his Conspiracy Theory television show, and even he made a point of distancing himself from the idea that NASA and the US government faked the moon landings on a sound stage.

Of course there are lots of parts of the moon hoax story that are plausible. It’s plausible that NASA didn’t have the time or technological ability to fulfill John F. Kennedy’s promise that he made in a speech in 1961 to put a man on the moon before the end of the decade. I’m not saying it’s true, I’m saying that everyone understood that it was a serious technological challenge and that it was reasonable to doubt whether it would or could succeed in this time frame.

Also, it’s plausible that the video of astronauts on the moon could be reproduced on a sound stage. That’s the least implausible part of the theory, in my opinion.

And there’s no doubt that NASA felt enormous pressure to pull this off, given the state of the Cold War with the Soviet Union and the political costs of either a failed mission or of the Soviets getting there first.

And given all this, it’s also very plausible that the idea of faking the missions would have crossed somebody’s mind at some point in the process; it would be a very natural thought to have under the circumstances, I would think.

Those are the plausible aspects of the moon hoax theory. But what we’re interested in is what skeptics regard as the implausible aspects of the theory. To get at these, let’s shift gears and consider what would have to be true if the theory is right and the landings were faked, and the conspiracy not only to fake the landings, but to keep it secret from the general public, was in fact successfully maintained for over 40 years, up to the present day.

Well, you’d need everyone directly involved in the conspiracy to agree to keep their mouths shut. Not just the top players in NASA and the US government, but all the subordinates, the engineers, the scientists, the crew, the staff — everyone. You’d also need everyone who was indirectly involved — those implicated in the cover-up in some way, but not directly aware that a cover-up was going on — you’d need all these people to remain in the dark, or to keep silent if they came to know about it.

Now let’s think about human nature. Isn’t it likely that at least some of the people directly involved would be anxious and conflicted about what was going on? The urge to tell someone would be strong, wouldn’t it? In at least some people? Wouldn’t it be likely that some of those people would feel compelled to confide in their spouses or friends? But if so, those spouses and friends would also have to have kept their mouths shut.

But wouldn’t there also be a strong urge among at least some of these people, either directly or indirectly involved, to blow the whistle on this? If this story broke, and it was confirmed, there’s no doubt it would have been the story of the decade, if not the century. Whoever broke it would be world-famous, overnight.

On the outside, journalists who suspected a hoax or a cover-up would have an enormous incentive to pursue this story, it would catapult them to fame as well. Think of Woodward and Bernstein, who became household names after they broke the Watergate scandal.

So there are psychological and financial incentives on both sides to break this story, from people on the inside and people on the outside.

Now, with these plausible background assumptions in mind, let’s consider the psychology of the high level officials who would have authorized and orchestrated the conspiracy. Let’s assume that they did in fact come to the conclusion that an actual moon landing was impossible or would fail. Imagine them beforehand, deliberating over whether to fake the landing, or find some other way to call off or postpone the mission. Sure, there would be a political cost to admitting that the US couldn’t do it on the schedule that they had promised. But think about the political cost of the world coming to know that the US had faked the landing! The political fallout from that disclosure would be many orders of magnitude worse. It’s not a stretch to imagine that this would be the end of NASA and the end of the government administration that authorized the hoax. It would have a devastating impact not only on international perception and confidence in the United States, but also on the confidence of the American people in their own government.

But according to the moon hoax conspiracy theory, we’re required to imagine that the officials who authorized this came to the conclusion that the benefits of this course of action outweighed the potential risks. Now, if we’re right about our basic background assumptions, then this could only be a rational choice if they also judged the risk of the conspiracy failing to be small, EXTRAORDINARILY small. They would need enormous confidence that this information could be almost hermetically contained and controlled.

But what reason do we have to think that information can be controlled in this way, or that people can be controlled in this way?

And here, now, is the core premise underlying default skepticism. If anything, all evidence seems to point in the other direction.

Information is intrinsically difficult to control. Indirect effects and unintended consequences are the norm in complex social systems, not the exception. Unpredictable information leaks are themselves common and predictable. That’s why governments and corporations have special offices dedicated to public relations and spin control; these are predicated on the assumption that information leaks are inevitable, and they need to be prepared to adaptively respond to those leaks.

3. An Example of This Kind of Reasoning

Here’s an example that illustrates this line of reasoning. The summer 2011 issue of Skeptical Inquirer magazine was the follow-up to the previous issue that focused on debunking 9/11 conspiracy theories that assert that the World Trade Center came down as a result of a controlled demolition. In the letters section of this issue a reader writes the following:

I found it easy to dismiss the demolition theories without requiring analytical refutation. These theories introduce far greater problems to address than anything they purport to solve. It seems the “mastermind” would almost certainly have to be the President of the United States. What would we do to such a President if he were found out? Clearly he would have to believe that absolute secrecy could be maintained forever. We can’t even keep secrets in the CIA! Many people would have to be involved over an extended time period: demolition experts, hijackers, FBI and Interpol investigators, etc. What if one of the WTC aircraft hijackings had been thwarted by the passengers as in the case of Flight 93? One of the towers would still be standing with demolitions evidence that would have to be removed. Even a flight cancellation or a delay would have created havoc. What if WTC 7 hadn’t caught fire? Certainly this could not have been guaranteed. Would they have proceeded with the demolition? What if the demolition triggers failed due to damage from the aircraft? The list could go on for pages. All of this so we could go to war in Afghanistan? Give me a break!

The premise that underwrites this line of reasoning is a general one; it’s not specific to 9/11 theories. The idea is this: For it to be rational to try and implement a grand conspiracy of this sort, you would need enormous confidence that a complex series of operations would go off without a hitch, that information about the conspiracy could be contained indefinitely, and that the behavior of dozens, hundreds, maybe thousands of people can be controlled with a high degree of certainty.

And the objection is that, given what we know about human nature and the functioning of complex social institutions, and the difficulty of predicting the success of complex operations like the ones we’re talking about here, we just don’t have any reason to think this level of control and certainty is possible. The more a conspiracy scales in size and complexity, the more likely it is to fail, and the less likely it is that rational people would even attempt it.

Conspiracy Theories, Mind Control and Falsifiability

In Part 2 I reviewed an argument for default skepticism about global conspiracy theories. Such theories assume that information about the conspiracy can be contained indefinitely, and that the behavior of hundreds or thousands of people can be controlled with a high degree of certainty. The objection is that we don’t have any reason to believe that this level of control and certainty is possible.

1. The Conspiracy Theorist’s Response: Mind Control is Possible

The common response of conspiracy theorists to this line of reasoning is to challenge the objection directly. They’ll say that the levels of information regulation and psychological control that are required for grand conspiracies to be successful are indeed possible. And not just possible. This is a key component that you see in every grand conspiracy theory — the claim that there exist various methods of PERSUASION and MIND CONTROL that elite powers have at their disposal, and that these methods of mind control, coupled with carefully organized campaigns of DISINFORMATION and DISTRACTION, are the means by which grand conspiracies are successfully implemented and maintained.

Now, we might optimistically hope that we could settle the issue by carefully investigating these mind control methods and testing whether they can really do what conspiracy theorists say they can do.

But not surprisingly it’s not that easy. Everybody grants that there exists a wide range of techniques of psychological manipulation that are used by governments and businesses to influence our beliefs and decisions.

But conspiracy theorists will say that the existence of the most effective and powerful mind control techniques — the ones that the elites actually rely on to successfully implement their agenda — is itself a closely guarded secret. They’ll say that these are either very old methods that the elites have always deployed, or relatively new methods that have been developed in secret government programs and intentionally kept out of the scientific mainstream, or some combination of the two.

2. Mind Control Methods

It’s hard to generalize about mind control methods because there’s a lot of variation among conspiracy theorists about which methods are used. Some are more plausible to skeptics than others.

Mind Control Methods That Are More Plausible to Skeptics

For example, conspiracy theorists will talk a great deal about the use of mass media as a propaganda tool for manipulating public opinion and behavior, and they’ll cite the anti-democratic propaganda theories of prominent mid-century intellectuals like Walter Lippmann and Edward Bernays and Harold Lasswell who thought that modern government required the manipulation of mass opinion by propaganda methods engineered by an elite ruling class.

The main idea here is certainly not a fringe idea. There’s also a long tradition of academic writing on the left, for example, from writers influenced by Freud and Marx and Hegel, that is deeply critical of the control of media in the service of government and corporate propaganda. I’m thinking here of names like Theodor Adorno and Herbert Marcuse and Jacques Ellul and Noam Chomsky, who have all written extensively on the threat to democracy and human freedom posed by propaganda through the mass media.

Conspiracy theorists also write about the deliberate dumbing down of our education system as a means of public manipulation. But there are radical education critics who have argued for similar conclusions, so again, it’s not just grand conspiracy theorists who are making these arguments.

Conspiracy theorists are also concerned about the increasing capacity of technology to track our internet activity and shopping behavior and online conversations, and worry about the threats to privacy and personal security that these technologies pose. But so are many technology watchdog groups; there’s nothing unusual or fringe about this worry either.

Mind Control Methods That Are Less Plausible to Skeptics

On the other hand, many conspiracy theorists also think that the government and corporations are adding toxins to our food and water supply with the express aim of altering our brain chemistry to make us more docile, apathetic and susceptible to manipulation. And if you’re diligent and consume only organic local foods and purified water, they’ll still get you by spraying chemicals into the atmosphere from airplanes that do the same thing.

Others think that the electromagnetic radiation that we’re constantly bathed in can be manipulated to alter brain states, and that the proliferation of cell phones and cell phone towers is a vehicle for exposing us to this mind altering radiation.

Switching gears again, there are many conspiracy theorists who think that the elite control of human civilization goes back to ancient times, that they employ methods of persuasion that have correspondingly ancient origins, with roots in pagan mystery cults and occult symbolism that (they claim) one can find hiding in plain sight everywhere up to the present day. Some believe that the presence of these archetypal symbols in our media environment has an effect on our collective unconscious, to use the Jungian term, and they’re intentionally inserted as a tool of mass psychological conditioning.

These sorts of ideas are much harder for skeptics to swallow, and not all conspiracy theorists defend them all, so as I said, it’s hard to generalize about what mind control methods conspiracy theorists are committed to. What we can say is that there is a spectrum of views that, from the skeptic’s standpoint, range from more plausible to less plausible.

3. How the Issue of Mind Control Changes the Debate

What interests me about the debate over mind control methods is how raising this issue changes the nature of the dialectic between the conspiracy theorist and the skeptic.

As I said earlier, one might hope that we could settle the issue by carefully investigating these mind control methods, confirming that they exist and testing whether they can really do what conspiracy theorists say they can do.

But the problem is that, according to conspiracy theorists, information about the existence and use of these methods is itself carefully regulated by the elites, using the very methods that are under discussion. Some of it is publicly available but much of it is shrouded in secrecy, as one might expect.

And this is where the debate usually stalls. We might forgive the skeptic for thinking that there’s a threat of circularity looming in this line of response. The skeptic asks about the plausibility of grand conspiracies, identifies a core assumption about the controllability of information and human behavior that underwrites such theories, and asks for evidence for this core assumption. But she’s told that she won’t find such evidence, at least not in the mainstream science journals or the halls of mainstream academia or in the mainstream news media, because concealing the existence of the methods in question is itself central to the grand conspiracy, and the elites have control over the content of these mainstream information sources.

What can be even more frustrating for the skeptic is the belief that you often encounter among conspiracy theorists, that mainstream media and science and academia have been infiltrated by so-called “dis-informers”, people who at some level are aware of the truth about the elite agenda and the existence of these mind control methods, or who are being unwittingly manipulated by the elites, and who are helping to implement the elite agenda by sowing disinformation within their respective academic fields and within the public mainstream. So if you present evidence against the existence of these mind control methods, you may find yourself accused of being “one of them”, a dis-informer intentionally spreading lies to help maintain the conspiracy, or at best a dupe, a tool of the powers that be, whether you’re aware of it or not.

4. Are Grand Conspiracy Theories Unfalsifiable?

At this point it’s hard to see how productive dialogue between the skeptic and the conspiracy theorist can continue. A common skeptical response is to accuse the conspiracy theorist of indulging in unfalsifiable paranoid fantasies that are immune to rational criticism. Why immune to rational criticism? Because every bit of evidence that would count against the theory is either dismissed or reinterpreted as disinformation manufactured by the conspirators. This quality, that any bit of countervailing evidence can in principle be accommodated within a grand conspiracy theory, has been called “self-sealing” by some. Whenever the theory is poked by some bit of countervailing evidence, it seals itself by reinterpreting that evidence as consistent with the theory after all.

Philosophers of science might describe this as a case of “unfalsifiability”, in Karl Popper’s sense of the word. It’s not a compliment to call a theory “unfalsifiable”. Karl Popper was an Austrian philosopher of science, arguably one of the most influential philosophers of the 20th century. He introduced the concept of unfalsifiability to identify theories that, in his judgment, didn’t really qualify as scientific theories at all, because they could never be open to a genuine empirical test. In a genuine test, you lay out the conditions under which an observation or an experimental result would count against the theory, would give us a reason to reject the theory. Genuine tests, for Popper, are always potential falsifications of a theory — the theory sticks its neck out and risks having it chopped off by experimental or observational results. For Popper, the best theories are the ones that stick their neck out the farthest, the ones that are the most daring and specific in their predictions, and survive those tests.

To give a simple example, if your horoscope says that today “new opportunities will present themselves, but may come with costs that you’ll have to consider carefully”, that’s a virtually unfalsifiable prediction. It’s hard to imagine what sort of day you would have that would count as falsifying this prediction. Popper would say that it’s not an empirical claim at all, it’s not something you can assign a truth-value to that could be subject to empirical testing.

But what if your horoscope said that today, at 3:41 PM, a man wearing a two-piece suit and dark sunglasses will ring your doorbell and offer you a great deal on a time-share condo in Florida? THAT’S a falsifiable claim, and what makes it falsifiable is that it clearly specifies the conditions under which we could all agree that the claim is false. If a man shows up at 2:30 instead of 3:41, it’s false. If the man isn’t wearing dark sunglasses, it’s false. If the man offers a time-share in Mexico instead of Florida, it’s false. We all have a shared understanding of what would count as evidence that would falsify this claim.

For Popper, good theories, genuinely scientific theories, make falsifiable claims. Theories that don’t make falsifiable claims are not scientific; they’re “pseudo-sciences”, theories masquerading as genuine theories that in reality aren’t really saying anything, or at least not anything that evidence can have any bearing on.

Now, there are a couple of different ways that theories can be unfalsifiable. They can be unfalsifiable because they make only vague predictions, like that first horoscope prediction. That’s not the major concern about grand conspiracy theories, though I suppose it could be a concern in some cases.

But theories can also be made unfalsifiable in another way, by introducing revisions to the theory that are motivated simply to accommodate potentially falsifying evidence. Consider, for example, the claim that the hands on my wrist watch turn because there’s a little green demon inside my watch that makes them turn. You naturally ask, okay, let’s open the watch and see for ourselves. I open the watch but we don’t see any little green demon. You say aha, we’ve falsified your claim. But I say not so fast — my little green demon has the property that it turns immaterial and invisible when the watch is opened up. I’ve revised the green demon hypothesis to make it consistent with the observational results, so the observation no longer counts as falsifying it. But the worry now is that I might try to do this for any attempt to falsify the hypothesis, just keep revising the hypothesis to accommodate the new evidence, thereby rendering the hypothesis immune to falsification.

This is much closer to the worry that skeptics have about conspiracy theories when they include claims about pervasive mind control. The worry is that when you question every source of information you hear from mainstream media or mainstream science, or suspect all mainstream information sources of being biased or compromised in a way that makes them untrustworthy, then it’s very easy to make a conspiracy theory effectively unfalsifiable.

This gives support to the common complaint that conspiracy theorists are dogmatically attached to an ideological worldview that is immune to rational criticism.

5. A Defense: Appealing to the History of Real Conspiracies

So, how does the conspiracy theorist typically respond to this charge that their worldview is dogmatically ideological and unfalsifiable?

Well, conspiracy theorists will insist that it’s just false to think that there’s no publicly available evidence for their claims. They’ll point to historical precedents that even skeptics have to grant.

MK-ULTRA and the CIA

On the subject of mind control methods, for example, it’s now widely known that the CIA was involved in covert, illegal mind control experimentation on human subjects that began in the 1950s and continued at least through the late 1960s. If you search for “MK-ULTRA” on Wikipedia you’ll find an entry under that name. MK-ULTRA was the code name for a project that

involved the use of many methodologies to manipulate individual mental states and alter brain functions, including the surreptitious administration of drugs and other chemicals, hypnosis, sensory deprivation, isolation, verbal and sexual abuse, as well as various forms of torture.

On the US Senate floor in 1977, Senator Ted Kennedy said the following:

The Deputy Director of the CIA revealed that over thirty universities and institutions were involved in an "extensive testing and experimentation" program which included covert drug tests on unwitting citizens "at all social levels, high and low, native Americans and foreign." Several of these tests involved the administration of LSD to "unwitting subjects in social situations".

Now, the existence of programs like this doesn’t by itself show that they were successful at developing the sorts of mind control methods that grand conspiracy theories require. In fact the official line espoused by the government is that the science was poorly conducted and the results unreliable. But it does establish an interest and a willingness on the part of governments to invest time and money on secret projects of the sort that conspiracy theorists are worried about.

COINTELPRO and the FBI

Conspiracy theorists will also point to other covert government programs that reveal a willingness to engage in the sorts of activities that they believe are actually being implemented on a larger scale. For example, every conspiracy theorist is familiar with COINTELPRO, an FBI program that operated over the same time period, roughly mid-50s to early 70s. COINTELPRO is an acronym for Counter Intelligence Program, and its mandate was, as the Wikipedia entry once again puts it, “surveilling, infiltrating, discrediting, and disrupting domestic political organizations” that the government believed to be subversive, typically communist and socialist organizations, individuals and organizations associated with the civil rights movement, the women’s rights movement, anti-war movements, and various other radical movements on the political left and the right.

What’s most interesting to conspiracy theorists isn’t necessarily the targets of this program, but the methods it used. Government operatives didn’t just spy on political activists. They infiltrated their organizations, and they planted false media stories and other publications in the name of the targeted groups. They forged letters, spread misinformation about meetings, and set up pseudo-movement groups run by government agents. They used the legal system to harass people and make them appear to be criminals. They gave false testimony and used fabricated evidence to arrest and imprison people. They conspired with local police departments to threaten people, conduct illegal raids and searches, commit assaults and vandalism, and instigate police actions that resulted in civilian deaths.

Now, the official line, once again, is that COINTELPRO was shut down after its operations became public in the early 1970s, but that hasn’t kept people from speculating on the existence of continuing programs of government surveillance and infiltration.

For conspiracy theorists, what COINTELPRO and other programs like it demonstrate is at least the plausibility of an ongoing program of secret government activities that is by every measure illegal and immoral, yet authorized at the highest levels of government. They will remind us that COINTELPRO continued its operations and reported to the highest officials under four different White House administrations. Is it so implausible that such operations might still be continuing, or that other operations like them or related to them might be under way, with the exponentially more powerful technological resources that are available today?

So, this is one common pattern of response from conspiracy theorists to the charge that their views are implausible. They’re only implausible, they’ll say, to those who are ignorant of history and ignorant of the wide and varied sources of information that are out there in the public square, that point to activities and programs being conducted in secret, with aims that we should at least be highly suspicious of.

And they’ll throw back the charge of unfalsifiable dogmatism to the skeptics. They’ll say that skeptics are in the grip of their own presuppositions and confirmation bias and tunnel vision, where any evidence that challenges the official or mainstream version of events is assumed to be false or unreliable from the outset, giving no room for this evidence to receive a fair hearing.

6. The Skeptic’s Response

So, how does the skeptic respond at this point? Well, I think we should all be grateful for the education. The history of programs like MK-ULTRA and COINTELPRO is sobering, we should all know it and we should reflect on it.

But the skeptic is going to point out that, by their very nature, grand conspiracy theories make claims that go beyond what can be established by publicly available evidence. It’s one thing to say that MK-ULTRA investigated mind control techniques; it’s another thing to say that the government has actually succeeded in developing techniques that can do the job that conspiracy theorists require of them, and that those techniques are in operation right now.

Similarly, it’s one thing to say that the government uses the media as a propaganda tool, it’s quite another thing to say that all forms of mainstream information are under the control of a cabal of elites that are using them to implement the agenda of a totalitarian world government.

At best, the skeptic will grant that the sordid history of secret government activities and covert scientific programs helps to establish the possibility of the sort of top-down control that’s required, but possibility is a very weak epistemic notion. Lots of things are possible, but still highly unlikely. It’s logically possible that governments have been conspiring with aliens for decades, it’s logically possible that the government is using cell phones to transmit mind controlling signals into our brains, it’s logically possible that we’re all living in the Matrix — but mere logical possibility doesn’t give us positive reasons to think any of this is true.

So the ball falls back into the conspiracy theorists’ court. Skeptics are asking for evidence for their more substantive claims about covert activities, evidence that will withstand critical scrutiny in the public square. And they’re asking that conspiracy theorists not respond to challenges by continual ad hoc revision of their theory, or by accusing the challenger of being a dis-informer or a stooge of the elites.

7. Advice to Both Sides

Now, not that anyone’s asking me, but if I might offer some advice to those sympathetic to grand conspiracy theories, it’s that you should take this request seriously. Because if you don’t, then there’s virtually no chance that your views will be taken seriously by a broader mainstream public. Now maybe that’s not your goal, maybe you think the mainstream is compromised beyond hope and you’re resigned to fighting a guerrilla war from the jungle and picking up defectors as they come. All I’m saying is that there are people who will listen to you if you engage them on terms that don’t presuppose the truth of your worldview and the falsity of theirs right out of the gate.

And if I might offer some advice to skeptics, it’s two-fold: one, resist the urge to label all conspiracy theorists as irrational and paranoid, at least if your goal is to have some kind of productive dialogue with them; and two, consider seriously the possibility that they may know more than you on certain subjects, like, for example, the history of covert government operations. This is necessary, I think, for beginning a dialogue in openness and good faith, rather than prejudging the conclusion at the very outset.

Causation, God and the Big Bang

One of the things I wanted to do with the podcast was devote some shows to answering viewer questions. Occasionally I get emails from people asking me if such-and-such is a good argument or not, because they’re involved in a debate with someone else. There’s usually a logic component to the question, but almost always there’s also a substantive empirical or philosophical question at issue too.

My YouTube friend “Theophage” had a question about simultaneous causation, and the role it might play in a critique of an argument for the existence of God. After thinking about it I thought that this would be a good topic for a video response. This document is based on the transcripts of that video response.

Theophage writes,

For several months now I’ve been arguing that simultaneous causation is impossible, or at the very least, it doesn’t happen in our world. I have a refutation of the Kalam cosmological argument based on the idea that (a) causation is a purely temporal phenomenon, but also that (b) causes need to come before their effects, rather than in the same instant as their effect.

The problem I’m having is that I’m unable to find much in the way of references or philosophical discussions on the matter of simultaneous causation on the internet. Because of this I’m finding it difficult to support my position, and yet all of my opponents, whom I have argued this with, don’t seem to be able to back up their arguments by references either. Is there any way you can help me with this?

Well, I’m happy to try.

1. The Kalam Argument For the Existence of God

Let’s just set the context so that viewers who aren’t familiar with the Kalam argument understand what the broader debate is about and why the question of simultaneous causation is relevant.

The Kalam argument is a species of what are called “cosmological arguments” for the existence of God as the creator of the universe. A cosmological argument is an argument that moves from premises about the existence of the universe to the conclusion that there is a God who is the creator or sustainer of the universe.

William Lane Craig is most responsible for popularizing the Kalam cosmological argument. When you see this term in print it will likely have his name attached to it somewhere. But he didn’t originate it, it’s a very old argument. The term “Kalam” comes from a tradition of Islamic philosophy. There’s a big literature on it, but the basic argument is pretty simple. It goes like this:

1. If something comes into existence at some finite time in the past, then its coming into existence has a cause.
2. The universe began to exist at some finite time in the past.
Therefore, the universe’s coming into existence has a cause.

Now, if this argument is compelling then we can conclude that something had to cause the universe to come into existence. Rationality demands that we ask what could possibly function as a cause of the universe’s coming into being. Of course you need additional argumentation to conclude that this something is the God of Christianity and Judaism and Islam, but this is the crucial first stage of the argument.

Premise 2 is a relatively uncontroversial claim. It’s central to the Big Bang model of cosmology, which has almost universal support among cosmologists. There’s overwhelming evidence that the universe originated in a very hot, very dense state at some time in the past, between thirteen and fourteen billion years ago, and has been expanding and cooling ever since.

The interesting premise is the first one, which states a general metaphysical principle, that if a thing comes into existence then it must have a cause.

2. Theophage’s Refutation Strategy

Here’s how Theophage’s refutation strategy is going to go. He wants to say that the universe’s coming into existence is a special event that by its nature is going to violate this metaphysical principle. Why? Because if we follow the standard interpretation of Big Bang cosmology, the beginning of the universe also marks the beginning of time itself. It’s the beginning of temporal succession. And that means we can’t talk about moments of time prior to the first moment, the moment of the universe coming into being, because there are no such moments.

Theophage wants to say that if there are no moments of time prior to the beginning of the universe, then there can be no cause of the universe’s coming into being. Why? Because causation is by its nature a temporal process. Causes necessarily precede — they come before — their effects. But we’ve just agreed that nothing can come before the universe’s coming into being, since that beginning marks the first moments of time. Hence it follows that the universe’s coming into being doesn’t have a cause, it’s an uncaused event, and the Kalam argument can’t get off the ground.

So what does this have to do with simultaneous causation? Well, one way that a defender of the Kalam argument can respond to this objection is to say that causation doesn’t necessarily have to be a temporal process, with causes always preceding their effects. Maybe it makes sense to say that sometimes causes can occur simultaneously with their effects. So if we grant this, the defender can say the cause of the universe’s coming into being occurred simultaneously with the universe’s coming into being. At that very first moment, some kind of causal power was at work that simultaneously brought the universe into existence.

If we grant this possibility, this takes the steam out of Theophage’s objection, because it gives room for God or some other causal agent to enter the picture to function as the causal explanation of the universe. What Theophage wants to do is block this move by giving an argument against simultaneous causation.

It’s a long way around, but this is how I understand the motivation for Theophage’s question. He wants information on the status of the concept of simultaneous causation in philosophy and in physics. Do people generally reject it, do they accept it, what’s going on?

I’m going to try to answer that question.

But first, Theophage has a couple of arguments of his own against simultaneous causation that he wanted to run past me. I’d like to look at these first before I forget them.

3. Theophage’s First Argument Against Simultaneous Causation

Here’s Theophage’s first argument.

Given the known laws of physics, any kind of transfer of energy, say from one ball hitting another, must take a non-zero amount of time. So for every kind of physical cause, like a ball hitting another ball, there is a necessary lag between cause and effect. Simultaneous causation is thus impossible.

I like how clear this argument is. This is how I would formalize it a bit:

1. Every kind of physical cause involves a transfer of energy.
2. Transfers of energy always take a non-zero amount of time.
Therefore, every kind of physical cause takes a non-zero amount of time.

The conclusion is just another way of saying that every kind of physical cause is not simultaneous with its effect.

The first premise asserts something very specific about the nature of physical causation. There are philosophers and physicists who think along these lines. I want to note here that this kind of physicalist analysis of causation is not the only game in town, but I’ll come back to this later.

My first thought is, even if we grant the two premises, I see a problem here with the relevance of the argument to the specific case in mind, namely, the origin of the universe. Even if we grant that every physical cause involves a transfer of energy, there’s a good reason to think that whatever the nature of the causal agent that is responsible for the universe coming into being, it’s probably not an ordinary physical cause. The cause of the physical universe coming into existence isn’t going to be like a baseball bat hitting a baseball.

So, to that extent, I don’t think a defender of the Kalam argument is going to find this compelling, since they’re already thinking of the cause of this first event as a very special kind of cause.

4. Theophage’s Second Argument Against Simultaneous Causation

But Theophage has another argument against simultaneous causation. Here it is:

Further, we assume that a thing cannot cause itself to come about or exist, because it would have to exist before it existed in order to cause its own existence. Included in that is the assumption that causes have to come temporally prior to their effects. Naturally it is a logical contradiction for a thing to exist before it exists.

But, if simultaneous causation is allowed, a thing does not need to exist before it exists, only simultaneous with its own existence, which by definition everything does anyway. So it would seem that if simultaneous causation is allowed, then things are also allowed to cause themselves. And yet we don’t see this happening around us.

The first paragraph just restates the basic objection that I paraphrased earlier. It’s the second paragraph that’s interesting to me. What I’m getting out of these paragraphs is this argument:

1. If causation can be simultaneous, then a thing can cause itself to exist simply in virtue of co-existing with itself.
2. But by definition, everything co-exists with itself.
3. Thus, if causation can be simultaneous, then things are allowed to cause themselves.
4. But we don’t see this happening around us.
Therefore, causation cannot be simultaneous.

Here I’m not as confident that I understand the reasoning or precisely what’s being asserted, but assuming this is halfway accurate, here are my thoughts.

It seems like he wants to say that, from the fact that effects can be simultaneous with their causes, it follows that things can cause themselves to exist. But I just don’t see how this follows. It might be true that if a thing causes itself to exist, then cause and effect must act simultaneously. But the converse surely doesn’t follow. If both the cause and effect act simultaneously, it doesn’t follow that the thing must, or is able to, cause itself to exist. A might imply B, but it doesn’t follow that B implies A. I think this argument relies on B implying A.
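
To put the point schematically (this is my gloss, with A standing for “a thing causes itself to exist” and B for “cause and effect are simultaneous”):

\[
(A \rightarrow B) \;\not\Rightarrow\; (B \rightarrow A)
\]

A conditional doesn’t entail its converse, so granting that self-causation would require simultaneity gives us no license to infer that simultaneity permits self-causation.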

That’s all I’m going to say about Theophage’s arguments against simultaneous causation. I don’t think that they win the day, but that doesn’t mean that there aren’t compelling arguments to be had against simultaneous causation.

5. The Status of Simultaneous Causation

And that gets us to his question about the status of this concept in philosophy and physics. There are two questions here really.

  1. Is there a consensus on the status of simultaneous causation in philosophy?
  2. Is there a consensus on the status of simultaneous causation in physics?

The answer, I think, is “no” to both questions.

5.1 The Status of Simultaneous Causation in Philosophy

First, there’s no consensus on simultaneous causation in philosophy. This is hardly surprising since there’s no consensus on causation in philosophy.

Causal reasoning is widespread in science and philosophy, and there’s a good deal of agreement about what counts as good or bad causal reasoning in specific disciplines. But if you dig deeper you’ll see that, while there’s general agreement that causation is a relation of some kind, between things we call “causes” and things we call “effects”, there’s no agreement on what causes and effects are, fundamentally, or what the causal relation is, fundamentally.

You can carve up the various schools of thought in a variety of ways. Your basic introductory philosophy lecture on causation will probably start off distinguishing between two camps: “Humeans” about causation, and “non-Humeans” about causation.

Humean theories of causation are inspired by the 18th century Scottish philosopher David Hume’s analysis of causation. He argued that, when we say that A causes B, we’re saying that whenever events of type A occur, they’re always followed by events of type B. But for Hume, this is just a way of labeling a certain kind of regularity in the patterns of observable phenomena.

Most importantly, he interprets the necessity of causal claims, the feature of a causal relation whereby A somehow necessitates B, as a feature not of the world itself but of our psychology, of our learned expectations built up through repeated observations of B following A.

When I call a theory of causation a “Humean” theory, I’m not saying that it agrees with Hume’s analysis of causation. What I’m saying is that Humean views bear a family resemblance to one another, sharing certain features of Hume’s approach.

What they share is this: Humean views want to keep the analysis of causal relations at the level of patterns in observable phenomena, or at the level of our theoretical representations of the world, and they shy away from talking about causation or causal laws in metaphysical terms, as part of the furniture of the world.

Non-Humean approaches, on the other hand, embrace the metaphysical path. They want to identify causal relations with relations between properties or entities or processes in the world itself. They think there is a causal structure to the world itself, and the task of a philosophical analysis of causation is to identify the ground of this causal structure.

For example, when Theophage says that every physical cause involves a transfer of energy, he’s implicitly working within one of these non-Humean traditions. In fact there is a respectable school of thought about causation that analyses causal relations in terms of causal processes, and analyses causal processes in terms of energy transfer. This is sometimes called the “conserved quantity” theory of causation, since it identifies every causal process with the exchange of a conserved quantity. Energy is a conserved quantity in physics (or mass-energy if you’re being precise), so is linear momentum, so is electrical charge, and so on.
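
For concreteness, here’s a minimal textbook-style illustration (my example, not tied to any particular author’s formulation): in a collision between two billiard balls, the conserved quantity view locates the causal interaction in the exchange of linear momentum, with the total conserved across the collision:

\[
m_1 v_1 + m_2 v_2 = m_1 v_1' + m_2 v_2'
\]

where the primed velocities are the post-collision values. What makes the process causal, on this view, is the exchange of the conserved quantity, not a mere regularity in the sequence of events.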

I don’t want to endorse any of these views here. My point is that critical thinking about causation requires that we know something about the landscape of alternative theories of causality that are out there. And we shouldn’t be surprised that a question about causation like “can causes be simultaneous with their effects” will be answered very differently depending on which theory we’re using.

So, getting back to simultaneous causation, let me give you an example. On David Hume’s own account of causation, there can be no simultaneous causation by definition, since he defines the causal relation in terms of the pattern of the temporal ordering of events, where events of type B always follow events of type A.

If you’re thinking that this is a pretty cheap way of settling the issue, I’m on your side.

But Hume actually has some other more substantive arguments against simultaneous causation. He argues, for example, that if effects were simultaneous with their causes, then you could never have a temporally extended causal chain. If event 1, a cause, is simultaneous with event 2, the effect, and event 2 is a simultaneous cause of event 3, and event 3 is a simultaneous cause of event 4, then event 1 and event 4, at the end of the chain, are also going to be simultaneous. And so will any event, no matter how far down the chain. It seems that on this view, every causally connected event is simultaneous, and no event can ever be causally connected to any event at any other moment in time. But that seems like a crazy result.
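
Spelled out as a simple derivation (my reconstruction of the point), write t(E) for the time at which event E occurs. Then:

\[
t(E_1) = t(E_2),\quad t(E_2) = t(E_3),\quad \dots,\quad t(E_{n-1}) = t(E_n) \;\Rightarrow\; t(E_1) = t(E_n)
\]

By the transitivity of equality, if every cause is simultaneous with its immediate effect, then every event in the chain occurs at one and the same instant.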

For Hume, the possibility of a temporally extended chain of events actually rests on his belief that time is “atomistic”, that it has a smallest indivisible unit, and time intervals are made up of sequences of this smallest indivisible unit.

I mention this because I want to come back to it when we talk about simultaneous causation in physics, since this is an argument that has to be addressed if we think that simultaneous causation is plausible.

5.2 The Status of Simultaneous Causation in Physics

Okay, let’s talk about physics. I’m going to say about physics basically what I said about philosophy — that there’s no consensus on simultaneous causality in physics, and that how you think about the question is going to be a function of what you understand causality to be.

With physics you have the additional complication that a lot of people think that the concept of causation actually plays very little if any role in fundamental physics. But we’ll say more about that later.

Let’s start the discussion with some simple candidates for simultaneous causation. Imagine a lead ball resting on a cushion, where the weight of the ball causes an indentation in the cushion. The claim is that the downward force of the ball is a continuing event that is causing the continuing indentation, and that the cause and the effect are occurring simultaneously.

Another example is the glow of a heated metal bar. We say that the temperature of the bar causes the bar to glow, but these are ongoing processes that are occurring at the same time.

Reasons for Skepticism About Simultaneous Causation

Now, if you don’t like simultaneous causation then you’re going to argue that this description isn’t quite right. When you look at the physics in detail, you’ll want to say that what we have here is an equilibrium situation where certain phenomena are occurring simultaneously, but changes in those phenomena, like changes in the weight of the ball, or the temperature of the metal bar, don’t produce instantaneous changes in the shape of the cushion, or the brightness of the light emitted from the bar. There’s always a lag between cause and effect, so this isn’t really an example of simultaneous causation.

Another reason for skepticism about simultaneous causality in physics comes from relativity theory. If the world is governed by relativistic principles, then the speed of light is the cosmic speed limit, and there can be no causal interaction between spatially separated, simultaneous events. For some people this is enough to rule out simultaneous causation in physics, end of story.
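
The relativistic worry can be made precise with the spacetime interval (a standard textbook formulation, not specific to any one skeptic):

\[
s^2 = c^2 \, \Delta t^2 - \Delta x^2
\]

For two simultaneous events at different locations we have \(\Delta t = 0\) and \(\Delta x > 0\), so \(s^2 < 0\): the separation is spacelike, and no signal traveling at or below the speed of light can connect the two events.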

Defenses of Simultaneous Causation

On the other hand, there have been some explicit defenses of simultaneous causation in physics. I dug up a paper from my office files by Michael Huemer and Ben Kovitz called “Causation as Simultaneous and Continuous”. Their view is that in physics causal relations are described by the laws of physics, which they interpret as causal laws. So if you look at Newton’s second law, for example, it says that changes in a body’s state of motion, either speeding up or slowing down or changing direction, are caused by the net forces acting locally on the body, and as the forces change the body’s state of motion changes simultaneously.
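
To illustrate the kind of reading they have in mind (my paraphrase of the general idea, not a quotation from the paper), take Newton’s second law in its usual form:

\[
\mathbf{F}(t) = m \, \mathbf{a}(t)
\]

The net force and the acceleration are related at the same instant t; no time lag is written into the law itself. Read causally, the force at t is the simultaneous cause of the acceleration at t.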

They also argue that, in order to avoid the problem with extended causal chains that I mentioned earlier, if we accept simultaneous causation we also have to accept that causal processes don’t occur at instantaneous moments of time, but are extended over time and overlap one another.

I’m not going to say anything more about this paper by Huemer and Kovitz, and I’m not endorsing its conclusions; I’m just using it to point out that there are defenses of simultaneous causation in the literature.

Does Causation Play Any Role in Fundamental Physics?

Now, the last thing I want to talk about is something I mentioned earlier, about a not uncommon attitude that physicists and philosophers of physics have about causation and the role that causal concepts play in fundamental physics. This attitude makes you think twice about the whole notion of grounding a theory of causation in physics.

What’s the attitude? Well, the most famous expression of this attitude comes from the philosopher Bertrand Russell, who wrote an essay in 1913 called “On the Notion of Cause”. This is what he writes:

All philosophers of every school imagine that causation is one of the fundamental axioms or postulates of science. Yet, oddly enough, in advanced sciences such as gravitational astronomy, the word “cause” never appears. To me it seems that the reason why physics has ceased to look for causes is that in fact there are no such things. The law of causality, I believe, like much that passes muster among philosophers, is a relic of a bygone age, surviving, like the monarchy, only because it is erroneously supposed to do no harm.

The idea he’s getting at is this: the most basic laws and principles of physics aren’t framed as causal claims of the form “A causes B”, but rather as differential equations. What these equations describe are relations of functional dependency between variables — force, mass, acceleration, charge, momentum, energy, and so on. What’s important about this notion of functional dependency is that, at the most fundamental level, these functions are time-symmetric. What this means is that if you take the time variable in these equations and reverse it, the equations still have viable solutions, so the equations themselves don’t distinguish processes running forward in time and processes running backward in time.
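
A minimal illustration of this time symmetry (a standard example, not Russell’s own): Newton’s second law for a position-dependent force,

\[
m \, \frac{d^2 x}{dt^2} = F(x),
\]

is unchanged under the substitution \(t \mapsto -t\), since the second derivative picks up two factors of \(-1\) that cancel. If \(x(t)\) is a solution, so is \(x(-t)\); the equation itself doesn’t distinguish forward-running from backward-running processes.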

Now as a matter of fact we know that many physical processes do have a directionality to them. Ice cubes melt at room temperature, they don’t spontaneously re-form into ice cubes. So we know that irreversible processes enter the picture at some level, and this irreversibility seems to be related in some way to our common-sense intuitions about the direction of time. But the puzzle has been how to understand the origins of time irreversibility when the fundamental laws of physics are all time-reversible.

This is a long-standing question in physics and the philosophy of physics, and many books and articles have been written on it. There’s no consensus answer to this question. Many have concluded with Russell that physics might just be the wrong place to look even for an explanation of the directionality of time and change, much less to look for a foundation for a universal law of causality.

I would say that a lot of the recent philosophical work on the relationship between physics and causality seems to accept this pessimistic outlook. But not everyone agrees that physics or science can or should try to get rid of the concept of causation entirely. In fact most don’t. Scientists, including physicists, use the concept of cause and effect all the time. It would be hard to imagine what science would look like without it.

So the challenge has been to give an account of causation that acknowledges that our use of causal concepts can’t be grounded in fundamental physics, but at the same time seeks to explain and justify our use of these concepts in science and everyday life.

I think that on this issue, the focus has shifted to looking at how and why we represent the world in causal terms. And a view that is gaining some popularity is that the naturalness of representing the world in causal terms is best understood as a consequence of our perspective as agents who intervene in the world, make plans and decisions, and so forth.

This approach to causation falls into the Humean category, broadly construed. I’m not endorsing this view, I’m just reporting on the state of discussion of these issues as I see them.

6. Consequences for the Original Question

Now, what’s the upshot of all this for the original question, which was about simultaneous causation and its application to the origins of the universe?

Well, I think your options break down like this. If you’re a non-Humean about causation — let’s say you’re working with something like the “conserved quantity” approach — then I think reasonable people can disagree on whether causation can be simultaneous. A lot turns on how you define the causal relation and the sorts of things that can function as causes and effects.

However, I don’t see how the conserved quantity approach gives you any traction at all in thinking about the origins of the universe itself, when you don’t have any conserved quantities to start with.

Now, if you’re in the Humean camp, then you’ll likely be very skeptical of the very idea of applying causal concepts to describe the origin of the universe. On this view the application of these concepts is grounded at some level in human agency and human representations, they’re not given in the fabric of the world itself. So I can’t see how this approach would be helpful either.

So, I think the situation is grim if what you want to do is use contemporary views of causation to ground a position one way or the other on causal explanations of the origins of the universe.

Now, does all of this pose a problem for defenders of the Kalam cosmological argument? I’m not sure it does. I think defenders of the Kalam argument might respond by saying that when they say that anything that comes into being has a cause for its coming into being, they’re not really interested in subsuming this notion under contemporary views of causation. What they’re doing is more like wrapping some causal language around a more basic metaphysical principle, some version of the “principle of sufficient reason”, which says roughly that for every event or true fact there is a sufficient reason why the event occurred or why the fact is true.

So what they’ll say is that the origin of the universe at some finite time in the past is an event that demands explanation in terms of some sufficient reason, and they won’t care so much whether you call it a causal explanation or some other kind of explanation. I think they’d be happy to grant all these critiques of causation and causal concepts that we’ve been talking about, and they’ll dig in at a more fundamental level and defend the core metaphysical intuition that underlies the first premise of the Kalam argument.

And then, the proper response from a critic, I think, is to challenge this underlying metaphysical intuition on its own terms. That’s how I see this playing out, anyway.

Five Reasons to Major in Philosophy - PDF Ebook
Article
19:16

Five Reasons to Major in Philosophy

I get a lot of students who tell me that they loved the one or two philosophy classes they took as an undergraduate student, and that they would have enjoyed taking more courses if their schedules allowed it, but since they weren’t majoring in philosophy it just wasn’t possible. This is really unfortunate when you find a student who really loves philosophy and who, you can tell, isn’t nearly as passionate about their chosen major.

So I ask them, have you ever considered majoring in philosophy? Or keeping your current major and adding philosophy as a second major?

Most often the student will say “no”, they’ve never really considered philosophy as a major. Sometimes they’ll say that they have thought of it but dismissed it for one reason or another. A lot of students express worries about the marketability of a philosophy degree. One student told me, half jokingly, that her parents would kill her if she said she was switching her major to philosophy!

I understand where these students are coming from, and I sympathize with their worries, and their parents’ worries, about employment and the marketability of their degree.

But too often I think these worries are based on misconceptions about the practical value of a philosophy degree. I’d like to address this question — “why major in philosophy?” — and along the way clear up some of the more common misconceptions about the value of a philosophy degree.

What You Learn When You Study Philosophy

Before we do anything else we need to spend a bit of time talking about what it is that you actually learn as a philosophy major.

It’s helpful to distinguish the various skills that you’re taught as a philosophy major from the actual content of the material that you’ll be learning.

Skills

On the “skills” side, the most obvious category, and maybe the most important, is the ability to think logically, critically and independently. Philosophy majors usually have to take at least one course in logic, and basic skills in logic and argumentation are reinforced in every class, since these are the basic tools of philosophical reasoning. Students get lots of practice in reading and analyzing the argumentative structure of various kinds of texts.

Alongside this instruction in logic and argumentation, philosophy students are taught to value independent critical thought, and to scrutinize the arguments they hear from TV, the media, advertisers, and politicians.

Philosophy also teaches and demands strong communication skills, both in speech and in writing. Philosophers pay close attention to language, and we value clarity and precision in the use of words. To do philosophy well you need to learn to say exactly what you mean, and to mean exactly what you say. I remind students that someone can write beautifully and still be unclear about what it is they’re trying to say. Philosophy demands clarity above all else, because the issues we talk about are conceptually difficult enough to begin with.

Lots of degree programs demand basic reasoning skills, but philosophy develops particular skills in reasoning with very general, sometimes very abstract, concepts and principles. For example, in an ethics class we might want to know: is it wrong to steal in such-and-such a situation? But we also ask, if it’s wrong, WHY is it wrong? What MAKES it wrong? And this moves us up a level, to discussing fundamental moral values and principles. And then students are required to think and reason and argue about different candidates for fundamental moral values and principles, which is an even higher level of abstraction still.

Now, these discussions don’t just spiral off into the heavens. At all times we’re concerned about how general principles apply to specific, concrete cases, and philosophy students become very good at seeing the practical consequences of adopting different, higher-level principles. This is a skill with lots of practical value, just ask any lawyer or business executive or high-level administrator, they’ll tell you how important these kinds of skills are in their field.

Content

Moving on to “content”, not surprisingly, if you major in philosophy then you’re going to be exposed to the ideas and theories of some of the most influential philosophers spanning the time of the Greeks to the present.

You’ll learn about the most influential views on the deepest questions we can ask, about the nature of reality, about the nature and origins of knowledge, about the foundations of ethics and morality, about different theories of justice and the role of the state in political life, and much more.

Let me just pause a minute to emphasize an important point. Some might reasonably wonder what the relevance is of studying, in our day and age, the abstract philosophical theories of people, most of whom are long dead. Well, it is relevant, and here’s why. The world we live in today — the economic and scientific and legal and political and religious institutions that make up modern societies — this world is a product of the ideas and the influence of philosophers, scientists and thinkers of various kinds across the span of history. The modern world has an intellectual history, and we can’t hope to understand how it works and where it’s headed if we’re ignorant of how it came to be. Frankly, I can’t think of anything more relevant than that.

And this leads to the final item on my list — intellectual history and conceptual foundations. One of the great things about philosophy is that you can study almost any discipline from a philosophical perspective, so if you happen to love science, you can study the philosophy of science; if you love art you can study the philosophy of art; if you love politics you can study political philosophy; and so on.

The difference between studying science, and studying the philosophy of science, is that the philosophy of science focuses on the nature of science itself — what it is, how it works, what its methods are, what it can and can’t tell us about the world — and it focuses on foundational questions within the sciences that are hard or impossible to answer by empirical means alone. The same goes for all these “philosophy OF” fields.

From a student’s perspective, what’s great about this is that if you pair up the study of your favorite subject with the philosophy of that subject — science with the philosophy of science, art with the philosophy of art, psychology with the philosophy of psychology, economics with the philosophy of economics, and so on — then you can potentially acquire a much deeper understanding of that subject than you otherwise would have.

To my mind, this naturally leads to an argument for adding a philosophy degree as a second major alongside another major, but even if you just major in philosophy, you can indulge your interests in these other subjects to whatever extent you like, since studying the philosophy OF any field requires that you learn something about that field.

But now we’re getting into reasons for majoring in philosophy, and I don’t want to get ahead of myself. Here, I think we should start at the beginning, with a deeper and more basic reason for studying philosophy.

Five Reasons To Major in Philosophy

1. Philosophy Has Intrinsic Value

The search for truth and wisdom about the most important issues that face human beings — this has intrinsic value, it’s important for its own sake.

Now, I get that not everyone feels this way, but a lot of people do. For a lot of us, philosophy has intrinsic value and we value it intrinsically. It’s something that we naturally feel compelled to do, we do it for its own sake, not for the sake of anything else.

Natural philosophers are easy to spot. You enjoy staying up late and arguing with your friends about God and ethics and politics and religion. You’re frustrated by the superficial way that important issues are discussed in the media. It really bugs you when people adopt views just because their parents or their peers or their religion or the media tell them to.

You know who you are.


2. It’s What a Liberal Arts Education Ought To Be

This term “liberal arts” is sometimes misunderstood. The original meaning dates back to Roman times, when a “liberal arts” education meant an education worthy of a free person, hence the word “liberal”, which shares its root with “liberty”, meaning “freedom”.

The liberal arts curriculum became standardized in the medieval Western university. At the core of the so-called “Seven Liberal Arts” was the “trivium” — grammar, logic and rhetoric — which focused on basic thinking and communication skills, together with the four subjects that made up the “quadrivium” — arithmetic, geometry, music and astronomy.

That’s the classical meaning of the term. These days the term is more often associated with an educational philosophy, liberal education, that captures the spirit of the classical tradition in a modern context.

Here’s a definition of “liberal education” endorsed by the AAC&U, the Association of American Colleges and Universities:

Liberal Education is an approach to learning that empowers individuals and prepares them to deal with complexity, diversity, and change. It provides students with broad knowledge of the wider world (e.g. science, culture, and society) as well as in-depth study in a specific area of interest.

Philosophy teaches students how to think critically and independently, and how to communicate effectively, both in speech and in writing. The study of philosophy typically results in a broad, general knowledge of a lot of different subjects. It’s what a liberal arts education ought to be!

But now I hear in my head the voice of that student I mentioned, the one who said her parents would kill her if she told them she was switching her major to philosophy. This is all well and good, they’ll say, but really, what kind of a job can you get with a philosophy degree?

I get it. Philosophy has a reputation for being one of the more impractical university degrees. But I want to show you that this reputation isn’t nearly as deserved as you might think.

3. Employers Are Looking For These Skills

For starters, employers really are looking for the skills that philosophy teaches. Business leaders routinely say that finding smart, technically competent employees isn’t hard, there are tons of those on the market. What’s much harder to find are employees who have strong communication, critical reasoning and problem-solving skills. They’ll say “I can teach you how to program. I can teach you how to use a piece of technology. I can teach you policies and procedures. That’s easy. What’s much harder to teach, and what is increasingly lacking in university graduates, are these general communication and reasoning skills.”

There’s survey data to back this up. In 2009 the Association of American Colleges and Universities sponsored a survey to investigate employers’ views on college learning in the wake of the economic downturn. They interviewed over 300 business and government executives — owners, CEOs, presidents, and vice presidents — and asked them what skills they thought were important for their employees to have, and whether universities were successful at teaching these skills to recent graduates. The results are telling.

This summary table gives a list of skills and learning outcomes.

The percentage on the right is the proportion of employers surveyed who said that colleges and universities should place MORE emphasis than they do today on that particular skill.

Look at the top of the list. Number one, “the ability to effectively communicate orally and in writing”. Number two, “critical thinking and analytical reasoning skills”.

If you’re good at philosophy, you have these skills, they’re central to philosophy.

If you look down this list, what you see are skills associated not with proficiency in technical fields, but with a broad liberal arts education: the ability to apply knowledge and skills to real-world settings, the ability to connect choices and actions to ethical decisions, understanding of global context and cultural diversity, civic knowledge, proficiency in a foreign language, and knowledge of democratic institutions and values. Remember, this is what business leaders are saying they need in their employees, not academics trying to justify their jobs.

Employers are looking for these skills. Philosophy is one of the best liberal arts degrees for teaching these skills.

4. Your Income Expectations Are Higher Than You May Think

Here’s another fact that might surprise you. With a philosophy degree, your income expectations are higher than you might think. And hopefully it’s becoming more clear why this is the case.

Here’s a table with a list of different university degree programs, from a 2008 survey. The middle column gives the median starting salary for graduates of those programs. “Median” here means the middle of the distribution: half of graduates earn more than this figure, half earn less. The column on the right gives the median mid-career salary — the median salary after 15 years of work experience. And these are salaries for people for whom this is their terminal degree, no degrees beyond the bachelor’s degree.

http://www.aacu.org/leap/documents/2009_EmployerSu...

Not surprisingly, engineering, computer science and the more technical business-related programs rank highest on this list. On the bottom of this list you see “philosophy”. Not a great starting salary, just under 40 thousand. But look at the mid-career salary — $81,000! That’s not bad.

What’s more interesting is what programs fall below philosophy on this scale. In the figure below we’ve scrolled down the list a bit, and we’re seeing the programs that have lower median salaries than philosophy.

Some have higher starting salaries, but all of these have lower mid-career salaries. And look at the list. Chemistry is below philosophy. Marketing is below philosophy. Political science and accounting are below philosophy. Architecture is below philosophy.

Let’s scroll down a little farther.

Nursing, journalism, geography, art history, biology, English, anthropology, psychology — all with lower median mid-career salaries than philosophy graduates.

These figures surprise a lot of people, but by now you shouldn’t be too surprised. Recall the survey about the skills that employers are looking for. Critical thinking, communication and analytic problem-solving skills are very much in demand, and philosophy graduates are better than average in these areas. But philosophy graduates are also good at high-level reasoning involving general principles, and this whole set of skills becomes more important, and less common, as you move up the administrative ranks in an organization. Philosophers who work in government and the business world will testify to this. These skills don’t give you much advantage when you’re doing data entry and clerical work in entry-level jobs, but they confer an increasing advantage as you move up the job ladder.

So, it’s a reasonable speculation that philosophy ranks as high as it does on this list because philosophy graduates are better prepared than many other graduates to compete for mid and upper level jobs with higher salaries.

But we’re not done yet. Here’s our fifth and final reason for majoring in philosophy.

5. A Philosophy Major is the Ideal Springboard Degree

A philosophy major is the ideal “springboard” degree.

What do we mean by this? We mean that philosophy is an ideal springboard to graduate programs and professional schools beyond the undergraduate level. If you’re interested in going to graduate school, law school, medical school, or an MBA program, philosophy is a great undergraduate major.

How do we know this? Well, each of these programs requires that students take an admission test to get in.

For graduate school programs, you have to take the GRE. For law school, you have to take the LSAT. For MBA programs, you need to take the GMAT. For medical school it’s the MCAT.

It turns out that philosophy majors do very well on these graduate admission tests. In fact, when you average over all of them, the undergraduate major that performs best on these tests overall is the philosophy major.

Here’s an example. Below is a list of GRE scores ordered by major, averaged over three years. The GRE test has a verbal reasoning component, a quantitative reasoning component, and an analytical writing component. This table shows the verbal reasoning scores. Liberal arts majors do well in this category, but philosophy majors do the best, even above English majors.

Data from the Graduate Record Examination's Guide to the Use of Scores 2010-11. Produced by Educational Testing Service.

Now here are the quantitative reasoning scores.

Not surprisingly, math, engineering and physical science majors do very well in this category. They do better than philosophy majors, on average. But once you move below this category, philosophy is at the top of the list. Philosophy majors do better than biology majors, they do better than architecture and accounting majors.

The table below shows the GRE scores for the analytical writing section.

Again, the liberal arts disciplines do well, but again, philosophy tops the list. And notice how low the math and physical science majors are on this component of the test. They win the quantitative category but lose on the other categories, with the result that philosophy is, on average, the highest scoring major.

This general pattern is played out over and over in these graduate and professional admission tests. They all have a verbal or language component and a quantitative or analytical reasoning component. Philosophy majors score very high on the verbal and language components and better than average on the quantitative and analytical reasoning components, with high average scores overall.

Here’s a table showing the majors that perform best on the LSAT test. The LSAT emphasizes logic games, analytical reasoning and argumentation; it doesn’t really have as large a verbal component as the GRE, but it does have a reading comprehension section. Over 2007 and 2008, economics and philosophy tied for first place.

LSAT Scores of Economics Majors: The 2008-2009 Class Update, by Michael Nieswiadomy http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1430654

Actually, this table is a bit misleading since it omits the math/physics majors, who do better. But they make up such a small percentage of the students taking the LSAT that they fall into the smaller categories, which this particular table omits.

The table below shows the smaller categories, with math and physics on top, and economics and philosophy tied for second.

Again, it’s interesting to see what majors fall below philosophy. Engineering. Chemistry. History. Biology. English.

We can scroll down farther.

Below English we have computer science. Finance. Political science. Psychology. Anthropology. Journalism. Sociology.

And ironically, the two lowest performing majors — the ones with the most students who arguably are looking to get into law school — are pre-law and criminal justice.

Summing Up

Here are five reasons to major in philosophy:

  1. Philosophy has intrinsic value, and we — or many of us, at least — value it intrinsically. Some of you reading this are natural philosophers, you know who you are. A philosophy degree might be the perfect fit for you.
  2. A philosophy degree is what a liberal arts education ought to be. Critical thinking and communication skills; broad, general knowledge. It’s not the only way to get a good liberal arts education, but it’s one of the best, particularly if you combine it with a more focused, in-depth study of a particular field.
  3. Employers are looking for these liberal arts skills. They really are: these skills are a scarce commodity on the job market, and they become both scarcer and more important in higher-level positions, which makes them even more valuable the higher you go.
  4. Your income expectations with a philosophy degree are higher than you may think. It’s just false to think that this is an impractical degree. It’s certainly no more impractical than many other liberal arts and science degrees, and it does better than many in terms of expected income.
  5. Philosophy is the ideal springboard degree. If you’re interested in going on to graduate school or law school or medical school or business administration programs, the skills that you develop as a philosophy major can help you compete for admission to these programs. The combination of strong verbal and analytic reasoning skills results in philosophy majors doing very well on graduate admission tests.

If you’re a student and you’re interested, go visit your university’s philosophy department website, check out what they offer, and schedule an appointment with a departmental adviser. They’ll be happy to help out.

Section 7: Basic Concepts in Logic and Argumentation
Basic Concepts in Logic and Argumentation - PDF Ebook
Article
04:17

What is an Argument?

Since arguments are at the heart of logic and argumentation it's natural to start with this question.

The first thing to say about arguments is that, as this term is used in logic, it isn't intended to imply anything like an emotional confrontation, like when I say that "an argument broke out at a bar" or "I just had a huge argument with my parents about my grades". In logic an “argument” is a technical term. It doesn't carry any connotation about conflict or confrontation.

Here's our definition. It will have three parts:

(1) An argument is a set of "claims", or "statements".

We'll have more to say about what a claim or statement is later, but for now it's enough to say that a claim is the sort of thing that can be true or false.

(2) One of the claims is singled out for special attention. We call it the "conclusion". The remaining claims are called the "premises".

(3) The premises are interpreted as offering reasons to believe or accept the conclusion.

That's it, that's the definition of an argument.

Now let's have a look at one:

1. All musicians can read music.
2. John is a musician.
Therefore, John can read music.

Premises 1 and 2 are being offered as reasons to accept the conclusion that John can read music.

Actually, this may not be a particularly good argument, since that first premise makes a pretty broad generalization about all musicians that isn't very plausible. I'm sure there are a few great musicians out there who don't read sheet music. But it's an argument nonetheless.

Now, notice how it's written. The premises are each numbered and put on separate lines, and the conclusion is at the bottom and set off from the rest by a line and flagged with the word "therefore".

This is called putting an argument in standard form and it can be useful when you're doing argument analysis.

In ordinary language we're almost never this formal, but when you're trying to analyze arguments, when you're investigating their logical properties, or considering whether the premises are true or not, putting an argument in standard form can make life a lot easier.

Now just to highlight this point, here's another way of saying the same thing:

"Can John read music? Of course, he's a musician, isn't he?"

This expresses the very same argument as the one written in standard form above. But notice how much easier it is to see the structure of the argument when it's written in standard form. In this version you have to infer the conclusion, "John can read music", from the question and the "of course" part.

And you have to fill in a missing premise. What you're given is "John is a musician", but the conclusion only follows if you assume that all musicians, or most musicians, can read music; that premise is never stated, it's just a background assumption. The argument only makes sense because you're filling in the background premise automatically. You can imagine that this might become a problem for more complex arguments. You can't always be sure that everyone is filling in the same background premise.

So, standard form can be helpful, and we're going to be using it a lot.

4 questions

Quiz questions based on the lecture "What is an Argument?"

04:25

What is a Claim, or Statement?

Arguments are made up of "claims", or "statements". In this section I want to say a few words about what this means and why it's important for logic.

Here's a definition you might see in a logic text:

A claim is a sentence that can be true or false (but not both).

Actually in logic texts the more commonly used term is "statement" or "proposition". These are all intended to mean the same thing. A claim, or a statement, or a proposition, is a bit of language whose defining characteristic is that it makes an assertion that could be true or false but not both.

The "true or false" part of this definition expresses a principle of classical logic that's called the Principle of Bivalence. This principle asserts that a claim can only assume one of two truth values, "true" or "false"; there's no third option like "half-true" or "half-false", or "almost true".

The "but not both" part of this definition expresses a different principle of classical logic called the Principle of Non-Contradiction. This principle states that a claim can't be both true and false at the same time, it's either one or the other. To assert otherwise is to assert a contradiction.

(Actually there are systems of logic where both the Principle of Bivalence and the Principle of Non-Contradiction are relaxed. Logicians can study different systems of reasoning that don't assume these principles. But in classical logic they hold, and in what follows we're going to assume that they hold.)
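
Since there are only two truth values in the classical picture, both principles can be checked exhaustively. Here's a tiny sketch in Python, assuming we model a claim's truth value with the two-valued bool type (the excluded-middle check below is the close cousin of Bivalence):

    # In classical logic a claim takes exactly one of two truth values.
    for p in (True, False):
        assert not (p and not p)   # Non-Contradiction: never both true and false
        assert p or not p          # Excluded middle, the flip side of Bivalence

    print("Both classical principles hold for every two-valued claim.")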

Now, you might ask why defining claims in this way is important. Here are two reasons.

(1) Not all sentences can function as claims for argumentative purposes.

For example, questions don't count as claims, since they don't assert anything that can be true or false. If you have a question like "Do you like mushrooms on your pizza?", that's a request for information, it doesn't assert that such-and-such is true or false.

Another type of sentence that doesn't count as a claim is a command, or an imperative, like "Step away from the car and put your hands behind your head!". That's a request to perform an action, it makes no sense to ask whether the action or the command is true or false.

So, not every bit of language counts as a claim, and so not every bit of language can play the role of a premise or a conclusion of an argument.

Here’s a second reason why this definition of a claim is important.

(2) The definition gives us a way of talking about clarity and precision in our use of language.

This is because in saying that a sentence can function as a claim in an argument, what we're saying is that all the relevant parties, both the person giving the argument and the intended audience of the argument, have a shared understanding of the meaning of that sentence.

In this context, what it means to understand the meaning of a sentence is to understand what it would mean for the sentence to be true or false. That is, it involves being able to recognize and distinguish in your mind the state of affairs in which the sentence is true from the state of affairs in which the sentence is false.

So, in the context of logic and argumentation, for a sentence to be able to function as a claim, for it to be able to function as a premise or a conclusion of an argument, there has to be a shared understanding of what the sentence is asserting.

Consequently, if the sentence is too vague or its meaning is ambiguous, then it can't function as a claim in an argument, because in this case we literally don't know what we're talking about.

The requirement that a sentence be able to function as a claim is not a trivial one. It's actually pretty demanding. Not all bits of language make assertions, and not all assertions are sufficiently clear in their meaning to function as claims.

This is important. Before we can even begin to ask whether an argument is good or bad, we need to have a shared understanding of what the argument actually is, and this is the requirement that you're expressing when you say that an argument is made up of claims that can be true or false.

What this means in practice is that, ideally, you want everyone to have a shared understanding of what the argument is about and what the premises are asserting. If people can't agree on what the issue is, or on what's being asserted, then you have to go back and forth to clarify the issue and the arguments until all parties have a shared understanding of the argument.

Only then can you have a rational discussion of the strengths and weaknesses of the argument.

Questions and Comments

1. Sometimes a statement can be expressed using a question, can’t it?

That’s true. A good example is “rhetorical questions”, which are really statements.

For example, if you ask me whether I'm going to study for the final exam, and I look at you and say "What do you think I am, an idiot?", I'm not actually asking whether you think I'm an idiot. It's a rhetorical question, used to (sarcastically) assert the statement "Yes, I'm going to study for the final exam".

This example illustrates a general point, that identifying arguments in ordinary language requires paying attention to rhetorical forms and stylistic conventions in speech and in writing. A complex writing style, or a failure to understand the rhetorical context in which an argument is given (for example, failing to recognize SATIRE), can lead to confusion about what is being asserted.

2. What’s the difference between a sentence being VAGUE and a sentence being AMBIGUOUS?

If I ask my daughter when she’ll be back from visiting friends and she says “Later”, that’s a VAGUE answer. It's not specific enough to be useful.

On the other hand, if I ask her which friend she’ll be visiting, and she says “Kelly”, and she has three friends named Kelly, then that’s an AMBIGUOUS answer, since I don’t know which Kelly she’s talking about. The problem isn’t one of specificity, it’s about identifying which of a set of well-specified meanings is the one that was intended.

Note that all natural language suffers from vagueness to some degree. If I say that the grass is looking very green after last week’s rain, one could always ask which shade of green I’m referring to. But it would be silly to think that you don’t understand what I’m saying just because I haven’t specified the shade of green.

For purposes of identifying claims in arguments, the question to ask isn’t “Is this sentence vague?”, but rather, “Is this sentence TOO vague, given the context?”.

If all I’m doing is trying to determine whether the grass needs watering or not, the specific shade of green probably doesn’t matter. But if I’m trying to pick a green to paint a room in my house, specifying the shade will be more important.

5 questions

Quiz questions based on the lecture, "What is a Claim, or Statement?"

03:58

What is a Good Argument?

An argument is an attempt to persuade, but the goal of logic and argumentation isn't simply to persuade — it's to persuade for good reasons.

The most basic definition of a good argument is straightforward: it's an argument that gives us good reasons to believe the conclusion.

There's not much we can do with this definition, though. It's too vague. We have to say more about what we mean by "good reasons" obviously.

To do this we'll start by looking at some ways that arguments can fail to be good, and from these we'll extract a couple of necessary conditions for an argument to be good.

Here's an argument:

1. All actors are robots.
2. Tom Cruise is an actor.
Therefore, Tom Cruise is a robot.

Here's another argument:

1. All tigers are mammals.
2. Tony is a mammal.
Therefore, Tony is a tiger.

Both of these are bad arguments, as you might be able to see. But they're bad in different ways.

In the first argument on top, the problem is obviously that the first premise is false — all actors are not robots. But that's the only problem with this argument.

In particular, the logic of this argument is perfectly good. When I say that the logic is good what I mean is that the premises logically support or imply the conclusion, or that the conclusion follows from the premises.

In this case it's clear that if the premises of this argument were all true then the conclusion would have to be true, right? If all actors were robots, and if Tom Cruise is an actor, then it would follow that Tom Cruise would have to be a robot. The conclusion does follow from these premises.

Now let's look at the other argument:

1. All tigers are mammals.
2. Tony is a mammal.
Therefore, Tony is a tiger.

First question: Are all the premises true?

Are all tigers mammals? Yes they are. Is Tony a mammal? Well in this case we have no reason to question the premise so we can stipulate that it's true. So in this argument all the premises are true.

Now what about the logic?

This is where we have a problem. Just because all tigers are mammals and Tony is a mammal, it doesn't follow that Tony has to be a tiger. Tony could be a dog or a cat or a mouse. So even if we grant that all the premises are true, those premises don't give us good reason to accept that conclusion.
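
Here's a toy way to see this, sketched in Python with invented names: build one "world" in which both premises come out true while the conclusion comes out false. If such a world is possible, the logic is bad.

    # A counterexample model for the Tony argument.
    tigers = {"Raja"}                 # the tigers in this imagined world
    mammals = {"Raja", "Tony"}        # Tony is a dog here, but still a mammal

    premise_1 = tigers <= mammals     # "All tigers are mammals"  -> True
    premise_2 = "Tony" in mammals     # "Tony is a mammal"        -> True
    conclusion = "Tony" in tigers     # "Tony is a tiger"         -> False

    print(premise_1, premise_2, conclusion)   # True True False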

So this argument has the opposite problem of the first argument. The first argument has good logic but a false premise. This argument has all true premises but bad logic. They're both bad arguments but they're bad in different ways. And these two distinct ways of being bad give us a pair of conditions that an argument must satisfy if it's going to be good.

First condition:

If an argument is good then all the premises must be true.

We'll call this the "Truth Condition".

Second condition:

If an argument is good then the conclusion must follow from the premises.

We'll call this the "Logic Condition".

Note that at the top I called these "necessary conditions". What this means is that any good argument has to satisfy these conditions. If an argument is good then it satisfies both the Truth Condition and the Logic Condition.

But I'm not saying that they're "sufficient" conditions. By that I mean that they don't by themselves guarantee that an argument is going to be good. An argument can satisfy both conditions but still fail to be good for other reasons.

Still, these are the two most important conditions to be thinking about when you're doing argument analysis.

In later lectures we'll look more closely at both the Truth Condition and the Logic Condition, and we'll also look at ways in which an argument can satisfy both conditions and still fail to be good.

Questions and Comments

1. What’s the difference between “persuading” and “persuading for good reasons”?

If my only goal is to persuade you to accept my conclusion, then I might use all kinds of rhetorical tricks to achieve that goal. I might choose to outright lie to you. If mere persuasion is the ultimate goal then there would be no normative distinction between argumentation and using lies, rhetorical tricks and psychological manipulation to persuade.

Mere persuasion is NOT the ultimate goal of argumentation, at least as this term is used in philosophy and rhetoric. Argumentation is about persuasion for good reasons. Precisely what this means is not obvious, and it will take us some time to work it out, but at a minimum it involves offering premises that your audience is willing to accept, and demonstrating how the conclusion follows logically from those premises.

Here’s another way to think about argumentation. From a broader perspective, to argue with a person, as opposed to merely trying to persuade or influence a person, is to treat that person as a rational agent capable of acting from, and being moved by, reasons. It’s part of what it means to treat a person as an “end” in themselves, rather than as a mere means to some other end.

In this respect, theories of argumentation are normative theories of how we ought to reason, if we’re treating our audience as rational agents. They’re a component of a basic moral stance that we adopt toward beings who we recognize as capable of rational thought.

2. How can an argument satisfy both the Truth Condition and the Logic Condition and still fail to be good?

I said I would consider this question in a later lecture, but just to give an example, arguments that commit the fallacy of “begging the question” may satisfy both these conditions but fail to be good.

Consider this argument:

1. Capital punishment involves the killing of a person by the state as punishment for a crime.
2. It is morally unjustified for the state to take the life of a person as punishment for a crime.
Therefore, capital punishment is morally unjustified.

Let’s grant that the conclusion follows from the premises. Could this count as a good argument?

The problem is that the second premise simply asserts what is precisely at issue in the debate over capital punishment. It “begs the question” about the ethics of capital punishment by assuming as a premise that it’s wrong. Such arguments are also called “circular”, for obvious reasons — the conclusion simply restates what is already asserted in the premises.

The problem with this kind of argument isn’t that the premises are obviously false. The problem is that they don’t provide any independent reasons for accepting that conclusion.

Logic and critical thinking texts will treat this kind of argument as fallacious, a bad argument. But what makes it bad isn't captured by the Truth Condition or the Logic Condition.

Arguments that beg the question in this way may well have all true premises and good logic, but they would still be judged as bad arguments due to their circularity.

So this is an example of how an argument may be judged bad even though it satisfies both the Truth Condition and the Logic Condition. And this is what it means to say that these are merely necessary conditions for a good argument, not sufficient conditions. All good arguments will satisfy these two conditions, but not all arguments that satisfy these two conditions will be good.

Still, the distinctions captured by the Truth Condition and the Logic Condition are absolutely central to argument analysis.

3. You keep using arguments with only two premises. Do all arguments only have two premises?

I can see how people might initially get this impression, since so many of the introductory examples you see in logic and critical thinking texts are short two-premise arguments. For the purpose of introducing new logical concepts, the short two-premise argument forms are very useful.

But you can have arguments with five, ten or a hundred premises. In longer and more complex arguments what you usually see are nested sets of sub-arguments where each sub-argument has relatively few premises, but the overarching argument (when you expand it all out) might be very long.

You see this kind of progression in learning almost any complex skill. You start out by rehearsing the most elementary concepts or skill elements (basic programming statements in computer languages; the positions and moves of the pieces in chess; forehand and backhand strokes in tennis; basic statement types and argument forms in logic; etc.), and then combine them to create more complex structures or perform more complex tasks. Learning logic and argument analysis isn't any different.

5 questions

Test your understanding of the concepts discussed in "What is a good argument? (1)".

05:34

Identifying Premises and Conclusions

Argument analysis would be a lot easier if people gave their arguments in standard form, with the premises and conclusions flagged in an obvious way.

But people don’t usually talk this way, or write this way. Sometimes the conclusion of an argument is obvious, but sometimes it’s not. Sometimes the conclusion is buried or implicit and we have to reconstruct the argument based on what’s given, and it’s not always obvious how to do this.

In this lecture we’re going to look at some principles that will help us identify premises and conclusions and put natural language arguments in standard form. This is a very important critical thinking skill.

Here’s an argument:

“Abortion is wrong because all human life is sacred.”

Question: which is the conclusion?

“Abortion is wrong”?

or

“All human life is sacred”?

For most of us the answer is clear. “Abortion is wrong” is the conclusion, and “All human life is sacred” is the premise.

How did we know this? Well, two things are going on.

First, we’re consciously, intentionally, reading for the argument, and when we do this we’re asking ourselves, “what claim are we being asked to believe or accept, and what other claims are being offered as reasons to accept that claim?”.

Second, we recognize the logical significance of the word “because”. “Because” is what we call an indicator word, a word that indicates the logical relationship of claims that come before it or after it. In this case it indicates that the claim following it is being offered as a reason to accept the claim before it.

So, rewriting this argument in standard form, it looks like this ...

1. All human life is sacred.
Therefore, abortion is wrong.

At this point we could start talking about whether this is a good argument or not, but that’s not really our concern right now. Right now we’re more concerned with identifying premises and conclusions and getting the logical structure of an argument right.

Here are some key words or phrases that indicate a CONCLUSION:

therefore, so, hence, thus, it follows that, as a result, consequently,

and of course there are others.

This argument gives an example using “so”:

It’s flu season and you work with kids, SO you should get a flu shot.

Now, keywords like these make it much easier to identify conclusions, but not all arguments have keywords that flag the conclusion. Some arguments have no indicator words of any kind. In these cases you have to rely on your ability to analyze context and read for the argument.

Here’s a more complex argument that illustrates this point:

"We must reduce the amount of money we spend on space exploration. Right now, the enemy is launching a massive military buildup, and we need the additional money to purchase military equipment to match the anticipated increase in the enemy’s strength."

Notice that there are no indicator words that might help us flag the conclusion.

So, which claim is the conclusion of this argument?

Is it ...

“We must reduce the amount of money we spend on space exploration.”?

Is it ...

“The enemy is launching a massive military buildup”?

Or is it ...

“We need the additional money to purchase military equipment to match the anticipated increase in the enemy’s strength”?

The answer is ...

“We must reduce the amount of money we spend on space exploration.”

Most people can see this just by looking at the argument for a few seconds, but from experience I know that some people have a harder time seeing logical relationships like this.

If it’s not obvious, the way to work the problem is this: for each claim asserted in the argument you have to ask yourself,

“Is this the main point that the arguer is trying to convey?”

or,

“Is this a claim that is being offered as a reason to believe another claim?”

If it’s being offered as a reason to believe another claim, then it’s functioning as a premise. If it’s expressing the main point of the argument, what the argument is trying to persuade you to accept, then it’s the conclusion.

There are words and phrases that indicate premises too. Here are a few:

since, if, because, from which it follows, for these reasons,

and of course there are others.

And here’s an example that uses “since”:

"John will probably receive the next promotion SINCE he’s been here the longest."

“Since” is used to indicate that John’s being here the longest is a reason for thinking that he will probably receive the next promotion.

So, let’s summarize:

  1. Arguments in natural language aren’t usually presented in standard form, so we need to know how to extract the logical structure from the language that’s given.
  2. To do this, we look at each of the claims in the argument and we ask ourselves, is this the main point that the arguer is trying to convey, or is this being offered as a reason to accept some other claim?
  3. The claim that expresses the main point is the conclusion.
  4. The claims that are functioning as reasons to accept the main point are the premises.
  5. And finally, premises and conclusions are often flagged by the presence of indicator words. Paying attention to indicator words can really help to simplify the task of reconstructing an argument.
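
Just to show how mechanical this first pass can be, here's a rough sketch in Python of a keyword scanner for indicator words. It's a heuristic, not a parser — it can't tell an argumentative "since" from a temporal one — and the word lists are just the ones from this lecture:

    CONCLUSION_INDICATORS = ["therefore", "hence", "thus", "it follows that",
                             "as a result", "consequently", "so"]
    PREMISE_INDICATORS = ["since", "because", "for these reasons"]

    def flag_indicators(passage):
        # A crude substring scan; treat the output as hints only.
        text = passage.lower()
        for word in CONCLUSION_INDICATORS:
            if word in text:
                print("possible conclusion indicator:", repr(word))
        for word in PREMISE_INDICATORS:
            if word in text:
                print("possible premise indicator:", repr(word))

    flag_indicators("John will probably receive the next promotion "
                    "since he's been here the longest.")
    # possible premise indicator: 'since'
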
7 questions

Test your understanding of the concepts introduced in this section.

06:29

The Truth Condition

The Truth Condition is a necessary condition for an argument to be good. We stated it as the condition that all the premises of an argument have to be true. In this lecture I want to talk about what this condition amounts to in real world contexts where arguments are used to persuade specific audiences to accept specific claims.

I'm going to try to show why we actually need to modify this definition somewhat to capture what's really important in argumentation. I'll offer a modification of the definition that I think does a better job of capturing this.

Here's our current definition of the Truth Condition:

All premises must be true.

I’m going to use a simple example to illustrate the problem with this definition.

Consider this claim:

"The earth rotates on its axis once every 24 hours".

We all agree that this claim is true. It's an accepted part of our modern scientific understanding of the world.

But, say, 500 years ago, this claim would have been regarded by almost everyone as obviously false. The common understanding was that the earth does not move. Most everyone believed that the planets and everything else in the universe revolved around the earth, which was fixed at the center of the universe.

And they had good reason to believe this. When we look outside we see the moon and the sun and the stars and planets all moving around us. It certainly doesn't seem like we're all moving at hundreds of miles an hour toward the east. Around the equator it would be closer to a thousand miles an hour, faster than the speed of sound. If the earth were really rotating that fast, why don't centrifugal forces make us fly off the surface of the earth? Or why don't we experience perpetual hurricane-force winds as the rotating earth drags us through the atmosphere at hundreds of miles per hour?
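
That thousand-miles-an-hour figure, by the way, is easy to check with a quick back-of-the-envelope calculation (approximate values, sketched in Python):

    # Rotational speed at the equator: circumference divided by one day.
    equator_circumference_miles = 24_901   # approximate
    hours_per_rotation = 24

    print(equator_circumference_miles / hours_per_rotation)   # about 1,037 mph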

These are the sorts of arguments that medievals might have given, and did give, to support their contention that the earth in fact does not move. For their time, given what they knew about physics and astronomy, these would have seemed like compelling arguments.

So, for a medieval audience, any argument that employed this claim as a premise — the claim that the earth rotates once every 24 hours — or that argued for it as a conclusion, would have been judged as a bad argument, because for them the claim is clearly false.

Now, why does this situation pose a problem for our version of the Truth Condition? It poses a problem because if we read "true" as REALLY true, true IN ACTUALITY, and we think it's really true that the earth rotates, and it's really false that it's fixed at the center of the universe, then this version of the Truth Condition makes it so that no medieval person can have a good argument for their belief that the earth does not move.

And this just seems wrong. It seems like we want to say that yes, they were wrong about this, but at the time they had perfectly good reasons to think they were right.

And if they had good reasons, that means they had good arguments. But this version of the Truth Condition doesn't allow us to acknowledge that they had good arguments for their belief that the earth does not move.

So, we need to modify our phrasing of the Truth Condition so that it's sensitive to the background beliefs and assumptions of particular audiences.

This is a natural modification that does the trick:

We'll call a claim plausible (for a given audience — plausibility is always relative to a given audience even if we don't specifically say so) if that audience believes they have good reason to think it's true.

So to say that a claim is plausible for a given audience is just to say that the audience is willing to grant it as a premise, that they're not inclined to challenge it, since they think they have good reason to believe it's true.

What we're doing here is pointing out that in real-world argumentation, when someone is offering reasons for someone else to believe or accept something, an argument will only be persuasive if the target audience is willing to grant the premises being offered. Premises have to be plausible to them, not just plausible to the arguer, or some hypothetical audience.

So here are our conditions for an argument to be good, with the truth condition modified in the way that I've just suggested.

Condition one: The Truth Condition

All premises must be true (where “true” is read as “plausible”)

Condition two: The Logic Condition

The conclusion must follow from the premises.

Just to note, for the rest of these lectures I'll keep using the expression "Truth Condition", even though it's really a "plausibility” condition, just because the language of "truth" is so commonly used in logic and critical thinking texts when describing this feature of good arguments.

Just remember, when we talk about evaluating the premises of an argument to see if the argument satisfies the Truth Condition — when I give an argument and I ask, are all the premises true? — what I'm really asking is whether the intended audience of the argument would be willing to grant those premises. In other words, whether they would find those premises plausible.

A Possible Objection

Let me just wrap up with an objection to this modification that some of my students will usually offer at this point.

Some people might object that what I've done here is redefined the concept of truth into something purely relative and subjective, that I'm denying the existence of objective truth.

This isn't what I'm saying. All I'm saying is that the persuasive power of an argument isn't a function of the actual truth of its premises. It's a function of the subjective plausibility of its premises for a given audience. A premise may be genuinely, objectively true, but if no one believes it's true, then no one will accept it as a premise, and any argument that employs it is guaranteed to fail, in the sense that it won't be judged by anyone as offering good reasons to accept it.

This point doesn't imply anything about the actual truth or falsity of the claims. We can say this and still say that claims or beliefs can be objectively true or false. The point is just that the objective truth or falsity of the claims isn't the feature that plays a role in the actual success or failure of real world arguments. It's the subjective plausibility of premises that plays a role, and that's what this reading of the Truth Condition is intended to capture.

5 questions

Test your understanding of the concepts introduced in this section.

05:49

The Logic Condition

The Logic Condition is another necessary condition for an argument to be "good".

In the tutorial that introduced the notion of a "good argument" we defined the Logic Condition in very general terms: an argument satisfies the Logic Condition if the conclusion "follows from" the premises.

The main point I wanted to make in that discussion was to distinguish arguments that are bad because of false premises, from arguments that are bad because of bad logic. These are two distinct ways in which an argument can fail to provide good reasons to believe the conclusion.

But we need to say a lot more about the Logic Condition, and what it means to say that an argument has good logic.

In fact, this lecture and the next two lectures are devoted to it. The concepts that we'll be looking at are absolutely central to logic.

The Hypothetical Character of the Logic Condition

I want to highlight the hypothetical character of the Logic Condition, and how it differs from the Truth Condition.

We say that an argument satisfies the Logic Condition if the conclusion "follows from" the premises, or equivalently, if the premises "support" the conclusion.

The following two arguments give examples of good logic and bad logic, respectively.

1. All tigers are mammals.
2. Tony is a tiger.
Therefore, Tony is a mammal.

In this first argument the conclusion clearly follows from the premises. If all tigers are mammals and if Tony is a tiger then it follows that Tony is a mammal.

1. All tigers are mammals.
2. Tony is a mammal.
Therefore, Tony is a tiger.

In this second argument the conclusion doesn't follow. If all tigers are mammals and if Tony is a mammal, we can't infer that Tony is a tiger. Those premises may be true but they don't support the conclusion.

I want to draw attention to the way in which we make these kinds of judgments. In judging the logic of the argument we ask ourselves this hypothetical question:

"IF all the premises WERE true, WOULD they give us good reason to believe the conclusion?"

  • If the answer is "yes" then the logic is good, and the argument satisfies the Logic Condition.
  • If the answer is "no" then the logic is bad, and the argument doesn't satisfy the Logic Condition.

This gives us a more helpful way of phrasing the Logic Condition. An argument satisfies the Logic Condition if it satisfies the following hypothetical condition:

If the premises are all true, then we have good reason to believe the conclusion.

The key part of this definition is the hypothetical "IF".

When we're evaluating the logic of an argument, we're not interested in whether the premises are actually true or false. The premises might all be false, but that's irrelevant to whether the logic is good or bad. What matters to the logic is only this hypothetical "if". IF all the premises WERE true, WOULD the conclusion follow?

So, when evaluating the logic of an argument we just ASSUME the premises are all true, and we ask ourselves what follows from them.

This is fundamentally different from the Truth Condition, where what we're interested in is the actual truth or falsity of the premises themselves (or, as we discussed earlier, their "plausibility" or "implausibility").

The Truth Condition and the Logic Condition are focusing on very different properties of arguments.

In fact you can have arguments with all false premises that satisfy the Logic Condition. Here's an example:

1. If the moon is made of green cheese, then steel melts at room temperature.
2. The moon is made of green cheese.
Therefore, steel melts at room temperature.

This argument satisfies the Logic Condition, even though both premises are clearly false. Why? Because IF the first premise WAS true, and IF the second premise WAS true, then the conclusion WOULD follow.

In fact, this argument is an instance of a well known argument FORM that always satisfies the Logic Condition:

1. If A then B
2. A
Therefore, B

If A is true then B is true; A is true, therefore B is true. ANY argument that instantiates this argument form is going to satisfy the Logic Condition.
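
Because the Logic Condition for a FORM like this depends only on the pattern of truth values, you can check it mechanically: run through every combination of truth values for A and B and look for a row where both premises come out true while the conclusion comes out false. Here is a minimal Python sketch of that idea (the helper name is made up, and "If A then B" is encoded as the material conditional we'll meet later in the course):

    from itertools import product

    def satisfies_logic_condition(premises, conclusion):
        """Return True if no assignment of truth values makes every
        premise true while the conclusion is false (brute-force check)."""
        for a, b in product([True, False], repeat=2):
            if all(p(a, b) for p in premises) and not conclusion(a, b):
                return False  # a row with true premises and a false conclusion
        return True

    # The form: 1. If A then B   2. A   Therefore, B
    premises = [
        lambda a, b: (not a) or b,  # "If A then B" as the material conditional
        lambda a, b: a,             # "A"
    ]
    conclusion = lambda a, b: b     # "B"

    print(satisfies_logic_condition(premises, conclusion))  # prints: True

Swap the second premise for "B" and the conclusion for "A" and the same check prints False; that failed form is one we'll meet again later.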

Here's another example:

1. All actors are billionaires.
2. All billionaires are women.
Therefore, all actors are women.

Each claim in this argument is false, so it's a bad argument, but the logic is airtight. This argument fails the Truth Condition but satisfies the Logic Condition.

And like the previous example it's an instance of an argument FORM that always satisfies the Logic Condition:

1. All A are B.
2. All B are C.
Therefore, all A are C.

Alternatively, you can have arguments that have all true premises but FAIL the Logic Condition, like this one:

1. If I live in Iowa then I live in the United States.
2. I live in the United States.
Therefore, I live in Iowa.

The premises are true (at the time of writing this), but the conclusion doesn't follow, because even if they're all true it doesn't follow that I have to live in Iowa, or even that it's likely that I live in Iowa. I might live in New York or Florida or any of the other forty-nine states.

So, to summarize:

  1. We can rephrase the Logic Condition in a more helpful way by emphasizing the hypothetical character of the property that we're interested in, which is the LOGICAL RELATIONSHIP between premises and conclusion: If the premises are all true, then we have good reason to accept the conclusion.
  2. The actual truth or falsity of the premises is irrelevant to the logic of the argument.
  3. Argument analysis is a two-stage process. When we evaluate the logic of the argument we're not concerned about the actual truth or falsity of the premises. All we're concerned with is whether the conclusion follows from those premises.
  4. Once we've evaluated the logic, then we can ask whether the premises are actually true or plausible.
  5. If you confuse these steps, and make the mistake of judging the logic of an argument in terms of the truth or falsity of the premises, then you won't be able to properly evaluate an argument.
  6. This distinction, between TRUTH and LOGIC, is arguably the most important distinction for critical argument analysis.
4 questions

Test your understanding of the concepts introduced in this section.

05:29

Valid vs Invalid Arguments

An argument has to satisfy the Logic Condition in order for it to qualify as a good argument. But there are two importantly different ways in which an argument can satisfy the Logic Condition.

One way is if the argument is valid. Another way is if the argument is strong.

"Validity" and "strength" are technical terms that logicians and philosophers use to describe the logical "glue" that binds premises and conclusions together. Valid arguments have the strongest logical glue possible.

In this lecture we're going to talk about "validity" and the difference between "valid" versus "invalid" arguments. In the next lecture we'll talk about "strength" and the difference between "strong" versus "weak" arguments.

Together, these two concepts, validity and strength, will help us to specify precisely what it means for an argument to satisfy the Logic Condition.

Valid vs Invalid

We've seen valid arguments before. Recall the Tom Cruise argument:

1. All actors are robots.
2. Tom Cruise is an actor.
Therefore, Tom Cruise is a robot.

This is an example of a valid argument.

Here's the standard definition of a valid argument:

An argument is VALID if it has the following hypothetical or conditional property:

IF all the premises are true, then the conclusion CANNOT be false.

In this case we know that in fact the first premise is false (not all actors are robots) but the argument is still valid because IF the premises were true it would be IMPOSSIBLE for the conclusion to be false.

In other words, in a hypothetical world where all actors are robots, and Tom Cruise also happens to be an actor, then it's logically impossible for Tom Cruise NOT to be a robot.

THAT is the distinctive property of this argument that we're pointing to when we call it “valid” — that it's logically impossible for the premises to be true and the conclusion false. Or to put it another way, the truth of the premises guarantees the truth of the conclusion.

These are all different ways of saying the same thing. Validity is the strongest possible logical glue you can have between premises and conclusion.

Here's an example of an INVALID argument:

1. All actors are robots.
2. Tom Cruise is a robot.
Therefore, Tom Cruise is an actor.

The first premise is the same, "All actors are robots". But the second premise is different. Instead of assuming that Tom Cruise is an actor, we're assuming that Tom Cruise is a robot.

Now, if these premises are both true, does it follow that Tom Cruise HAS to be an actor? No, it does not follow. It would follow if we said that ONLY actors are robots, but the first premise doesn't say that.

All we can assume is that in this hypothetical world, anyone in the acting profession is a robot, but robots might be doing lots of different jobs besides acting. They might be mechanics or teachers or politicians or whatever. So in this hypothetical world the fact that Tom Cruise is a robot doesn't guarantee that he's also an actor.

And THAT is what makes this an invalid argument.

An argument is INVALID just in case it's NOT VALID.

What this means is that even if all the premises are true, it's still possible for the conclusion to be false. The truth of the premises doesn't guarantee the truth of the conclusion.

That's ALL it means to call an argument "invalid".

In particular, it doesn't imply that the argument is bad. As we'll see in the next lecture, invalid arguments can still be good arguments. Even if they don't guarantee the conclusion they can still give us good reasons to believe the conclusion, so they can still satisfy the Logic Condition.

But like I said, we'll talk more about this later.

A Cautionary Note About the Terminology

I'll end with a cautionary note about this terminology.

We're using the terms "valid" and "invalid" in a very specific technical sense that is commonly used in logic and philosophy but not so common outside of these fields.

As we all know in ordinary language the word "valid" is used in a bunch of different ways. Like when we say "that contract is valid", meaning something like the contract is “legally legitimate” or that it's “executed with proper legal authority”.

Or when we say "You make a valid point", we mean that the point is “relevant” or “appropriate”, or it has some justification behind it.

These are perfectly acceptable uses of the term "valid". But I just want to emphasize that this isn't how we're using the term in logic when we're doing argument analysis. It's important to keep the various meanings of "valid" and "invalid" distinct so there's no confusion.

Note for example that when we use the terms valid and invalid in logic we're talking about properties of whole arguments, not of individual claims.

If we're using the terms in the way we've defined them in this tutorial then it makes NO SENSE to say that an individual premise or claim is valid or invalid.

Validity is a property that describes the logical relationship between premises and conclusions. It's a feature of arguments taken as a whole. Still, it's very common for students who are new to logic to confuse the various senses of valid and invalid, and make the mistake of describing a premise as invalid when what they mean is simply that it's false or dubious.

So that's just a cautionary note about the terminology. If you keep the logical definition clear in your mind then you shouldn't have a problem.

5 questions

Test your understanding of the concepts introduced in this section.

06:38

Strong vs Weak Arguments

There are two importantly different ways in which an argument can satisfy the Logic Condition. One way is if the argument is VALID. Another way is if the argument is STRONG. We've talked about validity. Now let's talk about strength.

Here's an argument:

1. All humans have DNA.
2. Pat is human.
Therefore, Pat has DNA.

This is a valid argument. If the premises are true the conclusion can't possibly be false.

Now take a look at this argument:

1. 50% of humans are female.
2. Pat is human.
Therefore, Pat is female.

The percentage isn't exact, but we're not interested in whether the premises are actually true. We're interested in whether, if they were true, the conclusion would follow.

In this case the answer is clearly NO. Knowing that Pat is human doesn't give any good reason to think that he or she is female.

This is an example of an argument that does NOT satisfy the Logic Condition.

Now take a look at this argument:

1. 90% of humans are right-handed.
2. Pat is human.
Therefore, Pat is right-handed.

This argument is different. In this case the premises make it very likely — 90% likely — that the conclusion is true. They don't guarantee that Pat is right-handed, but we might still want to say that they provide good reasons to think that Pat is right-handed.

And if that's the case then we should say that this argument satisfies the Logic Condition. Because it has the property that, if all the premises are true, they give us good reason to believe the conclusion.

This difference is what the distinction between weak and strong arguments amounts to.

  • The 50% argument is what we call a logically WEAK argument. It does not satisfy the Logic Condition and so it can't be a good argument.
  • The 90% argument is a logically STRONG argument. It does satisfy the Logic Condition so it can be a good argument.

This is what distinguishes these arguments, but note what they have in common. They're both logically INVALID.

In a valid argument, if the premises are true, the conclusion can't possibly be false. Neither of these arguments gives us that guarantee. They're both fallible inferences. Even if the premises are true you could still be wrong about the conclusion.

The difference is that in a STRONG argument the premises make the conclusion VERY LIKELY true. A WEAK argument doesn't even give us this.

How Strong Does the Inference Have to Be to Satisfy the Logic Condition?

Now these examples immediately raise an important question:

HOW strong does the inference have to be for the argument to satisfy the Logic Condition and qualify as a strong argument?

To put it another way, with what probability must the conclusion follow from the premises for the argument to qualify as strong?

50% is clearly too weak. 90% is clearly strong enough. But where's the cut-off? What's the threshold that the strength of the logical inference has to meet to count as satisfying the Logic Condition?

Well, it turns out that there is no principled answer to this question.

The distinction between valid and invalid arguments is a sharp one. Every argument is either valid or invalid. There are no "degrees" of validity. Validity is like pregnancy — you can't be almost pregnant or a little bit pregnant.

The distinction between strong and weak arguments, on the other hand, is a matter of degree. It does make sense to say that an argument is very strong, or moderately strong, or moderately weak or very weak.

But the threshold between weak and strong arguments isn't fixed or specified by logic. It is, in fact, a conventional choice that we make. We decide when the premises provide sufficient evidence or reason to justify accepting the conclusion. There are no formal principles of logic that make this decision for us.

This is actually a big topic. It needs a lot more space to properly discuss (it really belongs in a course on inductive and scientific reasoning).
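
If it helps to see how conventional that choice is, here is a toy sketch: logic gives us the comparison, but the cutoff itself is a parameter we stipulate (the 0.9 below is an arbitrary choice for illustration, not a standard):

    def classify_strength(probability, threshold=0.9):
        """Label a fallible inference 'strong' or 'weak', given the
        probability the premises confer on the conclusion. The threshold
        is a conventional choice, not something logic fixes for us."""
        return "strong" if probability >= threshold else "weak"

    print(classify_strength(0.90))  # strong
    print(classify_strength(0.50))  # weak
    print(classify_strength(0.75))  # weak here, but only because we chose 0.9 as the cutoff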

Valid, Strong and Weak Argument Forms

There are some common argument forms that people generally recognize as valid, strong or weak, and it's helpful to know them. Here are simple examples of each.

VALID:

1. ALL A are B.
2. x is an A.
Therefore, x is a B.

An example of a valid argument of this form is

1. All actors are robots.
2. Tom is an actor.
Therefore, Tom is a robot.

STRONG:

If we change "ALL" to "MOST" we get an invalid but strong argument:

1. Most A are B.
2. x is an A.
Therefore, x is a B.

Here’s an instance of this argument form:

1. Most actors are robots.
2. Tom is an actor.
Therefore, Tom is a robot.

The conclusion doesn't follow with certainty, but we're stipulating that "most" means "enough to make it reasonable to believe the conclusion".

WEAK:

If we switch from "most" to "some" we get a weak argument form:

1. Some A are B.
2. x is an A.
Therefore, x is a B.

Here’s an instance of this argument form:

1. Some actors are robots.
2. Tom is an actor.
Therefore, Tom is a robot.

"Some actors are robots" doesn't even guarantee 50-50 odds. The way this term is commonly used in logic, "some" just means that AT LEAST ONE actor is a robot.

Summary

These definitions summarize what we've seen so far:

  • VALID: If all the premises are true, the conclusion follows with certainty.
  • STRONG: If all the premises are true, the conclusion follows with high probability.
  • WEAK: If all the premises are true, the conclusion follows neither with certainty nor with high probability.

Validity, strength and weakness are logical properties of arguments that characterize the logical relationship between the premises and the conclusion.

Both valid and strong arguments satisfy the Logic Condition for an argument to be good. Weak arguments fail to satisfy the Logic Condition and so are automatically ruled out as bad.

7 questions

Test your understanding of the concepts introduced in this section.

01:57

What is a Good Argument (II)?

Now that we’ve discussed the Truth Condition and the Logic Condition in more detail, we can state the conditions for an argument to be good with more precision than in our first attempt.

The Concept of a Good Argument

The basic concept is straightforward:

An argument is good if it offers its intended audience good reasons to accept the conclusion.

What we’ve been trying to do in this lecture series is clarify what we mean by “good reasons”. We broke it down into two different components, one relating to the truth of the premises, another relating to the logical relationship between the premises and the conclusion.

First Pass

Our first pass at a clarification of what “good reasons” means was stated in terms of two necessary conditions that must be fulfilled for the argument to count as good:

The Truth Condition:

The argument has all true premises

The Logic Condition:

The conclusion follows from the premises.

This is helpful, but it needed further analysis to clarify what we mean by “true” and what we mean by “follows from”.

Second Pass

The last four lectures have introduced a set of concepts that let us articulate more clearly the conditions that an argument must satisfy for it to count as good:

The Truth Condition:

All the premises are plausible (to the intended audience)

The Logic Condition:

The argument is either valid or strong.

This is much more specific, and for purposes of argument analysis, much more helpful.

We use “plausible” to highlight the fact that a given premise might be regarded as true by one audience but as false by another, and what we want are premises that are regarded as true by the target audience of the argument. A plausible premise is one where the audience believes it has good reason to think it’s true, and so is willing to grant it as a premise. This helps to distinguish a plausible premise from a reading of “true premise” that’s defined in terms of correspondence with the objective facts. We’d like our premises to be true in this sense, but what really matters to argument evaluation is whether they’re regarded as plausible or not by the intended audience of the argument.

We use “valid” and “strong” to help specify precisely what we mean when we say that the conclusion follows from the premises. A valid argument is one where the conclusion follows with absolute certainty from the premises, where the truth of the premises logically necessitates the truth of the conclusion. A strong argument is one where the conclusion follows not with absolute certainty, but with some high probability. Together, these help to clarify what we mean when we say that an argument satisfies the Logic Condition.

Altogether, these definitions of plausibility, validity and strength give us a helpful set of tools for assessing the quality of arguments.

4 questions

Test your understanding of the concepts introduced in this section.

02:18

Deductive Arguments and Valid Reasoning

In this final series of lectures we’re going to look at the distinction between deductive arguments and inductive arguments, and see how they relate to the concepts of validity and strength that we’ve previously introduced.

Both of these terms, “deductive” and “inductive”, have a life outside of their usage in logic, and they can be used in different ways so it’s helpful to be familiar with the various ways they’re used.

In this lecture we’ll look at the relationship between deduction and valid arguments.

Deduction and Valid Reasoning

In ordinary logic, the term “deductive argument” or “deductive inference” is basically a synonym for “valid argument” or “valid inference”. The terms are often used interchangeably.

However, it’s also common to describe an argument as a deductive argument even if the argument fails to be valid.

For example, someone might give an argument like this one:

1. If the match is burning then there is oxygen in the room.
2. The match is not burning.
Therefore, there is no oxygen in the room.

and they might intend for this argument to be valid. They believe the conclusion follows with certainty from the premises.

But in this case they’ve made a mistake — this argument isn’t valid, it’s invalid. Just because the match isn’t burning it doesn’t follow that there’s no oxygen in the room.

So this is an invalid argument, but it was intended as a valid argument. In this case, we’ll still want to call it a deductive argument, but it’s a failed deductive argument, a deductive argument that is guilty of a formal fallacy, a mistake in reasoning.

So, while the terms “deductive” and “valid” are sometimes used interchangeably, they aren’t strict synonyms.

  • When you describe an argument as valid you’re saying something about the logic of the argument itself.
  • When you describe an argument as deductive you’re saying something about the conscious intentions of the person presenting the argument, namely, that they are intending to offer a valid argument.

You need to draw this distinction in order for it to be meaningful to say things like “this is a valid deductive argument” or “this is an invalid deductive argument”, which is a pretty common thing to say in logic.

2 questions

Test your understanding of the concepts introduced in this section.

01:41

Inductive Arguments and Strong Reasoning

In logic there’s a close relationship between deductive and valid arguments, and there’s a similar relationship between inductive and strong arguments.

In standard logic, the term “inductive argument” basically means “an argument that is intended to be strong rather than valid”.

So, when you give an inductive argument for a conclusion, you’re not intending it to be read as valid. You’re acknowledging that the conclusion doesn’t follow with certainty from the premises, but you think the inference is strong, that the conclusion is very likely true, given the premises.

Here’s an example of a strong argument:

1. Most Chinese people have dark hair.
2. Julie is Chinese.
Therefore, Julie has dark hair.

We would call this an inductive argument because it’s obvious that the argument is intended to be strong, not valid. Since the argument is in fact strong, it counts as a successful inductive argument.

And as with deductive arguments, we also want to be able to talk about FAILED inductive arguments, arguments that are intended to be strong but are in fact weak.

Like this one:

1. Most Chinese people have dark hair.
2. Julie has dark hair.
Therefore, Julie is Chinese.

Here we’re supposed to infer that, simply because Julie has dark hair, she’s probably Chinese. This is a weak argument.

But we still want to call it an inductive argument if the intention was for it to be strong. In this case the word “most” indicates that the inference is intended to be strong rather than valid. We would call this a WEAK inductive argument.

So the terms “strong” and “inductive” have a relationship similar to the terms “valid” and “deductive”.

  • To call an argument STRONG is to say something about the logical properties of the argument itself (that if the premises are true, the conclusion is very likely true).
  • To call an argument INDUCTIVE is to say something about the INTENTIONS of the arguer (that the argument is intended to be strong).
3 questions

Test your understanding of the concepts introduced in this section.

09:41

Inductive Arguments and Scientific Reasoning

In the terminology of standard textbook logic an inductive argument is one that's intended to be strong rather than valid. But this terminology isn't necessarily standard outside of logic.

In the sciences in particular there is a more common reading of induction that means something like "making an inference from particular cases to the general case".

In this lecture we're going to talk about this reading of the term and how it relates to the standard usage in logic, and the role of inductive reasoning in the sciences more broadly.

The Standard Scientific Meaning of “Induction”

In the sciences the term "induction" is commonly used to describe inferences from particular cases to the general case, or from a finite sample of data to a generalization about a whole population.

Here's the prototype argument form that illustrates this notion of induction:

1. a1 is B.
2. a2 is B.
...
n. an is B.
Therefore, all A are B.

You note that some individual of a certain kind, a1, has a property B. An example would be "This swan is white".

Then you note that some other individual of the same kind, a2, has the same property — "This OTHER swan is white".

And you keep doing this for all the individuals available to you. THIS swan is white, and THIS swan is white, and THIS SWAN OVER THERE is white, and so on.

So you've observed n swans, and all of them are white. From here the inductive move is to say that ALL swans, EVERYWHERE are white. Even the swans that you haven't observed and will never observe.

This is an example of an inductive generalization.

Arguments of this form, or that do something similar — namely, infer a general conclusion from a finite sample — exemplify the way the term "induction" is most commonly used in science.

Another example that illustrates inductive reasoning in this sense is the reasoning involved in inferring a functional relationship between two variables based on a finite set of data points.

Let's say you heat a metal rod. You observe that it expands the hotter it gets. So for various temperatures you plot the length of the rod against the temperature. You get a spread of data points that looks like this:

[Figure: scatter plot of the rod's length against temperature]
What you may want to know, though, is how length varies with temperature generally, so that for any value of temperature you can then predict the length of the rod.

To do that you might try to draw a curve of best fit through the data points, like so:

[Figure: straight line of best fit drawn through the data points]
It looks like a pretty linear relationship, so a straight line with a slope like this seems like it'll give a good fit to the data.

Given the equation for this functional relationship, you can now plug in a value for the temperature and derive a value for the length.

The equation for the straight line is an inductive generalization you've inferred from the finite set of data points.

The data points in fact are functioning as premises and the straight line is the general conclusion that you're inferring from those premises.

When you plug in a value for the temperature and derive a value for the length of the rod based on the equation for the straight line, you're deriving a prediction about a specific event based on the generalization you've inferred from the data.
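
Here is a small Python sketch of that two-step pattern, using numpy's polyfit to infer the straight line and then plugging in an unobserved temperature. The measurements are invented for illustration:

    import numpy as np

    # Invented (temperature, length) measurements for the rod
    temperature = np.array([10.0, 20.0, 30.0, 40.0, 50.0])        # degrees Celsius
    length = np.array([100.02, 100.04, 100.07, 100.09, 100.11])   # centimetres

    # Inductive step: infer a general linear relationship from the finite sample
    slope, intercept = np.polyfit(temperature, length, deg=1)

    # Predictive step: apply the generalization to an unobserved case
    new_temp = 35.0
    print(f"length ≈ {slope:.5f} * T + {intercept:.2f}")
    print(f"predicted length at {new_temp} C: {slope * new_temp + intercept:.3f} cm")

The data points play the role of premises; the fitted line is the inductive generalization; the prediction is what you then derive from it.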

This example illustrates just how common inductive generalizations are in science, so it's not surprising that scientists have a word for this kind of reasoning.

In fact, the language of induction used in this sense can be traced back to people like Francis Bacon who back in the 17th century articulated and defended this kind of reasoning as a general method for doing science.

Scientific vs Logical Senses of Induction

So how does this kind of reasoning relate to the definition of induction used in logic?

Recall, in standard logic an argument is inductive if it's intended to be strong rather than valid.

The key thing to note is that this is a much broader definition than the one commonly used in the sciences. That definition focuses on arguments of a specific form, those where the premises are claims about particular things or cases, and the conclusion is a generalization from those cases.

But if you take the standard logical definition of an inductive argument you find that many different kinds of arguments will qualify as inductive, not just arguments that infer generalizations from particular cases.

So, for example, on the logical definition, a prediction about the future based on past data will count as an inductive argument.

1. The sun has risen every day in the east for all of recorded history.
Therefore, the sun will rise in the east tomorrow.

The sun has risen every day for as long as the earth has existed, as far as we know. So we expect the sun to rise tomorrow as well.

This is an inductive argument on our definition because we acknowledge that even with this reliable track record it's still possible for the sun to not rise tomorrow. Aliens, for example, might blow up the earth or the sun overnight.

So this inference from the past to the future is inductive, and most of us would say that it's a strong inference. But notice that it's not an argument from the particular to the general. The conclusion isn't a generalization, it's a claim about a particular event, the rising of the sun tomorrow.

So this kind of argument wouldn't count as inductive under the standard science definition, but it does count under the standard logical definition.

Here's a second example that illustrates the difference:

1. 90% of human beings are right-handed.
2. John is a human being.
Therefore, John is right-handed.

Notice that the main premise is a general claim, while the conclusion is a claim about a particular person. On the standard science definition this isn't an inductive argument, since it's moving from the general to the particular rather than from the particular to the general. But on the logical definition of induction this argument does count, since the argument is intended to be strong, not valid.

The relationship between the two definitions is a relation of set to subset. The arguments that qualify as inductive under the standard science definition are a subset of the arguments that qualify as inductive under the standard logical definition.

So from a logical point of view there's no problem with calling an inference from the particular to the general an inductive argument since all such arguments satisfy the basic logical definition.

But scientists are sometimes confused when they see the term "induction" used to describe other forms of reasoning than the ones they normally associate with inductive inferences. There shouldn't be any confusion as long as you keep the two senses in mind and distinguish them when it's appropriate.

But if you don't distinguish them then you may run into discussions like this one that contradict themselves. Below are the first two sentences of the Wikipedia entry on "induction" at the time this lecture was first written (2010):

Induction or inductive reasoning, sometimes called inductive logic, is the process of reasoning in which the premises of an argument are believed to support the conclusion but do not entail it, i.e. they do not ensure its truth. Induction is a form of reasoning that makes generalizations based on individual instances.

The first sentence presents the standard logical definition — inductive reasoning is defined as strong reasoning, reasoning that doesn't guarantee truth. The second sentence presents the standard science definition of induction, defining it as reasoning from the particular to the general.

Later on in the article the authors present a number of examples of inductive arguments that satisfy the logical definition but not the scientific definition, such as inferences from correlations to causes, or predictions of future events based on past events, and so on. These examples flat out contradict the definition of induction in the second sentence.

Summary

Now let's summarize some key points of this discussion.

First, we should be aware that there is a difference between the way the term "induction" is defined in general scientific usage and the way it's defined in logic. The logical definition is much broader — it's basically synonymous with "non-deductive" inference. The scientific usage is narrower, and focuses on inferences from the particular to the general.

Second, induction in the broader logical sense is fundamental to scientific reasoning in general. Inductive reasoning is risky reasoning, it's fallible reasoning, where you're moving from known facts about observable phenomena, to a hypothesis or a conclusion about the world beyond the observable facts.

The distinctive feature about this reasoning is that you can have all the observable facts right, and you can still be wrong about the generalizations you draw from those observations, or the theoretical story you tell to try to explain those observations. It's a fundamental feature of scientific theorizing that it's revisable in light of new evidence and new experience.

It follows from this observation that scientific reasoning is broadly speaking inductive reasoning — that scientific arguments should aim to be strong rather than valid, and that it's both unrealistic and confused to expect them to be valid.

Disciplines that trade in valid arguments and valid inferences are fields like mathematics, computer science and formal deductive logic. The natural and social sciences, on the other hand, deal with fallible, risky inferences. They aim for strong arguments.

4 questions

Test your understanding of the concepts introduced in this section.

Section 8: Basic Concepts in Propositional Logic
Basic Concepts in Propositional Logic - PDF Ebook
Article
05:55

You can’t get very far in argument analysis without learning some basic concepts of propositional (also known as “sentential”) logic. This is where, for example, you learn the proper meaning of terms like "contradiction" and "consistent", which should be part of everyone's logical vocabulary.

In this course I review these basic concepts, but I don’t spend time working out formal proofs in propositional logic, as you would in a full course in symbolic logic. My interest here is in presenting and explaining those concepts that in my view are important for doing ordinary, everyday argument analysis.

In particular, many of these concepts are central to discussions of formal and informal fallacies. Many fallacies can only be understood if you first understand these basic concepts of propositional logic.

I should point out that if there is one skill that is essential to master for the LSAT (the Law School Admission Test), it’s the ability to reason correctly with conditional statements, statements that are logically equivalent to the form “If A then B”. This topic is covered thoroughly in Part 4 of this unit.

I don't have a separate unit on categorical logic (or "Aristotelian" logic, as it's sometimes called), but the Appendix to this course covers the basic difference between categorical and propositional logic, and how to interpret the basic categorical statement forms, "All A are B", "Some A are B" and "Only A are B". These are closely related to propositional statement forms covered earlier in the unit.

04:25

Conjunctions (A and B)

A conjunction is a compound claim formed by, as the name suggests, conjoining two or more component claims. The component claims are called the “conjuncts”.

The logic of conjunctions is pretty straightforward.

Here’s a simple conjunction pertaining to my preferences for pizza toppings:

Let A stand for the claim "I love pepperoni".

Let B stand for the claim "I hate anchovies".

Then the conjunction of A and B is the compound claim

“I love pepperoni AND I hate anchovies”.

We want to know the conditions under which the conjunction as a whole is true or false.

In this case it’s pretty obvious. The conjunction “A and B” is true just in case each of the conjuncts is true. If either one is false then the conjunction as a whole is false.

Truth Table for Conjunctions

It’s sometimes handy to represent the logic of compound claims with a table that gives the truth value for the compound claim for every possible combination of truth values of the component claims.

For conjunctions the “truth table” looks like this:

A      and      B
T       T       T
T       F       F
F       F       T
F       F       F

Under each of the conjuncts we list all the possible truth values in such a way that each row represents a distinct logical combination of truth values. In the first row, A is true and B is true. In the second row, A is true and B is false.

In the third row, A is false and B is true, and in the last row, A is false and B is false.

This exhausts all the possible combinations of truth values.

The middle column under the “and” represents the truth value of the conjunction taken as a whole, “A and B”, as a function of the truth values for A and B in the adjacent row.


So, in the first row, we see that if both A and B are true, then the conjunction as a whole is true. But for every other combination of truth values, where at least one of the conjuncts is false, the conjunction as a whole is false.

We’ll use truth tables like this one to represent the logic of all the compound forms we’ll be looking at.

Conjunctions in Ordinary Language

Knowing the logic of the conjunction doesn’t help much if you can’t recognize when a conjunction is being used in ordinary language. Here are a few things to look out for.

“John is a Rolling Stones fan and John is a teacher.”

“John is a Rolling Stones fan and a teacher.”

In the first sentence the conjunctive form is transparent. Each of the conjuncts, "John is a Rolling Stones fan" and "John is a teacher", show up as complete sentences on either side of the “and”.

But the second sentence represents the very same conjunction as the first sentence. The syntax is different, but from the standpoint of propositional logic, the semantics, the meaning of the sentence, is exactly the same as the first sentence. It’s implicit that “a teacher” is a predicate term that takes “John” as the subject. Don’t make the mistake of reading a sentence like this as a simple, non-compound claim.

Also, conjunctions don’t always use the word “and” to flag the conjunction. In ordinary language there may be a subtle difference in meaning between this sentence using “but” ...

“John is a Rolling Stones fan BUT he doesn’t like The Who.”

... and the same sentence using “and”, but from the standpoint of propositional logic, where all we care about is whether the sentence is true or not, and how the truth of the sentence depends on the truth of any component claims, this sentence still represents a conjunction. The “but” doesn’t make any difference from this perspective. This sentence is still true just in case John is a Rolling Stones fan AND John doesn’t like The Who.

There are other words that sometimes function to conjoin claims together, like “although”, “however”, and “yet”. You can substitute all of these for “but” in this sentence and you’ll get slight variations in the sense of what’s being said, but from the standpoint of propositional logic all of these represent the same conjunction, a claim of the form “A and B”.

One last point. Conjunctions can have more than two component claims. A claim like

“A and B and C and D and E”

might represent a compound claim like,

“John is a writer and a director and a producer of The Simpsons TV show, but he’s also a stand-up comic and an accomplished violinist”.

This is still a conjunction, and it follows the same rules as any conjunction, namely, it’s true as a whole just in case all those component claims are true, and false otherwise.

This is about all you need to know about the logic of conjunctions and how conjunctions are expressed in ordinary language.

03:53

Disjunctions (A or B)

You form a conjunction when you assert that two or more claims are all true at the same time. You form a disjunction when you assert that AT LEAST ONE of a set of claims is true.

“John is at the movies or John is at the library.”

In this case we’ve got two claims, “John is at the movies” and “John is at the library”. The disjunction asserts that at least one of these is true: John is either at the movies or he’s at the library.

The individual claims that make up a disjunction are called the “disjuncts”. So, you’d say that in this case “A” and “B” are the disjuncts, and the disjunction is the whole claim, “A or B”.

We’re interested in the conditions in which the disjunction as a whole counts as true or false.

Inclusive and Exclusive Disjunctions

There are two kinds of cases we need to distinguish. Both of the sentences below express disjunctions. You can represent these as claims of the form “A or B”.

  1. “A triangle can be defined as a polygon with three sides or as a polygon with three vertices.”
  2. “The coin landed either heads or tails.”

But there’s a difference between the first sentence and the second sentence. With the first sentence, both disjuncts can be true at the same time; they’re not mutually exclusive. You can define a triangle as a polygon with three sides, or as a polygon with three vertices, and both are equally good definitions.

With the second sentence it’s different. A coin toss is either heads or tails, it can’t be both. So the “or” expressed in the bottom sentence is more restrictive than the “or” expressed in the top sentence.

  • The “or” in the first sentence is called an “inclusive OR”. It includes the case where both disjuncts are true.
  • The “or” in the second sentence is called an “exclusive OR”. It excludes the case where both disjuncts are true.

When examining arguments that use “OR” you need to know what kind of OR you’re dealing with, an inclusive OR or an exclusive OR, because the logic is different.

Truth Tables for Disjunctions

Here are the truth tables for the inclusive OR and the exclusive OR:

Inclusive OR:

A      or      B
T      T       T
T      T       F
F      T       T
F      F       F

Exclusive OR:

A      or      B
T      F       T
T      T       F
F      T       T
F      F       F
With the inclusive OR, if either A or B is true, then the disjunction as a whole is true. The only case where it’s false is if both A and B are false.

The truth table for the exclusive OR is exactly the same except for the first row, where both A and B are true. The exclusive OR says that A and B can’t both be true at the same time, so for this combination the disjunction is false.
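
As a quick illustration of the difference, note that Python's built-in "or" behaves as the inclusive OR, while "!=" on booleans behaves as the exclusive OR:

    from itertools import product

    print("A      B      inclusive OR   exclusive OR")
    for a, b in product([True, False], repeat=2):
        print(f"{a!s:<6} {b!s:<6} {(a or b)!s:<14} {a != b}")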

You’re using an exclusive OR when you say things like, “The die came up either a six or a two”, “The door is either open or shut”, “I’m either pregnant or I’m not pregnant”, “I either passed the course or I failed the course.”

On the other hand, if a psychic predicts that you will either come into some money or meet a significant new person in the next month, that’s probably an inclusive OR, since they’re probably not excluding the possibility that both might happen.

But sometimes it’s hard to know whether an OR is intended to be inclusive or exclusive, and in those cases you might need to ask for clarification if an argument turns on how you read the “OR”.

I’ll finish here with the same point I made about conjunctions, namely that a string of disjunctions like this — “A or B or C or D or E” — is still a disjunction.

When I see strings like this I think of detective work, where you’re given a set of clues and you’ve got to figure out who did it, and the string is a list of suspects, and with each new clue or bit of evidence you systematically eliminate one of the options until you’re left with the killer. You see this reasoning with medical diagnosis or forensic research, and hypothesis testing in general; they all exploit the logic of disjunctive statements.
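
Here is a toy sketch of that eliminative pattern, reading the string as an exclusive disjunction over a list of suspects (the names are invented):

    # "A or B or C or D or E", read exclusively: exactly one suspect did it.
    # Each piece of evidence eliminates one disjunct; whoever remains is the killer.
    suspects = {"Ava", "Ben", "Cal", "Dee", "Eli"}
    for cleared in ["Ben", "Dee", "Ava", "Eli"]:
        suspects.discard(cleared)
    print(suspects)  # {'Cal'}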

That’s it for the logic of disjunctions!

07:02

Conditionals (If A then B)

Conditionals are claims of the form “If A is true, then B is true”. We use and reason with conditionals all the time.

There is a lot that can be said about conditionals. In this lecture we’re just going to give the basic definition and the truth table for the conditional. In Part 4 of this course I’ll come back to conditionals and say a few more things about the different ways in which we express conditional relationships in language.

The Parts of a Conditional: Antecedent and Consequent

Here’s a conditional:

“If I miss the bus then I’ll be late for work”.

It’s composed of two separate claims, “I miss the bus”, and “I’ll be late for work.”

The conditional claim is telling us that if the first claim is true, then the second claim is also true.

We have names for the component parts of a conditional. The first part, the claim that comes after the “if”, is called the antecedent of the conditional. The second part, the claim that comes after the “then”, is called the consequent of the conditional.

The names are a bit obscure but they do convey a sense of the role that the claims are playing. What “antecedes” is what “comes before”. The “consequent” is a “consequence” of what has come before.

The names are handy to know because they’re used in translation exercises where you’re asked to express a bit of natural language as a conditional, and they’re used to identify the most common logical fallacies that are associated with conditional arguments.

One of these fallacies, for example, is called affirming the consequent. You commit this fallacy when you’re given a conditional like this and assume, from the fact that I was late for work, that I must have missed the bus. You’re affirming the consequent and trying to infer the antecedent. This is an invalid inference, and the name for the fallacy, which you’ll find in any logic or critical thinking textbook, is “affirming the consequent”.
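
You can exhibit the mistake directly: set the antecedent false and the consequent true, and both premises of the fallacious form come out true while the conclusion comes out false. A quick check in Python, again encoding "If A then B" as the material conditional defined later in this lecture:

    # Affirming the consequent: 1. If A then B   2. B   Therefore, A
    a, b = False, True              # I didn't miss the bus, but I was late anyway
    premise_1 = (not a) or b        # "If A then B": True
    premise_2 = b                   # "B": True
    conclusion = a                  # "A": False
    print(premise_1, premise_2, conclusion)  # True True False, so the inference is invalid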

The Logic of Conditionals

It’s pretty easy to understand what conjunctions and disjunctions assert, but it’s not quite as easy to see exactly what it is that you’re asserting when you assert a conditional.

Here’s a conditional: “If I drive drunk then I’ll get into a car accident”.

Question: If I assert that this conditional claim is true, am I asserting that I’m driving drunk?

No, I’m not.

Am I asserting that I’m going to get into a car accident?

No, I’m not asserting that either.

When I assert “if A then B”, I’m not asserting A, and I’m not asserting B. What I’m asserting is a conditional relationship, a relationship of logical dependency between A and B. I’m saying that if A were true, then B would also be true. But I can say that without asserting that either A or B is in fact true.

It follows that a conditional can be true even when both the antecedent and the consequent are false. Here’s an example:

“If I live in Beijing then I live in China.”

I don’t live in Beijing, and I don’t live in China, so both the antecedent and the consequent are false, but this conditional is clearly true: if I did live in the city of Beijing, I would live in China.

Setting up the Truth Table

Shortly we’re going to look at the truth table for the conditional, which gives you the truth value of the conditional for every possible combination of truth values of A and B. The easiest way to understand that truth table is to think about the case where we would judge a conditional to be false.

Here’s a conditional:

“If I study hard then I’ll pass the test.”

Under what conditions would we say that this conditional claim is false?

Let’s consider some possibilities. Let’s say I didn’t study hard, but I still passed the test. Here the antecedent is false but the consequent is true. In this case, would the conditional have been false?

Well, no. The conditional could still be true in this case. What it says is that if I study hard then I’ll pass. It doesn’t say that the only way I’ll pass is if I study hard. So, my failing to study hard and still passing doesn’t falsify the conditional.

So, this combination of truth values (antecedent false, consequent true) does not make the conditional false.

Now, what about this case? I didn’t study hard and I didn’t pass the test. Here, both the antecedent and the consequent are false.

This clearly doesn’t falsify the conditional. If anything this is what you might expect would happen if the conditional was true and the test was hard!

So this combination of truth values (antecedent false, consequent false) doesn’t make the conditional false either.

Now let’s look at a third case: I studied hard but I didn’t pass the test. Here, the antecedent is true but the consequent is false.

Under these conditions, could the conditional still be true?

No, it can’t. THESE are the conditions under which a conditional is false, when the antecedent is true but the consequent turns out to be false.

If I studied hard and didn’t pass, this is precisely what the conditional says won’t happen.

So, in general, a conditional is false just in case the antecedent is true and the consequent is false.

This turns out to be the ONLY case when we want to say with certainty that a conditional claim is false. All other combinations of truth values are consistent with the conditional being true.

This, now, gives us the truth table for the conditional.

Truth Table for Conditionals


The arrow has the virtue that it gives a nice visual cue about the direction of the logical dependency between the antecedent and the consequent. The conditional asserts that if A is true then you can infer B, it doesn’t go the other way, it doesn’t say that if B is true then you can infer A.For the sake of consistency and familiarity with the truth tables for the conjunction and the disjunction, I’ve placed the arrow symbol in between the A and the B to represent the conditional operator, where “A → B” means “If A then B”. Actually in formal symbolic logic an arrow is often used as the symbol for the conditional. It’s also common in formal logic to use a sideways horseshoe symbol for the conditional, but there’s no need to complicate things more than they are.

Notice that the second row gives the only combination of truth values that makes the conditional false, when A is true and B is false. For all other combinations the conditional counts as true. This definition doesn’t always give intuitive results about how to interpret language that uses conditionals, but for purposes of doing propositional logic it gets it right in all the cases that matter.
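
Here is the same table generated in Python, with the conditional encoded as (not a) or b:

    from itertools import product

    # Truth table for the material conditional "A -> B"
    print("A      B      A -> B")
    for a, b in product([True, False], repeat=2):
        print(f"{a!s:<6} {b!s:<6} {(not a) or b}")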

That’s it for now for the logic of the conditional. We’ll come back to conditionals when we talk about contradictories, and in Part 4 of this course, where we look at different ways we express conditionals in ordinary language.

8 questions

Review the concepts introduced in this section.

02:34

In Part 2 of this series of tutorials we’re going to look at the concept of the contradictory of a claim, distinguish it from the contrary of a claim, define what a contradiction is, and introduce the concepts of consistency and inconsistency when applied to a set of claims.

1. Contradictories (not-A)

Let A be the claim “John is at the movies”.

The contradictory of A is defined as a claim that always has the opposite truth value of A. So, whenever A is true, the contradictory of A is false, and whenever A is false, the contradictory of A is true.

There are a couple of different ways that people write the contradictory of A. We’re going to write it in English as “not-A”. But in textbooks and websites that treat logic in a more formal way you’ll likely see “not-A” written with a tilde (~A) or as a corner-of-a-rectangle shape (¬A).

What does the contradictory assert? It asserts that the claim A is false. There are a couple of ways of saying this, some more natural than others.

You can read not-A as “A is false”, i.e.

“‘John is at the movies’ is false”,

or

“It is not the case that John is at the movies”.

But the most natural formulation is obviously

“John is NOT at the movies”.

For simple claims like this it’s not too hard to find a natural way of expressing the contradictory. For compound claims, like conjunctions or disjunctions or conditionals, finding the contradictory isn’t so simple, and sometimes we have to revert to more formal language to make sure we’re expressing the contradictory accurately. In Part 3 we’ll spend some time looking at the contradictories of compound claims.

Here’s the truth table for the contradictory. Pretty simple. Whenever A is true, not-A is false, and vice versa.

A      not-A
T        F
F        T

The definition is simple, but the concept is important, and it isn’t trivial when you’re looking at real-world arguments involving more complex claims. For example, when you’re debating an issue it’s important that all parties understand what it means for the claim at issue to be true and what it means for it to be false, so that everyone understands what sorts of evidence would count for or against the claim. And this requires that you understand the logic of contradictories.

So as I said, the concept may be simple but from a logical literacy standpoint it’s not trivial.

03:04

Contradictories vs Contraries

One of the problems that people have with identifying contradictories is that they sometimes confuse them with contraries. In this lecture we’ll clear up the distinction.

Here’s our claim,

A = “The ball is black.”

Now consider the claim “The ball is white”. Is this the contradictory of A?

It’s tempting to say this. There’s a natural sense in which “the ball is white” is opposite in meaning to “the ball is black”, since black and white are regarded as opposites in some sense. And it’s also true that they both can’t be true at the same time: if the ball is black then it’s not white, and vice versa.

But this is NOT the contradictory of A.

Why not?

Well, what if the ball we’re dealing with is a GREY ball? The ball isn’t black, and it isn’t white — it’s grey.

This possibility is relevant because it’s a counterexample to the basic definition of a contradictory.

If “the ball is white” is the contradictory of “the ball is black”, then these are supposed to have opposite truth values in all possible worlds; whenever one is true the other is false, and vice versa.

But if the ball is grey, then A, “The ball is black” is false, since grey is not black.

But if A is false, then the contradictory of A must be TRUE. But “The ball is white” is NOT true, it’s false as well. Both of these claims are false. And that’s not supposed to be possible if these are genuine contradictories of one another.

We do have a word to describe pairs of claims like these, though. They’re called contraries.

Two claims are contraries of one another if they can’t both be true at the same time, but they can both be false.

So, the ball can’t be both black and white, but if it’s grey, or red, or blue, then it’s neither black nor white. These are contraries, not contradictories.

Now, how do we formulate the contradictory of “The ball is black” so that it always has the opposite truth value?

Like so: you say “The ball is NOT black”.

Now, the ball being grey doesn’t violate the definition of a contradictory. In this world, A is false, but not-A is true — it’s true that the ball is not black.

***

Examples like this illustrate why it’s sometimes helpful to have the formal language in the back of your head. A more formal way of stating the contradictory is “it is not the case that A” — “it is not the case that the ball is black” — which is equivalent to “the ball is not black”.

The language is pretty stiff but if you stick “it is not the case that ...” in front of the claim, you’re guaranteed not to make the mistake we’ve seen here, of mistaking contrary properties for contradictory properties. In some cases, when dealing with more complex, compound claims, it’s the only way to formulate the contradictory.

03:48

Contradictions (A and not-A)

The concept of a contradiction is very important in logic. In this lecture we’ll look at the standard logical definition of a contradiction.

Here’s the standard definition. A contradiction is a conjunction of the form “A and not-A”, where not-A is the contradictory of A.

So, a contradiction is a compound claim, where you’re simultaneously asserting that a proposition is both true and false.

Given the logic of the conjunction and the contradictory that we’ve looked at in this course, we can see that the defining feature of a contradiction is that for all possible combinations of truth values, the conjunction comes out false, since a conjunction is only true when both of the conjuncts are true, but by definition, if the conjuncts are contradictories, they can never be true at the same time:


A      and      not-A
T       F         F
F       F         T

The conjunction as a whole is ALWAYS FALSE. So, propositional logic requires that all contradictions be interpreted as false. It’s logically impossible for a claim to be both true and false in the same sense at the same time.

This is known as the “principle of non-contradiction”, and some people have argued that this is the most fundamental principle of logical reasoning, in that no argument could be rationally persuasive to anyone if they were consciously willing to embrace contradictory beliefs.
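
A two-line check of the principle: whatever truth value A takes, the conjunction "A and not-A" comes out false.

    for a in (True, False):
        print(a, a and (not a))   # True False, then False False: never true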

There’s a minor subtlety in the definition of a contradiction that I want to mention.

Here’s a pair of claims:

“John is at the movies” and “John is not at the movies”.

This is clearly a contradiction, since these are contradictories of one another. John can’t be both at the movies and not at the movies at the same time.

Now, what about this pair?

“John is at the movies” and “John is at the store”.

Recall, now, that these are contraries of one another, not contradictories. They can’t both be true at the same time, but they can both be false at the same time.

Our question is: Does this form a contradiction?

This is actually an interesting case from a formal point of view. Let’s assume that being at the store implies that you’re not at the movies (so we’re excluding the odd possibility where a movie theater might actually be in a store).

Then, it seems appropriate to say that, since they both can’t be true at the same time, it would be contradictory to assert that John is both at the movies and at the store. And that’s the way most logicians would interpret this. They’d say that the law of non-contradiction applies to this conjunction even though, strictly speaking, these aren’t logical contradictories of one another. The key property it has is that it’s a claim that is false for all possible truth values.

Here’s another way to look at it.

This is the truth table for the conjunction:


A      and      B
T       T       T
T       F       F
F       F       T
F       F       F

A conjunction is true only when both conjuncts are true. For all other truth values it’s false.

But in our case, the top line of the truth table doesn’t apply, since our two claims are contraries — they can’t both be true at the same time. So this case never applies.


The remaining three lines give you all the possible truth values for contraries, and now we see that the conjunction comes out false for all of them.

This kind of example raises a question that logicians might debate — whether, on the one hand, a contradiction should be defined as a conjunction of contradictory claims, or, on the other hand, whether it should be defined as any claim that is false in all logically possible worlds.

Examples like these suggest to some people that it is this latter definition that is more fundamental: that a contradiction is, at bottom, a claim that is logically false, false in all possible worlds.

This issue isn’t something you’ll have to worry about, though. If you’re a philosopher or a logician this may be interesting, but for solving logic problems and analyzing arguments, it doesn’t make any difference.

05:03

Consistent vs Inconsistent Sets of Claims

Like the terms “valid” and “invalid”, the most common use of “consistency” and “inconsistency” in everyday language is different from its use in logic.

In everyday language, something is “consistent” if it’s predictable or reliable. So, a “consistent A student” is a student who regularly and predictably gets As. A consistent athlete is one who reliably performs at a certain level regardless of the circumstances. An inconsistent athlete performs well sometimes and not so well other times, and their performance is hard to predict.

This isn’t how we use the terms “consistent” and “inconsistent” in logic.

In logic, “consistency” is a property of sets of claims. We say that a set of claims is consistent if it’s logically possible for all of them to be true at the same time.

What does “logically possible” mean here? Logically possible means that the set of claims doesn’t entail a logical contradiction.

A contradiction is a claim that is false in all logically possible worlds, and we usually write the general form of a contradiction as a claim of the form “A and not-A”.

“not-A” is usually interpreted as the contradictory of A, but as we saw in the last tutorial, it can also be the contrary of A.

So, if it’s logically impossible for a set of claims to be true at the same time, then we say that the set is logically inconsistent.

Examples

Let’s look at some examples:

“All humans are mortal.”

“Some humans are not mortal.”

These clearly form an inconsistent set, since these are logical contradictories of one another. “Mortal” means you will some day die. “Not mortal” means you’ll never die, you’re “immortal”. If one is true then the other must be false, and vice versa.

Now what about this set?

“All humans are mortal.”

“Simon is immortal.”

Can both of these be true at the same time?

In this case, the answer is “yes”, both of these can be true. They’re only inconsistent if you assume that Simon is human. But “Simon” is just a name for an individual, so Simon could be a robot or an angel or an alien, and if so then the claim about all humans being mortal wouldn’t apply.

Now, if you added the assumption about Simon being human as a claim to this set, like so ...

“All humans are mortal.”

“Simon is immortal.”

“Simon is human.”

... then you’d have an inconsistent set. Here you have three claims where, if any two of them are true, the third has to be false.

Let’s take a moment and look at this a little closer.

If all humans are mortal, and if Simon is immortal, then it logically follows that Simon can’t be human.

By “logically follows” I mean that you can construct a valid argument from these premises for the conclusion that Simon is not human, like so:

1. All humans are mortal.
2. Simon is immortal.
Therefore, Simon is not human.

This is what it means to say that the set entails a logical contradiction. From the set one can deduce a claim that is either the contradictory or the contrary of one of the other claims in the set.

Here’s another way to represent this. The first two claims entail a claim that is the contradictory of the third claim. And from this it becomes evident that, to assert that all the claims in the set are true is to assert a formal contradiction.

Now, we can run this with any pair of claims in the set. If we set the second and third claims as true, for example, then we can infer that the first must be false. If Simon is immortal and if Simon is human, then it must be the case that not all humans are mortal, which contradicts the first claim.

The only remaining pair to check is the first and the third claims. If it’s true that all humans are mortal, and it’s true that Simon is human, we can validly infer that Simon is mortal, which contradicts the second claim.
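If you’d like to see this checked by brute force, here’s a small Python sketch. It restricts the universal claim to Simon, so “All humans are mortal” becomes “if Simon is human then Simon is mortal”, and then searches every combination of the two underlying facts (the encoding is illustrative):

    # The set: {All humans are mortal, Simon is immortal, Simon is human}
    consistent = False
    for human in (True, False):          # is Simon human?
        for mortal in (True, False):     # is Simon mortal?
            claim1 = (not human) or mortal   # All humans are mortal, applied to Simon
            claim2 = not mortal              # Simon is immortal
            claim3 = human                   # Simon is human
            if claim1 and claim2 and claim3:
                consistent = True
    print(consistent)   # False: no assignment makes all three claims true at once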

An Important Fact About Inconsistent Claims

This example helps to illustrate another important fact about inconsistent sets of claims. IF we’re given a set of claims that we know is inconsistent, then we know that at least one of the claims in the set must be FALSE.

So, if we want to re-establish consistency, we need to abandon or modify at least one of these claims.

We used logic to establish that the set is inconsistent, but it’s important to understand that logic alone can’t tell us which of these claims to modify.

What logic tells us is that you can’t consistently believe all of these claims at the same time, but it doesn’t tell us how, in any particular case, to resolve the inconsistency.

Nevertheless, it can be very helpful in argumentation to have a group of people come to agree that a set of claims is inconsistent. In the end they may disagree about how to resolve the inconsistency, but it’s still an achievement to get everyone to realize that they can’t accept everything on the table, that something has to go.

5 questions

Review the concepts introduced in this section.

02:18

1. not-(not-A)

In this series of lectures we’ll be looking at how to write and interpret the contradictories of the basic compound claim forms — conjunctions, disjunctions, and conditionals.

But before we do that we should first talk about the contradictory of a contradictory. Contradictories are sometimes called “negations”, so this rule is commonly called “double negation”.

The rule is straightforward. If you take a claim and negate it, and then negate the negation, you recover the original claim:

not-(not-A) = A

Here’s a simple example:

“Sarah makes good coffee.”

The contradictory of this is “It is not the case that Sarah makes good coffee”, or, more naturally, “Sarah does not make good coffee”.

If we now take the contradictory of this, we get an awkward expression. If you were being very formal you’d say

“It is not the case that it is not the case that Sarah makes good coffee.”

or

“It is false that it is false that Sarah makes good coffee.”

These are pretty unnatural. “It is not the case that Sarah does not make good coffee” is also pretty unnatural. But you don’t have to say this. With double negation you recover the original claim, so you can just say

“Sarah makes good coffee”.

Double negation is mostly used as a simplification rule in formal logic, but we use it intuitively in ordinary speech all the time.
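In Boolean terms the rule is just two applications of “not”, and a one-line Python check confirms it:

    # double negation: negating a negation recovers the original truth value
    for a in (True, False):
        assert (not (not a)) == a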

One word of caution. To use double negation correctly you need to know how to construct and recognize the contradictories of different kinds of claims.

For example, let’s say someone wants to say

“It’s false that Sarah and Tom didn’t go bike riding”.

If I want to simplify this using double negation I need to know how to interpret the contradictory of “Sarah and Tom didn’t go bike riding”.

But does this mean that “Both Sarah and Tom went bike riding”? Or does it mean “Either Sarah or Tom went bike riding”?

To be sure about this you need to know how to interpret the contradictory of a conjunction where each of the conjuncts is already negated — “Sarah didn’t go bike riding AND Tom didn’t go bike riding.”

The correct answer is the second one, “Either Sarah or Tom went bike riding”. But you’ll need to check out the lecture on negating conjunctions to see why.

02:56

2. not-(A and B)

Let’s look at how to construct the contradictory of a conjunction.

Here’s our claim:

“Dan got an A in physics and a B in history.”

When I say that this claim is false, what am I saying?

The conjunction says that both of these are true. The conjunction is false when either one or the other of the conjuncts is false, or both are false.

This gives us the form of the contradictory:

“Either Dan didn’t get an A in physics OR he didn’t get a B in history”.

The contradictory of a conjunction is a DISJUNCTION, an “OR” claim. You construct it by changing the AND to an OR and negating each of the disjuncts.

If you wanted to be really formal about it, you could write a derivation of the contradictory, but the rule is fairly simple to remember. When you have a “not-” sign in front of a conjunction, you “push” the “not-” inside the brackets, distribute the “not-” across both of the conjuncts, and then switch the “AND” to an “OR”.

The basic rule looks like this:

not-(A and B) = (not-A) or (not-B)

If this formula looks “algebraic” to you, that’s because it is. This is a formal equivalence in propositional logic. It’s also a basic formula of Boolean logic, which computer scientists will be familiar with since it’s the logic that governs the functioning of digital electronic devices.
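Because the formula is algebraic, you can verify it by brute force. Here’s a quick Python check over all four combinations of truth values:

    # De Morgan: not-(A and B) = (not-A) or (not-B)
    for a in (True, False):
        for b in (True, False):
            assert (not (a and b)) == ((not a) or (not b))
    print("The two sides agree on every row of the truth table.")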

Let’s do some problems with it.

Question: What’s the contradictory of

“John and Mary went to the zoo”?

Answer:

“Either John didn’t go to the zoo or Mary didn’t go to the zoo.”

You need to recognize the individual conjuncts, negate them, and put an “OR” in between. Note that the “either” is just a stylistic choice — “either A or B” is equivalent to “A or B”.

You can also write the answer like this, of course:

“Either John or Mary didn’t go to the zoo”.

Let’s try another one.

Question: What’s the contradictory of

“Sam loves hotdogs but he doesn’t like relish”?

With this one you have to remember that, from the standpoint of propositional logic, the “but” is functioning just like “and”, and the whole thing is still a conjunction.

You’ll also need to pay attention to the negation, “doesn’t like relish”, because you’re going to end up negating this negation, which gives us an opportunity to use the double-negation rule.

Here’s the answer:

“Either Sam doesn’t like hotdogs or he likes relish”.

You replace the conjunction with a disjunction, and you negate the disjuncts. Note that we’ve used double negation on the second disjunct. It’s much easier to write “he likes relish” than “it’s not the case that he doesn’t like relish”.

Well, that’s about all you need to know about negating conjunctions. The basic rule is easy to remember:

“not-(A and B)” = “not-A OR not-B”

We’ll see in the next lecture that the rule for the contradictory of a disjunction is very similar.

01:51

3. not-(A or B)

Now that we’ve done the contradictory of a conjunction, the contradictory of a disjunction will be no problem.

“Dan will either go to law school or become a priest”.

This is a disjunction. What am I saying when I say that this disjunction is false?

The disjunction says that either one, or the other, or both of these are true — Dan will either go to law school, or he’ll become a priest, or both.

If this is false, that means that Dan doesn’t do any of these things. He doesn’t go to law school, and he doesn’t become a priest.

So the contradictory looks like this:

“Dan will not go to law school AND Dan will not become a priest.”

The disjunction has become a conjunction, with each of the conjuncts negated. This is structurally identical to the rule we saw in the previous video, with the “OR” and the “AND” switched.

In English we have a natural construction that is equivalent to this conjunction:

“Dan will neither go to law school nor become a priest.”

Remember this translation rule for “neither … nor …”:

“(not-A) and (not-B)” is the same as “neither A nor B”.

Don’t be fooled by the “or” in “nor” — this is not a disjunction, it’s a conjunction.

Let’s put the rules for the contradictory of the conjunction and the disjunction side-by-side, so we can appreciate the formal similarity:

not-(A and B) = (not-A) or (not-B)

not-(A or B) = (not-A) and (not-B)

In propositional logic these together are known as DeMorgan’s Rules or DeMorgan’s Laws, named after Augustus DeMorgan who formalized these rules in propositional logic in the 19th century.

They’re also part of Boolean logic, and are used all the time in computer science and electrical engineering in the design of digital circuits.
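The second law can be verified the same way as the first. Here’s the matching Python check for the disjunction:

    # De Morgan: not-(A or B) = (not-A) and (not-B)
    for a in (True, False):
        for b in (True, False):
            assert (not (a or b)) == ((not a) and (not b))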

That’s it. These are the rules you need to know to construct the contradictories of conjunctions and disjunctions.

03:46

4. not-(If A then B)

The contradictory of the conditional is probably the least intuitive of all the contradictory forms that we’ll look at. But we’ve already discussed this topic when we introduced the conditional and presented the truth table for the conditional, so this should be review.

Here’s a conditional:

“If I pay for dinner then you’ll pay for drinks.”

What does it mean to say that this conditional claim is false?

When we first introduced the conditional we looked at this question. We determined that the only condition under which we would certainly agree that this claim is false is when the antecedent is true but the consequent is false.

This gives us the form for the contradictory. The most natural way to say it is “I pay for dinner but you don’t pay for drinks”. I’m affirming the antecedent and denying the consequent.

Recall from our discussion of the conjunction that “but” just means “and”, and that this is a conjunction, not a conditional.

Let me repeat that. The contradictory of a conditional is not itself a conditional, it’s a conjunction.

Here’s the general rule that makes this clear:

not-(If A then B) = A and not-B = A but not-B

The contradictory of a conditional is a conjunction that affirms the antecedent of the conditional but denies the consequent. Almost always, though, it’s more natural to phrase the contradictory as “A but not-B”, as in “I pay for dinner but you don’t pay for drinks”.

These are the conditions under which, if they obtained, we’d say that the original conditional was false.

Why not “If A then not-B”?

The most common mistake that students make when solving problems that require taking the contradictory of a conditional is to interpret the contradictory as a conditional of this form:

“If A then not-B”

This is a tempting interpretation of the contradictory, but it just doesn’t work. There are a couple of ways of seeing why this is so. One way uses truth tables.

On the left is the truth table for the conditional. The conditional is true for all truth values of A and B except when A is true and B is false.

The contradictory of the conditional is, by definition, a claim that is true whenever the conditional is false, and vice versa. So the middle column has the opposite truth value of the conditional, for the same values of A and B.

A | A ? B | B
T |   F   | T
T |   T   | F
F |   F   | T
F |   F   | F

This, we know, must be the truth table for the contradictory of the conditional. The question is, what operations on A and B will yield this truth table? That’s the question represented by the question mark in between A and B.

We can see right away that a truth table formed by simply negating the consequent won’t do. On the left is the truth table for the contradictory of the conditional that we know must be true. On the right is the truth table for “If A then not-B”, where the only change is negating the consequent. In the truth table on the right I’ve switched the truth values in the B-column, and I’ve evaluated the truth value of the conditional in the middle column according to the rule that the conditional as a whole is true except when the antecedent is true and the consequent is false.

A | B | not-(If A then B)        A | B | If A then not-B
T | T |         F                T | T |        F
T | F |         T                T | F |        T
F | T |         F                F | T |        T
F | F |         F                F | F |        T

You can see that the truth values for this new conditional don’t match up with the truth values for the contradictory of the conditional. They match for the cases where A is true, but not where A is false.

From this alone we can rule this out as a candidate for the contradictory. Whatever functions as the contradictory of the conditional has to be more restrictive in its truth value, so that it comes out false whenever A is false.

Now let’s look at the truth table for the conjunction with B negated.

A | A and not-B | B
T |      F      | T
T |      T      | F
F |      F      | T
F |      F      | F

As you can see, this gives us exactly what we need — the truth tables match. What we’ve just done confirms our rule, that the contradictory of a conditional is a conjunction with the B-term negated.
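Here’s a short Python sketch that re-runs this comparison. It encodes the conditional truth-functionally, as “(not A) or B”, and checks that “A and not-B” matches the contradictory on every row while “If A then not-B” does not:

    def implies(a, b):
        # truth-functional conditional: false only when the antecedent is true
        # and the consequent is false
        return (not a) or b

    for a in (True, False):
        for b in (True, False):
            # the conjunction with B negated matches the contradictory on every row
            assert (not implies(a, b)) == (a and (not b))

    # the tempting candidate "If A then not-B" disagrees on the rows where A is false
    mismatch = any((not implies(a, b)) != implies(a, not b)
                   for a in (True, False) for b in (True, False))
    print(mismatch)   # True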

For the sake of having them all in one place, here are the formulas we introduced that give the contradictories for the basic compound claims of propositional logic:

not-(not-A) = A

not-(A and B) = (not-A) or (not-B)

not-(A or B) = (not-A) and (not-B)

not-(If A then B) = A and (not-B)

7 questions

Test your understanding of the concepts introduced in this section.

02:49

A if B

In this section of the course we’ll be looking at various different ways that we express conditional relationships in language.

The basic syntax for the conditional is “if A then B”, but in ordinary language we have lots of ways of expressing conditionals that don’t use this form.

We’ll start with the form “A if B”.

“If I pay for dinner then you’ll pay for drinks.”

This is written in standard conditional form. The antecedent is “I pay for dinner”. The consequent is “you’ll pay for drinks”.

The “if” is what flags the antecedent. In standard form, the antecedent comes immediately after the “if”.

Now, I can write the same claim like this:

“You’ll pay for drinks if I pay for dinner.”

Here the consequent is now at the beginning of the sentence and the antecedent is at the end. But the antecedent is still “I pay for dinner”. The “if” flags the antecedent just as it does when the conditional is written in standard form.

Here’s the general translation rule:

B if A = If A then B

I want to mention something here that might be a source of confusion. I’ve written it as “B if A” rather than “A if B”, so that the As and Bs correspond when compared with the conditional in standard form. So the same symbols represent the antecedent and the consequent in both versions.

But you shouldn’t expect the same letter to always represent the antecedent of the conditional. The symbols are arbitrary. I could write the same rule in all these different ways,

A if B = If B then A

Q if P = If P then Q

$ if @ = If @ then $

and it would still represent the same rule.

What matters is that in standard form, whatever follows the “if” is the antecedent. The trick in interpreting different versions of the conditional is to identify the claim that is functioning as the antecedent, so that you can then re-write the conditional in standard form.

This is actually a very useful skill when analyzing ordinary arguments. We’ll eventually cover the valid and invalid argument forms that use the conditional, and these are always expressed using the conditional in standard form, so in order to apply your knowledge of valid and invalid argument forms you need to be able to translate conditionals into standard form.

Let’s finish with a couple of examples.

The exercise is to write these conditionals in standard form:

“David will be late if he misses the bus.”

and

“You’ll gain weight if you don’t exercise.”

The answers look like this:

“David will be late if he misses the bus.”
=
“If David misses the bus then he’ll be late.”

“You’ll gain weight if you don’t exercise.”
=
“If you don’t exercise then you’ll gain weight.”

The rule is that you look for the “if”, and whatever follows the “if” is the antecedent of the conditional.

This is the simplest alternative form for the conditional. As we’ll see, there are other forms, and they can be trickier to translate.

03:34

A only if B

There’s a big difference between saying that A is true IF B is true, and A is true ONLY IF B is true. Let’s look at this difference.

Here’s a conditional expressed using “only if”:

“The match is burning only if there’s oxygen in the room.”

We need to figure out which of these is the antecedent, “the match is burning” or “there’s oxygen in the room”.

Given what we did in the last lecture, it’s tempting to just look for the “if” and apply the rule that whatever comes after the “if” is the antecedent, and conclude that “there’s oxygen in the room” is the antecedent.

But that’s wrong. “There’s oxygen in the room” isn’t the antecedent.

If this was the antecedent, then the sentence would be saying that if there’s oxygen in the room then the match will be burning. But if you’re saying that then you’re saying that the presence of oxygen is enough to guarantee that the match is burning.

That’s not what’s being said. What’s being said is that the presence of oxygen in the room is necessary for the match to be burning; it doesn’t say that the match will be burning.

This sentence expresses a conditional, but the antecedent of the conditional is, in fact, “the match is burning”.

The “only if” makes a dramatic difference. This sentence is equivalent to the following conditional written in standard form:

“If the match is burning then there’s oxygen in the room.”

The “only if” actually reverses the direction of logical dependency. When you have “only if”, the claim that precedes the “only if” is the antecedent, and what follows it is the consequent.

Here’s the “only if” rule:

“A only if B” = “If A then B”

The antecedent doesn’t come after the “if”, the consequent comes after the “if”.

Let’s take away the symbols and compare the “if” and “only if” rules.

___ if ___

___ only if ___

When you’re given a conditional that uses “if” or “only if”, you look for the “if”, and if the “if” is all by itself, then the antecedent is what immediately follows the “if”.

If the “if” is preceded by “only” then you do the opposite: what follows the “only if” is the consequent, and what precedes the “only if” is the antecedent. Once you’ve got that, the rest is easy:

(consequent) if (antecedent)

(antecedent) only if (consequent)

From here you can easily write the conditional in standard form, “If A then B”.

Examples

Let’s look at some examples. Here are two sentences:

“Our team will kick off if the coin lands heads.”

“I’ll buy you a puppy only if you promise to take care of it.”

They both express conditionals. You need to write these in standard form, in the form “If A then B”. That requires that you identify the antecedent and the consequent in each sentence.

In the first sentence, “Our team will kick off if the coin lands heads”, the “if” appears by itself, so we know that what immediately follows the “if” is the antecedent.

So we write the conditional in standard form as follows:

“If the coin lands heads then our team will kick off.”

For the second sentence we have an “only if”, so we know to do precisely the opposite of what we did in the previous case. The antecedent is what precedes the “only if”. So, you write the conditional in standard form like this:

“If I buy you a puppy then you promise to take care of it.”

This conditional doesn’t say that if you promise to take care of it I’ll buy you a puppy. It’s saying that if I buy you a puppy, then you can be sure that you promised to take care of it, because that was a necessary condition for buying the puppy. But merely promising to take care of the puppy doesn’t guarantee that I’ll buy it.

It might be a bit more natural to write it like this: “If I bought you a puppy then you promised to take care of it”.

Sometimes shifting the tenses around can be helpful in expressing conditionals like this in a more natural way. For our purposes they mean the same thing.

Here’s the general rule once again:

“A only if B” = “If A then B”

In the next lecture we’ll look at what happens when you combine the “if” and “only if”.

02:27

A if and only if B

You may have heard the expression “if and only if” in a math class or some other context. In this tutorial we’ll look at what this means as a conditional relationship.

As the name suggests, “A if and only if B” is formed by conjoining two conditionals using the “if” rule and the “only if” rule:

“A if B” and “A only if B”

So it asserts two things, that A is true if B is true, AND that A is true ONLY IF B is true.

You can use the “if” rule and the “only if” rule to translate these into standard conditionals, and when you do the expression looks like this:

“If B then A” and “if A then B”

This asserts that the conditional relationship runs both ways. Given A you’re allowed to infer B, and, given B you’re allowed to infer A.

It’s not surprising, then, that this is also called a biconditional. You might encounter the biconditional written in different ways, but they all mean the same thing, that A implies B and B implies A.

Biconditionals show up a lot in formal logic and mathematics. They’re used to demonstrate the logical equivalence of two different expressions. From a propositional logic standpoint, the defining feature of a biconditional is that the claims, A and B, always have the same truth value — if A is true, then B is true, and vice versa.
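In truth-functional terms, “always have the same truth value” is just equality, which a small Python check makes vivid:

    # "A if and only if B" = (If B then A) and (If A then B)
    for a in (True, False):
        for b in (True, False):
            biconditional = ((not b) or a) and ((not a) or b)
            assert biconditional == (a == b)   # true exactly when A and B match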

Here’s an example of a biconditional relationship whose truth is obvious:

Let A = “The triangle XYZ has two equal sides.”

Let B = “The triangle XYZ has two equal angles.”

It’s clear that if A is true then B is also true. The sides XY and XZ are equal, and from the diagram you can see this requires that the angles at Y and Z must also be equal.

And it’s also clear that converse is true as well, that if a triangle has two equal angles then it also has two equal sides. So “if B then A” is also true.

But if both of these conditionals are true, then we can say that A is true if and only if B is true, and vice versa.

One of the helpful things about learning the biconditional as a concept is that it helps us remember that ordinary conditionals are only half a biconditional: they only go one way. If A implies B it doesn’t follow that you can go backwards and say that B implies A. You need to argue or demonstrate separately that the inference runs in the other direction as well.

01:59

A unless B

There are a lot of ways of saying “if A then B”. We can even say it without using the words “if” or “then”.

Here’s an example:

“Jennifer won’t go to the party unless Lauren goes too.”

The word “unless” is acting like a conditional operator in this sentence.

If you were asked to rewrite this as a standard conditional, you would probably translate this as

“If Lauren doesn’t go to the party then Jennifer won’t go to the party.”

This is exactly right.

We’ve done two things here. First, we recognize that the antecedent of the conditional is what comes immediately after the “unless”.

Second, we recognize that we need to take the contradictory of this claim to get the meaning of the conditional right.

So, we look for the “unless”, take what immediately follows, negate it, and make that the antecedent of the conditional.

This gives us the form of the general rule:

B unless A = If not-A then B

If you say that B is true unless A is true, then you’re saying that if A is false then B is true.

Once again, don’t be too fixated on which letters we’re using to represent the antecedent and the consequent.

I like simple translation rules that are easy to remember, so I usually say to myself: read “unless” as “if not-”. This is probably the easiest way to remember this rule.

Here’s a final example that puts a small spin on things. The claim is

“Unless you pay us one million dollars, you’ll never see your pet goldfish again.”

Here the “unless” is at the beginning of the sentence rather than in the middle, but the rule still applies. You should read “unless” as “if not-”.

So the translation is

“If you don’t pay us one million dollars, then you’ll never see your pet goldfish again”.

It doesn’t matter where the “unless” shows up in the sentence, the translation rule still applies.

03:23

The Contrapositive (If not-B then not-A)

The contrapositive is a very important translation rule. It’s mainly used for simplifying conditionals that use negations, but it’s used extensively in LSAT logic games and other tests of analytical reasoning.

The contrapositive is easy to illustrate.

“If I live in Paris then I live in France.”

This is a conditional claim, and it happens to be true (assuming we’re talking about Paris, the city with the Eiffel Tower, and not some other city with the same name).

Now, if this is true, then this is also true:

“If I don’t live in France then I don’t live in Paris”.

This is the contrapositive of the conditional above.

The contrapositive is a conditional formed by switching the antecedent and the consequent and negating them.

Here’s the general rule:

“If A then B” = “If not-B then not-A”.

Here are a few more examples:

Conditional:

“If we win the game then we’ll win the championship.”

Contrapositive:

“If we didn’t win the championship then we didn’t win the game.”

If the first conditional is true, then if we didn’t win the championship, we can be sure that we didn’t win the game.

The rule applies to all conditionals, even false or nonsensical conditionals like this one:

Conditional:

“If George Bush is a robot then Jackie Chan is Vice-President.”

This is absurd, of course, but if this conditional was true — if George Bush’s being a robot actually entailed that Jackie Chan was the Vice-President — then the contrapositive would also be true:

Contrapositive:

"If Jackie Chan is not Vice President, then George Bush is not a robot."

For our last example let’s mix it up.

Conditional:

“You won’t become a good player if you don’t practice.”

It’s written with the “if” in the middle, so to write the contrapositive you have to make sure you’ve got the antecedent and the consequent right.

Well, the “if” rule says that whatever follows the “if” is the antecedent, so we know the antecedent is “You don’t practice”, and the consequent is “you won’t become a good player”.

Now, to write the contrapositive, you switch the antecedent and the consequent and negate both parts.

The consequent of the original is “you won’t become a good player”. Negating this you get “You will become a good player”. This becomes the antecedent of the contrapositive.

Contrapositive:

“If you become a good player then you practiced.”

Sometimes you may want to shift tenses a bit to make a claim sound more natural. In this case I’ve written it as “you become a good player” rather than “you will become a good player”, but it doesn’t make much difference. I could have also written it as “you became a good player”.

Once you’ve got the antecedent of the contrapositive it’s easy to write the consequent — “if you become a good player then you must have practiced.”

The challenge with problems like these is to not get turned around and mistake an antecedent for a consequent. In this case, the most common error would be to interpret the contrapositive as “If you practice then you’ll become a good player”. This is very tempting, but it’s not entailed by the original claim.

The original claim doesn’t say that if you practice you’re guaranteed to become a good player. All it says is that if you don’t practice then you’re certainly not going to become a good player. So, what we can infer is that if you end up becoming a good player, then we can be sure of one thing, that you practiced.

The value of these rules is that they keep you from assuming that you know more than you do, based on the information given.
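Here’s a compact Python check of both points from this lecture: the contrapositive is always equivalent to the original conditional, while the converse (the tempting “If you practice then you’ll become a good player” reading) is not:

    def implies(a, b):
        return (not a) or b   # truth-functional conditional

    # contrapositive: "If A then B" = "If not-B then not-A" on every row
    for a in (True, False):
        for b in (True, False):
            assert implies(a, b) == implies(not b, not a)

    # converse: "If B then A" can disagree with "If A then B"
    converse_matches = all(implies(a, b) == implies(b, a)
                           for a in (True, False) for b in (True, False))
    print(converse_matches)   # False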

04:35

(not-A) or B

Here’s another translation rule for conditionals that doesn’t use “if” or “then”. This one is useful for interpreting disjunctions as conditionals, or rewriting conditionals in the form of a disjunction.

It turns out you can write any conditional as a disjunction, a claim of the form “A or B”.

Consider this conditional:

“If I live in Paris, then I live in France.”

Here are three candidates for the way the disjunction can be phrased:

  • A or B
  • A or (not-B)
  • (not-A) or B

From the title of this lecture you already know the answer to this question, but for the sake of demonstrating why this answer is correct let’s work through these.

A or B = I live in Paris or I live in France.

If the original conditional is true, is this disjunction true? No, this disjunction doesn’t have to be true. The original conditional is consistent with me living in New York, say. There’s no reason why I have to live in Paris or France. So this won’t work. Let’s try the next one.

A or (not-B) = I live in Paris or I don’t live in France.

If the original conditional is true, does it follow that either I live in Paris or I don’t live in France? That would be an odd inference, wouldn’t it? This entails that Paris is the only city in France that I’m allowed to live in. This doesn’t follow, so strike that one out.

(not-A) or B = I don’t live in Paris or I live in France.

Now, if the original conditional is true, does it imply that either I don’t live in Paris or I live in France?

It does. Let’s see why.

Recall that what the “OR” means is that one or the other of these must be true, they can’t both be false. It’s easy to see that they can’t both be false. If they were both false, I’d be saying that I live in Paris but I don’t live in France. That’s impossible, since Paris is in France. So both of these disjuncts can’t be false at the same time; one of them must be true.

Now let’s assume that the first disjunct is false: “I do live in Paris.” Does it then follow that I live in France? Yes it does — Paris is in France.

Now assume that the second disjunct is false: “I don’t live in France.” Does it then follow that I don’t live in Paris? Yes it does, for the same reason. This is indeed the correct translation.

If you’re not convinced, you can show that these are logically equivalent with truth tables:

A | B | (not-A) or B | If A then B
T | T |      T       |     T
T | F |      F       |     F
F | T |      T       |     T
F | F |      T       |     T

On the left are all the possible truth values of A and B. In the middle is the truth table for the disjunction when you negate A. On the right is the truth table for the conditional.

You can see that the truth values for the disjunction and the conditional match up.

Some people find these kinds of explanations helpful, and some don’t. Either way, the general rule is easy to remember:

If A then B = (not-A) or B

It’s helpful to have the brackets around “not-A” so that you don’t confuse this expression with the contradictory of a disjunction, “not-(A or B)”. Brackets in logic function like they do in math. They clarify the order of operations when it might otherwise be unclear.
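This rule is, in effect, how the conditional usually gets encoded in programming anyway. Here’s a quick Python check that “(not-A) or B” reproduces the conditional’s truth table from the earlier lectures:

    # rows of the conditional's truth table: (A, B, "If A then B")
    rows = [
        (True,  True,  True),
        (True,  False, False),
        (False, True,  True),
        (False, False, True),
    ]
    for a, b, if_a_then_b in rows:
        assert ((not a) or b) == if_a_then_b   # the disjunction matches every row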

Let’s look at a few examples.

Conditional:

If we win the game then we’ll win the championship.

Disjunction:

We won’t win the game or we’ll win the championship.

I admit that these translations don’t always sound very natural, but if you think about the semantics of disjunctions and work through the reasoning you’ll see that they get the logic right.

And sometimes you can express them in a more natural way, like this:

“Either we lost the game or we won the championship.”

Regardless, you won’t go wrong if you trust the translation rule.

Here’s one more example:

Conditional:

"If there’s no gas in the car then the car won’t run."

Disjunction:

"There’s gas in the car or the car won’t run."

This one sounds pretty natural as it is.

This translation rule can be handy for working through certain kinds of LSAT logic problems where you have to represent conditional rules on a diagram. Sometimes it’s easier to do this when the rule is expressed as an “either __ or ___” proposition.

04:41

Necessary and Sufficient

The last set of terms we’ll look at for expressing conditional relationships involve the concepts of “necessity” and “sufficiency”.

“If I become rich, then I’ll be happy.”

Here’s a question: When I say this, am I saying that becoming rich is necessary for me to be happy?

If I say that becoming rich is necessary for my happiness, then I’m saying that there’s no way for me to be happy unless I’m rich.

That doesn’t seem right.

What does seem right is to say that my becoming rich is sufficient for my being happy.

“Sufficient” means that it’s enough to guarantee that I’ll be happy. But it doesn’t imply that becoming rich is the only way that I can be happy. My becoming rich is sufficient for my happiness, but it’s not necessary for it.

In terms of the antecedent and the consequent of the original conditional, we can say that

“If A then B” = “A is sufficient for B”.

Or in other words, the antecedent is sufficient to establish the consequent.

This is the first general rule: A conditional can always be translated into a sentence stating that the truth of the antecedent is sufficient to ensure the truth of the consequent.

So how do we interpret the language of “necessity”?

Well, let’s go back to our original claim.

“If I become rich, then I’ll be happy.”

We can’t say that A is necessary for B. But we can say that B is necessary for A.

In other words, we can’t say that my being rich is necessary for my happiness. But we can say that my happiness is a necessary consequence of my being rich. In other words, if I end up rich then I’m necessarily happy.

So, relationships of necessity and relationships of sufficiency are converses of one another. If A is sufficient for B then B is necessary for A.

It’s easier to see if you have the rules side by side:

A is sufficient for B = If A then B

A is necessary for B = If B then A

When you see a claim of the form “A is sufficient for B”, then you read that as saying that if A is true then B is guaranteed; the truth of A is sufficient for the truth of B.

When you see a claim of the form “A is necessary for B”, then you should imagine flipping the conditional around, because the B term is now playing the role of the antecedent.

Another way of stating the rule for “necessary” is to express it in terms of the contrapositive, “If not-A then not-B”. The only way to make this clear is to look at examples.

“Oxygen is necessary for combustion.”

This doesn’t mean that if there’s oxygen in the room then something is going to combust. Matches don’t spontaneously burst into flame just because there’s oxygen in the room.

What this statement says is that if there’s combustion going on then you know that oxygen must be present. And you would write that like this:

“If there’s combustion then there’s oxygen.”

Or, you could write it in the contrapositive form,

“If there’s no oxygen then there’s no combustion”.

Either way will do.

Let’s do an example working the other way. We’re given the conditional,

“If I have a driver’s license then I passed a driver’s test.”

How do we write this in terms of necessary and sufficient conditions?

How about this?

“Having a driver’s license is necessary for passing a driver’s test.”

Does this work?

No, it doesn’t. It would be very odd to say this, since it implies that you already have to have a driver’s license in order to pass a driver’s test!

We need to switch these around:

“Passing a driver’s test is necessary for having a driver’s license.”

Using the language of sufficiency, you’ll reverse these:

“Having a driver’s license is sufficient for passing a driver’s test.”

This is a little awkward, but the logic is right. If you know that someone has a driver’s license, that’s sufficient to guarantee that at some point they passed a driver’s test.

Finally, I want to draw attention to the parallels between the language of necessary and sufficient conditions and the language of “if and only if”. These function in exactly the same way.

A is necessary for B = A if B = If B then A

A is sufficient for B = A only if B = If A then B

Both emphasize that a conditional relationship only goes one way, and that if you can establish that both are true then you’ve established a biconditional relationship:

A is necessary and sufficient for B

means the same as

A if and only if B

which means the same as

(If B then A) and (If A then B)

* * *

That’s it for this section on the different ways we express conditionals in ordinary language.

I know from experience that mastering the rules we’ve been discussing in these last few lectures really does make you more aware of logical relationships, and this by itself will help you to detect errors in reasoning and to be clear and precise in your own reasoning.

7 questions

Test your understanding of the concepts introduced in this section.

05:47

Categorical vs Propositional Logic

In any standard logic textbook you’ll see separate chapters on both propositional logic and categorical logic. Sometimes categorical logic is called “Aristotelian” logic, since the key concepts in this branch of logic were first developed by the Greek philosopher Aristotle.

I’m not planning on doing a whole course on categorical logic at this stage, but there are a few concepts from this tradition that are important to have under your belt when doing very basic argument analysis, so in this next series of tutorials I’m going to introduce some of these basic concepts.

In this introduction I’m going to say a few words about the basic difference between categorical logic and propositional logic.

In the course on “Basic Concepts in Logic and Argumentation” we saw a lot of arguments and argument forms that are basically categorical arguments and that use the formalism of categorical logic.

Here’s a classic example.

1. All humans are mortal.
2. Simon is human.
Therefore, Simon is mortal.

This argument is valid. When you extract the form of this argument it looks like this:

1. All H are M
2. x is an H
Therefore, x is an M

The letters are arbitrary, but it’s usually a good idea to pick them so they can help us remember what they represent.

Now, the thing I want to direct your attention to is how different this symbolization is from the symbolization in propositional logic.

When we use the expression “All H are M”, the “H” and the “M” DO NOT represent PROPOSITIONS, they don’t represent complete claims. In propositional logic each letter symbolizes a complete proposition, a bit of language that can be true or false. Here, the H and the M aren’t propositions.

So what are they?

They’re categories, or classes. H stands for the category of human beings, M stands for the category of all things that are mortal, that don’t live forever.

These categories or classes are like buckets that contain all the things that satisfy the description of the category.

What they don’t represent is a complete claim that can be true or false. This is a fundamental difference in how you interpret symbolizations in categorical logic compared to how you interpret them in propositional logic.

In categorical logic, you get a complete claim by stating that there is a particular relationship between different categories of things.

In this case, when we say that all humans are mortal, you can visualize the relationship between the categories like this:

We’re saying that the category of mortals CONTAINS the category of humans. Humans are a SUBSET of the category of things that die. The category of mortals is larger than the category of humans because lots of other things can die besides human beings. This category includes all living things on earth.

Now, when you assert this relationship between these two categories, you have a complete proposition, a claim that makes an assertion that can be true or false.

This is the fundamental difference between symbolizations in propositional logic and categorical logic.

In propositional logic you use a single letter to represent a complete proposition.

In categorical logic the analysis is more fine-grained. You’re looking INSIDE a proposition and symbolizing the categories that represent the subject and predicate terms in the proposition, and you construct a proposition by showing how these categories relate to one another.

Now, what does that small “x” represent, in “x is an H”?

It represents an INDIVIDUAL human being, Simon.

In categorical logic you use capital letters to represent categories or classes of things, and you use lower-case letters to represent individual members of any particular category.

On a diagram like this you’d normally use a little x to represent Simon, like this:

So, putting the x for Simon inside the category of humans is a way of representing the whole proposition, “Simon is human”.

Notice, also, that from this diagram you can see at a glance why the argument is VALID. This diagram represents the first two premises of the argument. When judging validity you ask yourself, if these premises are true, could the conclusion possibly be false?

And you can see that it can’t be false. If x is inside the category of humans, then it HAS to be inside the category of mortals, since humans are a subset of mortals.
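The containment picture translates directly into set operations. Here’s an illustrative Python sketch (the data is made up) that mirrors the diagram, with humans as a subset of mortals and Simon marked inside the humans category:

    # categories as sets; the members are illustrative
    mortals = {"simon", "socrates", "rex", "fern"}   # things that can die
    humans = {"simon", "socrates"}                   # the category of humans

    assert humans.issubset(mortals)   # Premise 1: All H are M
    assert "simon" in humans          # Premise 2: x is an H
    assert "simon" in mortals         # Conclusion: x is an M, guaranteed by the premises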

In a full course in categorical logic you would learn a whole set of diagramming techniques for representing and evaluating categorical arguments, but that’s not something we’re going to get into here.

What we’re going to talk about is what sorts of claims lend themselves to categorical analysis. These are claims with the following forms:

  • All A are B
  • Some A are B
  • No A are B
  • All A are not-B
  • Some A are not-B
  • No A are not-B

Claims like “All humans are mortal”, “Some men have brown hair”, “No US President has been female”, “All mammals do not have gills” and so on.

The Greek philosopher Aristotle worked out a general scheme for analyzing arguments that use premises of this form. In the tutorial course on “Common Valid and Invalid Argument Forms” we look at a few of the most common valid and invalid categorical argument forms.

In the remaining lectures in this section all I really want to do is look at the semantics of categorical claims, what they actually assert, and how to write the contradictory of these categorical claims.

03:05

All A are B

This is the classic universal generalization, “All A are B”.

Here some examples of claims with this form:

  • “All humans are mortal.”
  • “All whales are mammals.”
  • “All lawyers are decent people.”

Two things to note about these sorts of generalizations:

First, the “all” is strict; when you read “All”, it really means “ALL”, no exceptions.

Second, we often don’t use the “all” to express a universal generalization. “Humans are mortal” means “humans in general are mortal”, it’s implied that you’re talking about all humans. Similarly, “whales are mammals” means “all whales are mammals”, and “lawyers are decent people” means “all lawyers are decent people.”

Now, in the real world, people sometimes aren’t careful and will make a generalization that they will acknowledge has exceptions. They might say “All politicians are crooks”, but then they might admit that one or two are pretty decent. They’re not lying, they’re just not being precise, or maybe they’re exaggerating for the sake of dramatic effect. What they really mean is “Most” or “Almost all” politicians are crooks.

In logic it matters a great deal whether you mean “all” or “almost all”, so just be aware of the strictness of your language; if you don’t really mean “all”, then don’t say “all”.

The contradictory of a universal generalization is pretty straightforward, there’s just one thing to be on the lookout for.

If I say “All humans are mortal”, it’s tempting to think that the contradictory might be “No humans are mortal”.

But this is wrong. This isn’t the contradictory.

Remember that the contradictory has to have the opposite truth value in all possible worlds; if one is true then the other must be false, and vice versa.

But imagine a world in which half the people are mortal and the other half are immortal. In this world, BOTH of these statements would be FALSE. This shouldn’t happen if they’re genuine contradictories.

No, these are CONTRARIES. They can’t both be true at the same time, but they can both be false.

The contradictory of “All humans are mortal” is

“SOME humans are NOT mortal”.

or, “Some humans are immortal.”

These two claims always have opposite truth values. If one is true the other has to be false.

Here’s the general form:

not-(All A are B) = Some A are not-B

And here are some examples:

Claim: “All dogs bark.”

Contradictory: “Some dogs don’t bark.”

Claim: “Canadians are funny.”

Contradictory: “Some Canadians are not funny.”

Note that here you need to remember that “Canadians are funny” makes a claim about all Canadians, logically you need to read it as “All Canadians are funny”.

Claim: "All Michael Moore films are not good.

The translation rule works just the same, but you need to use double-negation on the “not good” part. So the contradictory looks like

Contradictory: “Some Michael Moore films are good”.

You see how this works.
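The duality between “all” and “some … not” is built into Python’s all() and any() functions, if you want to see it executed. A minimal sketch with made-up data:

    # not-(All A are B) = Some A are not-B
    dogs = [
        {"name": "Rex", "barks": True},
        {"name": "Basenji", "barks": False},
    ]
    all_dogs_bark = all(d["barks"] for d in dogs)
    some_dogs_dont_bark = any(not d["barks"] for d in dogs)
    assert (not all_dogs_bark) == some_dogs_dont_bark   # always opposite truth values
    print(all_dogs_bark, some_dogs_dont_bark)           # False True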

03:17

Some A are B

Let’s look at this expression, “Some A are B”.

  • “Some dogs have long hair.”
  • “Some people weigh over 200 pounds.”
  • “Some animals make good pets.”

It might seem that “some” is so vague that it’s hard to know exactly what it means, but in logic “some” actually has a very precise meaning. In logic, “some” means “at least one”.

“Some” is vague in one sense, but it sets a precise lower bound. If “some dogs have long hair”, then you can be certain that at least one dog has long hair.

So, “some people weigh over 200 pounds” means “at least one person weighs over 200 lbs”.

“Some animals make good pets” means “at least one animal makes a good pet”.

There are a couple of equivalent ways of saying this. If you want to say “some dogs have long hair”, then you could say

“At least one dog has long hair”,

or

“There is a dog that has long hair”,

or

“There exists a long-haired dog”.

These are all different ways of saying “at least one”.

Here’s something to be aware of. The standard reading of “At least one A is B” is consistent with it being true that “All A are B”.

So if I say “Some dogs have long hair”, this doesn’t rule out the possibility that all dogs in fact have long hair.

But sometimes, “some” is intended to rule out this possibility. Sometimes we want it to mean “at least one, but not all”. Like if I say, “some people will win money in next month’s lottery”, I mean “at least one person will win, but not everyone will win”. Which reading is correct — whether it means “at least one” or “at least one but not all” — will depend on the specific context.

Now, let’s look at the contradictory of “Some A are B”.

“Some dogs have long hair.”

If this is false, what does this imply? Does it imply that ALL dogs have long hair?

No. At most, this would be a contrary, if we were reading “Some” as “At least one, but not all”.

No, the contradictory of “Some dogs have long hair” is

“No dogs have long hair.”

If no dogs have long hair then it’s always false that at least one dog has long hair, and vice versa.

So, the general form of the contradictory looks like this:

not-(Some A are B) = No A are B

Examples:

Claim: “Some movie stars are rich.”

Contradictory: “No movie stars are rich.”

Claim: “There is a bird in that tree.”

Note that this is equivalent to saying that there is at least one bird in that tree, or “Some bird is in that tree”. So the contradictory is

Contradictory: “No bird is in that tree.”

Another example:

Claim: “Some dogs don’t have long hair.”

Contradictory: “No dogs don’t have long hair.”

This is a bit awkward. The easiest way to say this is “All dogs have long hair”. Here we’re just applying the rule for writing the contradictory of a universal generalization:

not-(All A are B) = Some A are not-B

which is the form the original claim has.

This is about all you need to know about the logic of “Some A are B”.
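The same machinery covers this rule too. A quick sketch, again with made-up data:

    # not-(Some A are B) = No A are B
    dogs = [{"name": "Rex", "long_hair": False}, {"name": "Spot", "long_hair": False}]
    some_have_long_hair = any(d["long_hair"] for d in dogs)
    none_have_long_hair = all(not d["long_hair"] for d in dogs)
    assert (not some_have_long_hair) == none_have_long_hair   # contradictories
    print(some_have_long_hair, none_have_long_hair)           # False True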

03:23

Only A are B

Let’s take a look at “ONLY A are B”.

  • “Only dogs make good pets.”
  • “Only Great White sharks are dangerous.”
  • “Only postal employees deliver U.S. mail.”

You won’t be surprised to learn that the logic of “Only A are B” parallels the use of “only if” in the logic of conditionals. There, “A if B” is equivalent to “B only if A”, where the antecedent is switched between “if” and “only if”.

Here, the switch is between “Only” and “All”.

  • “Only dogs make good pets” means the same thing as
    “All good pets are dogs”.
  • “Only Great White sharks are dangerous” means
    “All dangerous sharks are Great Whites”.
  • “Only postal employees deliver U.S. mail” means
    “All people who deliver U.S. mail are postal employees”.

The general translation rule is this:

“Only A are B” can be re-written as “All B are A”.

Note how similar this is to the translation rule for conditionals:

“A only if B” is equivalent to “B if A”.

The difference, of course, is that the As and Bs refer to very different things. In categorical logic the As and Bs refer to categories or classes of things. In propositional logic the As and Bs refer to whole claims, in this case either the antecedent or the consequent of a conditional claim.

It’s true, though, that the logical dependency relationships are very similar — in both cases when you do the translation you switch the As and the Bs — and recognizing the analogy can be helpful when you’re doing argument analysis. We’ll come back to this in the course on “common valid and invalid argument forms”.
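As a set relationship, “Only A are B” says that the B category sits inside the A category. Here’s an illustrative sketch (the names and data are made up):

    # "Only dogs make good pets" = "All good pets are dogs" = good_pets is a subset of dogs
    dogs = {"rex", "fido", "lassie"}
    good_pets = {"rex", "fido"}
    print(good_pets.issubset(dogs))   # True for this made-up data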

Now, let’s look at the contradictory of “Only A are B”.

Someone says that “only dogs make good pets”. You say “no”, that’s not true. What’s the contradictory?

The contradictory would be to say that “Some good pets are not dogs”.

How do we know this? How do we know that, for example, it’s not “Some dogs are not good pets”?

Well, you can figure it out just by thinking about the semantics and knowing what a contradictory is, but there’s a formal shortcut we can use to check the answer.

We can exploit the fact that we know that “Only dogs make good pets” is equivalent to “All good pets are dogs”.

Now, we have the original claim in the form “All A are B”, and we already know how to write the contradictory of a universal generalization, it’s “Some A are not B”. And this gives us the form of the contradictory, “Some good pets are not dogs”.

This is typical with these sorts of rules. Once you know some basic translation rules and some rules for writing contradictories, you can often rewrite an unfamiliar claim in a form that is more familiar and that allows you to apply the rules that you do know.

The general form of the contradictory looks like this:

not-(Only A are B) = Some B are not A

So, if our original claim is “Only movie stars are rich”, the contradictory is “Some rich people are not movie stars”. It’s like the rule for “ALL”, but you need to reverse the As and the Bs.

Here’s another one: “Only Starbucks makes good coffee”.

The contradictory is “Some good coffee is not made by Starbucks”.

You replace “only” with “some”, switch the As and Bs, but make sure you take the negation of the predicate class.

This is all you need to know about the logic of “Only A are B”.

01:00

The Square of Opposition

Here’s a handy diagram that might help some of you memorize the contradictories of the different categorical forms.

This diagram is sometimes called the “Square of Opposition”.

It’s not a complete version of the Square of Opposition that shows up in most textbooks. I’ve left off some relationships that appear in the complete diagram, since we haven’t talked about them. But it captures at a glance the contradictories of the categorical claims that use All, Some and No.

The contradictories are on the diagonals:

    All A are B ----- contraries ----- No A are B
           \                           /
             \                       /
               contradictories
             /                       \
           /                           \
    Some A are B -- subcontraries -- Some A are not-B

At the top you have contraries. “All A are B” and “No A are B” can’t both be true, but they can both be false.

At the bottom you have what are called “subcontraries”. Can you guess what this is?

Well, as the name suggests, it’s related to the contrary relationship, but in this case the two claims can both be true at the same time, though they can’t both be false. That’s the opposite of a contrary, so it’s called a “subcontrary”.

Anyway, I thought I’d mention this in case anyone finds it useful.

5 questions

Test your understanding of the concepts introduced in this section.

Section 9: Formal Fallacies: Common Valid and Invalid Argument Forms
Formal Fallacies: Common Valid and Invalid Argument Forms - PDF Ebook
Article
02:42

Valid Forms Using OR

Disjunctions are compound claims of the form “A or B”. We looked at the logic of disjunctive claims in the “Basic Concepts in Propositional Logic” course.

Here we’re going to look at the valid argument forms that use the disjunction as a major premise.

1. Either you’re with me or you’re against me.
2. You’re not with me.
So, you must be against me.

This is a valid argument that uses a disjunction as a major premise.

The basic valid argument form looks like this:

1. A or B
2. not-A
Therefore, B

A and B are the disjuncts. If you can show that one of the disjuncts is false, then the remaining disjunct MUST be true.

Note that here we’re negating “A” and inferring “B”, but this is arbitrary, you can just as easily negate “B” and infer “A”.

In logic texts this argument form is sometimes called “disjunctive argument” or “disjunctive syllogism”.

Now, what about this one?

1. A or B or C or D
2. not-A
Therefore, … ?

You’re given a disjunction with four alternatives, four disjuncts. I say that the first one, “A”, is false. What can I validly infer?

Well, you can’t infer that any specific one of the remaining alternatives is true. All that you can infer is that they can’t all be false.

So the inference looks like this:

1. A or B or C or D
2. not-A
Therefore, B or C or D

Eliminating one possibility just leaves the rest. Either B or C or D must be true.

You can only arrive at a single conclusion if you can eliminate all of the remaining alternatives. This is how that argument would look.

1. A or B or C or D
2. not-A
3. not-B
4. not-C
Therefore, D

This is the basic logic behind any reasoning based on a process of elimination.

We use this kind of reasoning every day, but you see it most prominently in areas like medical diagnosis, forensic research, detective work, or scientific reasoning generally, where we’ve got a range of possible hypotheses that might explain some piece of data, and we’ve got to find additional clues or information that will narrow down the list of possibilities.

Just to close, here’s the basic valid argument form again.

1. A or B
2. not-A
Therefore, B

As I said at the top, in logic texts this argument form is often called “disjunctive argument” or “disjunctive syllogism”.

The term “syllogism”, by the way, comes from Aristotle’s writings on logic. It’s normally used to describe three-line categorical arguments, like “All humans are mortal, Socrates is human, therefore Socrates is mortal”, but sometimes it’s used a bit more broadly, like in this case, to refer to any simple valid argument form that has two premises and a conclusion.

Now, if you’ve looked at the tutorials on propositional logic then you might be wondering about how the “inclusive OR”/“exclusive OR” distinction factors in here. It is relevant, but we’ll look at that in the next lecture on invalid argument forms that use the “OR”.

02:42

Invalid Forms Using OR

Let’s talk about invalid argument forms that use “OR”.

Consider this example again:

1. Either you’re with me or you’re against me.
2. You’re not with me.
So, you must be against me.

This argument is valid because the disjunction states that the two disjuncts can’t both be false: at least one of them must be true. So if you can eliminate one, the remaining one has to be true.

But what if I said something like this?

1. College teachers have to have either a Master’s degree or a Ph.D.
2. Professor Smith has a Master’s degree.
Therefore, he doesn’t have a Ph.D.

Is THIS a valid argument?

It doesn’t seem so. After all, why can’t it be the case that Professor Smith has BOTH a Master’s AND a PhD? Generally this is the case: if you have a PhD then you also have a Master’s degree, since having a Master’s degree is usually a prerequisite for attaining the PhD.

But if so, then this inference is clearly INVALID.

The general form of this invalid inference looks like this:

1. A or B
2. A
Therefore, not-B

In this form you’re affirming that one of the disjuncts is true, and on the basis of this, inferring that the remaining disjunct must be false.

In general, this is not a valid inference when it’s logically possible for the two disjuncts to be true at the same time.

In other words, it’s invalid when the “OR” is an “INCLUSIVE” OR.

An inclusive OR is one that asserts that “A is true, or B is true, OR BOTH may be true.” The only case that it rules out is the case where both are FALSE.

Now, as you might expect, the case is different if the OR is “exclusive”. Here’s a clear example of an exclusive OR:

1. The coin landed heads or tails.
2. The coin landed heads.
Therefore, the coin did not land tails.

Here you’re doing the same thing, you’re affirming one of the disjuncts and inferring that the remaining disjunct must be false.

But in this case the inference is VALID, since the OR is an “exclusive or” — it excludes the case where both of the disjuncts can be true.

So, this argument form

1. A or B
2. A
Therefore, not-B

is VALID when the OR is an “exclusive OR”.
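
The same brute-force truth-table test (again my own sketch, not from the lectures) shows how the inclusive/exclusive distinction settles the question: the form has a counterexample under the inclusive reading, but none under the exclusive reading:

from itertools import product

def valid(premises, conclusion):
    for a, b in product([False, True], repeat=2):
        if all(p(a, b) for p in premises) and not conclusion(a, b):
            return False
    return True

inclusive = lambda a, b: a or b     # true unless both are false
exclusive = lambda a, b: a != b     # true when exactly one is true

# Inclusive OR: invalid. The row a=True, b=True is the counterexample.
print(valid([inclusive, lambda a, b: a], lambda a, b: not b))  # False

# Exclusive OR: valid, since the premise rules out "both true".
print(valid([exclusive, lambda a, b: a], lambda a, b: not b))  # True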

9 questions

Test your understanding of the concepts introduced in this section.

03:53

Modus Ponens

If you’ve watched the tutorials in the “Basic Concepts in Logic and Argumentation” course then you’ve seen a number of examples of this argument form already.

The term “modus ponens” is another holdover from the days when logic was taught in universities during the Middle Ages and the language of instruction was Latin, so we have all these Latin names for argument forms and fallacies that people still use.

The full Latin name is “modus ponendo ponens”, which means something like “the mode of affirming by affirming”. It refers to the fact that with this conditional form we’re affirming the consequent by affirming the antecedent. But everyone today just calls it “modus ponens”.

Here’s an example:

1. If your king is in checkmate then you’ve lost the game.
2. Your king is in checkmate.
Therefore, you’ve lost the game.

The conditional premise asserts that if the antecedent is true then the consequent is true. The second premise affirms that the antecedent is in fact true, and you then validly infer that the consequent must also be true.

Now, if you’ve watched the tutorials in the “Basic Concepts in Propositional Logic” course then you know that there are lots of different ways of writing conditionals. As long as the conditional premise is equivalent in meaning to the one you see here, the argument will be an instance of modus ponens.

For example, I might write the same conditional as

1. You’ve lost the game UNLESS your king is NOT in checkmate.
2. Your king is in checkmate.
Therefore, you’ve lost the game.

As we showed in the propositional logic course, this is an equivalent way of saying “If your king is in checkmate, then you’ve lost the game”. If you’re not sure why this is so and you’re curious then you might want to check out that course.

But the point I want to make here is that this argument has the same logical form as the previous version, it’s an instance of it, even though it doesn’t use the “if A then B” syntax to express the conditional. What matters is that the claim is logically equivalent to a conditional of the form “If A then B”.

So, when we say that an argument has the form of modus ponens, we’re not saying that it’s necessarily written in the form “if A then B, A, therefore, B”, we’re saying that it’s logically equivalent to an argument of the form “if A then B, A, therefore, B”.

This is why knowing those translation rules for conditionals is important. They can help you see past the superficial grammar and into the underlying logical structure of a conditional argument.

I’d like to finish with a point that is important for all of the conditional argument forms we’ll be looking at.

1. If (not-P) then (not-Q)
2. not-P
Therefore, not-Q

You see the conditional form I’ve written above. I’ve replaced the antecedent and the consequent with negations throughout the argument.

The point I want to make is that this argument is still an instance of modus ponens, even with those negation signs.

It’s equivalent to the standard form with the obvious substitutions,

1. If A then B
2. A
Therefore, B

where A = not-P and B = not-Q. The antecedent of the argument isn’t P, it’s not-P, and the consequent isn’t Q, it’s not-Q.
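
A truth-table check (a sketch of mine, treating the conditional as a material conditional) confirms that the substituted form is just as valid as the standard one:

from itertools import product

implies = lambda x, y: (not x) or y   # material conditional "if x then y"

def valid(premises, conclusion):
    for v1, v2 in product([False, True], repeat=2):
        if all(p(v1, v2) for p in premises) and not conclusion(v1, v2):
            return False
    return True

# Standard modus ponens: if A then B, A, therefore B.
assert valid([lambda a, b: implies(a, b), lambda a, b: a],
             lambda a, b: b)

# With A = not-P and B = not-Q: if not-P then not-Q, not-P, so not-Q.
assert valid([lambda p, q: implies(not p, not q), lambda p, q: not p],
             lambda p, q: not q)

print("Both forms are valid instances of modus ponens.")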

In fact, you can have long, complex compound claims playing the role of the antecedent and the consequent, and as long as they’re related in the right way, they’ll still be instances of modus ponens.

Here’s an example:

1. If I get an A in Spanish and don’t fail French, then I’ll graduate this year.
2. I got an A in Spanish and didn’t fail French.
Therefore, I’ll graduate this year.

This is just modus ponens. The antecedent in this case has some structure to it; it’s a compound claim, a conjunction: “I got an A in Spanish and didn’t fail French”. It’s important when analyzing conditional arguments to understand that conditional claims can have complex parts to them and yet still be equivalent to a simple conditional of the form “If A then B”.

01:49

Modus Tollens

Modus ponens is the direct way of reasoning with conditionals. There’s an indirect way that is also commonly used, and it’s called “modus tollens”.

This is modus tollens, and it’s a valid argument form.

1. If A then B
2. not-B
Therefore, not-A

The full Latin name is “modus tollendo tollens”, which means “the mode of denying by denying”. It refers to the fact that with this conditional form we’re denying the antecedent by denying the consequent. Everyone today just calls it “modus tollens”.

Let’s look at that checkmate example:

1. If your king is in checkmate then you’ve lost the game.
2. You have not lost the game.
Therefore, your king is not in checkmate.

Sounds reasonable! Put in anything for A and B and it works.

1. If the match is lit then there is oxygen in the room.
2. There is no oxygen in the room.
Therefore, the match is not lit.

Now, I want to show you an easy way of remembering the form.

From the propositional logic course you’ll remember the contrapositive of a conditional is logically equivalent to the conditional, and you can always substitute the contrapositive form for the standard form without loss of meaning.

Conditional: If A then B

Contrapositive: If not-B then not-A

If we do this, the argument form looks like this:

1. If not-B then not-A
2. not-B
Therefore, not-A.

This is the same argument with the conditional premise written in the contrapositive form.

But we should recognize the form of this argument — it’s just modus ponens. This is an argument of the form “If A then B, A, therefore B”, where the antecedent is “not-B”, and the consequent is “not-A”. Since we know that modus ponens is valid, we can see directly that modus tollens must also be valid.

So, if you’re ever confused about how to write modus tollens, just remember that if you rewrite it in terms of the contrapositive then you should recover an argument that has the form of modus ponens.
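
If you want to verify both claims, here is a minimal Python sketch (mine, again using the material conditional) confirming that a conditional agrees with its contrapositive on every row of the truth table, and that modus tollens is valid:

from itertools import product

implies = lambda x, y: (not x) or y   # material conditional

# A conditional and its contrapositive agree on every row.
for a, b in product([False, True], repeat=2):
    assert implies(a, b) == implies(not b, not a)

# Modus tollens: if A then B, not-B, therefore not-A.
# Valid means there is no row with both premises true and "not-A" false.
mt_valid = all(not (implies(a, b) and not b and a)
               for a, b in product([False, True], repeat=2))
assert mt_valid
print("Contrapositive equivalence and modus tollens both check out.")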

6 questions

Review the concepts introduced in this section.

03:49

Hypothetical Syllogism

Before we move on to the invalid forms, here’s one more valid conditional form that you should know. It’s sometimes called “hypothetical syllogism” or “hypothetical argument”, or, more informally, “reasoning in a chain”.

It’s obvious why it’s called this when you see the argument form in action:

1. If A then B
2. If B then C
Therefore, if A then C

It’s what you get when you chain a series of conditionals together, where the consequent of one becomes the antecedent of another. You can chain as many of these together as you like.

Note that both the premises and the conclusion are conditionals. In this argument form we’re never actually asserting that A or B or C is true. All we’re asserting is a set of hypothetical relationships: IF A was true, then B would follow; and IF B was true then C would follow. And from this we can assert that IF A was true, then C would follow.

In logic and mathematics, this kind of relationship is called a “transitive” relationship. Some relationships are transitive and some aren’t. Tallness, for example, is transitive. If Andrew is taller than Brian, and Brian is taller than Chris, then Andrew must be taller than Chris.

On the other hand, being an object of admiration is NOT transitive. If Andrew admires Brian, and Brian admires Chris, there’s no guarantee that Andrew will admire Chris.

At any rate, what this argument form shows is that the relation of logical implication is transitive. If A logically implies B, and B logically implies C, then A logically implies C.
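
Here is a quick Python sketch (my illustration, using the material conditional) that verifies the transitivity claim across all eight combinations of truth values:

from itertools import product

implies = lambda x, y: (not x) or y

# If A then B, and if B then C, then "if A then C" holds on every row
# where both premises hold.
for a, b, c in product([False, True], repeat=3):
    if implies(a, b) and implies(b, c):
        assert implies(a, c)

print("Implication is transitive: the chain form is valid.")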

This argument form is often used to represent chains of cause and effect, like in this example:

1. If the cue ball hits the red ball, then the red ball will hit the blue ball.
2. If the red ball hits the blue ball, then the blue ball will go into the pocket.
Therefore, if the cue ball hits the red ball, then the blue ball will go into the pocket.

This is precisely the reasoning that a pool player is using when they’re setting up a combination shot like this. They have to reason through a chain of conditionals to figure out how to best strike the cue ball.

It should be obvious, but in case it’s not, you can also use this form to generate a modus ponens type of argument:

1. If A then B
2. If B then C
3. A
Therefore, C

To establish this, note that from premises 1 and 2 we can derive the intermediate conclusion “If A then C”:

1. If A then B
2. If B then C
Therefore, if A then C
[by hypothetical syllogism on 1 and 2]

Then from this new premise, plus the affirmation of A, we can derive C using modus ponens:

1. If A then B
2. If B then C
3. If A then C
4. A
Therefore, C
[by modus ponens on 3 and 4]

Now, note that with hypothetical syllogism, the order of the terms is important. The argument form below, for example, is not valid:

1. If A then B
2. If B then C
Therefore, if C then A

This is arguing backwards with conditionals. The direction of logical dependency doesn’t run that way. With conditionals the only valid inference is from antecedent to consequent; you can’t go the other way.
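
The same style of check (again my own sketch) exposes the backwards form by finding the counterexample rows:

from itertools import product

implies = lambda x, y: (not x) or y

# If A then B, if B then C, therefore if C then A: look for rows where
# both premises are true but the conclusion is false. (There are two.)
for a, b, c in product([False, True], repeat=3):
    if implies(a, b) and implies(b, c) and not implies(c, a):
        print(f"Counterexample: A={a}, B={b}, C={c}")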

To illustrate, let’s assume the following conditionals are true.

1. If John studies hard then he’ll pass the test.

2. If John passes the test, then he’ll pass the class.

Now, assume that John does indeed pass the class. Does it follow with deductive certainty that John studied hard for that test? That is, is this argument form valid?

1. If John studies hard then he’ll pass the test.
2. If he passes the test, then he’ll pass the class.
3. John passed the class.
Therefore, John studied hard for the test.

No, it isn’t. Maybe the teacher gave a very easy test that day that John could pass without studying hard. Maybe he bribed the teacher to pass him. There are lots of possible ways that these premises could be true and the conclusion still false.

On the other hand, if we knew that he studied hard for the test, we could validly infer that he passed the class. That would be the valid form of reasoning in a chain.

03:27

Affirming the Consequent

“Affirming the Consequent” is the name of an invalid conditional argument form. You can think of it as the invalid version of modus ponens.

Below is modus ponens, which is valid:

1. If A then B
2. A
Therefore, B

Now, below is the invalid form that you get when you try to infer the antecedent by affirming the consequent:

1. If A then B
2. B
Therefore, A

No matter what claims you substitute for A and B, any argument that has the form of MODUS PONENS will be valid, and any argument that AFFIRMS THE CONSEQUENT will be INVALID.

Remember, what it means to say that an argument is invalid is that IF the premises are all true, the conclusion could still be false. In other words, the truth of the premises does not guarantee the truth of the conclusion.

Here’s an example:

1. If I have the flu then I’ll have a fever.
2. I have a fever.
Therefore, I have the flu.

Here we’re affirming that the consequent is true, and from this, inferring that the antecedent is also true.

But it’s obvious that the conclusion doesn’t have to be true. Lots of different illnesses can give rise to a fever, so from the fact that you’ve got a fever there’s no guarantee that you’ve got the flu.

More formally, if you were asked to justify why this argument is invalid, you’d say that it’s invalid because there exists a possible world in which the premises are all true but the conclusion turns out false, and you could defend this claim by giving a concrete example of such a world. For example, you could describe a world in which I don’t have the flu but my fever is brought on by bronchitis, or by a reaction to a drug that I’m taking.
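
That talk of “possible worlds” can be made concrete in a small Python sketch (my illustration): each assignment of truth values is a candidate world, and we search for one where the premises hold but the conclusion fails:

from itertools import product

implies = lambda x, y: (not x) or y

# Premises: if flu then fever; fever.  Conclusion: flu.
for flu, fever in product([False, True], repeat=2):
    if implies(flu, fever) and fever and not flu:
        print(f"Counterexample world: flu={flu}, fever={fever}")

# Prints flu=False, fever=True: the "bronchitis world", where the
# premises are true but the conclusion is false.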

Another example:

1. If there’s no gas in the car then the car won’t run.
2. The car won’t run.
Therefore, there’s no gas in the car.

This doesn’t follow either. Maybe the battery is dead, maybe the engine is shot. Being out of gas isn’t the only possible explanation for why the car won’t run.

Here’s a tougher one. The argument isn’t written in standard form, and the form of the conditional isn’t quite as transparent:

“You said you’d give me a call if you got home before 9 PM, and you did call, so you must have gotten home before 9 PM.”

Is this inference valid or invalid? It’s not as obvious as the other examples, and partly this is because there’s no natural causal relationship between the antecedent and the consequent that can help us think through the conditional logic. We understand that cars need gas to operate and that the flu causes fevers, but there’s no natural causal association between getting home before a certain time and making a phone call.

To be sure about arguments like these you need to draw upon your knowledge of conditional claims and conditional argument forms. You identify the antecedent and consequent of the conditional claim, rewrite the argument in standard form, and see whether it fits one of the valid or invalid argument forms that you know.

Here’s the argument written in standard form, where we’ve been careful to note that the antecedent of the conditional is what comes after the “if”:

1. If you got home before 9 PM, then you’ll give me a call.
2. You gave me a call.
Therefore, you got home before 9 PM.

Now it’s clearer that the argument has the form of “affirming the consequent”, which we know is invalid.

The argument would be valid if you had said that you’d give me a call ONLY IF you got home before 9 PM, but that’s not what’s being said here. If you got home at 9:30 or 10 o’clock and gave me a call, you wouldn’t be contradicting any of the premises.

If these sorts of translation exercises using conditional statements are unfamiliar to you then you should check out the tutorial course on “Basic Concepts in Propositional Logic”, which has a whole section on different ways of saying “If A then B”.

03:35

Denying the Antecedent

“Denying the antecedent” is the name of another invalid conditional argument form. You should think of this as the invalid version of modus tollens.

Below is modus tollens, which is valid:

1. If A then B
2. not-B
Therefore, not-A

Below is the invalid form, known as “denying the antecedent”:

1. If A then B
2. not-A
Therefore, not-B

It’s no mystery why it’s called this. You’re denying the antecedent and trying to infer the denial of the consequent.

Let’s look at some examples.

1. If the pavement is wet in the morning, then it rained last night.
2. The pavement is not wet this morning.
Therefore, it didn’t rain last night.

It’s not hard to see why this is invalid. It could have rained last night but it stopped early and the rain on the pavement evaporated before morning. Clearly, these premises don’t guarantee the truth of the conclusion.

Here’s another one, inspired by an example from the last tutorial:

1. If there’s no gas in the car then the car won’t run.
2. There is gas in the car.
Therefore, the car will run.

Note that we’ve eliminated the negations in the second premise and the conclusion by using double-negation on the antecedent and the consequent. This still has the form of denying the antecedent — “if A then B, not-A, therefore not-B” — but the antecedent, A, is already a negation, so by denying the antecedent you’re saying “it’s not the case that there’s no gas in the car”, which just means that there is gas in the car.

This one is obviously invalid too. The fact that there’s gas in the car is no guarantee that the car is going to run.
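
The same counterexample search (my sketch) makes the point. Let g stand for “there’s no gas in the car” and r for “the car won’t run”, and look for a world where the premises are true and the conclusion false:

from itertools import product

implies = lambda x, y: (not x) or y

# Premises: if g then r; not-g.  Conclusion: not-r.
# The conclusion "not-r" fails exactly when r is True.
for g, r in product([False, True], repeat=2):
    if implies(g, r) and not g and r:
        print(f"Counterexample: no_gas={g}, wont_run={r}")

# Prints no_gas=False, wont_run=True: there IS gas, yet the car still
# won't run -- the dead-battery world.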

Let’s do a trickier one:

“I know you didn’t say your wish out loud, because if you had, it wouldn’t have come true, and your wish did come true.”

Hmm. The only way to be really sure about an argument like this is to re-write it in standard form, either in your head or on paper.

First of all, what’s the conclusion?

This is the conclusion: “You didn’t say your wish out loud”. The word “because” is an indicator word that flags this. (The “I know” isn’t part of the content of the conclusion; it just helps to indicate that this is an inference that follows from something else.)

Okay, so what is the conditional premise?

In the original it reads “if you had, it wouldn’t have come true”. This is the conditional premise. To make the antecedent explicit you need to clarify what “it” refers to; “it” refers to “your wish”.

Conditional premise:

“If you say your wish out loud, then it won’t come true.”

I’ve rewritten the conditional in the present tense, because it will sound more natural when it’s written in standard form, but you’re not altering the content of the claim in any significant way by doing this.

Now, what we have left is the phrase, “and your wish did come true”. The “and” isn’t part of the claim; the claim is “Your wish did come true”. This is the second premise.

Now we have all we need to write this in standard form:

1. If you say your wish out loud, then it won’t come true.
2. Your wish did come true.
Therefore, you didn’t say your wish out loud.

Now, does this argument have the invalid form of “denying the antecedent”?

No, it does not. This argument has the form of modus tollens, and it’s valid.
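
If you want to double-check that verdict, here is the truth-table test one more time (my sketch), with s for “you say your wish out loud” and c for “your wish comes true”:

from itertools import product

implies = lambda x, y: (not x) or y

# Premises: if s then not-c; c.  Conclusion: not-s.
# The conclusion "not-s" fails exactly when s is True.
counterexamples = [(s, c) for s, c in product([False, True], repeat=2)
                   if implies(s, not c) and c and s]
print(counterexamples or "No counterexamples: the argument is valid.")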

Some people can see the logical relationships right away just by glancing at the original argument, but those people are in the minority. Most of us need to check to make sure we’ve got it right, and the only way to do that is to reconstruct the argument and put it in standard form, so we can compare the form with the valid and invalid forms that we do know.

Instructor Biography

Kevin deLaplante, PhD, Lead Instructor, Critical Thinker Academy

I'm a philosopher of science by training, and I've taught philosophy and critical thinking at the university level for twenty years.

From 2009-2013 I was Chair of the Department of Philosophy & Religious Studies at Iowa State University.

In 2015 I left my tenured academic job to pursue a career as an independent online educator. I produce video tutorial courses on topics in science, philosophy, critical thinking and communication skills.

I currently live in Ottawa, Ontario, Canada, with my wife and family.

Ready to start learning?
Take This Course