Cranks, Conspiracies, and the Hidden Self
transcribed by Winter Patriot in August of 2015
NOTE: "Cranks, Conspiracies, and the Hidden Self" was Professor Quassim Cassam's Mind Lecture for 2014, marking the end of his tenure as Senior Research Fellow of the Mind Association. It was presented at the University of Warwick, in Coventry, England, in February of 2014.
You can click here to listen to the lecture courtesy of the University.
The following transcript is not (yet) complete, but I have done my best to present Professor Cassam's ideas as he presented them.
I have added a few notes [in square brackets], mainly section headings and time stamps.
[Introductions]
[0:00] [MODERATOR]
OK. Good evening, everyone. What an absolute pleasure it is for us to welcome back to the PPE [Philosophy, Politics, and Economics] Society Professor Quassim Cassam, who this evening will be giving a very special lecture.
This evening's lecture is the Mind lecture, bringing to a close Professor Cassam's tenure as Mind Senior Research Fellow. The title of the talk is "Cranks, Conspiracies, and the Hidden Self".
Professor Cassam has had a prolific career in Philosophy. Since 2009, he has been a professor here at Warwick, and from 2010 to 2012 was Head of the Philosophy Department.
Like many of us here this evening, he originally studied PPE, starting at Keble College, Oxford, before continuing on to do a B.Phil. and then a D.Phil. in Philosophy, which was supervised for the most part by Sir Peter Strawson.
He was a fellow and a lecturer at Oxford's Oriel and Wadham Colleges, spending 18 years there, and he has subsequently been a professor at UCL [University College London], King's College, Cambridge, and from 2007 to 2008 was Cambridge's Knightbridge Professor of Philosophy, which is the senior professorship at the University.
From 2010 to 2011, he was also President of the Aristotelian Society.
With an interest in Kantian themes, in 1997 his first book, "Self and World," was published, in which he argued for the importance of bodily awareness for self-awareness.
In 2007, his second book, "The Possibility of Knowledge," was published which focused on how-possible questions in Philosophy, and in particular, how knowledge of particular kinds is possible, despite the apparent obstacles to such knowledge.
He now has two forthcoming books, "Berkeley's Puzzle," which was co-authored with John Campbell, and "Self-Knowledge for Humans."
So, without any further hesitation, you will please join me in offering a very warm round of applause to Professor Quassim Cassam.
[2:18] [APPLAUSE]
[2:32] [PROFESSOR CASSAM]
Ok well thanks very much for that introduction. Thanks also to Louis and the PPE Society for organizing this event so brilliantly. I also need to thank the Mind Association, whose Director is here today, for giving me a whole year in which to write a book on self-knowledge.
Paraphrasing the philosopher Barry Stroud, Mind made the book possible. All I had to do was to make it actual.
[3:01] Ok so what I want to do is to start off by telling you a story. Now as I tell you this story, it might not be apparent to you what its philosophical significance is. However, what I want to suggest, once I've told you this story, is that it's significant actually not just for Philosophy but also for Psychology and for Economics.
So the ultimate target of this lecture will be a position in Philosophy which I call "Harvard Rationalism," a position in psychology which is often called "Situationism," and a particular version of Behavioural Economics.
I'll also have something positive to say, hopefully, but mainly I just want to rattle a few cages here, just make trouble for these views.
[Oliver and His Theory]
[3:54] Ok, so here's the story. The story is about a fictional character who I'm going to call Oliver. Now Oliver spends a lot of time surfing the Internet and reading about the events in New York on September the 11th, 2001. Oliver indeed regards himself as something of an expert in the field of what he calls "9/11 Studies".
Now the thing about Oliver is that he has a theory about what actually happened on 9/11. And his theory is this: that the collapse of the Twin Towers on that day was not in fact caused by aircraft impacts and the resulting fires. Oliver thinks that the Twin Towers collapsed as a result of a controlled demolition. His theory is that government agents planted explosives in the building in advance, detonated those explosives just as the aircraft were approaching, and that's what resulted in the collapse of the Twin Towers.
[5:05] That's Oliver's theory.
Now, as many of you will be aware, Oliver's theory about what happened on 9/11 is actually not that unusual. There was a global opinion poll done in 2008, ten thousand respondents. And fewer than half of them believed that al-Qaeda was responsible for the events on 9/11. Fewer than half of them believed that.
So he's not alone. Oliver's not alone. But there's one problem. The problem is that Oliver's beliefs about 9/11 are complete rubbish.
[5:48] Of course, aircraft impacts could, and indeed did, bring down the Twin Towers, and the events on 9/11 were the responsibility of al-Qaeda. There's overwhelming evidence of that.
So a natural reaction to the case of Oliver would be to say, Well, so what? So what? He has a strange view, a conspiracy theory about what happened on that day.
His conspiracy theory happens to be shared by many people across the world. There are many Olivers -- depressingly many Olivers in the world. Perhaps there are even some in this room.
[LAUGHTER]
I mean, statistically, it seems quite likely that there are a few people here who believe Oliver's theory.
[Philosophical Significance]
So what's the philosophical significance of this phenomenon? That's my question.
Well, things start to get interesting, I think, when we ask the following question about Oliver. Why does Oliver believe what he believes about 9/11? Why does he believe it?
[7:03] Now, if you think, as Descartes thought, that we have privileged access to our own minds, then the best possible way of answering the question: "Why does Oliver believe these things?" is to ask Oliver. Who could possibly be better placed to explain why he believes these things than the subject himself?
So you ask Oliver, "Why do you believe this?" So this is how the conversation goes, ok.
As a philosopher, I'm afraid I can't resist using P's and Q's, ok.
And the relevance of this will become clear, but supposing Q is the proposition: "The collapse of the Twin Towers was caused by controlled demolition". That's Q.
And supposing P is the proposition: "Aircraft impact could not have brought down the Twin Towers, and eyewitnesses on the day heard explosions before the towers collapsed."
So you ask Oliver "Why do you believe that Q?"
[8:08] And he says, "Well, I believe that Q because I believe that P. I believe that aircraft impacts couldn't have caused the towers to collapse. That's why I believe that they were brought down by a controlled explosion. I believe that Q because I believe that P."
[8:27] Of course you can ask him further questions, "Well, why do you believe that P?" And he gives you reasons why he believes that P. Now the story that Oliver has, the explanation that Oliver has just given you of his beliefs, is what philosophers call a "rationalizing" explanation.
It's a rationalizing explanation in the sense that Oliver explains his beliefs by reference to his reasons. He represents himself as reasoning from a premise, P, to a conclusion, Q.
[9:03] And his reasoning is not obviously incompetent. His reasoning is not obviously incompetent. He takes P to provide evidence for Q.
Now of course, the problem with that is that most of us realize that he doesn't have any good reason to believe that P, right, but given that he believes that P, he infers that Q. So that's the kind of explanation that Oliver gives. He gives a rationalizing explanation for his beliefs.
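[A schematic of Oliver's inference, added by the transcriber for readers who like the P's and Q's spelled out; the notation is mine, not Professor Cassam's. It records the point just made: the form of the reasoning is unobjectionable, and the weak link is the premise P itself.]

```latex
% Transcriber's sketch of the structure of Oliver's rationalizing explanation.
% "supports" marks a claimed evidential relation, not a strict entailment.
\[
  \frac{P \qquad P \ \text{supports}\ Q}{\therefore\; Q}
\]
% The inference pattern is fine as far as it goes; the trouble is that
% Oliver has no good reason to believe the premise P in the first place.
```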
[Is Oliver Irrational?]
[9:31] Now if that's right, then I think there's one temptation which we need to resist when we think about cases like Oliver. The temptation that we need to resist is to say, "Oliver is irrational."
Here's why I think we shouldn't say that. I mean, obviously a lot depends on what you take "rational" to mean. There's a kind of very broad, loose conception of "irrational" on which "irrational" just means something like "foolish" or "stupid".
[10:04] That's one reading of "irrational", so that's actually [...] it's something that Derek Parfit says: foolish, stupid, and crazy.
Well, maybe Oliver's belief is irrational in that sense, but there's a much stricter, and I think, more useful notion of irrationality, on which Oliver's beliefs are not irrational.
So this stricter notion of irrationality is one that, for example, Scanlon defends, in his book, "What We Owe To Each Other".
So the basic idea is this:
[10:38] An attitude of yours is irrational, if and only if you hold it despite recognizing reasons -- good reasons -- for not holding it. Ok, so "irrational" in this strict sense means "contrary to your own reason".
[10:57] Ok, so that can apply not just to beliefs but to actions, intentions, and so on. So supposing you recognize that there are extremely powerful and compelling reasons for you not to smoke, but you still smoke. That might be a case of irrationality. But that's irrationality because it's a kind of inconsistency, right, it's a kind of inconsistency.
[11:20] Now of course in that sense Oliver isn't irrational. It's not that Oliver believes things which by his own lights he doesn't have good reason to believe. He's certainly not irrational in that sense. There are in fact rational linkages between the various propositions that he believes. He believes that Q because he believes that P. He takes himself to have good reasons to believe that Q. And he believes that Q on the basis of those reasons.
[11:51] So he's not believing something in the face of his own reason. He's not believing something that is contrary to his own sense of what he has reason to believe. So in that sense of "irrational", Oliver is not irrational. He might be foolish, but he's not irrational. His belief might be foolish but it's not an irrational belief.
It's a false belief. Of course it's a false belief. But saying that a belief is false is not the same as saying that it's irrational. So what is going on with Oliver, in that case? How do we make sense of Oliver if not by saying that he's irrational?
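[A small illustration added by the transcriber, not part of the lecture: the strict, Scanlon-style notion of irrationality just described, written out as a toy predicate. All the names here are my own invention; the only point encoded is that an attitude counts as strictly irrational only if it is held despite counter-reasons the subject himself recognizes, which is why Oliver's false belief does not qualify.]

```python
# Transcriber's sketch only -- a toy encoding of the strict notion of
# irrationality discussed above, not Professor Cassam's formulation.
from dataclasses import dataclass, field


@dataclass
class Subject:
    attitudes: set = field(default_factory=set)  # beliefs, intentions, etc.
    # counter-reasons the subject HIMSELF recognizes, keyed by attitude
    recognized_counter_reasons: dict = field(default_factory=dict)


def strictly_irrational(s: Subject, attitude: str) -> bool:
    """Strictly irrational: an attitude held despite good reasons the
    subject himself recognizes for not holding it."""
    return attitude in s.attitudes and bool(s.recognized_counter_reasons.get(attitude))


# Oliver believes Q (controlled demolition) on the basis of P, and by his own
# lights recognizes no good reason against Q -- so he is not strictly
# irrational, however false or foolish the belief is by our lights.
oliver = Subject(attitudes={"P", "Q"})
print(strictly_irrational(oliver, "Q"))  # False

# The smoker of the earlier example, who admits there are compelling reasons
# to quit yet carries on, IS irrational in this strict sense.
smoker = Subject(
    attitudes={"intend to keep smoking"},
    recognized_counter_reasons={"intend to keep smoking": ["compelling health reasons to quit"]},
)
print(strictly_irrational(smoker, "intend to keep smoking"))  # True
```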
[Intellectual Character]
[12:40] Well, supposing now the conversation continues, and you discover that Oliver not only believes that al-Qaeda was not responsible for 9/11, he also believes that Lee Harvey Oswald was not solely responsible, or possibly not responsible at all, for the assassination of President Kennedy. He believes that Princess Diana was killed by a hit squad hired by Prince Philip. So he has a whole lot of conspiracy theories.
Then you talk to Oliver's friends, and you say, "Well, you know, what's this Oliver character like?" And they tell you a whole lot of stuff about Oliver.
[13:23] They tell you a whole lot of stuff about his character. They say things like "Well, he's a bit sloppy, he's quite gullible, he's careless in his thinking." Ok.
Now, of course, what Oliver believes about 9/11 starts to make some kind of sense. It makes sense because you can now see what Oliver believes about 9/11 as part of a pattern -- a pattern of beliefs or belief-formation that Oliver exemplifies. So one way of capturing this would be to introduce the notion of character. Of character.
Now of course when people talk about character, sometimes they mean "moral character", so they mean things like, you know, generosity and kindness, something like that. I'm not talking about character in that sense. I'm talking about what is sometimes called "intellectual character" or "epistemic character". So here's the suggestion:
[14:26] One way of making sense of cases like Oliver is to draw on this notion of intellectual character. So what do I mean by this?
By "intellectual character" I mean "dispositions to form beliefs and reason and enquire in particular ways".
Now intellectual character traits can be good or they can be bad. So the distinction we need is the distinction between, on the one hand, what I'm gonna call "epistemic virtues," and on the other hand, "epistemic vices." [...]
[15:04] So epistemic virtues would include open-mindedness, intellectual humility, tenacity, thoroughness, carefulness, fair-mindedness, determination, intellectual courage, and inquisitiveness.
[15:25] Epistemic vices would include things like negligence, idleness, cowardice, conformity, carelessness, rigidity, gullibility, prejudice, obtuseness, lack of thoroughness, and closed-mindedness.
[15:38] So the proposal is this, that at least in this particular case, and maybe in other cases too, it's genuinely illuminating to explain why Oliver believes what he believes about 9/11 in terms of his intellectual character, right, so crudely you might say: He believes these things because he's gullible. He believes these things because he's careless. He believes these things because he's intellectually negligent. Ok.
[Two Kinds of Explanation]
[16:09] Those are "character" explanations of his beliefs, Ok, and the point I want to make is this: Character explanations are not rationalizing explanations. They're not rationalizing explanations. So, right, if you go back to the belief that Q, that the Twin Towers were brought down as a result of controlled demolition:
If the question is: "Why does Oliver believe that Q?" you now have two very different answers to that question. The rationalizing answer says: Oliver believes that Q because he believes that P, and because P supports Q. That's the rationalizing answer.
The non-rationalizing answer says: Oliver believes that Q, and indeed believes that P, because he's gullible, because of the kind of person that he is. He's that kind of person.
[17:05] That's a non-rationalizing explanation because, of course, being gullible is not a reason to believe anything, right. Being gullible explains why you believe what you believe, but it's not a reason for you to believe what you believe. Ok.
So you have these two kinds of explanation: character explanations, which are non-rationalizing, and rationalizing explanations. And the interesting thing about these two explanations is the following:
[17:32] The rationalizing explanation is, of course, the one that Oliver himself gives. Of course, of course Oliver will say, "I believe that Q because of other things I believe that support that belief."
The non-rationalizing explanation is not one that Oliver gives. It's one that we give, right, from the outside. It's a third-person explanation rather than a first-person explanation.
[Oliver's Self-Ignorance]
[17:59] And this brings me to the next point I want to make. The explanation of Oliver's beliefs in terms of Oliver's own character is not an explanation which Oliver himself could possibly accept.
I mean, think about it, right? You might say, "Oliver believes that Q because he's gullible." But Oliver is presumably not going to say, "I only believe that Q because I'm gullible."
Ok, so those of you who do philosophy will recognize it as a version of Moore's Paradox. This is a version of Moore's Paradox.
Ok, the thought is this: that with respect to the character determinants of his belief, Oliver is himself ignorant. Oliver doesn't realize that he's gullible. Oliver doesn't realize that he believes these things because he's gullible. Oliver doesn't realize that he's negligent, or careless. He doesn't realize that he believes these things because he's negligent, or careless.
Oliver is not going to think, "I only think these things because I'm useless."
[19:13] Oliver's just not gonna think that. He's not gonna think, "I only think these things because I'm negligent."
Ok, so in a certain sense, Oliver is self-ignorant. He's self-ignorant. He's self-ignorant in the sense that there is an answer to the question, "Why does he believe what he believes?" There's an answer to that question that he doesn't know.
[19:37] You might know he believes what he believes because he's gullible. He doesn't know that.
Now this is an example of a particular kind of self-ignorance. Ok, now when I talk about self-ignorance, let me just explain what I mean. Sometimes, indeed very often, when philosophers talk about self-knowledge, they mean knowledge of what you believe, knowledge of what you want, knowledge of what you hope, knowledge of what you fear.
Now I'm not suggesting that Oliver lacks self-knowledge in that sense. Oliver knows perfectly well what he believes about 9/11, right, I mean, Oliver knows perfectly well that he believes al-Qaeda didn't do it. So he's not self-ignorant in that sense.
The self-ignorance which Oliver exemplifies is not ignorance of what he believes, but ignorance of why he believes what he believes, right. And it's a particular kind of explanation which Oliver doesn't know or accept, an explanation in terms of his character traits.
Now ignorance in this sense, self-ignorance in this sense, is a pervasive phenomenon, as those of you who've read any empirical psychology will know. So let me just give you a couple of other nice examples of self-ignorance.
[Empirical Examples of Self-Ignorance]
[20:59] So here's one example. The bystander effect. The bystander effect. So the bystander effect is this: people are increasingly less likely to help others in distress, as the number of bystanders increases. That's the bystander effect.
There are all these studies of people in a room, being played the sounds of what sounds like someone having an epileptic fit in the next room, right. And the studies show conclusively that the likelihood that you will go and help that person varies according to how many other people there are in the room with you, right. The more people there are in the room with you, the more bystanders, the less likely you are to go and help the person in the next room.
So that's an interesting phenomenon, right. Because if I'm trying to explain, "Why didn't she go and help?", I might say, "Well, she didn't go and help because actually there were all these other bystanders around." That's what explains why she didn't help.
But if I ask you, "Why didn't you help?" that's not the answer that you give. In these studies, everyone who was asked denied that the number of bystanders had any impact on their decision to help or not help. Right, so that's a form of self-ignorance. People are being influenced by something, in this case the number of bystanders, without realizing they're being influenced.
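[An illustrative toy model added by the transcriber, not data from the lecture or from the original bystander studies. It only shows the qualitative shape of the effect just described: if felt responsibility is assumed to be diluted across onlookers, the chance that any one person intervenes falls as the number of other bystanders rises. The 1/(1+n) assumption and the base rate are mine.]

```python
# Transcriber's toy model only -- not the experimental data. It encodes a
# simple "diffusion of responsibility" assumption to show the direction of
# the bystander effect: more bystanders, lower chance that any one helps.

def p_helps(n_other_bystanders: int, base_rate: float = 0.85) -> float:
    """Probability that a given subject helps, assuming willingness to act
    scales like 1 / (1 + number of other bystanders)."""
    return base_rate / (1 + n_other_bystanders)


if __name__ == "__main__":
    for n in (0, 1, 2, 4, 8):
        print(f"{n} other bystanders -> P(helps) ~ {p_helps(n):.2f}")
```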
Here's another case:
[22:27] This is the famous pantyhose experiment done by Nisbett and Wilson, several years ago. So in the pantyhose experiment, Nisbett and Wilson went off to a shopping mall and asked people to assess the quality of items of clothing, right. So people were presented with four identical pairs of nylon stockings. Identical pairs of nylon stockings. And they were asked to say which one they thought was the best pair. Which one did they think was the best pair. So let me read to you what Nisbett and Wilson say about this:
[23:04] "Subjects were asked to say which article of clothing was the best quality. And when they announced a choice, they were asked why they had chosen the article they had. In fact, there was a pronounced left-to-right position effect, such that the right-most object in the array was heavily over-chosen."
Don't forget, the stockings were identical.
[23:29] "For the stockings, the effect was quite large, with the right-most stockings being preferred over the left-most by a factor of almost 4 to 1. When asked about the reasons for their choices, no subject ever mentioned spontaneously the position of the article in the array. And when asked directly about the possible effect of the position of the article, virtually all subjects denied it, usually with a worried glance at the interviewer, suggesting that they felt that either they'd misunderstood the question, or were dealing with a madman."
Classic example, classic example of self-ignorance. Not knowing why you made the choice that you made, but you make up this story about the supposed unique qualities of the pair that you chose, even though the pair that you chose is absolutely identical to all the other pairs. The thing that was influencing you was the position. The position. But when asked, "Well, is that what you think was influencing you?" they all deny it.
[Self-Ignorance / Oliver Summary]
Now of course these cases of self-ignorance are slightly different from the case of self-ignorance I was discussing.
[24:43] What I've just told you about, in the pantyhose case and the bystander case, these are cases where your beliefs or your choices are being influenced by what you might call external factors of which you have no knowledge.
In the Oliver case, to the extent that his beliefs are being influenced by his character, it's not external factors but internal factors. Internal factors. Nevertheless, the basic phenomenon is strikingly similar. The basic phenomenon is self-ignorance.
[25:22] You make choices, you have beliefs, you have desires. You know what your choices are, you know what your beliefs are, you know what your desires are, but in a certain important sense you don't know why they are as they are. That's what I mean by self-ignorance.
[25:39] Ok so let me just sum up the three main features of the Oliver case, Ok, and then move on to what the significance is. [...]
The first feature of what I'm saying is that Oliver is certainly not irrational in the strict sense. He's not irrational in the strict sense.
Second feature: Oliver's beliefs about 9/11 are to a significant extent a reflection of his intellectual or epistemic character.
And thirdly, he knows what he believes, but in an important sense, he doesn't know why he believes what he believes.
[26:22] Now those claims strike me as obviously correct -- you should never say that in a philosophy lecture -- but they strike me as obviously correct, or, failing that, at least highly plausible.
[Re: Harvard Rationalism]
So why do I think that these claims cause problems for positions in Philosophy, Psychology, and in Economics? So let me now expand a little bit on that.
[26:48] So my philosophical target is a position which I call "Harvard Rationalism". It's Harvard Rationalism because it's a position made famous by a couple of people who are currently teaching at Harvard, someone called Richard Moran who published an extraordinarily influential, and indeed, I think, brilliant, book called "Authority and Estrangement," published in 2001, and Matthew Boyle, who's a younger person at Harvard, who's recently been publishing some great papers -- some great papers -- on this topic.
[27:23] The sense in which Harvard Rationalists are Rationalists is this: they think of us, they think of human subjects, as fundamentally in the space of reasons. They think of our beliefs and other attitudes as an expression of our reasons, as an expression of our rationality, right, so the basic idea that they have is that our beliefs and other attitudes are, on the whole, as they rationally should be -- a rather optimistic assumption, you might think.
Now there's a particular claim that Harvard Rationalists make which I want to focus on. And the particular claim they make is that reasoning, or what they sometimes call deliberation, is, for us, a fundamental source of self-knowledge. Reasoning, or deliberation, is a fundamental source of self-knowledge.
Ok now here's a quotation from Boyle that encapsulates that view, Ok so I'm going to read you this quotation and as I read it, I want you to think about Oliver, Ok.
Think about Oliver as I read this: [...]
[28:31] Boyle says,
"If I reason 'P, so Q,' this must normally put me in a position not merely to know that I believe that Q, but to know something about why I believe that Q, namely, because I believe that P and that P shows that Q. Successful deliberation normally gives us knowledge of what we believe and why we believe it."
That's the claim: Successful deliberation normally gives us knowledge of what we believe and why we believe it. So in the case in which you reason, "P, therefore Q," the thought is, that in that case, you know that you believe that Q because you believe that P, right, in the normal case.
[29:25] Now, of course, if you apply this to Oliver: Oliver reasons "P, so Q." Oliver reasons in exactly the way that Boyle is describing. Oliver is making just the kind of rational transition that Boyle characterizes.
But does that give Oliver knowledge of why he believes that Q?
Well, I'm not completely dismissing the force of rationalizing explanations. But there's a very important aspect of the Oliver case which is completely missing from the Harvard Rationalists' story.
What's missing from this is the influence of non-rational factors on Oliver's beliefs. In particular what's missing is any reference to the role of Oliver's character in determining what he believes, or, indeed, other internal or external factors.
[30:22] So the story you get from the Harvard Rationalists is the story of this perfect calculating machine, making rational transitions from one proposition to another, and thereby knowing why he thinks what he thinks, in terms of these rational transitions.
What completely goes missing from this is any reference to non-rational influences on belief formation. These Harvard Rationalists are in a way rather Cartesian, right. What they think is that the mind is in a certain sense transparent to yourself. They think that, insofar as you are able to engage in reasoning, you are thereby able to know why you think what you think.
[31:06] Ok, and cases like Oliver seem to put pressure, seem to put pressure on that view. Now of course you might say, "Oliver's just a freak, Oliver's just a kind of freak" -- hence, why should we worry? I mean, Boyle says "normally" in his formulations.
It's not clear to me that that's right. It's not clear to me that that's right at all. It seems to me that actually a realistic account of human belief formation is going to be one that has to recognize the influence of a wide variety of non-rational influences on our beliefs.
Not just bystander effects and positional effects, but things like character, for example, and things like emotions. Think about the role of the emotions, the influence of emotions on belief formation. Hoping, believing, fearing are all tied, are all connected with one another, as Spinoza actually recognized.
So it seems to me that Harvard Rationalism is problematic at least in part because it misses out on these very important non-rational aspects of attitude formation.
[32:18] I mean historically, I think, among the great dead philosophers, I think the one who has, and this is based on my cursory knowledge of him, the one who has put the greatest emphasis on this was Nietzsche. I mean Nietzsche had a lot to say about the non-rational influences on our beliefs and desires, particularly in the case of desires.
Ok, so that's the point I want to make about Oliver-type, Oliver-type cases. Ok, what I hope to have persuaded you of is that in those cases, and indeed in many other cases, there are all sorts of factors that are influencing our beliefs which go well beyond anything that a Harvard Rationalist can explain. You can't explain everything just in terms of reason.
[Re: Situationism in Psychology]
What about Situationism in Psychology? What's that?
[33:05] So Situationism: actually a good illustration of Situationism is the Bystander Effect. Situationists think the following: that if you want to explain why we behave in the ways that we behave, the best explanation will be one in terms of the situations in which we find ourselves. It's no good explaining our behaviour by reference to our character. That's Situationism.
Ok so Situationists would say things like this: If you're trying to explain why in a given situation you assisted someone in distress, whereas the person next to you didn't, the explanation is not in terms of some character trait that you have that your neighbour doesn't have. The best explanation is likely to be something much more prosaic: the number of bystanders who were present, for example.
Or there's the famous Milgram experiment, where people were conned into believing they were administering electric shocks to an unseen victim in the next room. So there was this device with buttons on it marked "100 volts", "150 volts", "extreme pain", "extremely dangerous", and then "XXX" at the top of the dial, right. And they were played sounds of someone apparently in excruciating pain as they went up, as they went up the dial.
So they were encouraged by the experimenter to go higher, to deliver greater and greater electric shocks to this unseen victim in agony in the next room. And in the Milgram experiment, basically everybody, I mean some very large proportion, I think 68 percent of subjects, were willing to go all the way up to the top of the scale, right, in fact to the point where the screaming person in the next room fell completely silent.
[35:00] So Situationists are people who say, "Well why did all those people do that? Did they do that because of some character trait that they all had in common? Well, well no," right. The explanation that Situationists offer is that they behaved in these ways because of the situation that they found themselves in.
So the basic idea of Situationism is that explanations of action in terms of character are no good. Character is explanatorily redundant. It's always the situation.
And from that, some Situationists have concluded, "There is no such thing as character." They think that the whole idea of character is just a myth. Ok so here's a clear statement of that thesis. [...] This is actually a philosopher, not a psychologist, but it's a philosopher who's very sympathetic to Situationism, so Gil Harman, who's a professor at Princeton, says:
[35:58] "There is no reason to believe in character traits, as ordinarily conceived. We need to convince people to look at situational factors and stop trying to explain things in terms of character traits."
That's Situationism.
Now I think that Situationism has considerable force. It's a serious position, I think, in psychology, and many of the points that Situationists make are points that deserve to be taken extremely seriously.
However, when you think about something like Oliver, someone like Oliver, it's actually very hard to make sense of what's going on in Oliver-cases, without positing explanatory character traits.
So if you look at the list of Epistemic Vices, to say that there is literally no such thing as character would be to say that there is no such thing as negligence, or idleness, or gullibility; right, these things aren't real because they don't explain anything.
But that view now starts to -- I hope you'll agree -- starts to look ludicrous. It's very hard to make sense of what's going on in Oliver-type cases without supposing that he does have character traits, distinctive character traits, which do help to explain why he believes what he believes.
[37:29] So I think Situationists are right to this extent: they're right to be suspicious of blanket explanations of human actions in terms of moral character traits. I think that's right.
But when it comes to these sorts of rather fine-grained intellectual character traits, it's very hard to do without them when we try to explain what's going on in cases like this. That's why I think the Oliver case, and similar cases, are a challenge for Situationists in Psychology.
Ok. Lastly I want to say something about Behavioural Economics. This is the PPE Society, so I feel I need to say something about Economics.
[Re: a Position in Behavioural Economics]
So what is Behavioural Economics? What is it?
Well I think I can do no better than to quote two very distinguished Chicago economists, Levitt and List, in an article which they published in Science, three or four years ago. [...] So this is the Levitt and List characterization of Behavioural Economics.
[38:36] "The discipline of Economics is built on the shoulders of the mythical species Homo Economicus. Unlike his uncle, Homo Sapiens, Homo Economicus is unswervingly rational, completely selfish, and can effortlessly solve even the most difficult optimization problem.
This rational paradigm has served Economics well, providing a coherent framework for modeling human behaviour. However, a small but vocal movement has sought to dethrone Homo Economicus, replacing him with someone who acts more human.
This insurgent branch, commonly referred to as Behavioural Economics, argues that actual human behaviour deviates from the rational model in predictable ways. Incorporating these features into economic models, proponents argue, should improve our ability to explain observed behaviour."
Right, so the basic idea is this: that there's a contrast between this ideal, this mythical, this super-rational, super-selfish Homo Economicus and real human beings. Right, so if you're trying to figure out what's wrong with Economics, one thing that's wrong with it, on this view, is that it's historically focused, really, on Homo Economicus. It hasn't tried to explain human economic behaviour, bearing in mind all the respects in which Homo Sapiens are different from Homo Economicus.
[40:00] Now that seems to me to be a very powerful and intellectually respectable position in economics. I'm not especially competent to comment on it, but it seems to me to have quite a lot going for it. However, as some of you will be aware, there's a further, there's a further step which some Behavioural Economists have taken.
And that further step is to claim not just that human beings are not Homo Economicus but to claim that human beings are actually irrational. Ok so some of you will have come across what Amazon assures me is a best-seller by a Behavioural Economist called Dan Ariely.
The book is called "Predictably Irrational" and you can guess what the thesis of the book is. Humans are predictably irrational. And of course if you approach things from this kind of Ariely perspective, you might think, "Well, Oliver-cases are the perfect illustration of this. Perfect illustration of human irrationality." However, however, it seems to me that we shouldn't say that at all.
The respects in which Homo Economicus and Homo Sapiens are different from one another do not constitute respects in which humans are irrational. Not being Homo Economicus does not make you irrational, it just makes you not Homo Economicus.
And indeed when you read, when you read books like "Predictably Irrational," I mean, when I first read that, I thought, "Well, obviously the first thing I want to know is: what does he mean by 'irrational'?" And that turned out to be a surprisingly difficult question to answer, despite reading the book fairly carefully. In the end it turns out that what people like Ariely really mean by "irrational" is actually "self-ignorant". That's actually what they mean.
[42:05] So the subtitle of "Predictably Irrational" is "The Hidden Forces That Shape Our Decisions" and that's actually Ariely's thesis. His thesis is that in fact our decisions are shaped by and influenced by all sorts of factors of which we are unaware.
[42:22] Right, so one of the examples that he gives is a subscription for "The Economist", right, where you've got "Internet Only" at a certain price, "Print" at a certain price, and "Print and Internet" at a certain price. Right, and then it turns out that we were being influenced by one of these three choices in ways that we weren't aware of. But that doesn't make us irrational, right. Being self-ignorant does not make you irrational.
So it seems to me that these rather exaggerated populist versions of Behavioural Economics need to be resisted. They represent themselves as talking about irrationality but what they're actually talking about is self-ignorance.
Self-ignorance is a genuine and important phenomenon, but it's not the same phenomenon as irrationality.
[What Philosophy has to Learn from Behavioural Economics]
However I do think, I do think that Philosophy actually does have something very important to learn from Behavioural Economics, and I want to end by saying just what I think Philosophy has to learn from it.
One of the ideas that I explore in the book that I've just been writing is the following idea: that just as neo-Classical Economics has concentrated on Homo Economicus, Philosophy has in fact concentrated very much on what I call Homo Philosophicus.
Right so when Philosophers try to explain human knowledge, or some other phenomenon, they very rarely consider human beings as we actually are.
Rather, what they have in mind is an incredibly Epistemically well-behaved citizen.
Right so Homo Philosophicus is a model Epistemic citizen who only believes what he has reason to believe; when he encounters evidence against his beliefs, he abandons his beliefs; and so on.
[44:16] Right, well, we're not like that. We're not like that.
There's a large number of disparities between Homo Sapiens and Homo Philosophicus which correspond to the disparities between Homo Sapiens and Homo Economicus, and one of the things I try to do in the book is to look at these disparities and try to consider what their significance is for Philosophical accounts not just of self-knowledge, but Philosophical accounts of all sorts of other things.
So the basic idea is this: if you want to give a Philosophical account of self-knowledge, you need to make sure that the account that you give is not an account of self-knowledge for Homo Philosophicus, right, who can come to know his beliefs by engaging in rational deliberation.
It would be nice if that were true of us and no doubt it is true of us some of the time, but it's also not true of us a lot of the time.
[Conclusion]
So what the Philosophy of Self-Knowledge should be trying to do is give an account of what I call "A Theory of Self-Knowledge for Humans."
And when you try to think about the human predicament, I think the thing that is striking is the very opposite of the thing that struck Descartes.
The starting point for Cartesian accounts of self-knowledge is the ease with which we get self-knowledge, almost the unavoidability of self-knowledge. That's the Cartesian view of self-knowledge.
On that view, self-ignorance is just not a problem. It's just not an issue. I mean self-ignorance is not an issue in the Cartesian tradition partly because, I guess, in that tradition, there isn't even the possibility of self-ignorance.
What I've been talking about in this lecture is the prevalence and importance and depth of particular forms of self-ignorance which require considerable work to overcome, and that's really what the Philosophy of Self-Knowledge for Humans should be focused on.
What it should be doing is recognizing that self-knowledge is for us a major and difficult cognitive achievement and it requires considerable cognitive effort to achieve it.
So we need to get away from this idea that important interesting self-knowledge is easy to get.
It isn't. It's hard. That's it.
[46:46] [APPLAUSE]
[Questions from the Audience]
[Maybe Oliver Doesn't Know what an Explanation Is]
[MODERATOR]
Ok so Professor Cassam has agreed to take just a few questions. So who would like to go first?
[AUDIENCE]
Yeah, thank you for the talk. That was very interesting. One thing I'd like to quiz you a little more on is on the Oliver situation.
How much do we really need to refer to these kinds of intellectual virtues or deficiencies, as you refer to them? Could we not explain it in terms of Oliver not having an understanding of what an explanation is and what constitutes an explanation, in the same way as, if I'd watched a video about global warming denial, for instance, and I had no idea what an explanation is, I might believe it. It's got nothing to do with our ability or otherwise [unintelligible]
[48:03] [PROFESSOR CASSAM]
Ok well it's a very interesting question. I don't know if you've come across this but there's a book by the journalist David Aaronovitch. The book is called "Voodoo Histories." It's a discussion of a whole range of conspiracy theories. Now one of the conspiracy theorists whom he discusses is a philosopher called Richard Popkin.
Now Popkin wrote an incredibly influential, and important, and indeed good book on the history of philosophy, the history of philosophy since Descartes. Now one of Popkin's side interests was the assassination of JFK, right, so three or four years after the JFK assassination, Popkin published a book, the title of which is "The Second Oswald."
[48:50] Right, so in that book Popkin defends the view that in fact Oswald wasn't the lone assassin of JFK. Or in fact I think he thinks that Oswald didn't fire the fatal bullets at all. In fact, there was someone physically similar to Oswald, the second Oswald, who was responsible.
Now, that's a ludicrous theory, right, about the JFK assassination. But if you were to say, "Why, Professor Popkin, do you believe these things?" or if we were trying to explain why he believes these things, I think it would be a bit of an ask to say, "Popkin doesn't really understand what an explanation is."
I mean, I mean, Popkin is not a stupid man, right. I mean, Popkin writes about all sorts of abstruse philosophical topics, indeed writes about topics like explanation, right, so saying that it's that kind of failure, that kind of failing, which explains what's going on, at least in his case, seems manifestly inadequate.
[49:53] Right, so I'm not suggesting -- I'm not denying that there are, that there may indeed be, people whose defects can be explained in the way that you're suggesting. What I'm saying is that that can't be the whole story.
There are, as the Popkin case illustrates, other things going on in those cases.
[Obstacles to Self-Knowledge]
[AUDIENCE]
Yes I wanted to ask if you could talk about the nature of the difficulty that's involved in self-knowing. Because it seems to me from what you said there are two possible sources of difficulty. One is just the nature of character, that character is intrinsically difficult.
But then you connected character with the third-person perspective, so the other possible difficulty is coming to know ourselves as others know us. And I wonder if you could say a bit: Do you think they're connected in some way?
[50:52] Is character the kind of thing that we can only know in and through others? Or just if you could turn some light on the relation between those two.
[PROFESSOR CASSAM]
So one distinction that I want to draw, just to fill out the story a bit is the distinction between what I call "trivial self-knowledge", right, knowing that you believe that you're wearing socks, a perfectly trivial piece of self-knowledge, versus what I call "substantial self-knowledge" which would include knowledge of such things as your character, perhaps knowledge of some of your emotions.
So the positive account of self-knowledge that I want to defend is that self-knowledge in those cases is inferential. And it's based on evidence, ok. It's based on evidence.
So when you think about why someone might fail to have self-knowledge, knowledge of his character in these cases, you actually have a range of explanations. Ok so one explanation would be a kind of motivational explanation, where you say, "Perhaps there are aspects of your character, as it were, you avert your eyes from, because they're embarrassing or distressing to you."
[52:02] Another explanation is that you don't have, you don't have sufficient evidence to draw those conclusions, right. Maybe you've never been put in a situation where certain aspects of your character are manifested.
Yet another explanation is that maybe you're self-ignorant in these cases because, although you have the right evidence, you draw the wrong conclusions from it. So these are all examples of particular kinds of obstacle or cognitive failing which might prevent you from coming to know why you are, coming to know what kind of person you are.
[still some work to be done here!]
[...]
[If Oliver was Giving a Lecture ...]
[AUDIENCE]
[1:02:12] Is it at all worrying that if Oliver was giving a lecture, he could have given almost exactly the same lecture and accused you of the epistemic vices of being gullible and believing everything the government tells you, etc. etc., and made almost exactly the same points as you do?
[PROFESSOR CASSAM]
Well the answer is yes and no. It's not worrying in the sense that if Oliver were to do that, he would certainly be going in for the same style of explanation that I was going in for, right, and to that extent he would be right.
I mean to that extent he would be right, and of course this is what's so threatening about these cases, right, that actually, I mean, for any of us, if you step back and ask yourself, "Well, why do I fundamentally think that?" and somebody says, "Well, you know, there are all these non-epistemic explanations" -- that's a sense in which asking these questions about why you believe what you believe can be such an undermining exercise.
[1:03:06] So insofar as Oliver does the same number on me, I don't have any objections, right, at least insofar as he's going in for that style of explanation. My objection is of course that he's wrong!
[One of the Things That's Actually Really Mysterious]
[AUDIENCE]
[1:03:19] [inaudible] ... [unintelligible] ... I mean you can't be a good mathematician without being rigorous ... [inaudible] ... [unintelligible] ... could require an addition ... [unintelligible] ... [inaudible] ...
[PROFESSOR CASSAM]
[1:04:02] I think that's exactly right and I think that's a really really important point. I mean one of the things that's actually really mysterious about actual conspiracy theorists is that, as you say, many of them are highly educated, highly intelligent, highly competent individuals who don't display any of these epistemic vices in lots of the areas in which they live their lives, right. So clearly someone who says, "Well, you know, he's gullible" or "He's obtuse" or "He's careless" is going to have to contend with the fact that he isn't, all the time, right.
So it might be that one's going to have to, even if one's going in for these character explanations, one's going to have to come up with a much more fine-grained explanation, in those terms. I mean, I don't myself have a developed theory of that to offer, beyond just making that concession. But it is very very instructive actually, reading more about people who have these belief systems. And actually, just saying, just saying blankly, "Well, they're gullible or stupid," is just not gonna cut it, right.
[Isn't There a Danger?]
[AUDIENCE]
Isn't there a danger in seeking to explain the views of people other than your own based on these epistemic virtues and vices, in the sense that you might look at someone else's reading of the evidence and compare it to your own world view, find it deficient, and therefore fail to actually engage with their arguments if you can write them off as "They're gullible", which is manifestly wrong?
[PROFESSOR CASSAM]
[1:06:09] Well I think that not engaging with their arguments is not something that I'm recommending. I mean I think that actually, if I were confronted with a real live Oliver, it wouldn't be enough just to say to him, "Well, you're gullible." Right, I mean clearly you'd have to try to draw his attention to the evidence, the very strong evidence that in fact it was al-Qaeda that did it, and it was the fires and the aircraft impacts that brought the towers down.
Now of course it might be, I guess, that what he's going to do is to run the same number on me that Colin was suggesting, saying, "Well, you're the one who's gullible. You believe the 9/11 Commission Report, but that's all part of the grand conspiracy." And in a way there's no answer to this. I mean, the only thing you can ever do with someone like that is to just continue the conversation up to the point where it seems useful to do so.
[1:07:07] But there would come a point when it's no longer useful to do so. And at that point, really, all you can do is to walk away. Right, and then you can say to someone else, "Well, look, I just gave up on that person because, you know, what do you do with someone like that?" Right, and that of course is what we very often say about other people: "What do you do with someone like that?" But that's not a substitute for engaging with their wacky views. I mean it's actually quite important that people who go around spouting these things, that they're actually challenged.
[One Final Question]
[AUDIENCE]
I have a question. I was wondering about your response to the questions about real conspiracy theorists. [...] you might think that a Situationist would be able to come in and say "They do have all these epistemic [vices], but you can explain it in terms of their situation [...] you often get the feeling that they're trying to rationalize how their government could have gone to war in Iraq. Well, that was an evil thing to do, and our government was evil. Then everything makes more sense, in a way, rather than an explanation [...]
[PROFESSOR CASSAM]
[1:08:22] Yeah, I'm not sure that that's what Situationists mean by situations, right. I mean what you're describing is the pursuit of a certain kind of rational intelligibility that these people are after. I mean I'm sympathetic to what you're saying to this extent: I think that Situationists are onto something very important. Right, I mean what they're onto is the idea that we are actually prone to try to explain things in terms of character when very often there's a better explanation in terms of situations.
To that extent I think they're right, so this is certainly the famous fundamental attribution error of always trying to explain things in terms of character traits when very often the situation will explain. Just, uh, explain better.
But Situationists, at least in the sort of Harman mold, then take the further step of saying there is no such thing as character. That further step is just unnecessary and just seems to me completely bizarre, right. I mean a sensible position in this area will be a position that combines the good insights of Situationism with the good insights of what I call Vice Epistemology in coming up with an explanation of what's going on.
I mean it's no more acceptable to dismiss the importance of situations than it is to say there's no such thing as character. Clearly they're both part of a complete explanation.
[MODERATOR]
Ok then, Professor Cassam, it just remains to say that on behalf of the PPE Society and all of us here this has been an absolute pleasure, so thank you very much indeed.
[APPLAUSE]