We introduce Game Theory by playing a game. We organize the game into players, their strategies, and their goals or payoffs; and we learn that we should decide what our goals are before we make choices. With some plausible payoffs, our game is a prisoners' dilemma. We learn that we should never choose a dominated strategy; but that rational play by rational players can lead to bad outcomes. We discuss some prisoners' dilemmas in the real world and some possible real-world remedies. With other plausible payoffs, our game is a coordination problem and has very different outcomes: so different payoffs matter. We often need to think, not only about our own payoffs, but also others' payoffs. We should put ourselves in others' shoes and try to predict what they will do. This is the essence of strategic thinking.
Strategies and Games: Theory and Practice (Dutta): Chapter 1, Sections 1-3
Strategy: An Introduction to Game Theory (Watson): Chapter 1
Game Theory: Lecture 1 Transcript
Professor Ben Polak: So this is Game Theory Economics 159. If you're here for art history, you're either in the wrong room or stay anyway, maybe this is the right room; but this is Game Theory, okay. You should have four handouts; everyone should have four handouts. There is a legal release form--we'll talk about it in a minute--about the videoing. There is a syllabus, which is a preliminary syllabus: it's also online. And there are two games labeled Game 1 and Game 2. Can I get you all to look at Game 1 and start thinking about it. And while you're thinking about it, I am hoping you can multitask a bit. I'll describe a bit about the class and we'll get a bit of admin under our belts. But please try and look at--somebody's not looking at it, because they're using it as a fan here--so look at Game 1 and fill out that form for me, okay?
So while you're filling that out, let me tell you a little bit about what we're going to be doing here. So what is Game Theory? Game Theory is a method of studying strategic situations. So what's a strategic situation? Well let's start off with what's not a strategic situation. In your Economics - in your Intro Economics class in 115 or 110, you saw some pretty good examples of situations that were not strategic. You saw firms working in perfect competition. Firms in perfect competition are price takers: they don't particularly have to worry about the actions of their competitors. You also saw firms that were monopolists and monopolists don't have any competitors to worry about, so that's not a particularly strategic situation. They're not price takers but they take the demand curve. Is this looking familiar for some of you who can remember doing 115 last year or maybe two years ago for some of you? Everything in between is strategic. So everything that constitutes imperfect competition is a strategic setting. Think about the motor industry, the motor car industry. Ford has to worry about what GM is doing and what Toyota is doing, and for the moment at least what Chrysler is doing but perhaps not for long. So there's a small number of firms and their actions affect each other.
So for a literal definition of what strategic means: it's a setting where the outcomes that affect you depend on actions, not just on your own actions, but on actions of others. All right, that's as much as I'm going to say for preview right now, we're going to come back and see plenty of this over the course of the next semester.
So what I want to do is get on to where this applies. It obviously applies in Economics, but it also applies in politics, and in fact, this class will count as a Political Science class if you're a Political Science major. You should go check with the DUS in Political Science. It count - Game Theory is very important in law these days. So for those of you--for the half of you--that are going to end up in law school, this is pretty good training. Game Theory is also used in biology and towards the middle of the semester we're actually going to see some examples of Game Theory as applied to evolution. And not surprisingly, Game Theory applies to sport.
So let's talk about a bit of admin. How are you doing on filling out those games? Everyone managing to multitask: filling in Game 1? Keep writing. I want to get some admin out of the way and I want to start by getting out of the way what is obviously the elephant in the room. Some of you will have noticed that there's a camera crew here, okay. So as some of you probably know, Yale is undergoing an open education project and they're videoing several classes, and the idea of this, is to make educational materials available beyond the walls of Yale. In fact, on the web, internationally, so people in places, maybe places in the U.S. or places miles away, maybe in Timbuktu or whatever, who find it difficult to get educational materials from the local university or whatever, can watch certain lectures from Yale on the web.
Some of you would have been in classes that do that before. What's going to be different about this class is that you're going to be participating in it. The way we teach this class is we're going to play games, we're going to have discussions, we're going to talk among the class, and you're going to be learning from each other, and I want you to help people watching at home to be able to learn too. And that means you're going to be on film, at the very least on mike.
So how's that going to work? Around the room are three T.A.s holding mikes. Let me show you where they are: one here, one here, and one here. When I ask for classroom discussions, I'm going to have one of the T.A.s go to you with a microphone much like in "Donahue" or something, okay. At certain times, you're going to be seen on film, so the camera is actually going to come around and point in your direction.
Now I really want this to happen. I had to argue for this to happen, cause I really feel that this class isn't about me. I'm part of the class obviously, but it's about you teaching each other and participating. But there's a catch, the catch is, that that means you have to sign that legal release form.
So you'll see that you have in front of you a legal release form, you have to be able to sign it, and what that says is that we can use you being shown in class. Think of this as a bad hair day release form. All right, you can't sue Yale later if you had a bad hair day. For those of you who are on the run from the FBI, your Visa has run out, or you're sitting next to your ex-girlfriend, now would be a good time to put a paper bag over your head.
All right, now just to get you used to the idea, in every class we're going to have I think the same two people, so Jude is the cameraman; why don't you all wave to Jude: this is Jude okay. And Wes is our audio guy: this is Wes. And I will try and remember not to include Jude and Wes in the classroom discussions, but you should be aware that they're there. Now, if this is making you nervous, if it's any consolation, it's making me very nervous.
So, all right, we'll try and make this class work as smoothly as we can, allowing for this extra thing. Let me just say, no one's making any money off this--at least I'm hoping these guys are being paid--but me and the T.A.s are not being paid. The aim of this, that I think is a good aim, it's an educational project, and I'm hoping you'll help us with it. The one difference it is going to mean, is that at times I might hold some of the discussions for the class, coming down into this part of the room, here, to make it a little easier for Jude.
All right, how are we doing now on filling out those forms? Has everyone filled in their strategy for the first game? Not yet. Okay, let's go on doing a bit more admin. The thing you mostly care about, I'm guessing, is the grades. All right, so how is the grade going to work for this class? 30% of the grade will be on problem sets; 30% on the mid-term; and 40% on the final; so 30/30/40.
The mid-term will be held in class on October 17th; that is also in your syllabus. Please don't anybody tell me late - any time after today you didn't know when the mid-term was and therefore it clashes with 17 different things. The mid-term is on October 17th, which is a Wednesday, in class. All right, the problem sets: there will be roughly ten problem sets and I'll talk about them more later on when I hand them out. The first one will go out on Monday but it will be due ten days later. Roughly speaking they'll be every week.
The grade distribution: all right, so this is the rough grade distribution. Roughly speaking, a sixth of the class are going to end up with A's, a sixth are going to end up with A-, a sixth are going to end up with B+, a sixth are going to end up with B, a sixth are going to end up with B-, and the remaining sixth, if I added that up right, are going to end up with what I guess we're now calling the presidential grade, is that right?
That's not literally true. I'm going to squeeze it a bit, I'm going to curve it a bit, so actually slightly fewer than a sixth will get straight A's, and fewer than a sixth will get C's and below. We'll squeeze the middle to make them be more B's. One thing I can guarantee from past experience in this class, is that the median grade will be a B+. The median will fall somewhere in the B+'s. Just as forewarning for people who have forgotten what a median is, that means half of you--not approximately half, it means exactly half of you--will be getting something like B+ and below and half will get something like B+ and above.
Now, how are you doing in filling in the forms? Everyone filled them in yet? Surely must be pretty close to getting everyone filled in. All right, so last things to talk about before I actually collect them in - textbooks. There are textbooks for this class. The main textbook is this one, Dutta's book Strategies and Games. If you want a slightly tougher, more rigorous book, try Joel Watson's book, Strategy. Both of those books are available at the bookstore.
But I want to warn everybody ahead of time, I will not be following the textbook. I regard these books as safety nets. If you don't understand something that happened in class, you want to reinforce an idea that came up in class, then you should read the relevant chapters in the book and the syllabus will tell you which chapters to read for each class, or for each week of class, all right. But I will not be following these books religiously at all. In fact, they're just there as back up.
In addition, I strongly recommend people read, Thinking Strategically. This is good bedtime reading. Do any of you suffer from insomnia? It's very good bedtime reading if you suffer from insomnia. It's a good book and what's more there's going to be a new edition of this book this year and Norton have allowed us to get advance copies of it. So if you don't buy this book this week, I may be able to make the advance copy of the new edition available for some of you next week. I'm not taking a cut on that either, all right, there's no money changing hands.
All right, sections are on the syllabus sign up - sorry on the website, sign up as usual. Put yourself down on the wait list if you don't get into the section you want. You probably will get into the section you want once we're done. All right, now we must be done with the forms. Are we done with the forms? All right, so why don't we send the T.A.s, with or without mikes, up and down the aisles and collect in your Game #1; not Game #2, just Game #1.
Just while we're doing that, I think the reputation of this class--I think--if you look at the course evaluations online or whatever, is that this class is reasonably hard but reasonably fun. So I'm hoping that's what the reputation of the class is. If you think this class is going to be easy, I think it isn't actually an easy class. It's actually quite a hard class, but I think I can guarantee it's going to be a fun class. Now one reason it's a fun class, is the nice thing about teaching Game Theory - quieten down folks--one thing about teaching Game Theory is, you get to play games, and that's exactly what we've just been doing now. This is our first game and we're going to play games throughout the course, sometimes several times a week, sometimes just once a week.
We got all these things in? Everyone handed them in? So I need to get those counted. Has anyone taken the Yale Accounting class? No one wants to - has aspirations to be - one person has. I'll have a T.A. do it, it's all right, we'll have a T.A. do it. So Kaj, can you count those for me? Is that right? Let me read out the game you've just played.
"Game 1, a simple grade scheme for the class. Read the following carefully. Without showing your neighbor what you are doing, put it in the box below either the letter Alpha or the letter Beta. Think of this as a grade bid. I will randomly pair your form with another form and neither you nor your pair will ever know with whom you were paired. Here's how the grades may be assigned for the class. [Well they won't be, but we can pretend.] If you put Alpha and you're paired with Beta, then you will get an A and your pair a C. If you and your pair both put Alpha, you'll both get B-. If you put Beta and you're paired with Alpha, you'll get a C and your pair an A. If you and your pair both put Beta, then you'll both get B+."
So that's the thing you just filled in.
Now before we talk about this, let's just collect this information in a more useful way. So I'm going to remove this for now. We'll discuss this in a second, but why don't we actually record what the game is, that we're playing, first. So this is our grade game, and what I'm going to do, since it's kind of hard to absorb all the information just by reading a paragraph of text, I'm going to make a table to record the information. So what I'm going to do is I'm going to put me here, and my pair, the person I'm randomly paired with here, and Alpha and Beta, which are the choices I'm going to make here and on the columns Alpha and Beta, the choices my pair is making.
In this table, I'm going to put my grades. So my grade if we both put Alpha is B-; if we both put Beta, it's B+. If I put Alpha and she put a Beta, I got an A, and if I put Beta and she put an Alpha, I got a C. Is that correct? That's more or less right? Yeah, okay while we're here, why don't we do the same for my pair? So this is my grades on the left hand table, but now let's look at what my pair will do, what my pair will get.
So I should warn the people sitting at the back that my handwriting is pretty bad, that's one reason for moving forward. The other thing I should apologize at this stage of the class is my accent. I will try and improve the handwriting, there's not much I can do about the accent at this stage.
So once again if we both put Alpha then my pair gets a B-. If we both put Beta, then we both get a B+; in particular, my pair gets a B+. If I put Alpha and my pair puts Beta, then she gets a C. And if I put Beta and she puts Alpha, then she gets an A. So I now have all the information that was on the sheet of paper that you just handed in.
Now there's another way of organizing this that's standard in Game Theory, so we may as well get used to it now on the first day. Rather than drawing two different tables like this, what I'm going to do is I'm going to take the second table and super-impose it on top of the first table. Okay, so let me do that and you'll see what I mean. What I'm going to do is draw a larger table, the same basic structure: I'm choosing Alpha and Beta on the rows, my pair is choosing Alpha and Beta on the columns, but now I'm going to put both grades in. So the easy ones are on the diagonal: we both get B- if we both choose Alpha; we both get B+ if we both choose Beta. But if I choose Alpha and my pair chooses Beta, I get an A and she gets a C. And if I choose Beta and she chooses Alpha, then it's me who gets the C and it's her who gets the A.
So notice what I did here. The first grade corresponds to the row player, me in this case, and the second grade in each box corresponds to the column player, my pair in this case. So this is a nice succinct way of recording what was in the previous two tables. This is an outcome matrix; this tells us everything that was in the game.
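This superimposed table can be written down as a small sketch in code (an illustration, not part of the lecture; the names here are made up for clarity):

```python
# The grade game's outcome matrix as a lookup table.
# Each cell maps (my choice, pair's choice) -> (my grade, pair's grade):
# the first entry belongs to the row player, the second to the column player.
outcomes = {
    ("Alpha", "Alpha"): ("B-", "B-"),
    ("Alpha", "Beta"):  ("A",  "C"),
    ("Beta",  "Alpha"): ("C",  "A"),
    ("Beta",  "Beta"):  ("B+", "B+"),
}

def grades(me, pair):
    """Look up (my grade, pair's grade) for a pair of choices."""
    return outcomes[(me, pair)]

print(grades("Alpha", "Beta"))  # ('A', 'C')
```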
Okay, so now seems a good time to start talking about what people did. So let's just have a show of hands. How many of you chose Alpha? Leave your hands up so that Jude can catch that, so people can see at home, okay. All right and how many of you chose Beta? There's far more Alphas - wave your hands the Beta's okay. All right, there's a Beta here, okay. So it looks like a lot of - well we're going to find out, we're going to count--but a lot more Alpha's than Beta's. Let me try and find out some reasons why people chose.
So let me have the Alpha's up again. So, the woman who's in red here, can we get a mike to the - yeah, is it okay if we ask you? You're not on the run from the FBI? We can ask you why? Okay, so you chose Alpha right? So why did you choose Alpha?
Student: [inaudible] realized that my partner chose Alpha, therefore I chose [inaudible].
Professor Ben Polak: All right, so you wrote out these squares, you realized what your partner was going to do, and responded to that. Any other reasons for choosing Alpha around the room? Can we get the woman here? Try not to be intimidated by these microphones, they're just mikes. It's okay.
Student: The reason I chose Alpha, regardless of what my partner chose, I think there would be better outcomes than choosing Beta.
Professor Ben Polak: All right, so let me ask your names for a second-so your name was?
Professor Ben Polak: Courtney and your name was?
Student: Clara Elise.
Professor Ben Polak: Clara Elise. So slightly different reasons, same choice Alpha. Clara Elise's reason - what did Clara Elise say? She said, no matter what the other person does, she reckons she'd get a better grade if she chose Alpha. So hold that thought a second, we'll come back to - is it Clara Elise, is that right? We'll come back to Clara Elise in a second. Let's talk to the Beta's a second; let me just emphasize at this stage there are no wrong answers. Later on in the class there'll be some questions that have wrong answers. Right now there's no wrong answers. There may be bad reasons but there's no wrong answers. So let's have the Beta's up again. Let's see the Beta's. Oh come on! There was a Beta right here. You were a Beta right? You backed off the Beta, okay. So how can I get a mike into a Beta? Let' s stick in this aisle a bit. Is that a Beta right there? Are you a Beta right there? Can I get the Beta in here? Who was the Beta in here? Can we get the mike in there? Is that possible? In here - you can leave your hand so that - there we go. Just point towards - that's fine, just speak into it, that's fine.
Student: So the reason right?
Professor Ben Polak: Yeah, go ahead.
Student: I personally don't like swings that much and it's the B-/B+ range, so I'd much rather prefer that to a swing from A to C, and that's my reason.
Professor Ben Polak: All right, so you're saying it compresses the range. I'm not sure it does compress the range. I mean if you chose Alpha, you're swinging from A to B-; and from Beta, swinging from B+ to C. I mean those are similar kind of ranges but it certainly is a reason. Other reasons for choosing? Yeah, the guy in blue here, yep, good. That's all right. Don't hold the mike; just let it point at you, that's fine.
Student: Well I guess I thought we could be more collusive and kind of work together, but I guess not. So I chose Beta.
Professor Ben Polak: There's a siren in the background so I missed the answer. Stand up a second, so we can just hear you.
Professor Ben Polak: Sorry, say again.
Student: Sure. My name is Travis. I thought we could work together, but I guess not.
Professor Ben Polak: All right good. That's a pretty good reason.
Student: If you had chosen Beta we would have all gotten B+'s but I guess not.
Professor Ben Polak: Good, so Travis is giving us a different reason, right? He's saying that maybe, some of you in the room might actually care about each other's grades, right? I mean you all know each other in class. You all go to the same college. For example, if we played this game up in the business school--are there any MBA students here today? One or two. If we play this game up in the business school, I think it's quite likely we're going to get a lot of Alpha's chosen, right? But if we played this game up in, let's say, the Divinity School--and I'm guessing that Travis' answer is reflecting what you guys are reasoning here--you might think that people in the Divinity School might care about other people's grades, right? There might be ethical reasons--perfectly good, sensible, ethical reasons--for choosing Beta in this game. There might be other reasons as well, but that's perhaps the reason to focus on. And perhaps, the lesson I want to draw out of this is that right now this is not a game. Right now we have actions, strategies for people to take, and we know what the outcomes are, but we're missing something that will make this a game. What are we missing here?
Professor Ben Polak: We're missing objectives. We're missing payoffs. We're missing what people care about, all right. So we can't really start analyzing a game until we know what people care about, and until we know what the payoffs are. Now let's just say something now, which I'll probably forget to say in any other moment of the class, but today it's relevant.
Game Theory, me, professors at Yale, cannot tell you what your payoffs should be. I can't tell you in a useful way what it is that your goals in life should be or whatever. That's not what Game Theory is about. However, once we know what your payoffs are, once we know what your goals are, perhaps Game Theory can help you get there.
So we've had two different kinds of payoffs mentioned here. We had the kind of payoff where we care about our own grade, and Travis has mentioned the kind of payoff where you might care about other people's grades. And what we're going to do today is analyze this game under both those possible payoffs. To start that off, let's put up some possible payoffs for the game. And I promise we'll come back and look at some other payoffs later. We'll revisit the Divinity School later.
All right, so here once again is our same matrix with me and my pair, choosing actions Alpha and Beta, but this time I'm going to put numbers in here. And some of you will perhaps recognize these numbers, but that's not really relevant for now. All right, so what's the idea here? Well the first idea is that these numbers represent utiles or utilities. They represent what these people are trying to maximize, what they're trying to achieve, their goals.
The idea is - just to compare this to the outcome matrix - for the person who's me here, (A,C) yields a payoff of--(A,C) is this box--so (A,C) yields a payoff of three, whereas (B-,B-) yields a payoff of 0, and so on. So what's the interpretation? It's the first interpretation: the natural interpretation that a lot of you jumped to straight away. These are people--people with these payoffs are people--who only care about their own grades. They prefer an A to a B+, they prefer a B+ to a B-, and they prefer a B- to a C. Right, I'm hoping I got the grades in order, otherwise it's going to ruin my curve at the end of the year. So these people only care about their own grades. They only care about their own grades.
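Filling in the rest of the cells the same way, the numbers on the board amount to utilities A = 3, B+ = 1, B- = 0, C = -1 for a player who cares only about her own grade. A minimal sketch (illustrative, not part of the lecture):

```python
# My payoff in each cell, for a player who cares only about her own grade.
# Utilities: A = 3, B+ = 1, B- = 0, C = -1.
my_payoff = {
    ("Alpha", "Alpha"): 0,   # we both get B-
    ("Alpha", "Beta"):  3,   # I get an A
    ("Beta",  "Alpha"): -1,  # I get a C
    ("Beta",  "Beta"):  1,   # we both get B+
}

print(my_payoff[("Alpha", "Beta")])  # 3
```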
What do we call people who only care about their own grades? What's a good technical term for them? In England, I think we refer to these guys - whether it's technical or not - as "evil gits." These are not perhaps the most moral people in the universe. So now we can ask a different question. Whether these are actually your payoffs or not, pretend they are for now. Suppose these are your payoffs. Now we can ask, not what did you do, but what should you do? Now that we have payoffs, we can really switch the question to a normative question: what should you do? Let's come back to - was it Clara Elise--where was Clara Elise before? Let's get the mike on you again. So just explain what you did and why again.
Student: Why I chose Alpha?
Professor Ben Polak: Yeah, stand up a second, if that's okay.
Professor Ben Polak: You chose Alpha; I'm assuming these were roughly your payoffs, more or less, you were caring about your grades.
Student: Yeah, I was thinking -
Professor Ben Polak: Why did you choose Alpha?
Student: I'm sorry?
Professor Ben Polak: Why did you choose Alpha? Just repeat what you said before.
Student: Because I thought the payoffs - the two different payoffs that I could have gotten--were highest if I chose Alpha.
Professor Ben Polak: Good; so what Clara Elise is saying--it's an important idea--is this (and tell me if I'm paraphrasing you incorrectly but I think this is more or less what you're saying): no matter what the other person does, no matter what the pair does, she obtains a higher payoff by choosing Alpha. Let's just see that. If the pair chooses Alpha and she chooses Alpha, then she gets 0. If the pair chooses Alpha and she chose Beta, she gets -1. 0 is bigger than -1. If the pair chooses Beta, then if she chooses Alpha she gets 3, Beta she gets 1, and 3 is bigger than 1. So in both cases, no matter what the other person does, she receives a higher payoff from choosing Alpha, so she should choose Alpha. Does everyone follow that line of reasoning? That's a stronger line of reasoning than the reasoning we had earlier. So the woman, I have immediately forgotten the name of, in the red shirt, whose name was -
Professor Ben Polak: Courtney, so Courtney also gave a reason for choosing Alpha, and it was a perfectly good reason for choosing Alpha, nothing wrong with it, but notice that this reason's a stronger reason. It kind of implies your reason.
So let's get some definitions down here. I think I can fit it in here. Let's try and fit it in here.
Definition: We say that my strategy Alpha strictly dominates my strategy Beta, if my payoff from Alpha is strictly greater than that from Beta, [and this is the key part of the definition], regardless of what others do.
Shall we just read that back? "We say that my strategy Alpha strictly dominates my strategy Beta, if my payoff from Alpha is strictly greater than that from Beta, regardless of what others do." Now it's by no means my main aim in this class to teach you jargon. But a few bits of jargon are going to be helpful in allowing the conversation to move forward and this is certainly one. "Evil gits" is maybe one too, but this is certainly one.
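The definition translates almost word for word into a check over the other player's strategies. This is a sketch under the "evil git" payoffs above (the function name and setup are illustrative, not from the lecture):

```python
def strictly_dominates(my_payoff, s1, s2, their_strategies):
    """s1 strictly dominates s2 if my payoff from s1 is strictly greater
    than that from s2, regardless of what the other player does."""
    return all(my_payoff[(s1, t)] > my_payoff[(s2, t)] for t in their_strategies)

# The "evil git" payoffs: utilities A = 3, B+ = 1, B- = 0, C = -1.
my_payoff = {("Alpha", "Alpha"): 0, ("Alpha", "Beta"): 3,
             ("Beta", "Alpha"): -1, ("Beta", "Beta"): 1}

print(strictly_dominates(my_payoff, "Alpha", "Beta", ["Alpha", "Beta"]))  # True
```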
Let's draw out some lessons from this. Actually, so you can still read that, let me bring down and clean this board. So the first lesson of the class, and there are going to be lots of lessons, is a lesson that emerges immediately from the definition of a dominated strategy and it's this. So Lesson One of the course is: do not play a strictly dominated strategy. So with apologies to Strunk and White, this is in the passive form, that's dominated, passive voice. Do not play a strictly dominated strategy. Why? Somebody want to tell me why? Do you want to get this guy? Stand up - yeah.
Student: Because everyone's going to pick the dominant outcome and then everyone's going to get the worst result - the collectively worst result.
Professor Ben Polak: Yeah, that's a possible answer. I'm looking for something more direct here. So we look at the definition of a strictly dominated strategy. I'm saying never play one. What's a possible reason for that? Let's - can we get the woman there?
Professor Ben Polak: "You'll always lose." Well, I don't know: it's not about winning and losing. What else could we have? Could we get this guy in the pink down here?
Student: Well, the payoffs are lower.
Professor Ben Polak: The payoffs are lower, okay. So here's an abbreviated version of that, I mean it's perhaps a little bit longer. The reason I don't want to play a strictly dominated strategy is, if instead, I play the strategy that dominates it, I do better in every case. The reason I never want to play a strictly dominated strategy is, if instead I play the strategy that dominates it, whatever anyone else does I'm doing better than I would have done. Now that's a pretty convincing argument. That sounds like a convincing argument. It sounds like too obvious even to be worth stating in class, so let me now try and shake your faith a little bit in this answer.
You're somebody who's wanted by the FBI, right?
Okay, so how about the following argument? Look at the payoff matrix again and suppose I reason as follows. Suppose I reason and say if we, me and my pair, both reason this way and choose Alpha then we'll both get 0. But if we both reasoned a different way and chose Beta, then we'll both get 1. So I should choose Beta: 1 is bigger than 0, I should choose Beta. What's wrong with that argument? My argument must be wrong because it goes against the lesson of the class and the lessons of the class are gospel right, they're not wrong ever, so what's wrong with that argument? Yes, Ale - yeah good.
Student: Well because you have to be able to agree, you have to be able to speak to them but we aren't allowed to show our partners what we wrote.
Professor Ben Polak: All right, so it involves some notion of agreeing. So certainly part of the problem here, with the reasoning I just gave you--the reasoning that said I should choose Beta, because if we both reason the same way, we both do better that way--involves some kind of magical reasoning. It's as if I'm arguing that if I reason this way and reason myself to choosing Beta, somehow I'm going to make the rest of you reason the same way too. It's like I've got ESP or I'm some character out of the X-Men, is that what it's called? The X-Men right? Now in fact, this may come as a surprise to you, I don't have ESP, I'm not a character out of the X-Men, and so you can't actually see brain waves emitting from my head, and my reasoning doesn't affect your reasoning. So if I did reason that way, and chose Beta, I'm not going to affect your choice one way or the other. That's the first thing that's wrong with that reasoning. What else is wrong with that reasoning? Yeah, that guy down here.
Student: Well, the second that you choose Beta then someone's going - it's in someone's best interest to take advantage of it.
Professor Ben Polak: All right, so someone's going to take advantage of me, but even more than that, an even stronger argument: that's true, but even a stronger argument. Well how about this? Even if I was that guy in the X-Men or the Matrix or whatever it was, who could reason his way into making people do things. Even if I could make everyone in the room choose Beta by the force of my brain waves, what should I then do? I should choose Alpha. If these are my payoffs I should go ahead and choose Alpha because that way I end up getting 3. So there's two things wrong with the argument. One, there's this magical reasoning aspect, my reasoning is controlling your actions. That doesn't happen in the real world. And two, even if that was the case I'd do better to myself choose Alpha.
So, nevertheless, there's an element of truth in what I just said. It's the fact that there's an element of truth in it that makes it seem like a good argument. The element of truth is this. It is true that by both choosing Alpha we both ended up with B-'s. We both end up with payoffs of 0, rather than payoffs of 1. It is true that by both choosing, by both following this lesson and not choosing the dominated strategy Beta, we ended up with payoffs, (0,0), that were bad.
And that's probably the second lesson of the class. So Lesson 2, and this lesson probably wouldn't be worth stating, if it wasn't for sort of a century of thought in economics that said the opposite. So rational choice [in this case, people not choosing a dominated strategy; people choosing a dominant strategy]--rational choice can lead to outcomes that--what do Americans call this?--that "suck." If you want a more technical term for that (and you remember this from Economics 115), it can lead to outcomes that are "inefficient," that are "Pareto inefficient," but "suck" will do for today. Rational choices by rational players can lead to bad outcomes.
So this is a famous example for this reason. It's a good illustration of this point. It's a famous example. What's the name of this example, somebody? This is called the Prisoner's Dilemma. How many of you have heard of the Prisoner's Dilemma before? Most of you saw it in 115, why is it called the Prisoner's Dilemma? Yes, the guy here in orange. That's okay; he can just point at you, that's fine.
Student: I think it's whether or not the prisoner's cooperate in the sentence they have, and if they kind of rat out the other person, then they can have less; but if both rat out, then they like end up losing large scale.
Professor Ben Polak: Good, so in the standard story you've got these two crooks, or two accused crooks, and they're in separate cells and they're being interviewed separately--kept apart--and they're both told that if neither of them rats the other guy out, they'll go to jail for say a year. If they both rat each other out, they'll end up in jail for two years. But if you rat the other guy out and he doesn't rat you out, then you will go home free and he'll go to jail for five years. Put that all down and you pretty quickly see that, regardless of whether the other guy rats you out or not, you're better off ratting him out.
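The dominance argument in this story can be sketched in a few lines of code. A minimal sketch in Python, using the jail terms from the story above (the payoff here is just years in jail, to be minimized); the names `SENTENCES` and `best_reply` are mine, not the lecture's:

```python
# Jail terms from the story: 1 year each if both stay quiet, 2 each if both
# rat, and 0 vs. 5 years if exactly one rats the other out.
SENTENCES = {  # (my move, other's move) -> my years in jail
    ("quiet", "quiet"): 1,
    ("quiet", "rat"):   5,
    ("rat",   "quiet"): 0,
    ("rat",   "rat"):   2,
}

def best_reply(other_move):
    """Return the move that minimizes my jail time against a fixed other move."""
    return min(["quiet", "rat"], key=lambda m: SENTENCES[(m, other_move)])

# Regardless of what the other prisoner does, ratting is strictly better:
print(best_reply("quiet"))  # rat (0 years beats 1 year)
print(best_reply("rat"))    # rat (2 years beats 5 years)
```

Whatever the other guy does, the best reply is to rat: that is exactly what makes ratting a dominant strategy.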
Now, if you have never seen that Prisoner's Dilemma, you can see it pretty much every night on a show called Law & Order. How many of you have seen Law & Order? If you haven't seen Law & Order, the way to see Law & Order is to go to a random TV set, at a random time, and turn on a random channel. This happens in every single episode, so much so that--I mean this might actually be true at Yale--if any of you, or the TV guys, know the guy who writes the plots for this, have him come to the class (or, I guess now, see the video) and we can get some better plot lines in there.
But, of course, the grade game is not the only example. There are lots of examples of Prisoners' Dilemmas out there. Let's try and find some other ones. So how many of you have roommates in your college? How many of you have roommates? Most of you have roommates right? So I'm guessing now--I won't make you show your hands, because it's probably embarrassing--but what is the state of your dorm rooms, your shared dorm rooms, at the end of the semester or the end of the school year?
So I'm just guessing, having been in a few of these things over the years, that by the end of the semester, or certainly by the end of the school year, the state of the average Yale dorm room is quite disgusting. Why is it disgusting? It's disgusting because people don't tidy up. They don't clean up those bits of pizza and bits of chewed bread and cheese, but why don't they tidy up?
Well let's just work it out. What would you like to happen if you're sharing a dorm room? You'd like to have the other guy tidy up, right? The best thing for you is to have the other guy tidy up and the worst thing for you is to tidy up for the other guy. But now work it out: it's a Prisoner's Dilemma. If the other guy doesn't tidy up, you're best off not tidying up either, because the last thing you want is to be tidying up for the other guy. And if the other guy does tidy up, hey the room's clean, who cares? So either way, you're not going to tidy up and you end up with a typical Yale dorm room.
Am I being unfair? Are your dorm rooms all perfect? This may be a gender thing but we're not going to go there. So there are lots of Prisoner's Dilemmas out there, anyone got any other examples? Other examples? I didn't quite hear that, sorry. Let's try and get a mike on it so we can really hear it.
Professor Ben Polak: Okay, in divorce struggles, okay. You're too young to be worrying about such things but never mind. Yeah, okay, that's a good example. All right, hiring lawyers, bringing in big guns. What about an Economics example? What about firms who are competing in prices? Both firms have an incentive to undercut the other firm, driving down profits for both. The last thing you want is to have the other firm undercut you, in an attempt to push prices down. That's good for us the consumers, but bad for the firm, bad for industry profit. What remedies do we see? We'll come back to this later on in the class, but let's have a preview. So what remedies do we see in society for Prisoner's Dilemmas? What kind of remedies do we see? Let me try and get the guy here right in front.
Professor Ben Polak: Collusion; so firms could collude. So what prevents them from colluding? One thing they could do, presumably, is they could write a contract, these firms. They could say I won't lower my prices if you don't lower your prices, and they could put this contract in with the pricey lawyer, who's taking a day off from the divorce court, and that would ensure that they wouldn't lower prices on each other. Is that right? So why wouldn't that work? Why wouldn't writing a contract here work? It's against the law. It's an illegal contract. What about you with your roommates? How many of you have a written contract, stuck with a magnet on the fridge, telling you when you're supposed to tidy up? Very few of you. Why do you manage to get some cooperation between you and your roommates even without a written contract?
Student: It's not legally enforceable.
Professor Ben Polak: Well it probably is legally enforceable actually. This guy says not, but it probably is legally enforceable. He probably could have a written contract about tidying up. The woman in here.
Student: Repetition; you do it over and over.
Professor Ben Polak: Yeah, so maybe even among your roommates, maybe you don't need a contract because you can manage to achieve the same ends, by the fact that you're going to be interacting with the same person, over and over again during your time at Yale. So we'll come back and revisit the idea that repeating an interaction may allow you to obtain cooperation, but we're not going to come back to that until after the mid-term. That's way down the road but we'll get there.
Now one person earlier on had mentioned something about communication. I think it was somebody in the front, right? So let's just think about this a second. Is communication the problem here? Is the reason people behave badly--I don't know "badly"--people choose Alpha in this game here, is it the fact that they can't communicate? Suppose you'd been able to talk beforehand, so suppose the woman here whose name was…?
Professor Ben Polak: …Mary, had been able to talk to the person next to her whose name is…?
Professor Ben Polak: Erica. And they said, suppose we know we're going to be paired together, I'll choose Beta if you choose Beta. Would that work? Why wouldn't that work?
Student: There's no enforcement.
Professor Ben Polak: There's no enforcement. So it isn't a failure of communication per se. A contract is more than communication: a contract is communication with teeth. It actually changes the payoffs. So I could reach an agreement with Alice, but back home I'm going to go ahead and choose Alpha anyway; all the better if she's choosing Beta. So we'll come back and talk about more of these things as the course goes on, but let's just come back to the two we forgot there: the collusion case and the case back in Law & Order with the prisoners in the cell. How do they enforce their contracts? They don't always rat each other out, and some firms do manage to collude. How do they manage to enforce those contracts? Those agreements, how are they enforced?
Student: They trust each other.
Professor Ben Polak: It could be they trust each other, although if you trust a crook that's not… What else could it be? The guy here again with the beard, yeah.
Student: Could be a zero sum game.
Professor Ben Polak: Well, but this is the game. So here's the game.
Student: No, but the pay, the way they value, the way of valuing each--
Professor Ben Polak: Okay, so the payoffs may be different. I have something simpler in mind. Suppose they have a written contract, or even an unwritten contract, what enforces the contract for colluding firms or crooks in jail? Yeah.
Student: Gets off Scott free in five years when the other guy gets out, he might run into a situation where [inaudible]
Professor Ben Polak: Yeah, so a short version of that is, it's a different kind of contract. If you rat someone out in jail, someone puts a contract out on you. Tony Soprano enforces those contracts. That's the purpose of Tony Soprano. It's the purpose of the mafia. The reason the mafia thrives in countries where it's hard to write legal contracts--let's say some new parts of the former Soviet Union or some parts of Africa--the reason the mafia thrives in those environments, is that it substitutes for the law and enforces both legal and illegal contracts.
So I promised a while ago now, that we were going to come back and look at this game under some other possible payoffs. So I wasn't under a contract, but let's come back and fulfill that promise anyway. So we're going to revisit, if not the Divinity School, at least people who have more morality than my friends up in the business school.
We're going to look at the same grade game we played at the beginning. What would happen if players' payoffs looked different? So these are "possible payoffs (2)." I'll give these a name. We called the other guys "evil gits." We'll call these guys "indignant angels." I can never spell indignant. Is that roughly right? Does that look right? I think it's right. In-dig-nant, isn't it: indignant. Indignant angels, and we'll see why in a second. So here are their payoffs, and once again the basic structure of the game hasn't changed. It's still I'm choosing Alpha and Beta, my pair is choosing Alpha and Beta, and the grades are the same as they were before. They're hidden by that board but you saw them before.
But this time the payoffs are as follows. On the lead diagonal we still have (0,0) and (1,1). But now the grades here are -1--I'm sorry--the payoffs are -1 and -3, and here they're -3 and -1. What's the idea here? These aren't the only other possible payoffs. It's just an idea. Suppose I get an A and my pair gets a C, then sure I get that initial payoff of 3, but unfortunately I can't sleep at night because I'm feeling so guilty. I have some kind of moral conscience and after I've subtracted off my guilt feelings I end up at -1, so think of this as guilt: some notion of morality.
Conversely, if I chose a Beta and my pair chooses an Alpha, so I end up with a C and she ends up with an A, then you know I have a bad time explaining to my parents why I got a C in this class, and I have to say about how I'm going to be president anyway. But then, in addition, I feel indignation against this person. It isn't just that I got a C; I got a C because she made me get a C, so that moral indignation takes us down to -3.
So again, I'm not claiming these are the only other possible payoffs, but just another possibility to look at. So suppose these were the payoffs in the game. Again, suspend disbelief a second and imagine that these actually are your payoffs, and let me ask you what you would have done in this case. So think about it a second. Write it down. Write down what you're going to do on the corner of your notepad. Just write down an Alpha or Beta: what you're going to do here. You're not all writing. The guy in the England shirt isn't writing. You've got to be writing if you are in an England shirt.
Show it to your neighbor. Let's have a show of hands, again I want you to keep your hands up so that Jude can see it now. So how many of you chose Alpha in this case? Raise your hands. Come on, don't be shy. Raise your hands. How many chose Beta in this case? How many people abstained? Not allowed to abstain: let's try it again. Alpha in this case? No abstentions here. Beta in this case? So we're roughly splitting the room. Someone who chose Alpha? Again: raise the Alpha's again. Let me get this guy here. So why did you choose Alpha?
Student: You would minimize your losses; you'd get 0 or -1 instead of -3 or 1.
Professor Ben Polak: All right, so this gentleman is saying -
Student: There's no dominant strategy so -
Professor Ben Polak: Right, so this gentleman's saying, a good reason for choosing Alpha in this game is it's less risky. The worst case scenario is less bad, is a way of saying it. What about somebody who chose Beta? A lot of you chose Beta. Let's have a show of hands on the Beta's again. Let me see the Beta's again. So, raise your hands. Can we get the woman here? Can we ask her why she chose Beta?
Student: Because if you choose Alpha, the best case scenario is you get 0, so that's -
Professor Ben Polak: Okay good, that's a good counter argument. So the gentleman here was looking at the worst case scenario, and the woman here was looking at the best case scenario. And the best case scenario here looks like getting a 1 here. Now, let's ask a different question. Is one of the strategies dominated in this game? No, neither strategy is dominated. Let's just check. If my pair chooses Alpha, then my choosing Alpha yields 0, Beta -3: so Alpha would be better. But if my pair chooses Beta then Alpha yields -1, Beta yields 1: in this case Beta would be better. So Alpha in this case is better against Alpha, and Beta is better against Beta, but neither dominates the other.
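That dominance check can be written out mechanically. A small sketch with the indignant-angel payoffs just read off the board (0 and 1 on the diagonal, -1 and -3 off it); the helper name `strictly_dominates` is my own:

```python
# Indignant-angel payoffs: (my strategy, pair's strategy) -> my payoff.
U = {("Alpha", "Alpha"): 0,  ("Alpha", "Beta"): -1,
     ("Beta",  "Alpha"): -3, ("Beta",  "Beta"):  1}

def strictly_dominates(a, b):
    """True if strategy a gives a strictly higher payoff than b against every choice."""
    return all(U[(a, t)] > U[(b, t)] for t in ("Alpha", "Beta"))

print(strictly_dominates("Alpha", "Beta"))  # False: Alpha loses against Beta (-1 < 1)
print(strictly_dominates("Beta", "Alpha"))  # False: Beta loses against Alpha (-3 < 0)
```

Neither check comes out true, confirming the conclusion above: with these payoffs, neither strategy is dominated.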
So here's a game where we just change the payoffs. We have the same basic structure, the same outcomes, but we imagine people cared about different things and we end up with a very different answer. In the first game, it was kind of clear that we should choose Alpha and here it's not at all clear what we can do--what we should do. In fact, this kind of game has a name and we'll revisit it later on in the semester. This kind of game is called a "coordination problem." We'll talk about coordination problems later on.
The main lesson I want to get out of this for today, is a simpler lesson. It's the lesson that payoffs matter. We change the payoffs, we change what people cared about, and we get a very different game with a very different outcome. So the basic lesson is that payoffs matter, but let me say it a different way. So without giving away my age too much--I guess it will actually--when I was a kid growing up in England, there was this guy - there was a pop star--a slightly post-punk pop star called Joe Jackson, who none of you would have heard of, because you were all about ten years old, my fault. And Joe Jackson had this song which had the lyric, something like, you can't get what you want unless you know what you want.
As a statement of logic, that's false. It could be that what you want just drops into your lap without you knowing about it. But as a statement of strategy, it's a pretty good idea. It's a good idea to try and figure out what your goals are--what you're trying to achieve--before you go ahead and analyze the game. So payoffs matter. Let's put it in his version. "You can't get what you want, till you know what you want."
Be honest, how many of you have heard of Joe Jackson? That makes me feel old, oh man, okay. Goes down every year.
So far we've looked at this game as played by people who are evil gits, and we've looked at this game as played by people who are indignant angels. But we can do something more interesting. We can imagine playing this game on a sort of mix and match. For example, imagine--this shouldn't be hard for most of you--imagine that you are an evil git, but you know that the person you're playing against is an indignant angel. So again, imagine that you know you're an evil git, but you know that the person you're playing against or with, is an indignant angel.
What should you do in that case? What should we do? Who thinks you should choose Alpha in that case? Let's pan the room again if we can. Keep your hands up so that you can see. Who thinks you should choose Beta in that case? Who's abstaining here? Not allowed to abstain in this class: it's a complete no-no. Okay, we'll allow some abstention in the first day but not beyond today. Let's have a look. Let's analyze this combined game.
So what does this game look like? It's an evil git versus an indignant angel and we can put the payoff matrix together by combining the matrices we had before. So in this case, this is me as always. This is my pair, the column player. My payoffs are going to be what? My payoffs are going to be evil-git payoffs, so they come from the matrix up there. So if someone will just help me read it off there. That's a 0, a 3, a -1, and a 1. My opponent or my partner's payoffs come from the indignant angel matrix. So they come from here. There's a 0, a -3, a -1, and a 1.
Everyone see how I constructed that? So just to remind you again, the first payoff is the row player's payoff, in this case the evil git. And the second payoff is the column player's payoff, in this case the indignant angel. Now that we've set it up as a matrix, let's analyze this game.
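The mix-and-match construction just described can be sketched as code: the row player's payoffs are read from the evil-git table and the column player's from the indignant-angel table. The table values are the ones read off the boards above; the function name is illustrative:

```python
# Each player's own payoff table, indexed (my strategy, my pair's strategy).
EVIL_GIT = {("Alpha", "Alpha"): 0,  ("Alpha", "Beta"): 3,
            ("Beta",  "Alpha"): -1, ("Beta",  "Beta"): 1}
ANGEL    = {("Alpha", "Alpha"): 0,  ("Alpha", "Beta"): -1,
            ("Beta",  "Alpha"): -3, ("Beta",  "Beta"):  1}

def combined_cell(row_move, col_move):
    """(row payoff, column payoff): row is the evil git, column is the angel.
    The angel's table is written from her own point of view, so her move
    comes first when we look her payoff up."""
    return (EVIL_GIT[(row_move, col_move)], ANGEL[(col_move, row_move)])

for r in ("Alpha", "Beta"):
    for c in ("Alpha", "Beta"):
        print((r, c), combined_cell(r, c))
```

Reading the cells out reproduces the board: (0,0) and (1,1) on the diagonal, (3,-3) and (-1,-1) off it.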
At the start of the lecture, we introduce the "formal ingredients" of a game: the players, their strategies and their payoffs. Then we return to the main lessons from last time: not playing a dominated strategy; and putting ourselves into others' shoes. We apply these first to defending the Roman Empire against Hannibal; and then to picking a number in the game from last time. We learn that, when you put yourself in someone else's shoes, you should consider not only their goals, but also how sophisticated are they (are they rational?), and how much do they know about you (do they know that you are rational?). We introduce a new idea: the iterative deletion of dominated strategies. Finally, we discuss the difference between something being known and it being commonly known.
Strategies and Games: Theory And Practice. (Dutta): Chapter 2, Section 3; Chapters 3-4
Strategy: An Introduction to Game Theory. (Watson): Chapters 6-8
Thinking Strategically. (Dixit and Nalebuff): Chapter 3, Sections 1-3
Game Theory: Lecture 2 Transcript
Professor Ben Polak: Okay, so last time we looked at and played this game. You had to choose grades, so you had to choose Alpha and Beta, and this table told us what outcome would arise. In particular, what grade you would get and what grade your pair would get. So, for example, if you had chosen Beta and your pair had chosen Alpha, then you would get a C and your pair would get an A.
One of the first things we pointed out, is that this is not quite a game yet. It's missing something. This has outcomes in it, it's an outcome matrix, but it isn't a game, because for a game we need to know payoffs. Then we looked at some possible payoffs, and now it is a game. So this is a game, just to give you some more jargon, this is a normal-form game. And here we've assumed the payoffs are those that arise if players only care about their own grades, which I think was true for a lot of you. It wasn't true for the gentleman who's sitting there now, but it was true for a lot of people.
We pointed out, that in this game, Alpha strictly dominates Beta. What do we mean by that? We mean that if these are your payoffs, no matter what your pair does, you attain a higher payoff from choosing Alpha, than you do from choosing Beta. Let's focus on a couple of lessons of the class before I come back to this. One lesson was, do not play a strictly dominated strategy. Everybody remember that lesson? Then much later on, when we looked at some more complicated payoffs and a more complicated game, we looked at a different lesson which was this: put yourself in others' shoes to try and figure out what they're going to do.
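The definition just restated -- Alpha strictly dominates Beta if it yields a strictly higher payoff no matter what the pair does -- can be checked mechanically. A sketch using last lecture's evil-git payoffs; the table and helper names are my own:

```python
# Evil-git payoffs from last lecture: (my strategy, pair's strategy) -> my payoff.
GRADE_PAYOFFS = {("Alpha", "Alpha"): 0,  ("Alpha", "Beta"): 3,
                 ("Beta",  "Alpha"): -1, ("Beta",  "Beta"): 1}

def strictly_dominates(table, a, b, opponent_moves=("Alpha", "Beta")):
    """True if strategy a does strictly better than b against every opponent move."""
    return all(table[(a, t)] > table[(b, t)] for t in opponent_moves)

print(strictly_dominates(GRADE_PAYOFFS, "Alpha", "Beta"))  # True: 0 > -1 and 3 > 1
print(strictly_dominates(GRADE_PAYOFFS, "Beta", "Alpha"))  # False
```

Against Alpha, Alpha yields 0 versus Beta's -1; against Beta, Alpha yields 3 versus Beta's 1: strictly higher in both cases, so Alpha strictly dominates.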
So in fact, what we learned from that is, it doesn't just matter what your payoffs are -- that's obviously important -- it's also important what other people's payoffs are, because you want to try and figure out what they're going to do and then respond appropriately. So we're going to return to both of these lessons today. Both of these lessons will reoccur today. Now, a lot of today is going to be fairly abstract, so I just want to remind you that Game Theory has some real world relevance.
Again, still in the interest of recapping, this particular game is called the Prisoners' Dilemma. It's written there, the Prisoners' Dilemma. Notice, it's Prisoners, plural. And we mentioned some examples last time. Let me just reiterate and mention some more examples which are actually written here, so they'll find their way into your notes. So, for example, if you have a joint project that you're working on, perhaps it's a homework assignment, or perhaps it's a video project like these guys, that can turn into a Prisoners' Dilemma. Why? Because each individual might have an incentive to shirk. Price competition -- two firms competing with one another in prices -- can have a Prisoners' Dilemma aspect about it. Why? Because no matter how the other firm, your competitor, prices you might have an incentive to undercut them. If both firms behave that way, prices will get driven down towards marginal cost and industry profits will suffer.
In the first case, if everyone shirks you end up with a bad product. In the second case, if both firms undercut each other, you end up with low prices, that's actually good for consumers but bad for firms. Let me mention a third example. Suppose there's a common resource out there, maybe it's a fish stock or maybe it's the atmosphere. There's a Prisoners' Dilemma aspect to this too. You might have an incentive to over fish. Why? Because if the other countries with this fish stock--let's say the fish stock is the Atlantic--if the other countries are going to fish as normal, you may as well fish as normal too. And if the other countries aren't going to cut down on their fishing, then you want to catch the fish now, because there aren't going to be any there tomorrow.
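The price-competition story has exactly the same structure as the grade game, and it can be sketched the same way. The profit numbers below are invented purely for illustration; only the structure -- undercutting is better no matter what the rival does -- comes from the lecture:

```python
# Stylized price-competition profits: pricing high splits a big margin,
# undercutting steals the whole market at a lower margin, and mutual
# undercutting leaves both firms with a thin margin near marginal cost.
PROFIT = {  # (my price, rival's price) -> my profit
    ("high", "high"): 10, ("high", "low"): 0,
    ("low",  "high"): 15, ("low",  "low"): 2,
}

def best_price(rival_price):
    """My profit-maximizing price against a fixed rival price."""
    return max(["high", "low"], key=lambda p: PROFIT[(p, rival_price)])

print(best_price("high"))  # low: 15 beats 10
print(best_price("low"))   # low: 2 beats 0
```

Undercutting dominates, so both firms price low and end up at (2, 2) rather than (10, 10): good for consumers, bad for industry profits, just as in the recap above.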
Another example of this would be global warming and carbon emissions. Again, leaving aside the science, about which I'm sure some of you know more than me here, the issue of carbon emissions is a Prisoners' Dilemma. Each of us individually has an incentive to emit carbon as usual. If everyone else is cutting down, I don't have to; and if everyone else isn't cutting down, my cutting down alone won't make much difference. Either way, I end up using hot water and driving a big car and so on.
In each of these cases we end up with a bad outcome, so this is socially important. This is not just some abstract thing going on in a class in Yale. We need to think about solutions to this, right from the start of the class, and we already talked about something. We pointed out, that this is not just a failure of communication. Communication per se will not get you out of a Prisoners' Dilemma. You can talk about it as much as you like, but as long as you're going to go home and still drive your Hummer and have sixteen hot showers a day, we're still going to have high carbon emissions.
You can talk about working hard on your joint problem sets, but as long as you go home and you don't work hard, it doesn't help. In fact, if the other person is working hard, or is cutting back on their carbon emissions, you have every bit more incentive to not work hard or to keep high carbon emissions yourself. So we need something more, and the kinds of things we can do are these: we can think about contracts; we can think about treaties between countries; we can think about regulation. All of these things work by changing the payoffs. Not just talking about it, but actually changing the outcomes and changing the payoffs, changing the incentives.
Another thing we can do, a very important thing, is we can think about changing the game into a game of repeated interaction and seeing how much that helps, and we'll come back and revisit that later in the class. One last thing we can think of doing but we have to be a bit careful here, is we can think about changing the payoffs by education. I think of that as the "Maoist" strategy. Lock people up in classrooms and tell them they should be better people. That may or may not work -- I'm not optimistic -- but at least it's the same idea. We're changing payoffs.
So that's enough for recap and I want to move on now. And in particular, we left you hanging at the end last time. We played a game at the very end last time, where each of you chose a number -- all of you chose a number -- and we said the winner was going to be the person who gets closest to two-thirds of the average in the class. Now we've figured that out, we figured out who the winner is, and I know that all of you have been trying to see if you won, is that right? I'm going to leave you in suspense. I am going to tell you today who won. We did figure it out, and we'll get there, but I want to do a little bit of work first. So we're just going to leave it in suspense. That'll stop you walking out early if you want to win the prize.
So there's going to be lots of times in this class when we get to play games, we get to have classroom discussions and so on, but there's going to be some times when we have to slow down and do some work, and the next twenty minutes are going to be that. So with apologies for being a bit more boring for twenty minutes, let's do something we'll call formal stuff. In particular, I want to develop and make sure we all understand, what are the ingredients of a game? So in particular, we need to figure out what formally makes something into a game.
The formal parts of a game are this. We need players -- and while we're here let's develop some notation. So the standard notation for players, I'm going to use things like little i and little j. So in that numbers game, the game when all of you wrote down a number and handed it in at the end of last time, the players were who? The players were you. You'all were the players -- a useful Texan expression meaning "you" plural. In the numbers game, you'all were the players.
The second ingredient of the game is strategies. (There's a good clue here. If I'm writing you should be writing.) Notation: so I'm going to use little "si" to be a particular strategy of Player i. So an example in that game might have been choosing the number 13. Everyone understand that? Now I need to distinguish this from the set of possible strategies of Player i, so I'm going to use capital "Si" to be what? To be the set of alternatives. The set of possible strategies of Player i. So in that game we played at the end last time, what was the set of strategies? It was the set 1, 2, 3, all the way up to 100.
So we're distinguishing a particular strategy from the set of possible strategies. While we're here, a third piece of notation for strategies: I'm going to use little "s" without an "i" (no subscripts) -- little "s" without an "i" -- to mean a particular play of the game. So what do I mean by that? All of you, at the end last time, wrote down a number and handed it in, so we had one number, one strategy choice, for each person in the class. So here they are; here are the strategy choices I collected -- the bundle of bits of paper you handed in last time. This is a particular play of the game.
I've got each person's name and I've got a number from each person: a strategy from each person. We actually have it on a spreadsheet as well: so here it is written out on a spreadsheet. Each of your names is on this spreadsheet and the number you chose. So that's a particular play of the game and that has a different name. We sometimes call this "a strategy profile." So in the textbook, you'll sometimes see the term a strategy profile or a strategy vector, or a strategy list. It doesn't really matter. What it's saying is one strategy for each player in the game.
So in the numbers game this is the spreadsheet -- or an example of this is the spreadsheet. (I need to make it so you can still see that, so I'm going to pull down these boards. And let me clean something.) So you might think we're done right? We've got players. We've got the choices they could make: that's their strategy sets. We've got those individual strategies. And we've got the choices they actually did make: that's the strategy profile. Seems like we've got everything you could possibly want to describe in a game.
What are we missing here? Shout it out. "Payoffs." We're missing payoffs. So, to complete the game, we need payoffs. Again, I need notation for payoffs. So in this course, I'll try and use "U" for utile, to be Player i's payoff. So "Ui" will depend on Player 1's choice … all the way to Player i's own choice … all the way up to Player N's choices. So Player i's payoff "Ui," depends on all the choices in the class, in this case, including her own choice. Of course, a shorter way of writing that would be "Ui(s)," it depends on the profile.
So in the numbers game what is this? In the numbers game "Ui(s)" can be two things. It can be 5 dollars minus your error in pennies, if you won. I guess it could be something if there was a tie, I won't bother writing that now. And it's going to be 0 otherwise. So we've now got all of the ingredients of the game: players, strategies, payoffs. Now we're going to make an assumption today and for the next ten weeks or so; so for almost all the class. We're going to assume that these are known. We're going to assume that everybody knows the possible strategies everyone else could choose and everyone knows everyone else's payoffs. Now that's not a very realistic assumption and we are going to come back and challenge it at the end of semester, but this will be complicated enough to give us a lot of material in the next ten weeks.
I need one more piece of notation and then we can get back to having some fun. So one more piece of notation, I'm going to write "s-i" to mean what? It's going to mean a strategy choice for everybody except person "i." It's going to be useful to have that notation around. So this is a choice for all except person "i" or Player i. So, in particular, if you're person 1 and then "s-i" would be "s2, s3, s4" up to "sn" but it wouldn't include "s1." It's useful why? Because sometimes it's useful to think about the payoffs, as coming from "i's" own choice and everyone else's choices. It's just a useful way of thinking about things.
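The notation above maps directly into code for the numbers game. In the sketch below, a strategy profile s is one number per player, and U_i(s) pays $5 minus the winner's error "in pennies" to whoever is closest to two-thirds of the average; the exact penny conversion, the handling of ties, and the player names are my own assumptions:

```python
# Each player's strategy set S_i: the integers 1 through 100.
S_i = range(1, 101)

def payoffs(s):
    """s: dict mapping player name -> chosen number (a strategy profile).
    Returns the payoff U_i(s) in dollars for each player i."""
    target = (2 / 3) * (sum(s.values()) / len(s))     # two-thirds of the average
    best = min(abs(x - target) for x in s.values())   # smallest error in the class
    # Winner(s): $5 minus the error converted to dollars at a penny per unit;
    # everyone else gets 0. (The penny conversion is an assumed reading.)
    return {i: (max(0.0, 5.0 - abs(x - target) * 0.01)
                if abs(x - target) == best else 0.0)
            for i, x in s.items()}

profile = {"Ann": 13, "Bob": 33, "Cal": 90}  # a particular play s
print(payoffs(profile))  # Bob is closest to two-thirds of the average here
```

Note that each U_i depends on the whole profile s, not just player i's own choice: changing anyone's number moves the target for everybody, which is exactly why the "Ui(s)" notation carries all the strategies as arguments.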
Now this is when I want to stop for a second and I know that some of you, from past experience, are somewhat math phobic. You do not have to wave your hands in the air if you're math phobic, but since some of you are, let me just get you all to take a deep breath. This goes for people who are math phobic at home too. So everyone's in a slight panic now. You came here today. You thought everything was going to fine. And now I'm putting math on the board. Take a deep breath. It's not that hard, and in particular, notice that all I'm doing here is writing down notation. There's actually no math going on here at all. I'm just developing notation.
I don't want anybody to quit this class because they're worried about math or math notation. So if you are in that category of somebody who might quit it because of that, come and talk to me, come and talk to the TAs. We will get you through it. It's fine to be math phobic. I'm phobic of all sorts of things. Not necessarily math, but all sorts of things. So a serious thing, a lot of people get put off by notation, it looks scarier than it is, there's nothing going on here except for notation at this point.
So let's have an example to help us fix some ideas. (And again, I'll have to clean the board, so give me a second.) I think an example might help those people who are disturbed by the notation. So here's a game which we're going to discuss briefly. It involves two players and we'll call the Players I and II. Player I has two choices, top and bottom, and Player II has three choices: left, center, and right. It's just a very simple abstract example for now. And let's suppose the payoffs are like this. They're not particularly interesting. We're just going to do it for the purpose of illustration. So here are the payoffs, reading along the top row and then the bottom row: (5, -1), (11, 3), (0, 0); (6, 4), (0, 2), (2, 0).
Let's just map the notation we just developed into this game. So first of all, who are the players here? Well there's no secret there, the players are -- let's just write it down why don't we. The players here in this game are Player I and Player II. What about the strategy sets or the strategy alternatives? So here Player I's strategy set, she has two choices top or bottom, represented by the rows, which are hopefully the top row and the bottom row. Player II has three choices, this game is not symmetric, so they have different number of choices, that's fine. Player II has three choices left, center, and right, represented by the left, center, and right column in the matrix.
Just to point out in passing, up to now, we've been looking mostly at symmetric games. Notice this game is not symmetric in the payoffs or in the strategies. There's no particular reason why games have to be symmetric. Payoffs: again, this is not rocket science, but let's do it anyway. So just an example of payoffs. So Player I's payoff, if she chooses top and Player II chooses center, we read by looking at the top row and the center column, and Player I's payoff is the first of these payoffs, so it's 11. Player II's payoff, from the same choices, top for Player I, center for Player II, again we go along the top row and the center column, but this time we choose Player II's payoff, which is the second payoff, so it's 3.
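For readers following along at home, the matrix and the payoff-reading rule can be sketched in a few lines of Python. The dictionary layout and function name are my own; the numbers are the ones on the board:

```python
# The 2x3 example game from the board. Each cell holds
# (Player I's payoff, Player II's payoff).
payoffs = {
    "top":    {"left": (5, -1), "center": (11, 3), "right": (0, 0)},
    "bottom": {"left": (6, 4),  "center": (0, 2),  "right": (2, 0)},
}

def u(i, s1, s2):
    """Payoff to player i (1 or 2) when Player I plays s1 and Player II plays s2."""
    return payoffs[s1][s2][i - 1]

print(u(1, "top", "center"))  # 11: top row, center column, first entry
print(u(2, "top", "center"))  # 3: same cell, second entry
```

Reading a payoff is exactly the two-step the lecture describes: find the cell from the row and column, then take the first or second entry depending on whose payoff you want.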
So again, I'm hoping this is calming down the math phobics in the room. Now how do we think this game is going to be played? It's not a particularly interesting game, but while we're here, why don't we just discuss it for a second. If our mike guys get a little bit ready here. So how do we think this game should be played? Well let's ask somebody at random perhaps. Ale, do you want to ask this guy in the blue shirt here, does Player I have a dominated strategy?
Student: No, Player I doesn't have a dominated strategy. For instance, if Player II picks left then Player I wants to pick bottom, but if Player II picks center, Player I wants to pick top.
Professor Ben Polak: Good. Excellent. Very good. I should have had you stand up. I forgot that. Never mind. But that was very clear, thank you. Was that loud enough so people could hear it? Did people hear that? People in the back, did you hear it? So even that wasn't loud enough, okay, so we really need to get people--That was very clear, very nice, but we need people to stand up and shout, or these people at the back can't hear. So your name is?
Professor Ben Polak: What Patrick said was: no, Player I does not have a dominated strategy. Top is better than bottom against left -- sorry, bottom is better than top against left because 6 is bigger than 5, but top is better than bottom against center because 11 is bigger than 0. Everyone see that? So it's not the case that top always beats--it's not the case that top always does better than bottom, or that bottom always does better than top. What about, raise hands this time, what about Player II? Does Player II have a dominated strategy? Everyone's keeping their hands firmly down so as not to get spotted here. Ale, can we try this guy in white? Do you want to stand up and wait until Ale gets there, and really yell it out now.
Student: I believe right is a dominated strategy because if Player I chooses top, then Player II will choose center, and if-- I'm getting confused now, it looks better on my paper. But yeah, right is never the best choice.
Professor Ben Polak: Okay, good. Let's be a little bit careful here. So your name is?
Professor Ben Polak: So Thomas said something which was true, but it doesn't quite match with the definition of a dominated strategy. What Thomas said was, right is never a best choice, that's true. But to be a dominated strategy we need something else. We need that there's another strategy of Player II that always does better. That turns out also to be true in this case, but let's just see.
So in this particular game, I claim that center dominates right. So let's just see that. If Player I chose top, center yields 3, right yields 0: 3 is bigger than 0. And if Player I chooses bottom, then center yields 2, right yields 0: 2 is bigger than 0 again. So in this game, center strictly dominates right. What you said was true, but I wanted something specifically about domination here. So what we know here, we know that Player II should not choose right. Now, in fact, that's as far as we can get with dominance arguments in this particular game, but nevertheless, let's just stick with it a second.
I gave you the definition of strict dominance last time and it's also in the handout. (By the way, the handout is on the web.) But let me write that definition again, making use of the notation from the class. So, definition: Player i's strategy "s'i" is strictly dominated by Player i's strategy "si" if "Ui" from choosing "si," when other people choose "s-i," is strictly bigger than "Ui" from choosing "s'i" when other people choose "s-i," and the key part of the definition is, for all "s-i."
So to say it in words, Player i's strategy "s'i" is strictly dominated by her strategy "si," if "si" always does strictly better -- always yields a higher payoff for Player i -- no matter what the other people do. So this is the same definition we saw last time, just being a little bit more nerdy and putting in some notation. People panicking about that, people look like deer in the headlamps yet? No, you look all right: all rightish.
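As a quick sanity check on that definition, here is a minimal sketch in Python that tests strict dominance among Player II's strategies in the example game above (the function name is my own):

```python
# Player II's payoffs in the example game, indexed by
# (Player I's row, Player II's column).
u2 = {
    ("top", "left"): -1, ("top", "center"): 3, ("top", "right"): 0,
    ("bottom", "left"): 4, ("bottom", "center"): 2, ("bottom", "right"): 0,
}
rows = ["top", "bottom"]

def strictly_dominates(si, si_prime):
    """True if Player II's strategy si yields a strictly higher payoff than
    si_prime against every strategy of Player I -- the 'for all s-i' part."""
    return all(u2[(r, si)] > u2[(r, si_prime)] for r in rows)

print(strictly_dominates("center", "right"))  # True: 3 > 0 and 2 > 0
print(strictly_dominates("left", "center"))   # False: -1 < 3 against top
```

The `all(...)` is doing the work of "no matter what the other people do": one failing row is enough to kill the domination claim.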
Let's have a look at another example. People okay, I can move this? All right, so it's a slightly more exciting example now. So imagine the following example, an invader is thinking about invading a country, and there are two ways -- there are two passes if you like -- through which he can lead his army. You are the defender of this country and you have to decide which of these passes or which of these routes into the country, you're going to choose to defend. And the catch is, you can only defend one of these two routes.
If you want a real world example of this, think about the third century B.C. (someone can correct me afterwards, but I think it's the third century B.C.) when Hannibal is thinking of crossing the Alps. Not Hannibal Lecter: Hannibal the general, the one with the elephants. Okay, so the key here is going to be that there are two passes. One of these passes is a hard pass. It goes over the Alps. And the other one is an easy pass. It goes along the coast. If the invader chooses the hard pass he will lose one battalion of his army simply in getting over the mountains, simply in going through the hard pass. If he meets your army, whichever pass he chooses -- if he meets your army defending a pass -- then he'll lose another battalion.
I've given you roughly the choices: for the attacker, which pass to attack through; and for the defender, which pass to defend. But let's put down some payoffs so we can start talking about this. So the payoffs for this game are going to be as follows. It's a simple two by two game. This is going to be the attacker -- this is Hannibal -- and this is going to be the defender (and I've forgotten which general was defending; someone's about to tell me that). And there are two passes you could defend: the easy pass or the hard pass. And there are two you could use to attack through: easy or hard. (Again, easy pass here just means no mountains; we're not talking about something on the New Jersey Turnpike.)
So the payoffs here are as follows, and I'll explain them in a second. So his payoff, the attacker's payoff, is how many battalions does he get to bring into your country? He only has two to start with and for you, it's how many battalions of his get destroyed? So just to give an example, if he goes through the hard pass and you defend the hard pass, he loses one of those battalions going over the mountains and the other one because he meets you. So he has none left and you've managed to destroy two of them. Conversely, if he goes on the hard pass and you defend the easy pass, he's going to lose one of those battalions. He'll have one left. He lost it in the mountains. But that's the only one he's going to lose because you were defending the wrong pass.
Everyone understand the payoffs of this game? So now imagine yourself as a Roman general. This is going to be a little bit of a stretch for the imagination, but imagine yourself as a Roman general, and let's figure out what you're going to do. You're the defender. What are you going to do? So let's have a show of hands. How many of you think you should defend the easy pass? Raise your hands, let's raise your hands so Jude can see them. Keep them up. Wave them in the air with a bit of motion. Wave them in the air. We should get you flags okay, because these are the Romans defending the easy pass. And how many of you think you're going to defend the hard pass? We have a huge number of people who don't want to be Roman generals here.
Let's try it again, no abstentions, right? I'm not going to penalize you for giving the wrong answer. So how many of you think you're going to defend the easy pass? Raise your hands again. And how many think you're going to defend the hard pass? So we have a majority choosing the easy pass -- a large majority, in fact. So what's going on here? Is it the case that defending the easy pass dominates defending the hard pass? Is that the case? You can shout out. No, it's not.
In fact, we could check that if the attacker attacks through the easy pass, not surprisingly, you do better if you defend the easy pass than the hard pass: 1 versus 0. But if the attacker was to attack through the hard pass, again not surprisingly, you do better if you defend the hard pass than the easy pass. So that's not an unintuitive finding. It isn't the case that defending easy dominates defending hard. You just want to match with the attacker. Nevertheless, almost all of you chose easy. What's going on? Can someone tell me what's going on? Let's get the mikes going a second. So can we catch the guy with the--can we catch this guy with the beard? Just wait for the mike to get there. If you could stand up: stand up and shout. There you go.
Student: Because you want to minimize the amount of enemy soldiers that reach Rome or whatever location it is.
Professor Ben Polak: You want to minimize the number of soldiers that reach Rome, that's true. On the other hand, we've just argued that you don't have a dominant strategy here; it's not the case that easy dominates hard. What else could be going on? While we've got you up, why don't we get the other guy who's got his hand up there in the middle. Again, stand up and shout in that mike. Point your face towards the mike. Good.
Student: It seems as though while you don't have a dominating strategy, it seems like Hannibal is better off attacking through--It seems like he would attack through the easy pass.
Professor Ben Polak: Good, why does it seem like that? That's right, we're on the right lines now. Why does it seem like he's going to attack through the easy pass?
Student: Well if you're not defending the easy pass, he doesn't lose anyone, and if he attacks through the hard pass he's going to lose at least one battalion.
Professor Ben Polak: So let's look at it from--Let's do the exercise--Let's do the second lesson I emphasized at the beginning. Let's put ourselves in Hannibal's shoes, they're probably boots or something. Whatever you do when you're riding an elephant, whatever you wear. Let's put ourselves in Hannibal's shoes and try and figure out what Hannibal's going to do here. So it could be--From Hannibal's point of view he doesn't know which pass you're going to defend, but let's have a look at his payoffs.
If you were to defend the easy pass and he goes through the easy pass, he will get into your country with one battalion and that's the same as he would have got if he went through the hard pass. So if you defend the easy pass, from his point of view, it doesn't matter whether he chooses the easy pass and gets one in there or the hard pass, he gets one in there. But if you were to defend the hard pass, if you were to defend the mountains, then if he chooses the easy pass, he gets both battalions in and if he chooses the hard pass, he gets no battalions in. So in this case, easy is better.
We have to be a little bit careful. It's not the case that for Hannibal, choosing the easy pass to attack through, strictly dominates choosing the hard pass, but it is the case that there's a weak notion of domination here. It is the case -- to introduce some jargon -- it is the case that the easy pass for the attacker, weakly dominates the hard pass for the attacker. What do I mean by weakly dominate? It means by choosing the easy pass, he does at least as well, and sometimes better, than he would have done had he chosen the hard pass.
So here we have a second definition, a new definition for today, and again we can use our jargon. Definition: Player i's strategy "s'i" is weakly dominated by her strategy "si" if -- now we're going to take advantage of our notation -- Player i's payoff from choosing "si" against "s-i" is at least as big as her payoff from choosing "s'i" against "s-i," and this has to be true for all things that anyone else could do. And in addition, Player i's payoff from choosing "si" against "s-i" is strictly bigger than her payoff from choosing "s'i" against "s-i," for at least one thing that everyone else could do.
Just check that that exactly corresponds to the easy and hard thing we just had before. I'll say it again: Player i's strategy "s'i" is weakly dominated by her strategy "si" if she always does at least as well by choosing "si" as by choosing "s'i," regardless of what everyone else does, and sometimes she does strictly better. It seems a pretty powerful lesson. Just as we said you should never choose a strictly dominated strategy, you're probably never going to choose a weakly dominated strategy either, but it's a little more subtle.
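To make the "at least as well, and sometimes strictly better" condition concrete, here is a sketch of a weak-dominance check applied to the attacker in the Hannibal game, with the payoffs from the board (battalions, out of two, that make it into the country); the function name is my own:

```python
# Attacker's payoff, indexed by (attacker's pass, defender's pass).
u_attacker = {
    ("easy", "easy"): 1,  # met at the easy pass: one battalion lost in battle
    ("easy", "hard"): 2,  # unopposed on the easy pass: both get through
    ("hard", "easy"): 1,  # one battalion lost crossing the mountains
    ("hard", "hard"): 0,  # one lost in the mountains, one lost in battle
}
defenses = ["easy", "hard"]

def weakly_dominates(si, si_prime):
    """si weakly dominates si_prime: at least as good against every defense,
    and strictly better against at least one."""
    at_least_as_good = all(u_attacker[(si, d)] >= u_attacker[(si_prime, d)]
                           for d in defenses)
    sometimes_better = any(u_attacker[(si, d)] > u_attacker[(si_prime, d)]
                           for d in defenses)
    return at_least_as_good and sometimes_better

print(weakly_dominates("easy", "hard"))  # True: ties against easy, wins against hard
print(weakly_dominates("hard", "easy"))  # False
```

Note the two clauses mirror the two halves of the definition: the `all` is the weak inequality for every "s-i," and the `any` is the strict inequality for at least one "s-i."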
Now that definition, if you're worried about what I've written down here and you want to see it in words, on the handout I've already put on the web that has the summary of the first class, I included this definition in words as well. So compare the definition of words with what's written here in the nerdy notation on the board. Now since we think that Hannibal, the attacker, is not going to play a weakly dominated strategy, we think Hannibal is not going to choose the hard pass. He's going to attack on the easy pass. And given that, what should we defend? We should defend easy which is what most of you chose.
So be honest now: was that why most of you chose easy? Yeah, it probably was. So, by putting ourselves in Hannibal's shoes, we could figure out that his hard attack strategy was weakly dominated. He's going to choose easy, so we should defend easy. Having said that, of course, Hannibal actually went through the mountains, which kind of screws up the lesson, but it's too late now.
Now then, I promised you we'd get back to the game from last time. So where have we got to so far in this class? We know from last time that you should not choose a dominated strategy, and we also know we probably aren't going to choose a weakly dominated strategy, and we also know that you should put yourself in other people's shoes and figure out that they're not going to play strictly or weakly dominated strategies. That seems a pretty good way to predict how other people are going to play. So let's take those ideas and go back to the numbers game from last time.
Now before I do that -- I don't need the people at home to see this -- but how many of you were here last time? How many of you were not? I asked the wrong question. How many of you were not here last time? So we handed out that game again -- the game with the numbers -- but just in case, let me just read out the game you played. This was the game you played.
"Without showing your neighbor what you are doing, put it in the box below a whole number between 1 and a 100. We will (and in fact have) calculated the average number chosen in the class and the winner of this game is the person who gets closest to two-thirds times the average number. They will win five dollars minus the difference in pennies."
So everybody filled that in last time and I have their choices here. So before we reveal who won, let's discuss this a little bit. Let me come down hazardously off this stage, and figure out--Let's get the mics up a bit for a second, we can get some mics ready.
So let me find out from people here and see what people did a second. You can be honest here since I've got everything in front of me. So how many of you chose some number like 32, 33, 34? One hand. Actually I can tell you, nine of you did. So should I read out the names? Should I embarrass people? We've got Lynette Lukucin, we've got Kristin Bargeon; there's nine of you here. Let's try it again. How many of you chose numbers between 32 and 34? Okay, a good number of you. Now we're seeing some hands up. So keep your hands up a second, those people. So let me ask people why? Can we get the mike to that guy? What's your name? If we can get him to stand up. Stand up a second and shout out to the class. What's your name?
Professor Ben Polak: Chris, you're on this list somewhere. Maybe you're not on this list somewhere. Never mind, what did you choose?
Student: I think I chose 30.
Professor Ben Polak: Okay 30, so that's pretty close. So why did you choose 30?
Student: Because I thought everyone was going to be around the 45 range, because 66 is two-thirds, or right around two-thirds, of 100, and they were going to go two-thirds less than that, and I did one less than that.
Professor Ben Polak: Okay, thank you. Let's get one of the others. There was another one in here. Can you just raise your hands again, the people who were around 33, 34. There's somebody in here. Can we get you to stand up (and you're between mikes). So that would be--Yep, go ahead. Shout it out. What's your name first of all?
Professor Ben Polak: Ryan, I must have you here as well, never mind. What did you choose?
Student: 33, I think.
Professor Ben Polak: 33. Oh you did. You are Ryan Lowe?
Professor Ben Polak: You are Ryan Lowe, okay. Good, go ahead.
Student: I thought similar to Chris actually and I also thought that if we got two-thirds and everyone was choosing numbers in between 1 and 100 ends up with 33, would be around the number (indiscernible).
Professor Ben Polak: So just to repeat the argument that we just heard. Again, you have to shout it out more because I'm guessing people didn't hear that in the room. So I'll just repeat it to make sure everyone hears it. A reason for choosing a number like 33 might go as follows. If people in the room choose randomly between 1 and 100, then the average is going to be around 50, say, and two-thirds of 50 is around 33 -- 33 1/3 actually. So that's a pretty good piece of reasoning. What's wrong with that reasoning? What's wrong with that? Can we get the woman in the striped shirt here, sorry. We haven't had a woman for a while, so let's have a woman. Thank you.
Student: That even if everyone else had the same reasoning as you, it's still going to be way too high.
Professor Ben Polak: So in particular, if everyone else had the same reasoning as you, it's going to be way too high. So if everyone else reasons that way, then everyone in the room would choose a number like 33 or 34, and in that case the average would be what? Sorry -- two-thirds of the average would be what? Something like 22.
So the flaw in the argument that Chris and Ryan had -- it isn't a bad argument, it's a good starting point -- but the flaw in the argument, the mistake in the argument, was the first sentence in the argument. The first sentence in the argument was: if the people in the room choose at random, then they will choose around 50. That's true. The problem is that people in the room aren't going to choose at random. Look around the room a second. Look around yourselves. Do any of you look like a random number generator? Actually, from here I can see some of the people, but I'm not going to point anyone out. Actually, looking at some of your answers, maybe some of you are.
On the whole, Yale students are not random number generators. They're trying to win the game. So they're unlikely to choose numbers at random. As a further argument, if in fact everyone thought that way, and if you figured out everyone was going to think that way, then you would expect everyone to choose a number like 33 and in that case you should choose a number like 22.
How many of you, raise your hands a second. How many of you chose numbers in the range 21 through 23? There's way more of you than that. I'll start reading you out as well. Actually about twelve of you, raise your hands. There should be twelve hands going up somewhere. There's two, three hands going up, four, five hands going up. There's actually 12 people who chose exactly 22, so considerably more if we include 23 and 21. So those people, I'm guessing, were thinking this way, is that right? Let me get one of my 22's up again. Here's a 22. You want to get this guy? What's your name sir? Stand up and shout.
Professor Ben Polak: You chose 22?
Student: I chose 22 because I thought that most people would play the game taking two-thirds a couple of times, and give numbers averaging around the low 30's.
Professor Ben Polak: So if you think people are going to play a particular way -- in particular, if you think people are going to choose the strategy of Ryan and Chris and choose around 33 -- then 22 seems a great answer. But you underestimate your Yale colleagues. In fact, 22 was way too high. Now let me just repeat the point here. The point here is: when you're playing a game, you want to think about what other people are trying to do, to try and predict what they're trying to do, and it's not necessarily a great starting point to assume that the people around you are random number generators. They have aims -- trying to win -- and they have strategies too.
Let me take this back to the board a second. So, in particular, are there any strategies here we can really rule out? We said already people are not random. Are there any choices we can just rule out? We know people are not going to choose those choices. Let's have someone here. Can we have the guy in green? Wait for Ale, there we go. Good. Stand up. Give me your name.
Student: My name's Nick.
Professor Ben Polak: Shout it out so people can hear.
Student: No one is going to choose a number over 50.
Professor Ben Polak: No one is going to choose a number over 50. Okay, that's fair enough -- although some people did. I was thinking of something a little bit less ambitious. Somebody said 66. So let's start analyzing this. So, in particular, there's something about these strategy choices that are greater than 67. Certainly -- I mean 66, so let's go up a little bit -- these numbers bigger than 67. What's wrong with numbers bigger than 67? Raise your hands if you have an answer. Can we get the guy in red who's right close to the mike? Stand up, give me your name. Shout it out to the crowd.
Professor Ben Polak: Yep.
Student: If everyone chooses 100, it would be 67.
Professor Ben Polak: Good, so even if everyone in the room didn't choose randomly but they all chose 100 -- a very unlikely circumstance -- the highest two-thirds of the average could possibly be is 66 2/3, hence 67 would be a pretty good choice in that case. So numbers bigger than 67 seem pretty crazy choices, but crazy isn't the word I'm looking for here. What can we say about those choices, those strategies bigger than 67 -- 68 and above? What can we say about those choices? Somebody right behind you, the woman right behind you, shout it out.
Student: They have no payoffs for…
Professor Ben Polak: They have no payoffs. What's the jargon here? Let's use our jargon. Somebody shout it out, what's the jargon about that? They're dominated. So these strategies are dominated. Actually, they're only weakly dominated, but that's okay, they're certainly dominated. In particular, a strategy like 80 is weakly dominated by choosing 67. You will always get a payoff from choosing 67 at least as high, and sometimes higher, than the payoff you would have got had you chosen 80, no matter what else happened in the room. So these strategies are dominated. We know, from the very first lesson of the class last time, that no one should choose these strategies. They're dominated strategies.
So did anyone choose strategies bigger than 67? Okay, I'm not going to read out names here, but it turns out four of you did. I'm not going to make you wave your--okay. So okay, for the four of you who did, never mind, but … well, mind actually, yeah. So once we've eliminated the possibility that anyone in the room is going to choose a strategy bigger than 67, it's as if those numbers 68 through 100 are irrelevant. It's really as if the game is being played where the only choices available on the table are 1 through 67. Is that right? We know no one's going to choose 68 and above, so we can just forget them. We can delete those strategies, and once we delete those strategies, all that's left are the choices 1 through 67.
So can somebody help me out now? What can I conclude, now I've concluded that the strategies 68 through 100 essentially don't exist or have been deleted. What can I conclude? Let me see if I can get a mike in here. Stand up and wait for the mike. And here comes the mike. Good. Shout out.
Student: That all strategies 45 and above are hence also ruled out.
Professor Ben Polak: Good, so your name is?
Professor Ben Polak: So Henry is saying once we've figured out that no one should choose a strategy bigger than 67, then we can go another step and say, if those strategies never existed, then the same argument -- or a similar argument -- rules out strategies bigger than 45. Let's be careful here. The strategies that are less than 67 but bigger than 45 are not dominated strategies in the original game. In particular, we just argued that if everyone in the room chose 100, then 67 would be a winning strategy. So it's not the case that the strategies between 45 and 67 are dominated strategies. But it is the case that they're dominated once we delete the dominated strategies: once we delete 68 and above.
So these strategies -- let's be careful with the word weakly here -- these strategies are not weakly dominated in the original game. But they are dominated -- they're weakly dominated -- once we delete 68 through 100. So all of the strategies 46 through 67 are gone now. So okay, let's have a look. Did anyone choose -- raise your hands, be brave here -- did anyone choose a strategy between 46 and 67? No one's raising their hand, but I know some of you did because I've got it in front of me; at least four of you did, and I won't read out those names yet, but I might read them out next time. So four more people chose those strategies.
Now notice, there are two different parts to this argument. The argument that eliminates strategies 68 and above just involves the first lesson of last time: do not choose a dominated strategy -- admittedly weakly dominated here, but still. But the second slice, strategies 46 through 67: getting rid of those strategies involves a little bit more. You've got to put yourself in the shoes of your fellow classmates and figure out that they're not going to choose 68 and above.
So the first argument is a straightforward argument. The second argument says: I put myself in other people's shoes, I realize they're not going to play a dominated strategy, and therefore, having realized they're not going to play a dominated strategy, I shouldn't play a strategy between 46 and 67. So this argument is an 'in shoes' argument. Now what? Where can we go now? Yeah, so let's have the guy with the beard, but let the mike get to him. Yell out your name.
Student: You just repeat the same reasoning again and again, and you eventually get down to 1.
Professor Ben Polak: We'll do that but let's go one step at a time. So now we've ruled out the possibility that anyone's going to choose a strategy 68 and above because they're weakly dominated, and we've ruled out the possibility that anyone's going to choose a strategy between 46 and 67, because those strategies are dominated, once we've ruled out the dominated strategies.
So we know no one's choosing any strategies above 45. It's as if the numbers 46 and above don't exist. So we know that the highest anyone could ever choose is 45, and two-thirds of 45 is … someone help me out here … 30, right: exactly 30. So now look at all the numbers between 30 and 45. These strategies were not dominated, and they weren't dominated even after deleting the dominated strategies. But they are dominated once we delete not just the dominated strategies, but also the strategies that were dominated once we deleted the dominated strategies. I'm not going to try and write that, but you should try and write it in your notes.
So without writing that argument down in detail, notice that we can rule out the strategies 30 through 45, not by just examining our own payoffs; not just by putting ourselves in other people's shoes and realizing they're not going to choose a dominated strategy; but by putting our self in other people's shoes while they're putting themselves in someone else's shoes and figuring out what they're going to do. So this is an 'in shoes', be careful where we are here, this is an 'in shoes in shoes' argument, at which point you might want to invent the sock.
Now, where's this going? We were told where it's going. We were able to rule out 68 and above. Then we were able to rule out 46 and above. Now we're able to rule out 31 and above. By the next slice down we'll be able to eliminate -- what is it -- about 20 and above, so 30 down to about 20, and this will be an 'in shoes, in shoes, in shoes' argument. These strategies aren't dominated, nor are they dominated once you delete the dominated strategies, nor are they dominated once we delete the strategies that were dominated once we deleted the dominated strategies, but they are dominated at the next round of deletion after that -- you get what I'm doing here.
So where is this argument going to go? It's going to go all the way down to 1: all the way down to 1. We could repeat this argument all the way down to 1. Notice that once we've deleted the dominated strategies -- you know, I said before that about four people chose in the first range, and about four people chose in the second range -- but in this range, 30 through 45, I had lots of people. How many of you chose a number between 30 and 45? Well, more than that. I can guarantee you more than that chose a number between 30 and 45. In fact, the people we started off with, who chose 33, chose in that range. A lot more of you chose numbers between 20 and 30, so we're really getting into the meat of the distribution. But we're seeing that these are choices that, perhaps, are ruled out by this kind of reasoning.
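The slicing we just did by hand can be sketched as a loop: each round, if no one plays above the current bound, two-thirds of the average can't exceed two-thirds of that bound, so every whole number above roughly two-thirds of the bound gets deleted. This is a sketch of the bound's path, not a full proof:

```python
import math

bound = 100          # highest number anyone might play
rounds = [bound]
while bound > 1:
    # Next round's bound: the highest whole number that can still tie or beat
    # two-thirds of the average when everyone plays at most `bound`.
    new_bound = math.ceil(2 * bound / 3)
    if new_bound == bound:
        break        # the coarse bound gets stuck at 2
    bound = new_bound
    rounds.append(bound)

print(rounds)  # [100, 67, 45, 30, 20, 14, 10, 7, 5, 4, 3, 2]
```

The first few steps match the slices from the lecture: 100 to 67 to 45 to 30. The coarse bound sticks at 2, but a slightly finer argument finishes the job: if everyone plays 1 or 2, two-thirds of the average is at most 4/3, which is always closer to 1 than to 2, so 2 goes as well and the argument lands on 1, just as the lecture says.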
Now, I'm still not going to quite reveal yet who won. I want to take this just one step more abstract. So I want to just discuss this a little bit more. I want to discuss the consequence of rationality in playing games, slightly philosophical for a few minutes. So I claim that if you are a rational player, by which I mean somebody who is trying to maximize their payoffs by their play of the game, that simply being rational, just being a rational player, rules out playing these dominated strategies. So the four of you who chose numbers bigger than 67, whose names I'm not going to read out, maybe they were making a mistake.
|Game Theory (ECON 159). We apply the main idea from last time, iterative deletion of dominated strategies, to analyze an election where candidates can choose their policy positions. We then consider how good this classic model is as a description of the real political process, and how we might build on it to improve it. Toward the end of the class, we introduce a new idea to get us beyond iterative deletion. We think about our beliefs about what the other player is going to do, and then ask what is the best strategy for us to choose given those beliefs? This course was recorded in Fall 2007.|
|Game Theory (ECON 159). We continue the idea (from last time) of playing a best response to what we believe others will do. More particularly, we develop the idea that you should not play a strategy that is not a best response for any belief about others' choices. We use this idea to analyze taking a penalty kick in soccer. Then we use it to analyze a profit-sharing partnership. Toward the end, we introduce a new notion: Nash Equilibrium. This course was recorded in Fall 2007.|
|Game Theory (ECON 159). We first define formally the new concept from last time: Nash equilibrium. Then we discuss why we might be interested in Nash equilibrium and how we might find Nash equilibrium in various games. As an example, we play a class investment game to illustrate that there can be many equilibria in social settings, and that societies can fail to coordinate at all or may coordinate on a bad equilibrium. We argue that coordination problems are common in the real world. Finally, we discuss why in such coordination problems--unlike in prisoners' dilemmas--simply communicating may be a remedy. This course was recorded in Fall 2007.|
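The equilibrium check described in this lecture--no player can gain by deviating unilaterally--can be sketched in code. The investment-game payoffs below are illustrative assumptions, not the numbers used in class.

```python
# Find pure-strategy Nash equilibria in a 2x2 coordination game.
# Payoffs (hypothetical): investing pays off only if the other invests.
payoffs = {  # (row action, column action) -> (row payoff, column payoff)
    ("invest", "invest"): (1, 1),
    ("invest", "stay_out"): (-1, 0),
    ("stay_out", "invest"): (0, -1),
    ("stay_out", "stay_out"): (0, 0),
}
actions = ["invest", "stay_out"]

def is_nash(r, c):
    """True if neither player gains by deviating unilaterally."""
    row_ok = all(payoffs[(r, c)][0] >= payoffs[(r2, c)][0] for r2 in actions)
    col_ok = all(payoffs[(r, c)][1] >= payoffs[(r, c2)][1] for c2 in actions)
    return row_ok and col_ok

equilibria = [(r, c) for r in actions for c in actions if is_nash(r, c)]
# Two equilibria: both invest (the good one) or both stay out (the bad one).
```

The check finds two equilibria, which is exactly the coordination problem from the class game: society can settle on the good outcome or get stuck on the bad one.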
|Game Theory (ECON 159). We apply the notion of Nash Equilibrium, first, to some more coordination games; in particular, the Battle of the Sexes. Then we analyze the classic Cournot model of imperfect competition between firms. We consider the difficulties in colluding in such settings, and we discuss the welfare consequences of the Cournot equilibrium as compared to monopoly and perfect competition. This course was recorded in Fall 2007.|
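The Cournot analysis can be sketched numerically. Assuming linear demand P = a - b(q1 + q2) and a constant marginal cost c (the values below are illustrative, not from the lecture), each firm's best response can be iterated until the quantities settle at the Nash equilibrium.

```python
# Cournot duopoly sketch: linear demand P = a - b*(q1 + q2), marginal
# cost c. Parameter values are illustrative assumptions.
a, b, c = 12.0, 1.0, 0.0

def best_response(q_other):
    # Maximizing (a - b*(q + q_other) - c) * q gives q = (a - c - b*q_other)/(2b).
    return max(0.0, (a - c - b * q_other) / (2.0 * b))

q1 = q2 = 0.0
for _ in range(100):          # iterate best responses until they settle
    q1 = best_response(q2)
    q2 = best_response(q1)

cournot = (a - c) / (3.0 * b)           # analytic Cournot quantity per firm
monopoly_total = (a - c) / (2.0 * b)    # total output a monopolist would choose
```

Total Cournot output exceeds the monopoly total, which is the welfare comparison made in the lecture: Cournot sits between monopoly and perfect competition.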
|Game Theory (ECON 159). We first consider the alternative "Bertrand" model of imperfect competition between two firms in which the firms set prices rather than setting quantities. Then we consider a richer model in which firms still set prices but in which the goods they produce are not identical. We model the firms as stores that are on either end of a long road or line. Customers live along this line. Then we return to models of strategic politics in which it is voters that are spread along a line. This time, however, we do not allow candidates to choose positions: they can only choose whether or not to enter the election. We play this "candidate-voter game" in the class, and we start to analyze both as a lesson about the notion of equilibrium and a lesson about politics. This course was recorded in Fall 2007.|
|Game Theory (ECON 159). We first complete our discussion of the candidate-voter model showing, in particular, that, in equilibrium, two candidates cannot be too far apart. Then we play and analyze Schelling's location game. We discuss how segregation can occur in society even if no one desires it. We also learn that seemingly irrelevant details of a model can matter. We consider randomizations first by a central authority (such as in a bussing policy), and then decentralized randomization by the individuals themselves, "mixed strategies." Finally, we look at rock, paper, scissors to see an example of a mixed-strategy equilibrium to a game. This course was recorded in Fall 2007.|
|Game Theory (ECON 159). We continue our discussion of mixed strategies. First we discuss the payoff to a mixed strategy, pointing out that it must be a weighted average of the payoffs to the pure strategies used in the mix. We note a consequence of this: if a mixed strategy is a best response, then all the pure strategies in the mix must themselves be best responses and hence indifferent. We use this idea to find mixed-strategy Nash equilibria in a game within a game of tennis. This course was recorded in Fall 2007.|
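The indifference idea can be sketched in code: in a mixed equilibrium, your mix must leave the opponent indifferent between her pure strategies. The example below uses standard matching pennies rather than the tennis numbers from class, so the payoffs are assumptions.

```python
# Solve for the row player's equilibrium mix in a 2x2 zero-sum game by
# making the column player indifferent (matching-pennies payoffs assumed).
from fractions import Fraction

# Column player's payoff for each (row action, column action) pair.
col_payoff = {("H", "H"): -1, ("H", "T"): 1, ("T", "H"): 1, ("T", "T"): -1}

def indifferent_mix():
    """Probability p the row player puts on 'H' so that the column
    player's expected payoffs from 'H' and 'T' are equal:
        p*u(H,H) + (1-p)*u(T,H) = p*u(H,T) + (1-p)*u(T,T)."""
    uHH, uTH = col_payoff[("H", "H")], col_payoff[("T", "H")]
    uHT, uTT = col_payoff[("H", "T")], col_payoff[("T", "T")]
    return Fraction(uTT - uTH, (uHH - uTH) - (uHT - uTT))

p = indifferent_mix()   # 1/2 for matching pennies
```

The same linear equation, with the tennis payoffs in place of these, produces the equilibrium mixes discussed in the lecture.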
|Game Theory (ECON 159). We develop three different interpretations of mixed strategies in various contexts: sport, anti-terrorism strategy, dating, paying taxes and auditing taxpayers. One interpretation is that people literally randomize over their choices. Another is that your mixed strategy represents my belief about what you might do. A third is that the mixed strategy represents the proportions of people playing each pure strategy. Then we discuss some implications of the mixed equilibrium in games; in particular, we look how the equilibrium changes in the tax-compliance/auditor game as we increase the penalty for cheating on your taxes.|
|Game Theory (ECON 159). We discuss evolution and game theory, and introduce the concept of evolutionary stability. We ask what kinds of strategies are evolutionarily stable, and how this idea from biology relates to concepts from economics like domination and Nash equilibrium.|
|Game Theory (ECON 159). We apply the idea of evolutionary stability to consider the evolution of social conventions. Then we consider games that involve aggressive (Hawk) and passive (Dove) strategies, finding that sometimes, evolutionary populations are mixed. We discuss how such games can help us to predict how behavior might vary across settings. Finally, we consider a game in which there is no evolutionary stable population and discuss an example from nature. This course was recorded in Fall 2007.|
|Game Theory (ECON 159). We consider games in which players move sequentially rather than simultaneously, starting with a game involving a borrower and a lender. We analyze the game using "backward induction." The game features moral hazard: the borrower will not repay a large loan. We discuss possible remedies for this kind of problem. One remedy involves incentive design: writing contracts that give the borrower an incentive to repay. Another involves commitment strategies; in this case providing collateral. We consider other commitment strategies such as burning boats. But the key lesson of the day is the idea of backward induction. This course was recorded in Fall 2007.|
|Game Theory (ECON 159). We first apply our big idea--backward induction--to analyze quantity competition between firms when play is sequential, the Stackelberg model. We do this twice: first using intuition and then using calculus. We learn that this game has a first-mover advantage, and that it comes from commitment and from information in the game rather than from the timing per se. We notice that in some games having more information can hurt you if other players know you will have that information and hence alter their behavior. Finally, we show that, contrary to myth, many games do not have first-mover advantages. This course was recorded in Fall 2007.|
|Game Theory (ECON 159). We first discuss Zermelo's theorem: that games like tic-tac-toe or chess have a solution. That is, either there is a way for player 1 to force a win, or there is a way for player 1 to force a tie, or there is a way for player 2 to force a win. The proof is by induction. Then we formally define and informally discuss both perfect information and strategies in such games. This allows us to find Nash equilibria in sequential games. But we find that some Nash equilibria are inconsistent with backward induction. In particular, we discuss an example that involves a threat that is believed in an equilibrium but does not seem credible. This course was recorded in Fall 2007.|
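Zermelo's theorem can be illustrated in miniature by actually solving tic-tac-toe by backward induction (a sketch, not code from the course): every position is assigned a value by rolling back from the terminal positions, and the value of the empty board turns out to be a draw under best play.

```python
# Solve tic-tac-toe by backward induction (minimax with memoization).
# Boards are 9-character strings of 'X', 'O', and '.'.
from functools import lru_cache

WINS = [(0, 1, 2), (3, 4, 5), (6, 7, 8), (0, 3, 6),
        (1, 4, 7), (2, 5, 8), (0, 4, 8), (2, 4, 6)]

def winner(board):
    for a, b, c in WINS:
        if board[a] != "." and board[a] == board[b] == board[c]:
            return board[a]
    return None

@lru_cache(maxsize=None)
def value(board, player):
    """+1 if X can force a win, -1 if O can, 0 if best play is a draw."""
    w = winner(board)
    if w == "X":
        return 1
    if w == "O":
        return -1
    if "." not in board:
        return 0
    other = "O" if player == "X" else "X"
    vals = [value(board[:i] + player + board[i + 1:], other)
            for i, s in enumerate(board) if s == "."]
    return max(vals) if player == "X" else min(vals)

result = value("." * 9, "X")   # 0: tic-tac-toe is a draw with best play
```

This is exactly the trichotomy in the theorem: the induction assigns every such game a value of win, lose, or tie; chess has one too, we just cannot compute it.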
|Game Theory (ECON 159). In the first half of the lecture, we consider the chain-store paradox. We discuss how to build the idea of reputation into game theory; in particular, in settings like this where a threat or promise would otherwise not be credible. The key idea is that players may not be completely certain about other players' payoffs or even their rationality. In the second half of the lecture, we stage a duel, a game of preemption. The key strategic question in such games is when; in this case, when to fire. We use two ideas from earlier lectures, dominance and backward induction, to analyze the game. Finally we discuss two biases found in Americans: overconfidence and over-valuing being pro-active. This course was recorded in Fall 2007.|
|Game Theory (ECON 159). We develop a simple model of bargaining, starting from an ultimatum game (one person makes the other a take it or leave it offer), and building up to alternating offer bargaining (where players can make counter-offers). On the way, we introduce discounting: a dollar tomorrow is worth less than a dollar today. We learn that, if players are equally patient, if offers can be in rapid succession, and if each side knows how much the game is worth to the other side, then the first offer is for an equal split of the pie and this offer is accepted. But this result depends on those assumptions; for example, bargaining power may depend on wealth. This course was recorded in Fall 2007.|
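The backward-induction logic of alternating offers can be sketched in a few lines (a sketch of the argument, not lecture code): with a common discount factor delta, each responder must be offered delta times what she would keep as the next proposer, and rolling this back from a final take-it-or-leave-it round pins down the first offer.

```python
# Alternating-offers bargaining rolled back from the last round, with a
# common discount factor delta (values below are illustrative).
def proposer_share(delta, rounds):
    """Share of the pie the first proposer keeps, by backward induction."""
    share = 1.0                        # the last proposer keeps everything
    for _ in range(rounds - 1):
        share = 1.0 - delta * share    # leave the responder just enough
    return share

ultimatum = proposer_share(0.9, 1)     # one round: the ultimatum game
patient = proposer_share(0.99, 1001)   # many rounds: close to 1/(1 + 0.99)
```

As delta approaches 1 (rapid offers, patient players), the proposer's share 1/(1 + delta) approaches one half, which is the equal-split result stated above.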
|Game Theory (ECON 159). We consider games that have both simultaneous and sequential components, combining ideas from before and after the midterm. We represent what a player does not know within a game using an information set: a collection of nodes among which the player cannot distinguish. This lets us define games of imperfect information; and also lets us formally define subgames. We then extend our definition of a strategy to imperfect information games, and use this to construct the normal form (the payoff matrix) of such games. A key idea here is that it is information, not time per se, that matters. We show that not all Nash equilibria of such games are equally plausible: some are inconsistent with backward induction; some involve non-Nash behavior in some (unreached) subgames. To deal with this, we introduce a more refined equilibrium notion, called sub-game perfection. This course was recorded in Fall 2007.|
|Game Theory (ECON 159). We analyze three games using our new solution concept, subgame perfect equilibrium (SPE). The first game involves players' trusting that others will not make mistakes. It has three Nash equilibria but only one is consistent with backward induction. We show the other two Nash equilibria are not subgame perfect: each fails to induce Nash in a subgame. The second game involves a matchmaker sending a couple on a date. There are three Nash equilibria in the dating subgame. We construct three corresponding subgame perfect equilibria of the whole game by rolling back each of the equilibrium payoffs from the subgame. Finally, we analyze a game in which a firm has to decide whether to invest in a machine that will reduce its costs of production. We learn that the strategic effects of this decision--its effect on the choices of other competing firms--can be large, and if we ignore them we will make mistakes. This course was recorded in Fall 2007.|
|Game Theory (ECON 159). We discuss repeated games, aiming to unpack the intuition that the promise of rewards and the threat of punishment in the future of a relationship can provide incentives for good behavior today. In class, we play prisoners' dilemma twice and three times, but this fails to sustain cooperation. The problem is that, in the last stage, since there is then no future, there is no incentive to cooperate, and hence the incentives unravel from the back. We relate this to the real-world problems of a lame duck leader and of maintaining incentives for those close to retirement. But it is possible to sustain good behavior in early stages of some repeated games (even if they are only played a few times) provided the stage games have two or more equilibria to be used as rewards and punishments. This may require us to play bad equilibria tomorrow. We relate this to the trade off between ex ante and ex post efficiency in the law. Finally, we play a game in which the players do not know when the game will end, and we start to consider strategies for this potentially infinitely repeated game.|
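The unraveling argument hinges on the stage game having a unique equilibrium, which can be verified mechanically (the prisoners' dilemma payoffs below are standard illustrative numbers, not the ones used in class).

```python
# Verify that a prisoners' dilemma stage game has a unique pure-strategy
# equilibrium, (D, D). Payoffs are illustrative assumptions.
STAGE = {  # (row action, column action) -> (row payoff, column payoff)
    ("C", "C"): (2, 2), ("C", "D"): (0, 3),
    ("D", "C"): (3, 0), ("D", "D"): (1, 1),
}

def stage_equilibria():
    acts = ["C", "D"]
    eq = []
    for a in acts:
        for b in acts:
            row_ok = STAGE[(a, b)][0] >= max(STAGE[(x, b)][0] for x in acts)
            col_ok = STAGE[(a, b)][1] >= max(STAGE[(a, y)][1] for y in acts)
            if row_ok and col_ok:
                eq.append((a, b))
    return eq

# Only ("D", "D") survives, so the last round must be (D, D); given that,
# the second-to-last round is effectively a last round, and so on back.
```

With only one stage equilibrium there is nothing to use as a reward or punishment, which is why the lecture's remedy requires stage games with two or more equilibria.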
|Game Theory (ECON 159). In business or personal relationships, promises and threats of good and bad behavior tomorrow may provide good incentives for good behavior today, but, to work, these promises and threats must be credible. In particular, they must come from equilibrium behavior tomorrow, and hence form part of a subgame perfect equilibrium today. We find that the grim strategy forms such an equilibrium provided that we are patient and the game has a high probability of continuing. We discuss what this means for the personal relationships of seniors in the class. Then we discuss less draconian punishments, and find there is a trade off between the severity of punishments and the required probability that relationships will endure. We apply this idea to a moral-hazard problem that arises with outsourcing, and find that the high wage premiums found in foreign sectors of emerging markets may be reduced as these relationships become more stable.|
|Game Theory (ECON 159). We look at two settings with asymmetric information; one side of a game knows something that the other side does not. We should always interpret attempts to communicate or signal such information taking into account the incentives of the person doing the signaling. In the first setting, information is verifiable. Here, the failure explicitly to reveal information can be informative, and hence verifiable information tends to come out even when you don't want it to. We consider examples of such information unraveling. Then we move to unverifiable information. Here, it is hard to convey such information even if you want to. Nevertheless, differentially costly signals can sometimes provide incentives for agents with different information to distinguish themselves. In particular, we consider how the education system can allow future workers to signal their abilities. We discuss some implications of this rather pessimistic view of education.|
|Game Theory (ECON 159). We discuss auctions. We first distinguish two extremes: common values and private values. We hold a common value auction in class and discover the winner's curse: the winner tends to overpay. We discuss why this occurs and how to avoid it: you should bid as if you knew that your bid would win; that is, as if you knew your initial estimate of the common value was the highest. This leads you to bid much below your initial estimate. Then we discuss four forms of auction: first-price sealed-bid, second-price sealed-bid, open ascending, and open descending auctions. We discuss bidding strategies in each auction form for the case when values are private. Finally, we start to discuss which auction forms generate higher revenues for the seller, but a proper analysis of this will have to await the next course.|
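A key private-values result from this lecture--that bidding your true value is weakly dominant in a second-price sealed-bid auction--can be illustrated with a small simulation; the uniform value distribution and the 20% bid shading are assumptions for the sketch.

```python
# Second-price sealed-bid sketch: compare a truthful bidder against one
# who shades her bid, on hypothetical uniform private values.
import random

def second_price_payoff(my_bid, my_value, other_bids):
    """The winner pays the highest losing bid; losers get zero."""
    top_other = max(other_bids)
    if my_bid > top_other:
        return my_value - top_other
    return 0.0

random.seed(0)
truthful = shaded = 0.0
for _ in range(10000):
    value = random.random()
    others = [random.random() for _ in range(3)]
    truthful += second_price_payoff(value, value, others)       # bid value
    shaded += second_price_payoff(0.8 * value, value, others)   # shade 20%

# Shading only forfeits profitable wins and never lowers the price paid,
# so the truthful bidder's total payoff comes out at least as high.
```

The comparison holds trial by trial, not just on average: whenever the shaded bid wins, the truthful bid would also have won at the same price.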