GAMES OF STRATEGY
Fourth Edition

Avinash Dixit
Princeton University

Susan Skeath
Wellesley College

David Reiley
Google

W. W. Norton & Company
New York • London
W. W. Norton & Company has been independent since its founding in 1923, when William Warder
Norton and Mary D. Herter Norton first published lectures delivered at the People’s Institute, the
adult education division of New York City’s Cooper Union. The firm soon expanded its program
beyond the Institute, publishing books by celebrated academics from America and abroad.
By mid-century, the two major pillars of Norton’s publishing program—trade books and college
texts—were firmly established. In the 1950s, the Norton family transferred control of the company
to its employees, and today—with a staff of four hundred and a comparable number of trade,
college, and professional titles published each year—W. W. Norton & Company stands as the
largest and oldest publishing house owned wholly by its employees.
Copyright © 2015, 2009, 2004, 1999 by W. W. Norton & Company, Inc.
All rights reserved.
Printed in the United States of America.
Editor: Jack Repcheck
Editorial Assistant: Theresia Kowara
Copyeditor: Christopher Curioli
Project Editor: Sujin Hong
Electronic Media Editor: Carson Russell
Marketing Manager, Economics: Janise Turso
Production Manager: Sean Mintus
Text Design: Jack Meserole
Composition: A-R Editions
Manufacturing: Courier Kendallville
Library of Congress Cataloging-in-Publication Data
Dixit, Avinash K.
Games of strategy / Avinash Dixit, Susan Skeath, David Reiley.—Fourth edition.
pages cm
Includes bibliographical references and index.
ISBN 978-0-393-91968-4 (hardcover)
1. Game theory. 2. Policy sciences. 3. Decision making. I. Skeath, Susan.
II. Reiley, David. III. Title.
HB144.D59 2015
519.3—dc23
2014037581
W. W. Norton & Company, Inc., 500 Fifth Avenue, New York, N.Y. 10110
www.wwnorton.com
W. W. Norton & Company Ltd., Castle House, 75/76 Wells Street, London W1T 3QT
To the memory of my father,
Kamalakar Ramachandra Dixit
— Avinash Dixit
To the memory of my father,
James Edward Skeath
— Susan Skeath
To my mother,
Ronie Reiley
— David Reiley
Contents
Preface  xx

PART ONE  Introduction and General Principles

1  Basic Ideas and Examples  3
   1  What Is a Game of Strategy?  4
   2  Some Examples and Stories of Strategic Games  6
      A. Which Passing Shot?  6
      B. The GPA Rat Race  7
      C. “We Can’t Take the Exam Because We Had a Flat Tire”  9
      D. Why Are Professors So Mean?  10
      E. Roommates and Families on the Brink  11
      F. The Dating Game  13
   3  Our Strategy for Studying Games of Strategy  14

2  How to Think about Strategic Games  17
   1  Decisions versus Games  18
   2  Classifying Games  20
      A. Are the Moves in the Game Sequential or Simultaneous?  20
      B. Are the Players’ Interests in Total Conflict or Is There Some Commonality?  21
      C. Is the Game Played Once or Repeatedly, and with the Same or Changing Opponents?  22
      D. Do the Players Have Full or Equal Information?  23
      E. Are the Rules of the Game Fixed or Manipulable?  25
      F. Are Agreements to Cooperate Enforceable?  26
   3  Some Terminology and Background Assumptions  27
      A. Strategies  27
      B. Payoffs  28
      C. Rationality  29
      D. Common Knowledge of Rules  31
      E. Equilibrium  32
      F. Dynamics and Evolutionary Games  34
      G. Observation and Experiment  35
   4  The Uses of Game Theory  36
   5  The Structure of the Chapters to Follow  38
   Summary  41
   Key Terms  41
   Exercises  42

PART TWO  Concepts and Techniques

3  Games with Sequential Moves  47
   1  Game Trees  48
      A. Nodes, Branches, and Paths of Play  48
      B. Uncertainty and “Nature’s Moves”  48
      C. Outcomes and Payoffs  50
      D. Strategies  50
      E. Tree Construction  51
   2  Solving Games by Using Trees  52
   3  Adding More Players  57
   4  Order Advantages  62
   5  Adding More Moves  63
      A. Tic-Tac-Toe  63
      B. Chess  65
      C. Checkers  69
   6  Evidence Concerning Rollback  71
   7  Strategies in Survivor  75
   Summary  80
   Key Terms  81
   Exercises  81

4  Simultaneous-Move Games: Discrete Strategies  91
   1  Depicting Simultaneous-Move Games with Discrete Strategies  92
   2  Nash Equilibrium  94
      A. Some Further Explanation of the Concept of Nash Equilibrium  95
      B. Nash Equilibrium as a System of Beliefs and Choices  97
   3  Dominance  99
      A. Both Players Have Dominant Strategies  100
      B. One Player Has a Dominant Strategy  101
      C. Successive Elimination of Dominated Strategies  104
   4  Best-Response Analysis  106
   5  Three Players  108
   6  Multiple Equilibria in Pure Strategies  111
      A. Will Harry Meet Sally? Pure Coordination  111
      B. Will Harry Meet Sally? And Where? Assurance  113
      C. Will Harry Meet Sally? And Where? Battle of the Sexes  114
      D. Will James Meet Dean? Chicken  116
   7  No Equilibrium in Pure Strategies  118
   Summary  120
   Key Terms  120
   Exercises  121

5  Simultaneous-Move Games: Continuous Strategies, Discussion, and Evidence  133
   1  Pure Strategies That Are Continuous Variables  134
      A. Price Competition  134
      B. Some Economics of Oligopoly  138
      C. Political Campaign Advertising  139
      D. General Method for Finding Nash Equilibria  142
   2  Critical Discussion of the Nash Equilibrium Concept  143
      A. The Treatment of Risk in Nash Equilibrium  144
      B. Multiplicity of Nash Equilibria  146
      C. Requirements of Rationality for Nash Equilibrium  148
   3  Rationalizability  149
      A. Applying the Concept of Rationalizability  150
      B. Rationalizability Can Take Us All the Way to Nash Equilibrium  152
   4  Empirical Evidence Concerning Nash Equilibrium  155
      A. Laboratory Experiments  156
      B. Real-World Evidence  161
   Summary  165
   Key Terms  166
   Exercises  166
   Appendix: Finding a Value to Maximize a Function  176

6  Combining Sequential and Simultaneous Moves  180
   1  Games with Both Simultaneous and Sequential Moves  181
      A. Two-Stage Games and Subgames  181
      B. Configurations of Multistage Games  185
   2  Changing the Order of Moves in a Game  187
      A. Changing Simultaneous-Move Games into Sequential-Move Games  188
      B. Other Changes in the Order of Moves  193
   3  Change in the Method of Analysis  194
      A. Illustrating Simultaneous-Move Games by Using Trees  194
      B. Showing and Analyzing Sequential-Move Games in Strategic Form  196
   4  Three-Player Games  200
   Summary  203
   Key Terms  204
   Exercises  204

7  Simultaneous-Move Games: Mixed Strategies  214
   1  What Is a Mixed Strategy?  215
   2  Mixing Moves  216
      A. The Benefit of Mixing  216
      B. Best Responses and Equilibrium  218
   3  Nash Equilibrium as a System of Beliefs and Responses  221
   4  Mixing in Non-Zero-Sum Games  222
      A. Will Harry Meet Sally? Assurance, Pure Coordination, and Battle of the Sexes  223
      B. Will James Meet Dean? Chicken  226
   5  General Discussion of Mixed-Strategy Equilibria  227
      A. Weak Sense of Equilibrium  227
      B. Counterintuitive Changes in Mixture Probabilities in Zero-Sum Games  228
      C. Risky and Safe Choices in Zero-Sum Games  230
   6  Mixing When One Player Has Three or More Pure Strategies  233
      A. A General Case  233
      B. Exceptional Cases  236
   7  Mixing When Both Players Have Three Strategies  237
      A. Full Mixture of All Strategies  237
      B. Equilibrium Mixtures with Some Strategies Unused  239
   8  How to Use Mixed Strategies in Practice  242
   9  Evidence on Mixing  244
      A. Zero-Sum Games  244
      B. Non-Zero-Sum Games  248
   Summary  249
   Key Terms  249
   Exercises  250
   Appendix: Probability and Expected Utility  263
      The Basic Algebra of Probabilities  263
      A. The Addition Rule  264
      B. The Multiplication Rule  265
      C. Expected Values  266
      Summary  267
      Key Terms  267

PART THREE  Some Broad Classes of Games and Strategies

8  Uncertainty and Information  271
   1  Imperfect Information: Dealing with Risk  273
      A. Sharing of Risk  273
      B. Paying to Reduce Risk  276
      C. Manipulating Risk in Contests  277
   2  Asymmetric Information: Basic Ideas  279
   3  Direct Communication, or “Cheap Talk”  281
      A. Perfectly Aligned Interests  282
      B. Totally Conflicting Interests  283
      C. Partially Aligned Interests  284
      D. Formal Analysis of Cheap Talk Games  290
   4  Adverse Selection, Signaling, and Screening  294
      A. Adverse Selection and Market Failure  294
      B. The Market for “Lemons”  295
      C. Signaling and Screening: Sample Situations  298
      D. Experimental Evidence  303
   5  Signaling in the Labor Market  304
      A. Screening to Separate Types  305
      B. Pooling of Types  308
      C. Many Types  309
   6  Equilibria in Two-Player Signaling Games  310
      A. Basic Model and Payoff Structure  311
      B. Separating Equilibrium  312
      C. Pooling Equilibrium  315
      D. Semiseparating Equilibrium  317
   Summary  319
   Key Terms  320
   Exercises  321
   Appendix: Risk Attitudes and Bayes’ Theorem  335
      1  Attitudes toward Risk and Expected Utility  335
      2  Inferring Probabilities from Observing Consequences  338
      Summary  341
      Key Terms  341

9  Strategic Moves  342
   1  A Classification of Strategic Moves  343
      A. Unconditional Strategic Moves  344
      B. Conditional Strategic Moves  345
   2  Credibility of Strategic Moves  346
   3  Commitments  348
   4  Threats and Promises  352
      A. Example of a Threat: U.S.–Japan Trade Relations  353
      B. Example of a Promise: The Restaurant Pricing Game  357
      C. Example Combining Threat and Promise: Joint U.S.–China Political Action  359
   5  Some Additional Topics  360
      A. When Do Strategic Moves Help?  360
      B. Deterrence versus Compellence  361
   6  Acquiring Credibility  362
      A. Reducing Your Freedom of Action  362
      B. Changing Your Payoffs  364
   7  Countering Your Opponent’s Strategic Moves  368
      A. Irrationality  368
      B. Cutting Off Communication  368
      C. Leaving Escape Routes Open  369
      D. Undermining Your Opponent’s Motive to Uphold His Reputation  369
      E. Salami Tactics  369
   Summary  370
   Key Terms  371
   Exercises  371

10  The Prisoners’ Dilemma and Repeated Games  377
   1  The Basic Game (Review)  378
   2  Solutions I: Repetition  379
      A. Finite Repetition  380
      B. Infinite Repetition  381
      C. Games of Unknown Length  385
      D. General Theory  387
   3  Solutions II: Penalties and Rewards  389
   4  Solutions III: Leadership  392
   5  Experimental Evidence  395
   6  Real-World Dilemmas  399
      A. Evolutionary Biology  399
      B. Price Matching  400
      C. International Environmental Policy: The Kyoto Protocol  402
   Summary  405
   Key Terms  405
   Exercises  406
   Appendix: Infinite Sums  414

11  Collective-Action Games  417
   1  Collective-Action Games with Two Players  418
      A. Collective Action as a Prisoners’ Dilemma  419
      B. Collective Action as Chicken  421
      C. Collective Action as Assurance  422
      D. Collective Inaction  422
   2  Collective-Action Problems in Large Groups  423
      A. Multiplayer Prisoners’ Dilemma  425
      B. Multiplayer Chicken  427
      C. Multiplayer Assurance  429
   3  Spillovers, or Externalities  431
      A. Commuting and Spillovers  431
      B. Spillovers: The General Case  433
      C. Commuting Revisited: Negative Externalities  435
      D. Positive Spillovers  439
   4  A Brief History of Ideas  443
      A. The Classics  443
      B. Modern Approaches and Solutions  444
      C. Applications  450
   5  “Help!”: A Game of Chicken with Mixed Strategies  454
   Summary  458
   Key Terms  459
   Exercises  459

12  Evolutionary Games  465
   1  The Framework  466
   2  Prisoners’ Dilemma  470
      A. The Repeated Prisoners’ Dilemma  472
      B. Multiple Repetitions  476
      C. Comparing the Evolutionary and Rational-Player Models  477
   3  Chicken  479
   4  The Assurance Game  482
   5  Three Phenotypes in the Population  484
      A. Testing for ESS  484
      B. Dynamics  485
   6  The Hawk–Dove Game  488
      A. Rational Strategic Choice and Equilibrium  489
      B. Evolutionary Stability for V > C  489
      C. Evolutionary Stability for V < C  490
      D. V < C: Stable Polymorphic Population  491
      E. V < C: Each Player Mixes Strategies  491
      F. Some General Theory  493
   7  Interactions by Population and across Species  495
      A. Playing the Field  496
      B. Interactions across Species  496
   8  Evolution of Cooperation and Altruism  499
   Summary  503
   Key Terms  504
   Exercises  504

13  Mechanism Design  515
   1  Price Discrimination  516
   2  Some Terminology  521
   3  Cost-Plus and Fixed-Price Contracts  522
      A. Highway Construction: Full Information  522
      B. Highway Construction: Asymmetric Information  524
   4  Evidence Concerning Information Revelation Mechanisms  527
   5  Incentives for Effort: The Simplest Case  529
      A. Managerial Supervision  529
      B. Insurance Provision  533
   6  Incentives for Effort: Evidence and Extensions  537
      A. Nonlinear Incentive Schemes  537
      B. Incentives in Teams  539
      C. Multiple Tasks and Outcomes  540
      D. Incentives over Time  541
   Summary  543
   Key Terms  543
   Exercises  544

PART FOUR  Applications to Specific Strategic Situations

14  Brinkmanship: The Cuban Missile Crisis  559
   1  A Brief Narrative of Events  560
   2  A Simple Game-Theoretic Explanation  567
   3  Accounting for Additional Complexities  569
   4  A Probabilistic Threat  575
   5  Practicing Brinkmanship  579
   Summary  583
   Key Terms  584
   Exercises  584

15  Strategy and Voting  589
   1  Voting Rules and Procedures  590
      A. Binary Methods  591
      B. Plurative Methods  591
      C. Mixed Methods  593
   2  Voting Paradoxes  594
      A. The Condorcet Paradox  595
      B. The Agenda Paradox  596
      C. The Reversal Paradox  597
      D. Change the Voting Method, Change the Outcome  598
   3  Evaluating Voting Systems  600
      A. Black’s Condition  601
      B. Robustness  602
      C. Intensity Ranking  602
   4  Strategic Manipulation of Votes  604
      A. Plurality Rule  604
      B. Pairwise Voting  606
      C. Strategic Voting with Incomplete Information  609
      D. Scope for Manipulability  612
   5  The Median Voter Theorem  613
      A. Discrete Political Spectrum  614
      B. Continuous Political Spectrum  617
   Summary  620
   Key Terms  620
   Exercises  621

16  Bidding Strategy and Auction Design  632
   1  Types of Auctions  633
      A. Auction Rules  633
      B. Auction Environments  635
   2  The Winner’s Curse  636
   3  Bidding Strategies  639
      A. The English Auction  639
      B. First-Price, Sealed-Bid, and Dutch Auctions: The Incentive to Shade  639
      C. Second-Price, Sealed-Bid Auctions: Vickrey’s Truth Serum  640
   4  All-Pay Auctions  642
   5  How to Sell at Auction  645
      A. Risk-Neutral Bidders and Independent Estimates  646
      B. Risk-Averse Bidders  647
      C. Correlated Estimates  648
   6  Some Added Twists to Consider  649
      A. Multiple Objects  649
      B. Defeating the System  651
      C. Information Disclosure  652
      D. Online Auctions  653
   7  Additional Reading  656
   Summary  657
   Key Terms  657
   Exercises  658

17  Bargaining  663
   1  Nash’s Cooperative Solution  665
      A. Numerical Example  665
      B. General Theory  666
   2  Variable-Threat Bargaining  672
   3  Alternating-Offers Model I: Total Value Decays  674
   4  Experimental Evidence  677
   5  Alternating-Offers Model II: Impatience  680
   6  Manipulating Information in Bargaining  685
   7  Bargaining with Many Parties and Issues  688
      A. Multi-Issue Bargaining  688
      B. Multiparty Bargaining  690
   Summary  690
   Key Terms  691
   Exercises  691

Glossary  695
Index  712
Preface
We wrote this textbook to make possible the teaching of game theory to
first- or second-year college students at an introductory or “principles”
level without requiring any prior knowledge of the fields where game
theory is used—economics, political science, evolutionary biology, and
so forth—and requiring only minimal high school mathematics. Our aim has
succeeded beyond our expectations. Many such courses now exist where none
did 20 years ago; indeed, some of these courses have been inspired by our textbook. An even better sign of success is that competitors and imitators are appearing on the market.
However, success does not justify complacency. We have continued to improve the material in each new edition in response to feedback from teachers
and students in these courses and from our own experiences of using the book.
For the fourth edition, the main innovation concerns mixed strategies.
In the third edition, we treated this in two chapters on the basis of a distinction
between simple and complex topics. Simple topics included the solution and
interpretation of mixed-strategy equilibria in two-by-two games; the main complex topic was the general theory of mixing in games with more than two pure
strategies, when some of them may go unused in equilibrium. But we found
that few teachers used the second of these two chapters. We have now chosen to
gather the simple topics and some basic concepts from the more complex topics into just one chapter on mixed strategies (Chapter 7). Some of the omitted
material will be available as online appendices for those readers who want to
know more about the advanced topics.
We have improved and simplified our treatment of information in games
(Chapter 8). We give an expanded exposition and example of cheap talk that
clarifies the relationship between the alignment of interest and the possibility of
truthful communication. We have moved the treatment of examples of signaling
and screening to an earlier section of the chapter than that of the third edition,
better to impress upon students the importance of this topic and prepare the
ground for the more formal theory to follow.
The games in some applications in later chapters were sufficiently simple
that they could be discussed without drawing an explicit game tree or showing a
payoff table. But that weakened the connection between earlier methodological
chapters and the applications. We have now shown more of the tools of reasoning about the applications explicitly.
We have continued and improved the collection of exercises. As in the third
edition, the exercises in each chapter are split into two sets—solved and unsolved. In most cases, these sets run in parallel: for each solved exercise, there
is a corresponding unsolved one that presents variation and gives students further practice. The solutions to the solved set for each chapter are available to
all readers at wwnorton.com/studyspace/disciplines/economics.asp. The solutions to the unsolved set for each chapter will be reserved for instructors who
have adopted the textbook. Instructors should contact the publisher about getting access to the instructors’ Web site. In each of the solved and unsolved sets,
there are two kinds of exercises. Some provide repetition and drill in the techniques developed in the chapter. In others—and in our view those with the most
educational value—we take the student step by step through the process of construction of a game-theoretic model to analyze an issue or problem. Such experience, gained in some solved exercises and repeated in corresponding unsolved
ones, will best develop the students’ skills in strategic thinking.
Most other chapters were updated, improved, reorganized, and streamlined. The biggest changes occur in the chapters on the prisoners’ dilemma
(Chapter 10), collective action (Chapter 11), evolutionary games (Chapter 12),
and voting (Chapter 15). We omitted the final chapter of the third edition
(Markets and Competition) because in our experience almost no one used it.
Teachers who want it can find it in the third edition.
We thank numerous readers of previous editions who provided comments
and suggestions; they are thanked by name in the prefaces of those editions.
The substance and writing in the book have been improved by the perceptive and constructive pieces of advice offered by faculty who have used the
text in their courses and others who have read all or parts of the book in other
contexts. For the fourth edition, we have also had the added benefit of extensive comments from Christopher Maxwell (Boston College), Alex Brown (Texas
A&M University), Jonathan Woon (University of Pittsburgh), Klaus Becker
(Texas Tech University), Huanxing Yang (Ohio State University), Matthew Roelofs
(Western Washington University), and Debashis Pal (University of Cincinnati).
Thank you all.
Avinash Dixit
Susan Skeath
David Reiley
PART ONE
Introduction and General Principles
1
Basic Ideas and Examples
All introductory textbooks begin by attempting to convince the student readers that the subject is of great importance in the world and
therefore merits their attention. The physical sciences and engineering
claim to be the basis of modern technology and therefore of modern life;
the social sciences discuss big issues of governance—for example, democracy
and taxation; the humanities claim that they revive your soul after it has been
deadened by exposure to the physical and social sciences and to engineering.
Where does the subject games of strategy, often called game theory, fit into this
picture, and why should you study it?
We offer a practical motivation that is much more individual and probably
closer to your personal concerns than most other subjects. You play games of
strategy all the time: with your parents, siblings, friends, and enemies, and even
with your professors. You have probably acquired a lot of instinctive expertise
in playing such games, and we hope you will be able to connect what you have
already learned to the discussion that follows. We will build on your experience,
systematize it, and develop it to the point where you will be able to improve
your strategic skills and use them more methodically. Opportunities for such
uses will appear throughout your life; you will go on playing such games with
your employers, employees, spouses, children, and even strangers.
Not that the subject lacks wider importance. Similar games are played in
business, politics, diplomacy, and wars—in fact, whenever people interact to
strike mutually agreeable deals or to resolve conflicts. Being able to recognize
such games will enrich your understanding of the world around you and will
make you a better participant in all its affairs. Understanding games of strategy
will also have a more immediate payoff in your study of many other subjects.
Economics and business courses already use a great deal of game-theoretic
thinking. Political science is also using game theory to study interactions, as is biology, which has been importantly influenced
by the concepts of evolutionary games and has in turn exported these ideas to
economics. Psychology and philosophy also interact with the study of games of
strategy. Game theory provides concepts and techniques of analysis for many
disciplines, one might say all disciplines except those dealing with completely
inanimate objects.
1 WHAT IS A GAME OF STRATEGY?
The word game may convey an impression that the subject is frivolous or unimportant in the larger scheme of things—that it deals with trivial pursuits such as
gambling and sports when the world is full of weightier matters such as war and
business and your education, career, and relationships. Actually, games of strategy are not “just a game”; all of these weighty matters are instances of games,
and game theory helps us understand them all. But it will not hurt to start with
game theory as applied to gambling or sports.
Most games include chance, skill, and strategy in varying proportions. Playing double or nothing on the toss of a coin is a game of pure chance, unless you
have exceptional skill in doctoring or tossing coins. A hundred-yard dash is a
game of pure skill, although some chance elements can creep in; for example, a
runner may simply have a slightly off day for no clear reason.
Strategy is a skill of a different kind. In the context of sports, it is a part of
the mental skill needed to play well; it is the calculation of how best to use your
physical skill. For example, in tennis, you develop physical skill by practicing
your serves (first serves hard and flat, second serves with spin or kick) and passing shots (hard, low, and accurate). The strategic skill is knowing where to put
your serve (wide, or on the T) or passing shot (crosscourt, or down the line). In
football, you develop such physical skills as blocking and tackling, running and
catching, and throwing. Then the coach, knowing the physical skills of his own
team and those of the opposing team, calls the plays that best exploit his team’s
skills and the other team’s weaknesses. The coach’s calculation constitutes the
strategy. The physical game of football is played on the gridiron by jocks; the
strategic game is played in the offices and on the sidelines by coaches and by
nerdy assistants.
A hundred-yard dash is a matter of exercising your physical skill as best
you can; it offers no opportunities to observe and react to what other runners in
the race are doing and therefore no scope for strategy. Longer races do entail
strategy—whether you should lead to set the pace, how soon before the finish
you should try to break away, and so on.
Strategic thinking is essentially about your interactions with others, as they
do similar thinking at the same time and about the same situation. Your opponents in a marathon may try to frustrate or facilitate your attempts to lead, given
what they think best suits their interests. Your opponent in tennis tries to guess
where you will put your serve or passing shot; the opposing coach in football
calls the play that will best counter what he thinks you will call. Of course, just
as you must take into account what the other player is thinking, he is taking into
account what you are thinking. Game theory is the analysis, or science, if you
like, of such interactive decision making.
When you think carefully before you act—when you are aware of your objectives or preferences and of any limitations or constraints on your actions and
choose your actions in a calculated way to do the best according to your own
criteria—you are said to be behaving rationally. Game theory adds another dimension to rational behavior—namely, interaction with other equally rational
decision makers. In other words, game theory is the science of rational behavior
in interactive situations.
We do not claim that game theory will teach you the secrets of perfect play or
ensure that you will never lose. For one thing, your opponent can read the same
book, and both of you cannot win all the time. More importantly, many games
are complex and subtle, and most actual situations include enough idiosyncratic
or chance elements that game theory cannot hope to offer surefire recipes for action. What it does is provide some general principles for thinking about strategic
interactions. You have to supplement these ideas and some methods of calculation with many details specific to your situation before you can devise a successful strategy for it. Good strategists mix the science of game theory with their own
experience; one might say that game playing is as much art as science. We will
develop the general ideas of the science but will also point out its limitations and
tell you when the art is more important.
You may think that you have already acquired the art from your experience
or instinct, but you will find the study of the science useful nonetheless. The science systematizes many general principles that are common to several contexts
or applications. Without general principles, you would have to figure out from
scratch each new situation that requires strategic thinking. That would be especially difficult to do in new areas of application—for example, if you learned your
art by playing games against parents and siblings and must now practice strategy
against business competitors. The general principles of game theory provide you
with a ready reference point. With this foundation in place, you can proceed much
more quickly and confidently to acquire and add the situation-specific features or
elements of the art to your thinking and action.
2 SOME EXAMPLES AND STORIES OF STRATEGIC GAMES
With the aims announced in Section 1, we will begin by offering you some simple examples, many of them taken from situations that you have probably encountered in your own lives, where strategy is of the essence. In each case we
will point out the crucial strategic principle. Each of these principles will be
discussed more fully in a later chapter, and after each example we will tell you
where the details can be found. But don’t jump to them right away; for a while,
just read all the examples to get a preliminary idea of the whole scope of strategy
and of strategic games.
A. Which Passing Shot?
Tennis at its best consists of memorable duels between top players: John McEnroe versus Ivan Lendl, Pete Sampras versus Andre Agassi, and Martina Navratilova versus Chris Evert. Picture the 1983 U.S. Open final between Evert and
Navratilova.1 Navratilova at the net has just volleyed to Evert on the baseline.
Evert is about to hit a passing shot. Should she go down the line or crosscourt?
And should Navratilova expect a down-the-line shot and lean slightly that way
or expect a crosscourt shot and lean the other way?
Conventional wisdom favors the down-the-line shot. The ball has a shorter
distance to travel to the net, so the other player has less time to react. But this
does not mean that Evert should use that shot all of the time. If she did, Navratilova would confidently come to expect it and prepare for it, and the shot would
not be so successful. To improve the success of the down-the-line passing shot,
Evert has to use the crosscourt shot often enough to keep Navratilova guessing
on any single instance.
Similarly in football, with a yard to go on third down, a run up the middle
is the percentage play—that is, the one used most often—but the offense must
throw a pass occasionally in such situations “to keep the defense honest.”
Thus, the most important general principle of such situations is not what
Evert should do but what she should not do: she should not do the same thing all
the time or systematically. If she did, then Navratilova would learn to cover that,
and Evert’s chances of success would fall.
Not doing any one thing systematically means more than not playing the
same shot in every situation of this kind. Evert should not even mechanically
switch back and forth between the two shots. Navratilova would spot and exploit
1. Chris Evert won her first title at the U.S. Open in 1975. Navratilova claimed her first title in the 1983 final.
this pattern or indeed any other detectable system. Evert must make the choice
on each particular occasion at random to prevent this guessing.
This general idea of “mixing one’s plays” is well known, even to sports commentators on television. But there is more to the idea, and these further aspects
require analysis in greater depth. Why is down-the-line the percentage shot?
Should one play it 80% of the time or 90% or 99%? Does it make any difference if
the occasion is particularly big; for example, does one throw that pass on third
down in the regular season but not in the Super Bowl? In actual practice, just
how does one mix one’s plays? What happens when a third possibility (the lob) is
introduced? We will examine and answer such questions in Chapter 7.
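To preview the kind of calculation that Chapter 7 develops, here is a minimal sketch in Python. The success percentages are illustrative assumptions of ours, not figures from the text; they encode only the idea that each shot works better when the receiver guesses wrong. Evert’s equilibrium mixture is the one that leaves Navratilova indifferent between covering the two shots.

    # Evert's success rate (%) for each combination of her shot (DL or CC)
    # and Navratilova's lean. These numbers are assumed for illustration.
    success = {("DL", "DL"): 50, ("DL", "CC"): 80,
               ("CC", "DL"): 90, ("CC", "CC"): 20}

    # If Evert plays DL with probability p, Navratilova gains nothing by
    # guessing: 50p + 90(1 - p) must equal 80p + 20(1 - p).
    a, b = success[("DL", "DL")], success[("CC", "DL")]
    c, d = success[("DL", "CC")], success[("CC", "CC")]
    p = (d - b) / (a - b - c + d)   # 0.7 with these numbers
    value = a * p + b * (1 - p)     # 62% success rate
    print(f"Play DL {p:.0%} of the time; expect to win {value:.0f}% of points.")

With these numbers the “percentage shot” intuition becomes precise: down the line 70% of the time, crosscourt the remaining 30%.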
The movie The Princess Bride (1987) illustrates the same idea in the “battle of
wits” between the hero (Westley) and a villain (Vizzini). Westley is to poison one
of two wineglasses out of Vizzini’s sight, and Vizzini is to decide who will drink
from which glass. Vizzini goes through a number of convoluted arguments as to
why Westley should poison one glass. But all of the arguments are innately contradictory, because Westley can anticipate Vizzini’s logic and choose to put the
poison in the other glass. Conversely, if Westley uses any specific logic or system
to choose one glass, Vizzini can anticipate that and drink from the other glass,
leaving Westley to drink from the poisoned one. Thus, Westley’s strategy has to
be random or unsystematic.
The scene illustrates something else as well. In the film, Vizzini loses the
game and with it his life. But it turns out that Westley had poisoned both glasses;
over the last several years, he had built up immunity to the poison. So Vizzini
was actually playing the game under a fatal information disadvantage. Players
can sometimes cope with such asymmetries of information; Chapters 8 and 13
examine when and how they can do so.
B. The GPA Rat Race
You are enrolled in a course that is graded on a curve. No matter how well you
do in absolute terms, only 40% of the students will get As, and only 40% will get
Bs. Therefore, you must work hard, not just in absolute terms, but relative to
how hard your classmates (actually, “class enemies” seems a more fitting term
in this context) work. All of you recognize this, and after the first lecture you
hold an impromptu meeting in which all students agree not to work too hard.
As weeks pass by, the temptation to get an edge on the rest of the class by working just that little bit harder becomes overwhelming. After all, the others are not
able to observe your work in any detail; nor do they have any real hold over you.
And the benefits of an improvement in your grade point average are substantial.
So you hit the library more often and stay up a little longer.
The trouble is, everyone else is doing the same. Therefore, your grade is
no better than it would have been if you and everyone else had abided by the
agreement. The only difference is that all of you have spent more time working
than you would have liked.
This is an example of the prisoners’ dilemma.2 In the original story, two suspects are being separately interrogated and invited to confess. One of them, say
A, is told, “If the other suspect, B, does not confess, then you can cut a very good
deal for yourself by confessing. But if B does confess, then you would do well to
confess, too; otherwise the court will be especially tough on you. So you should
confess no matter what the other does.” B is told to confess, with the use of similar reasoning. Faced with this choice, both A and B confess. But it would have
been better for both if neither had confessed, because the police had no really
compelling evidence against them.
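The interrogators’ argument can be checked mechanically. Here is a minimal sketch with made-up sentence lengths in years, so smaller is better; only the ordering of the numbers matters, not their exact values.

    # Each cell holds (A's sentence, B's sentence) in years; lower is better.
    payoffs = {
        ("confess", "confess"): (10, 10),
        ("confess", "deny"):    (1, 25),
        ("deny",    "confess"): (25, 1),
        ("deny",    "deny"):    (3, 3),
    }

    # Whatever B does, A serves less time by confessing ...
    for b_move in ("confess", "deny"):
        assert payoffs[("confess", b_move)][0] < payoffs[("deny", b_move)][0]

    # ... yet mutual denial (3, 3) beats the mutual confession (10, 10)
    # that this individually sound logic drives both suspects toward.

Confessing is what Chapter 4 will call a dominant strategy for each suspect, yet following it leaves both worse off than joint denial would; that tension is the dilemma.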
Your situation is similar. If the others slack off, then you can get a much better grade by working hard; if the others work hard, then you had better do the
same or else you will get a very bad grade. You may even think that the label
“prisoner” is very fitting for a group of students trapped in a required course.
Professors and schools have their own prisoners’ dilemmas. Each professor
can make his course look good or attractive by grading it slightly more liberally,
and each school can place its students in better jobs or attract better applicants
by grading all of its courses a little more liberally. Of course, when all do this,
none has any advantage over the others; the only result is rampant grade inflation, which compresses the spectrum of grades and therefore makes it difficult
to distinguish abilities.
People often think that in every game there must be a winner and a loser.
The prisoners’ dilemma is different—both or all players can come out losers.
People play (and lose) such games every day, and the losses can range from
minor inconveniences to potential disasters. Spectators at a sports event stand
up to get a better view but, when all stand, no one has a better view than when
they were all sitting. Superpowers acquire more weapons to get an edge over
their rivals but, when both do so, the balance of power is unchanged; all that has
happened is that both have spent economic resources that they could have used
for better purposes, and the risk of accidental war has escalated. The magnitude
of the potential cost of such games to all players makes it important to understand the ways in which mutually beneficial cooperation can be achieved and
sustained. All of Chapter 10 deals with the study of this game.
Just as the prisoners’ dilemma is potentially a lose-lose game, there are win-win
more of what it can do relatively best, all share in the fruits of this international
division of labor. But successful bargaining about the division of the pie is
2. There is some disagreement regarding the appropriate grammatical placement of the apostrophe in the term prisoners’ dilemma. Our placement acknowledges the facts that there must be at least two prisoners in order for there to be any dilemma at all and that the (at least two) prisoners therefore jointly possess the dilemma.
needed if the full potential of trade is to be realized. The same applies to many
other bargaining situations. We will study these in Chapter 17.
C. “We Can’t Take the Exam Because We Had a Flat Tire”
Here is a story, probably apocryphal, that circulates on the undergraduate
e-mail networks; each of us has independently received it from our students:
There were two friends taking chemistry at Duke. Both had done pretty well
on all of the quizzes, the labs, and the midterm, so that going into the final
they each had a solid A. They were so confident the weekend before the final
that they decided to go to a party at the University of Virginia. The party was
so good that they overslept all day Sunday, and got back too late to study for
the chemistry final that was scheduled for Monday morning. Rather than
take the final unprepared, they went to the professor with a sob story. They
said they each had gone up to UVA and had planned to come back in good
time to study for the final but had a flat tire on the way back. Because they didn’t have a spare, they had spent most of the night looking for help. Now
they were really too tired, so could they please have a makeup final the next
day? The professor thought it over and agreed.
The two studied all of Monday evening and came well prepared on Tuesday morning. The professor placed them in separate rooms and handed the
test to each. The first question on the first page, worth 10 points, was very
easy. Each of them wrote a good answer, and greatly relieved, turned the
page. It had just one question, worth 90 points. It was: “Which tire?”
The story has two important strategic lessons for future partygoers. The first
is to recognize that the professor may be an intelligent game player. He may
suspect some trickery on the part of the students and may use some device to
catch them. Given their excuse, the question was the likeliest such device. They
should have foreseen it and prepared their answer in advance. This idea that one
should look ahead to future moves in the game and then reason backward to
calculate one’s best current action is a very general principle of strategy, which
we will elaborate on in Chapter 3. We will also use it, most notably, in Chapter 9.
But it may not be possible to foresee all such professorial countertricks; after
all, professors have much more experience seeing through students’ excuses
than students have making up such excuses. If the two students in the story are
unprepared, can they independently produce a mutually consistent lie? If each
picks a tire at random, the chances are only 25% that the two will pick the same
one. (Why?) Can they do better?
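You can check the 25% figure before reading on: whichever tire your friend happens to name, you name the same one with probability 1/4. A few lines of simulation (our own, not from the text) confirm it:

    import random

    tires = ["front driver", "front passenger", "rear driver", "rear passenger"]
    trials = 100_000
    matches = sum(random.choice(tires) == random.choice(tires)
                  for _ in range(trials))
    print(f"Match rate: {matches / trials:.3f}")   # hovers around 0.250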
You may think that the front tire on the passenger side is the one most
likely to suffer a flat, because a nail or a shard of glass is more likely to lie closer
to that side of the road than to the middle, and the front tire on that side will
encounter the nail or glass first. You may think this is good logic, but that is not
enough to make it a good choice. What matters is not the logic of the choice but
making the same choice as your friend does. Therefore, you have to think about
whether your friend would use the same logic and would consider that choice
equally obvious. But even that is not the end of the chain of reasoning. Would
your friend think that the choice would be equally obvious to you? And so on.
The point is not whether a choice is obvious or logical, but whether it is obvious
to the other that it is obvious to you that it is obvious to the other. . . . In other
words, what is needed is a convergence of expectations about what should be
chosen in such circumstances. Such a commonly expected strategy on which
the players can successfully coordinate is called a focal point.
There is nothing general or intrinsic to the structure of these games that
creates such convergence. In some games, a focal point may exist because of
chance circumstances about the labeling of strategies or because of some experience or knowledge shared by the players. For example, if the front passenger
side of a car were for some reason called the Duke’s side, then two Duke students would be very likely to choose it without any need for explicit prior understanding. Or, if the front driver’s side of all cars were painted orange (for safety,
to be easily visible to oncoming cars), then two Princeton students would be
very likely to choose that tire, because orange is the Princeton color. But without
some such clue, tacit coordination might not be possible at all.
We will study focal points in more detail in Chapter 4. Here in closing we
merely point out that when asked in classrooms, more than 50% of students
choose the front driver’s side. They are generally unable to explain why, except
to say that it seems the obvious choice.
D. Why Are Professors So Mean?
Many professors have inflexible rules not to give makeup exams and never to accept late submission of problem sets or term papers. Students think the professors must be really hardhearted to behave in this way. The true strategic reason
is often exactly the opposite. Most professors are kindhearted and would like to
give their students every reasonable break and accept any reasonable excuse.
The trouble lies in judging what is reasonable. It is hard to distinguish between
similar excuses and almost impossible to verify their truth. The professor knows
that on each occasion he will end up by giving the student the benefit of the
doubt. But the professor also knows that this is a slippery slope. As the students
come to know that the professor is a soft touch, they will procrastinate more and
produce ever-flimsier excuses. Deadlines will cease to mean anything, and examinations will become a chaotic mix of postponements and makeup tests.
Often the only way to avoid this slippery slope is to refuse to take even
the first step down it. Refusal to accept any excuses at all is the only realistic
alternative to accepting them all. By making an advance commitment to the “no
excuses” strategy, the professor avoids the temptation to give in to all.
But how can a softhearted professor maintain such a hardhearted commitment? He must find some way to make a refusal firm and credible. The simplest
way is to hide behind an administrative procedure or university-wide policy. “I
wish I could accept your excuse, but the university won’t let me” not only puts
the professor in a nicer light, but also removes the temptation by genuinely
leaving him no choice in the matter. Of course, the rules may be made by the
same collectivity of professors that hides behind them, but once they are made,
no individual professor can unmake the rules in any particular instance.
If the university does not provide such a general shield, then the professor
can try to make up commitment devices of his own. For example, he can make
a clear and firm announcement of the policy at the beginning of the course.
Any time an individual student asks for an exception, he can invoke a fairness
principle, saying, “If I do this for you, I would have to do it for everyone.” Or the
professor can acquire a reputation for toughness by acting tough a few times.
This may be an unpleasant thing for him to do and it may run against his true
inclination, but it helps in the long run over his whole career. If a professor is believed to be tough, few students will try excuses on him, so he will actually suffer
less pain in denying them.
We will study commitments, and related strategies, such as threats and
promises, in considerable detail in Chapter 9.
E. Roommates and Families on the Brink
You are sharing an apartment with one or more other students. You notice that
the apartment is nearly out of dishwasher detergent, paper towels, cereal, beer,
and other items. You have an agreement to share the actual expenses, but the
trip to the store takes time. Do you spend your own time going to the store or
do you hope that someone else will spend his, leaving you more time to study
or relax? Do you go and buy the soap or stay in and watch TV to catch up on the
soap operas?3
In many situations of this kind, the waiting game goes on for quite a while
before someone who is really impatient for one of the items (usually beer) gives
in and spends the time for the shopping trip. Things may deteriorate to the point
of serious quarrels or even breakups among the roommates.
This game of strategy can be viewed from two perspectives. In one, each
of the roommates is regarded as having a simple binary choice—to do the
3. This example comes from Michael Grunwald’s “At Home” column, “A Game of Chicken,” in the Boston Globe Magazine, April 28, 1996.
shopping or not. The best outcome for you is where someone else does the shopping and you stay at home; the worst is where you do the shopping while the others get to use their time better. If both do the shopping (unknown to each other,
on the way home from school or work), there is unnecessary duplication and
perhaps some waste of perishables; if neither does the shopping, there can be serious inconvenience or even disaster if the toilet paper runs out at a crucial time.
This is analogous to the game of chicken that used to be played by American
teenagers. Two of them drove their cars toward each other. The first to swerve
to avoid a collision was the loser (chicken); the one who kept driving straight
was the winner. We will analyze the game of chicken further in Chapter 4 and in
Chapters 7, 11, and 12.
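Anticipating the equilibrium language of Chapter 4, the binary-choice version can be sketched in the same style as the earlier examples; the payoff ranking below is our own illustrative encoding of the story (higher is better), not a table from the text.

    # Each cell holds (your payoff, roommate's payoff); higher is better.
    # Best: the other shops (3). Both shop: wasteful duplication (2).
    # You shop alone (1). Nobody shops: the supplies run out (0).
    payoffs = {
        ("shop", "shop"): (2, 2), ("shop", "wait"): (1, 3),
        ("wait", "shop"): (3, 1), ("wait", "wait"): (0, 0),
    }

    moves = ("shop", "wait")
    other = {"shop": "wait", "wait": "shop"}
    for you in moves:
        for mate in moves:
            # Stable if neither side gains by unilaterally switching.
            if (payoffs[(you, mate)][0] >= payoffs[(other[you], mate)][0] and
                    payoffs[(you, mate)][1] >= payoffs[(you, other[mate])][1]):
                print("stable outcome:", you, mate)
    # -> stable outcome: shop wait
    # -> stable outcome: wait shop

The two stable outcomes are exactly the asymmetric ones in which one roommate gives in and shops while the other waits, which is why each of you tries so hard to be the one who waits.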
A more interesting dynamic perspective on the same situation regards it as
a “war of attrition,” where each roommate tries to wait out the others, hoping
that someone else’s patience will run out first. In the meantime, the risk escalates that the apartment will run out of something critical, leading to serious
inconvenience or a blowup. Each player lets the risk escalate to the point of
his own tolerance; the one revealed to have the least tolerance loses. Each sees
how close to the brink of disaster the others will let the situation go. Hence the
name “brinkmanship” for this strategy and this game. It is a dynamic version of
chicken, offering richer and more interesting possibilities.
One of us (Dixit) was privileged to observe a brilliant example of brinkmanship at a dinner party one Saturday evening. Before dinner, the company was
sitting in the living room when the host’s 15-year-old daughter appeared at the
door and said, “Bye, Dad.” The father asked, “Where are you going?” and the
daughter replied, “Out.” After a pause that was only a couple of seconds but
seemed much longer, the host said, “All right, bye.”
Your strategic observer of this scene was left thinking how it might have
gone differently. The host might have asked, “With whom?” and the daughter
might have replied, “Friends.” The father could have refused permission unless the daughter told him exactly where and with whom she would be. One or
the other might have capitulated at some such later stage of this exchange or it
could have led to a blowup.
This was a risky game for both the father and the daughter to play. The
daughter might have been punished or humiliated in front of strangers; an argument could have ruined the father’s evening with his friends. Each had to judge
how far to push the process, without being fully sure whether and when the
other might give in or whether there would be an unpleasant scene. The risk of
an explosion would increase as the father tried harder to force the daughter to
answer and as she defied each successive demand.
In this respect, the game played by the father and the daughter was just
like that between a union and a company’s management who are negotiating
a labor contract or between two superpowers that are encroaching on each
other’s sphere of influence in the world. Neither side can be fully sure of the other’s intentions, so each side explores them through a succession of small incremental steps, each of which escalates the risk of mutual disaster. The daughter
in our story was exploring previously untested limits of her freedom; the father
was exploring previously untested—and perhaps unclear even to himself—
limits of his authority.
This was an example of brinkmanship, a game of escalating mutual risk, par
excellence. Such games can end in one of two ways. In the first way, one of the
players reaches the limit of his own tolerance for risk and concedes. (The father
in our story conceded quickly, at the very first step. Other fathers might be more
successful strict disciplinarians, and their daughters might not even initiate
a game like this.) In the second way, before either has conceded, the risk that
they both fear comes about, and the blowup (the strike or the war) occurs. The
feud in our host’s family ended “happily”; although the father conceded and the
daughter won, a blowup would have been much worse for both.
We will analyze the strategy of brinkmanship more fully in Chapter 9; in
Chapter 14, we will examine a particularly important instance of it—namely, the
Cuban missile crisis of 1962.
F. The Dating Game
When you go on a date, you want to show off the best attributes of your personality to your date and to conceal the worst ones. Of course, you cannot hope
to conceal them forever if the relationship progresses, but you are resolved to
improve or hope that by that stage the other person will accept the bad things
about you with the good ones. And you know that the relationship will not
progress at all unless you make a good first impression; you won’t get a second
chance to do so.
Of course, you want to find out everything, good and bad, about the other
person. But you know that if the other is as good at the dating game as you are,
he or she will similarly try to show the best side and hide the worst. You will
think through the situation more carefully and try to figure out which signs of
good qualities are real and which ones can easily be put on for the sake of making a good impression. Even the worst slob can easily appear well groomed for
a big date; ingrained habits of courtesy and manners that are revealed in a hundred minor details may be harder to simulate for a whole evening. Flowers are
relatively cheap; more expensive gifts may have value, not for intrinsic reasons,
but as credible evidence of how much the other person is willing to sacrifice
for you. And the “currency” in which the gift is given may have different significance, depending on the context; from a millionaire, a diamond may be worth
less in this regard than the act of giving up valuable time for your company or
time spent on some activity at your request.
You should also recognize that your date will similarly scrutinize your actions for their information content. Therefore, you should take actions that are
credible signals of your true good qualities, and not just the ones that anyone
can imitate. This is important not just on a first date; revealing, concealing, and
eliciting information about the other person’s deepest intentions remain important throughout a relationship. Here is a story to illustrate that.
Once upon a time in New York City there lived a man and a woman who had
separate rent-controlled apartments, but their relationship had reached the
point at which they were using only one of them. The woman suggested to
the man that they give up the other apartment. The man, an economist,
explained to her a fundamental principle: it is always better to have more
choice available. The probability of their splitting up might be small but,
given even a small risk, it would be useful to retain the second low-rent apartment. The woman took this very badly and promptly ended the relationship!
Economists who hear this story say that it just confirms the principle that
greater choice is better. But strategic thinking offers a very different and more
compelling explanation. The woman was not sure of the man’s commitment
to the relationship, and her suggestion was a brilliant strategic device to elicit
the truth. Words are cheap; anyone can say, “I love you.” If the man had put his
property where his mouth was and had given up his rent-controlled apartment,
this would have been concrete evidence of his love. The fact that he refused to
do so constituted hard evidence of the opposite, and the woman did right to end
the relationship.
These are examples, designed to appeal to your immediate experience, of
a very important class of games—namely, those where the real strategic issue
is manipulation of information. Strategies that convey good information about
yourself are called signals; strategies that induce others to act in ways that will
credibly reveal their private information, good or bad, are called screening devices. Thus, the woman’s suggestion of giving up one of the apartments was a
screening device, which put the man in the situation of offering to give up his
apartment or else revealing his lack of commitment. We will study games of information, as well as signaling and screening, in Chapters 8 and 13.
3 OUR STRATEGY FOR STUDYING GAMES OF STRATEGY
We have chosen several examples that relate to your experiences as amateur
strategists in real life to illustrate some basic concepts of strategic thinking and
strategic games. We could continue, building a whole stock of dozens of similar
stories. The hope would be that, when you faced an actual strategic situation,
you might recognize a parallel with one of these stories, which would help you
decide the appropriate strategy for your own situation. This is the case study
approach taken by most business schools. It offers a concrete and memorable
vehicle for the underlying concepts. However, each new strategic situation typically consists of a unique combination of so many variables that an intolerably
large stock of cases is needed to cover all of them.
An alternative approach focuses on the general principles behind the examples and so constructs a theory of strategic action—namely, formal game theory.
The hope here is that, facing an actual strategic situation, you might recognize
which principle or principles apply to it. This is the route taken by the more academic disciplines, such as economics and political science. A drawback to this
approach is that the theory is presented in a very abstract and mathematical
manner, without enough cases or examples. This makes it difficult for most beginners to understand or remember the theory and to connect the theory with
reality afterward.
But knowing some general theory has an overwhelming compensating advantage. It gives you a deeper understanding of games and of why they have
the outcomes they do. This helps you play better than you would if you merely
read some cases and knew the recipes for how to play some specific games. With
the knowledge of why, you can think through new and unexpected situations
where a mechanical follower of a “how” recipe would be lost. A world champion of checkers, Tom Wiswell, has expressed this beautifully: “The player who
knows how will usually draw; the player who knows why will usually win.”4 This
is not to be taken literally for all games; some games may be hopeless situations
for one of the players no matter how knowledgeable he may be. But the statement contains the germ of an important general truth—knowing why gives you
an advantage beyond what you can get if you merely know how. For example,
knowing the why of a game can help you foresee a hopeless situation and avoid
getting into such a game in the first place.

4 Quoted in Victor Niederhoffer, The Education of a Speculator (New York: Wiley, 1997), p. 169. We thank Austin Jaffe of Pennsylvania State University for bringing this aphorism to our attention.
Therefore, we will take an intermediate route that combines some of the advantages of both approaches—case studies (how) and theory (why). We will organize the subject around its general principles, generally one in each of Chapters 3–7, so you don’t have to figure them out on your own from the cases. But
we will develop the general principles through illustrative cases rather than abstractly, so the context and scope of each idea will be clear and evident. In other
words, we will focus on theory but build it up through cases, not abstractly. Starting
with Chapter 8, we will apply this theory to several types of strategic situations.
Of course, such an approach requires some compromises of its own. Most
important, you should remember that each of our examples serves the purpose
of conveying some general idea or principle of game theory. Therefore, we will
leave out many details of each case that are incidental to the principle at stake.
If some examples seem somewhat artificial, please bear with us; we have generally considered the omitted details and left them out for good reasons.
A word of reassurance. Although the examples that motivate the development
of our conceptual or theoretical frameworks are deliberately selected for that purpose (even at the cost of leaving out some other features of reality), once the theory has been constructed, we pay a lot of attention to its connection with reality.
Throughout the book, we examine factual and experimental evidence in regard to
how well the theory explains reality. The frequent answer—very well in some respects and less well in others—should give you cautious confidence in using the
theory and should be a spur to contributing to the formulation of better theories.
In appropriate places, we examine in great detail how institutions evolve in practice to solve some problems pointed out by the theories; note in particular the discussion in Chapter 10 of how prisoners’ dilemmas arise and are solved in reality
and a similar discussion of more general collective-action problems in Chapter
11. Finally, in Chapter 14, we will examine the use of brinkmanship in the Cuban
missile crisis. Theory-based case studies, which take rich factual details of a situation and subject them to an equally detailed theoretical analysis, are becoming
common in such diverse fields as business studies, political science, and economic
history; we hope our original study of an important episode in the diplomatic and
military areas will give you an interesting introduction to this genre.
To pursue our approach, in which examples lead to general theories that are
then tested against reality and used to interpret reality, we must first identify the
general principles that serve to organize the discussion. We will do so in Chapter
2 by classifying or dichotomizing games along several key dimensions of different strategic matters or concepts. Along each dimension, we will identify two
extreme pure types. For example, one such dimension concerns the order of
moves, and the two pure types are those in which the players take turns making
moves (sequential games) and those in which all players act at once (simultaneous games). Actual games rarely correspond to exactly one of these conceptual
categories; most partake of some features of each extreme type. But each game
can be located in our classification by considering which concepts or dimensions bear on it and how it mixes the two pure types in each dimension. To decide how to act in a specific situation, one then combines in appropriate ways
the lessons learned for the pure types.
Once this general framework has been constructed in Chapter 2, the chapters that follow will build on it, developing several general ideas and principles
for each player’s strategic choice and the interaction of all players’ strategies in
games.
2
How to Think about Strategic Games

Chapter 1 gave some simple examples of strategic games and strategic
thinking. In this chapter, we begin a more systematic and analytical approach to the subject. We choose some crucial conceptual categories or
dimensions, each of which has a dichotomy of types of strategic interactions. For example, one such dimension concerns the timing of the players’
actions, and the two pure types are games where the players act in strict turns
(sequential moves) and where they act at the same time (simultaneous moves).
We consider some matters that arise in thinking about each pure type in this dichotomy, as well as in similar dichotomies with respect to other matters, such as
whether the game is played only once or repeatedly and what the players know
about each other.
In Chapters 3–7, we will examine each of these categories or dimensions
in more detail; in Chapters 8–17, we will show how the analysis can be used in
several contexts. Of course, most actual applications are not of a pure type but
rather a mixture. Moreover, in each application, two or more of the categories
have some relevance. The lessons learned from the study of the pure types must
therefore be combined in appropriate ways. We will show how to do this by
using the context of our applications.
In this chapter, we state some basic concepts and terminology—such as
strategies, payoffs, and equilibrium—that are used in the analysis and briefly describe solution methods. We also provide a brief discussion of the uses of game
theory and an overview of the structure of the remainder of the book.
1 DECISIONS VERSUS GAMES
When a person (or team or firm or government) decides how to act in dealings with other people (or teams or firms or governments), there must be some
cross-effect of their actions; what one does must affect the outcome for the
other. When George Pickett (of Pickett’s Charge at the battle of Gettysburg) was
asked to explain the Confederacy’s defeat in the Civil War, he responded, “I think
the Yankees had something to do with it.”1

1 James M. McPherson, “American Victory, American Defeat,” in Why the Confederacy Lost, ed. Gabor S. Boritt (New York: Oxford University Press, 1993), p. 19.
For the interaction to become a strategic game, however, we need something more—namely, the participants’ mutual awareness of this cross-effect.
What the other person does affects you; if you know this, you can react to his actions, or take advance actions to forestall the bad effects his future actions may
have on you and to facilitate any good effects, or even take advance actions so as
to alter his future reactions to your advantage. If you know that the other person
knows that what you do affects him, you know that he will be taking similar actions. And so on. It is this mutual awareness of the cross-effects of actions and
the actions taken as a result of this awareness that constitute the most interesting aspects of strategy.
This distinction is captured by reserving the label strategic games (or sometimes just games, because we are not concerned with other types of games, such
as those of pure chance or pure skill) for interactions between mutually aware
players and decisions for action situations where each person can choose without concern for reaction or response from others. If Robert E. Lee (who ordered
Pickett to lead the ill-fated Pickett’s Charge) had thought that the Yankees had
been weakened by his earlier artillery barrage to the point that they no longer
had any ability to resist, his choice to attack would have been a decision; if he
was aware that the Yankees were prepared and waiting for his attack, then the
choice became a part of a (deadly) game. The simple rule is that unless there are
two or more players, each of whom responds to what others do (or what each
thinks the others might do), it is not a game.
Strategic games arise most prominently in head-to-head confrontations of
two participants: the arms race between the United States and the Soviet Union
from the 1950s through the 1980s; wage negotiations between General Motors
and the United Auto Workers; or a Super Bowl matchup between two “pirates,”
the Tampa Bay Buccaneers and the Oakland Raiders. In contrast, interactions
among a large number of participants seem less susceptible to the issues raised
by mutual awareness. Because each farmer’s output is an insignificant part of
the whole nation’s or the world’s output, the decision of one farmer to grow
more or less corn has almost no effect on the market price, and not much appears to hinge on thinking of agriculture as a strategic game. This was indeed
the view prevalent in economics for many years. A few confrontations between
large companies—as in the U.S. auto market, which was once dominated by
GM, Ford, and Chrysler—were usefully thought of as strategic games, but most
economic interactions were supposed to be governed by the impersonal forces
of supply and demand.
In fact, game theory has a much greater scope. Many situations that start
out as impersonal markets with thousands of participants turn into strategic
interactions of two or just a few. This happens for one of two broad classes of
reasons—mutual commitments or private information.
Consider commitment first. When you are contemplating building a house,
you can choose one of several dozen contractors in your area; the contractor
can similarly choose from several potential customers. There appears to be an
impersonal market. Once each side has made a choice, however, the customer
pays an initial installment, and the builder buys some materials for the plan of
this particular house. The two become tied to each other, separately from the
market. Their relationship becomes bilateral. The builder can try to get away
with a somewhat sloppy job or can procrastinate, and the client can try to delay
payment of the next installment. Strategy enters the picture. Their initial contract in the market has to anticipate their individual incentives in the game to
come and specify a schedule of installments of payments that are tied to successive steps in the completion of the project. Even then, some adjustments have to
be made after the fact, and these adjustments bring in new elements of strategy.
Next, consider private information. Thousands of farmers seek to borrow
money for their initial expenditures on machinery, seed, fertilizer, and so forth,
and hundreds of banks exist to lend to them. Yet the market for such loans is not
impersonal. A borrower with good farming skills who puts in a lot of effort will
be more likely to be successful and will repay the loan; a less-skilled or lazy borrower may fail at farming and default on the loan. The risk of default is highly
personalized. It is not a vague entity called “the market” that defaults, but individual borrowers who do so. Therefore each bank will have to view its lending
relation with each individual borrower as a separate game. It will seek collateral
from each borrower or will investigate each borrower’s creditworthiness. The
farmer will look for ways to convince the bank of his quality as a borrower; the
bank will look for effective ways to ascertain the truth of the farmer’s claims.
Similarly, an insurance company will make some efforts to determine the
health of individual applicants and will check for any evidence of arson when
a claim for a fire is made; an employer will inquire into the qualifications of
individual employees and monitor their performance. More generally, when
participants in a transaction possess some private information bearing on the
outcome, each bilateral deal becomes a game of strategy, even though the larger
picture may have thousands of very similar deals going on.
To sum up, when each participant is significant in the interaction, either
because each is a large player to start with or because commitments or private
information narrow the scope of the relationship to a point where each is an
important player within the relationship, we must think of the interaction as a
strategic game. Such situations are the rule rather than the exception in business, in politics, and even in social interactions. Therefore, the study of strategic
games forms an important part of all fields that analyze these matters.
2 CLASSIFYING GAMES
Games of strategy arise in many different contexts and accordingly have many
different features that require study. This task can be simplified by grouping these
features into a few categories or dimensions, along each of which we can identify
two pure types of games and then recognize any actual game as a mixture of the
pure types. We develop this classification by asking a few questions that will be
pertinent for thinking about the actual game that you are playing or studying.
A. Are the Moves in the Game Sequential or Simultaneous?
Moves in chess are sequential: White moves first, then Black, then White again,
and so on. In contrast, participants in an auction for an oil-drilling lease or a
part of the airwave spectrum make their bids simultaneously, in ignorance of
competitors’ bids. Most actual games combine aspects of both. In a race to research and develop a new product, the firms act simultaneously, but each competitor has partial information about the others’ progress and can respond.
During one play in football, the opposing offensive and defensive coaches simultaneously send out teams with the expectation of carrying out certain plays, but
after seeing how the defense has set up, the quarterback can change the play at
the line of scrimmage or call a time-out so that the coach can change the play.
The distinction between sequential and simultaneous moves is important
because the two types of games require different types of interactive thinking.
In a sequential-move game, each player must think: If I do this, how will my opponent react? Your current move is governed by your calculation of its future
consequences. With simultaneous moves, you have the trickier task of trying to
figure out what your opponent is going to do right now. But you must recognize
that, in making his own calculation, your opponent is also trying to figure out
your current move, while at the same time recognizing that you are doing the
same with him. . . . Both of you have to think your way out of this circle.
In the next three chapters, we will study the two pure cases. In Chapter 3,
we examine sequential-move games, where you must look ahead to act now;
in Chapters 4 and 5, the subject is simultaneous-move games, where you must
square the circle of “He thinks that I think that he thinks . . .” In each case, we will
devise some simple tools for such thinking—trees and payoff tables—and obtain some simple rules to guide actions.
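For readers who like to see the logic in executable form, here is a minimal Python sketch of the “look ahead and reason back” rule for sequential moves. It is our illustration, not part of the text: the tree structure, the function name roll_back, and the payoffs are all hypothetical.

    # A decision node records whose turn it is (0 or 1) and, for each action,
    # either a successor node or a terminal payoff pair (to player 0, player 1).
    def roll_back(node):
        """Backward induction: return (outcome, action) chosen at this node."""
        player, branches = node["player"], node["branches"]
        best_action, best_outcome = None, None
        for action, successor in branches.items():
            outcome = successor if isinstance(successor, tuple) else roll_back(successor)[0]
            if best_outcome is None or outcome[player] > best_outcome[player]:
                best_action, best_outcome = action, outcome
        return best_outcome, best_action

    # Player 0 moves first; player 1 observes the move and responds.
    game = {"player": 0, "branches": {
        "aggressive": {"player": 1, "branches": {"fight": (1, 1), "yield": (4, 2)}},
        "passive":    {"player": 1, "branches": {"fight": (2, 3), "yield": (3, 3)}},
    }}
    print(roll_back(game))  # ((4, 2), 'aggressive')

Each recursive call answers exactly the question posed above: “If I do this, how will my opponent react?”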
The study of sequential games also tells us when it is an advantage to move
first and when it is an advantage to move second. Roughly speaking, this depends on the relative importance of commitment and flexibility in the game in
question. For example, the game of economic competition among rival firms in
a market has a first-mover advantage if one firm, by making a firm commitment
to compete aggressively, can get its rivals to back off. But, in political competition, a candidate who has taken a firm stand on an issue may give his rivals a
clear focus for their attack ads, and the game has a second-mover advantage.
Knowledge of the balance of these considerations can also help you devise
ways to manipulate the order of moves to your own advantage. That in turn
leads to the study of strategic moves, such as threats and promises, which we
will take up in Chapter 9.
B. Are the Players’ Interests in Total Conflict or Is There Some Commonality?
In simple games such as chess or football, there is a winner and a loser. One
player’s gain is the other’s loss. Similarly, in gambling games, one player’s winnings are the others’ losses, so the total is 0. This is why such situations are
called zero-sum games. More generally, the idea is that the players’ interests
are in complete conflict. Such conflict arises when players are dividing up any
fixed amount of possible gain, whether it be measured in yards, dollars, acres, or
scoops of ice cream. Because the available gain need not always be exactly 0, the
term constant-sum game is often substituted for zero-sum game; we will use
the two terms interchangeably.
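The constant-sum property is mechanical enough to check in a few lines of code. The following sketch is our illustration; the payoff numbers are hypothetical.

    # Test whether a two-player payoff table is constant-sum: the two
    # payoffs must add to the same total in every cell.
    def is_constant_sum(table):
        totals = {a + b for row in table for (a, b) in row}
        return len(totals) == 1

    # Matching pennies: zero-sum (every cell sums to 0).
    matching_pennies = [[(1, -1), (-1, 1)],
                        [(-1, 1), (1, -1)]]
    # Dividing ten scoops of ice cream: constant-sum with total 10.
    ice_cream = [[(6, 4), (3, 7)],
                 [(5, 5), (8, 2)]]
    print(is_constant_sum(matching_pennies), is_constant_sum(ice_cream))  # True True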
Most economic and social games are not zero-sum. Trade, or economic activity more generally, offers scope for deals that benefit everyone. Joint ventures
can combine the participants’ different skills and generate synergy to produce
more than the sum of what they could have produced separately. But the interests are not completely aligned either; the partners can cooperate to create a
larger total pie, but they will clash when it comes to deciding how to split this
pie among them.
Even wars and strikes are not zero-sum games. A nuclear war is the most
striking example of a situation where there can only be losers, but the concept
is far older. Pyrrhus, the king of Epirus, defeated the Romans at Heraclea in
280 b.c. but at such great cost to his own army that he exclaimed, “Another such victory and we are lost!” Hence the phrase “Pyrrhic victory.” In the 1980s, at the height
of the frenzy of business takeovers, the battles among rival bidders led to such
costly escalation that the successful bidder’s victory was often similarly Pyrrhic.
Most games in reality have this tension between conflict and cooperation,
and many of the most interesting analyses in game theory come from the need
to handle it. The players’ attempts to resolve their conflict—distribution of territory or profit—are influenced by the knowledge that, if they fail to agree, the
outcome will be bad for all of them. One side’s threat of a war or a strike is its attempt to frighten the other side into conceding its demands.
Even when a game is constant-sum for all players, if it has three (or more)
players, we have the possibility that two of them will cooperate at the expense of
the third; this leads to the study of alliances and coalitions. We will examine and
illustrate these ideas later, especially in Chapter 17 on bargaining.
C. Is the Game Played Once or Repeatedly, and
with the Same or Changing Opponents?
A game played just once is in some respects simpler and in others more complicated than one that includes many interactions. You can think about a one-shot game without worrying about its repercussions on other games you might
play in the future against the same person or against others who might hear of
your actions in this one. Therefore actions in one-shot games are more likely to
be unscrupulous or ruthless. For example, an automobile repair shop is much
more likely to overcharge a passing motorist than a regular customer.
In one-shot encounters, each player doesn’t know much about the others;
for example, what their capabilities and priorities are, whether they are good at
calculating their best strategies or have any weaknesses that can be exploited,
and so on. Therefore in one-shot games, secrecy or surprise is likely to be an important component of good strategy.
Games with ongoing relationships require the opposite considerations. You
have an opportunity to build a reputation (for toughness, fairness, honesty, reliability, and so forth, depending on the circumstances) and to find out more
about your opponent. The players together can better exploit mutually beneficial prospects by arranging to divide the spoils over time (taking turns to “win”)
or to punish a cheater in future plays (an eye for an eye or tit-for-tat). We will
consider these possibilities in Chapter 10 on the prisoners’ dilemma.
More generally, a game may be zero-sum in the short run but have scope
for mutual benefit in the long run. For example, each football team likes to win,
but they all recognize that close competition generates more spectator interest, which benefits all teams in the long run. That is why they agree to a drafting
scheme where teams get to pick players in reverse order of their current standing, thereby reducing the inequality of talent. In long-distance races, the runners or cyclists often develop a lot of cooperation; two or more of them can help
one another by taking turns following in one another’s slipstream. But near the
end of the race, the cooperation collapses as all of them dash for the finish line.
Here is a useful rule of thumb for your own strategic actions in life. In a game
that has some conflict and some scope for cooperation, you will often think up a
great strategy for winning big and grinding a rival into the dust but have a nagging worry at the back of your mind that you are behaving like the worst 1980s
yuppie. In such a situation, the chances are that the game has a repeated or ongoing aspect that you have overlooked. Your aggressive strategy may gain you a
short-run advantage, but its long-run side effects will cost you even more. Therefore, you should dig deeper and recognize the cooperative element and then alter
your strategy accordingly. You will be surprised how often niceness, integrity, and
the golden rule of doing to others as you would have them do to you turn out
to be not just old nostrums, but good strategies as well when you consider the
whole complex of games that you will be playing in the course of your life.
D. Do the Players Have Full or Equal Information?
In chess, each player knows exactly the current situation and all the moves that
led to it, and each knows that the other aims to win. This situation is exceptional; in most other games, the players face some limitation of information.
Such limitations come in two kinds. First, a player may not know all the information that is pertinent for the choice that he has to make at every point in the
game. This type of information problem arises because of the player’s uncertainty about relevant variables, both internal and external to the game. For example, he may be uncertain about external circumstances, such as the weekend
weather or the quality of a product he wishes to purchase; we call this situation
one of external uncertainty. Or he may be uncertain about exactly what moves
his opponent has made in the past or is making at the same time he makes his
own move; we call this strategic uncertainty. If a game has neither external nor
strategic uncertainty, we say that the game is one of perfect information; otherwise the game has imperfect information. We will give a more precise technical
definition of perfect information in Chapter 6, Section 3.A, after we have introduced the concept of an information set. We will develop the theory of games
with imperfect information (uncertainty) in three future chapters. In Chapter 4,
we discuss games with contemporaneous (simultaneous) actions, which entail
strategic uncertainty, and we analyze methods for making choices under external uncertainty in Chapter 8 and its appendix.
Trickier strategic situations arise when one player knows more than another
does; they are called situations of incomplete or, better, asymmetric information. In such situations, the players’ attempts to infer, conceal, or sometimes
convey their private information become an important part of the game
and the strategies. In bridge or poker, each player has only partial knowledge
of the cards held by the others. Their actions (bidding and play in bridge, the
number of cards taken and the betting behavior in poker) give information to
opponents. Each player tries to manipulate his actions to mislead the opponents (and, in bridge, to inform his partner truthfully), but in doing so each
must be aware that the opponents know this and that they will use strategic
thinking to interpret that player’s actions.
You may think that if you have superior information, you should always
conceal it from other players. But that is not true. For example, suppose you are
the CEO of a pharmaceutical firm that is engaged in an R&D competition to develop a new drug. If your scientists make a discovery that is a big step forward,
you may want to let your competitors know, in the hope that they will give up
their own searches and you won’t face any future competition. In war, each side
wants to keep its tactics and troop deployments secret; but, in diplomacy, if your
intentions are peaceful, then you desperately want other countries to know and
believe this fact.
The general principle here is that you want to release your information selectively. You want to reveal the good information (the kind that will draw responses from the other players that work to your advantage) and conceal the
bad (the kind that may work to your disadvantage).
This raises a problem. Your opponents in a strategic game are purposive,
rational players, and they know that you are, too. They will recognize your incentive to exaggerate or even to lie. Therefore, they are not going to accept your
unsupported declarations about your progress or capabilities. They can be convinced only by objective evidence or by actions that are credible proof of your
information. Such actions on the part of the more-informed player are called
signals, and strategies that use them are called signaling. Conversely, the less-informed party can create situations in which the more-informed player will
have to take some action that credibly reveals his information; such strategies
are called screening, and the methods they use are called screening devices.
The word screening is used here in the sense of testing in order to sift or separate, not in the sense of concealing.
Sometimes the same action may be used as a signal by the informed player
or as a screening device by the uninformed player. Recall that in the dating
game in Section 2.F of Chapter 1, the woman was screening the man to test his
commitment to their relationship, and her suggestion that the pair give up
one of their two rent-controlled apartments was the screening device. If the
man had been committed to the relationship, he might have acted first and volunteered to give up his apartment; this action would have been a signal of his
commitment.
Now we see how, when different players have different information, the
manipulation of information itself becomes a game, perhaps more important
than the game that will be played after the information stage. Such information games are ubiquitous, and playing them well is essential for success in
life. We will study more games of this kind in greater detail in Chapter 8 and
also in Chapter 13.
E. Are the Rules of the Game Fixed or Manipulable?
The rules of chess, card games, or sports are given, and every player must follow them, no matter how arbitrary or strange they seem. But in games of business, politics, and ordinary life, the players can make their own rules to a greater
or lesser extent. For example, in the home, parents constantly try to make the
rules, and children constantly look for ways to manipulate or circumvent those
rules. In legislatures, rules for the progress of a bill (including the order in which
amendments and main motions are voted on) are fixed, but the game that sets
the agenda—which amendments are brought to a vote first—can be manipulated. This is where political skill and power have the most scope, and we will
address these matters in detail in Chapter 15.
In such situations, the real game is the “pregame” where rules are made, and
your strategic skill must be deployed at that point. The actual playing out of the
subsequent game can be more mechanical; you could even delegate it to someone
else. However, if you “sleep” through the pregame, you might find that you have
lost the game before it ever began. For many years, American firms ignored the
rise of foreign competition in just this way and ultimately paid the price. But some
entrepreneurs, such as oil magnate John D. Rockefeller Sr., adopted the strategy of
limiting their participation to games in which they could also participate in making the rules.2

2 For more on the methods used in Rockefeller’s rise to power, see Ron Chernow, Titan (New York: Random House, 1998).
The distinction between changing rules and acting within the chosen rules
will be most important for us in our study of strategic moves, such as threats and
promises. Questions of how you can make your own threats and promises credible or how you can reduce the credibility of your opponent’s threats basically
have to do with a pregame that entails manipulating the rules of the subsequent
game in which the promises or threats may have to be carried out. More generally, the strategic moves that we will study in Chapter 9 are essentially ploys for
such manipulation of rules.
But if the pregame of rule manipulation is the real game, what fixes the rules
of the pregame? Usually these pregame rules depend on some hard facts related
to the players’ innate abilities. In business competition, one firm can take preemptive actions that alter subsequent games between it and its rivals; for example, it can expand its factory or advertise in a way that twists the results of
subsequent price competition more favorably to itself. Which firm can do this
best or most easily depends on which one has the managerial or organizational
resources to make the investments or to launch the advertising campaigns.
Players may also be unsure of their rivals’ abilities. This often makes the
pregame one of incomplete or asymmetric information, requiring more subtle
strategies and occasionally resulting in some big surprises. We will comment on
all these matters in the appropriate places in the chapters that follow.
F. Are Agreements to Cooperate Enforceable?
We saw that most strategic interactions consist of a mixture of conflict and common interest. Then there is a case to be made that all participants should get together and reach an agreement about what everyone should do, balancing their
mutual interest in maximizing the total benefit and their conflicting interests in
the division of gains. Such negotiations can take several rounds in which agreements are made on a tentative basis, better alternatives are explored, and the
deal is finalized only when no group of players can find anything better. However, even after the completion of such a process, additional difficulties often
arise in putting the final agreement into practice. For instance, all the players
must perform, in the end, the actions that were stipulated for them in the agreement. When all others do what they are supposed to do, any one participant can
typically get a better outcome for himself by doing something different. And, if
each one suspects that the others may cheat in this way, he would be foolish to
adhere to his stipulated cooperative action.
Agreements to cooperate can succeed if all players act immediately and in
the presence of the whole group, but agreements with such immediate implementation are quite rare. More often the participants disperse after the agreement has been reached and then take their actions in private. Still, if these
actions are observable to the others, and a third party—for example, a court of
law—can enforce compliance, then the agreement of joint action can prevail.
However, in many other instances individual actions are neither directly observable nor enforceable by external forces. Without enforceability, agreements
will stand only if it is in all participants’ individual interests to abide by them.
Games among sovereign countries are of this kind, as are many games with private information or games where the actions are either outside the law or too trivial or too costly to enforce in a court of law. In fact, games where agreements for
joint action are not enforceable constitute a vast majority of strategic interactions.
Game theory uses a special terminology to capture the distinction between
situations in which agreements are enforceable and those in which they are not.
Games in which joint-action agreements are enforceable are called cooperative
games; those in which such enforcement is not possible, and individual participants must be allowed to act in their own interests, are called noncooperative
games. This has become standard terminology, but it is somewhat unfortunate
because it gives the impression that the former will produce cooperative outcomes and the latter will not. In fact, individual action can be compatible with
the achievement of a lot of mutual gain, especially in repeated interactions. The
important distinction is that in so-called noncooperative games, cooperation
will emerge only if it is in the participants’ separate and individual interests to
continue to take the prescribed actions. This emergence of cooperative outcomes from noncooperative behavior is one of the most interesting findings of
game theory, and we will develop the idea in Chapters 10, 11, and 12.
We will adhere to the standard usage, but emphasize that the terms cooperative and noncooperative refer to the way in which actions are implemented or
enforced—collectively in the former mode and individually in the latter—and
not to the nature of the outcomes.
As we said earlier, most games in practice do not have adequate mechanisms for external enforcement of joint-action agreements. Therefore, most of
our analytical discussion will deal with the noncooperative mode. The one exception comes in our discussion of bargaining in Chapter 17.
3 SOME TERMINOLOGY AND BACKGROUND ASSUMPTIONS
When one thinks about a strategic game, the logical place to begin is by specifying its structure. This includes all the strategies available to all the players, their
information, and their objectives. The first two aspects will differ from one game
to another along the dimensions discussed in the preceding section, and one
must locate one’s particular game within that framework. The objectives raise
some new and interesting considerations. Here we consider aspects of all these
matters.
A. Strategies
Strategies are simply the choices available to the players, but even this basic notion requires some further study and elaboration. If a game has purely simultaneous moves made only once, then each player’s strategy is just the action taken
on that single occasion. But if a game has sequential moves, then a player who
moves later in the game can respond to what other players have done (or what
he himself has done) at earlier points. Therefore, each such player must make a
complete plan of action, for example: “If the other does A, then I will do X, but if
the other does B, then I will do Y.” This complete plan of action constitutes the
strategy in such a game.
A very simple test determines whether your strategy is complete: Does it
specify such full detail about how you would play the game—describing your
action in every contingency—that, if you were to write it all down, hand it to
someone else, and go on vacation, this other person acting as your representative could play the game just as you would have played it? He would know what
to do on each occasion that could conceivably arise in the course of play without ever needing to disturb your vacation for instructions on how to deal with
some situation that you had not foreseen.
This test will become clearer in Chapter 3, when we develop and apply it in
some specific contexts. For now, you should simply remember that a strategy is
a complete plan of action.
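The vacation test can be made concrete in code. In this sketch (ours; the contingencies and actions are hypothetical), a strategy is simply a lookup table that prescribes an action for every contingency, so a representative needs nothing more to play on your behalf.

    # A strategy as a complete plan of action: one prescribed move for
    # every contingency that could arise in the course of play.
    strategy = {
        "other did A": "X",
        "other did B": "Y",
    }

    def representative(plan, contingency):
        # The stand-in never needs to interrupt your vacation: the plan
        # already specifies what to do in every situation.
        return plan[contingency]

    print(representative(strategy, "other did A"))  # X
    print(representative(strategy, "other did B"))  # Y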
This notion is similar to the common usage of the word strategy to denote
a longer-term or larger-scale plan of action, as distinct from tactics that pertain
to a shorter term or a smaller scale. For example, generals in the military make
strategic plans for a war or a large-scale battle, while lower-level officers devise
tactics for a smaller skirmish or a particular theater of battle based on local conditions. But game theory does not use the term tactics at all. The term strategy
covers all the situations, meaning a complete plan of action when necessary and
meaning a single move if that is all that is needed in the particular game being
studied.
The word strategy is also commonly used to describe a person’s decisions
over a fairly long time span and sequence of choices, even though there is no
game in our sense of purposive interaction with other people. Thus, you have
probably already chosen a career strategy. When you start earning an income,
you will make saving and investment strategies and eventually plan a retirement
strategy. This usage of the term strategy has the same sense as ours—a plan for a
succession of actions in response to evolving circumstances. The only difference
is that we are reserving it for a situation—namely, a game—where the circumstances evolve because of actions taken by other purposive players.
B. Payoffs
When asked what a player’s objective in a game is, most newcomers to strategic
thinking respond that it is “to win,” but matters are not always so simple. Sometimes the margin of victory matters; for example, in R&D competition, if your
product is only slightly better than the nearest rival’s, your patent may be more
open to challenge. Sometimes there may be smaller prizes for several participants, so winning isn’t everything. Most important, very few games of strategy
are purely zero-sum or win-lose; they combine some common interest and some
conflict among the players. Thinking about such mixed-motive games requires
more refined calculations than the simple dichotomy of winning and losing—for
example, comparisons of the gains from cooperating versus cheating.
We will give each player a complete numerical scale with which to compare
all logically conceivable outcomes of the game, corresponding to each available
combination of choices of strategies by all the players. The number associated with
each possible outcome will be called that player’s payoff for that outcome. Higher
payoff numbers attach to outcomes that are better in this player’s rating system.
Sometimes the payoffs will be simple numerical ratings of the outcomes,
the worst labeled 1, the next worst 2, and so on, all the way to the best. In other
games, there may be more natural numerical scales—for example, money income or profit for firms, viewer-share ratings for television networks, and so on.
In many situations, the payoff numbers are only educated guesses. In such cases,
we need to check that the results of our analysis do not change significantly if we
vary these guesses within some reasonable margin of error.
Two important points about the payoffs need to be understood clearly. First,
the payoffs for one player capture everything in the outcomes of the game that
he cares about. In particular, the player need not be selfish, but his concern
about others should be already included in his numerical payoff scale. Second,
we will suppose that, if the player faces a random prospect of outcomes, then
the number associated with this prospect is the average of the payoffs associated with each component outcome, each weighted by its probability. Thus, if
in one player’s ranking, outcome A has payoff 0 and outcome B has payoff 100,
then the prospect of a 75% probability of A and a 25% probability of B should
have the payoff 0.75 × 0 + 0.25 × 100 = 25. This is often called the expected
payoff from the random prospect. The word expected has a special connotation
in the jargon of probability theory. It does not mean what you think you will get
or expect to get; it is the mathematical or probabilistic or statistical expectation,
meaning an average of all possible outcomes, where each is given a weight proportional to its probability.
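In computational terms, the expected payoff is just a probability-weighted sum. Here is a minimal sketch (ours) of the 0-and-100 example from the text.

    # Expected payoff of a random prospect: the average of the component
    # payoffs, each weighted by its probability.
    def expected_payoff(prospect):
        return sum(prob * payoff for prob, payoff in prospect)

    # 75% probability of payoff 0, 25% probability of payoff 100.
    print(expected_payoff([(0.75, 0), (0.25, 100)]))  # 25.0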
The second point creates a potential difficulty. Consider a game where players
get or lose money and payoffs are measured simply in money amounts. In reference to the preceding example, if a player has a 75% chance of getting nothing and
a 25% chance of getting $100, then the expected payoff as calculated in that example is $25. That is also the payoff that the player would get from a simple nonrandom outcome of $25. In other words, in this way of calculating payoffs, a person
should be indifferent to whether he receives $25 for sure or faces a risky prospect
of which the average is $25. One would think that most people would be averse to
risk, preferring a sure $25 to a gamble that yields only $25 on the average.
A very simple modification of our payoff calculation gets around this difficulty. We measure payoffs not in money sums but by using a nonlinear rescaling of the dollar amounts. This is called the expected utility approach, and we
will present it in detail in the appendix to Chapter 7. For now, please take our
word that incorporating differing attitudes toward risk into our framework is a
manageable task. Almost all of game theory is based on the expected utility approach, and it is indeed very useful, although not without flaws. We will adopt it
in this book, but we also will indicate some of the difficulties that it leaves unresolved, with the use of a simple example in Chapter 7, Section 5.C.
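To preview how a nonlinear rescaling captures risk aversion, consider this small sketch. It is our illustration: the square-root utility function is a standard example of a concave scale, not a specification made in this book.

    import math

    # With a concave utility scale, a sure $25 beats a gamble whose
    # money average is also $25.
    def expected_utility(prospect, u=math.sqrt):
        return sum(prob * u(amount) for prob, amount in prospect)

    gamble = [(0.75, 0), (0.25, 100)]   # expected money value: $25
    sure_thing = [(1.0, 25)]
    print(expected_utility(gamble))      # 2.5
    print(expected_utility(sure_thing))  # 5.0 > 2.5: the sure $25 is preferred

The concavity of the scale is what encodes the aversion to risk.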
C. Rationality
Each player’s aim in the game will be to achieve as high a payoff for himself as
possible. But how good is each player at pursuing this aim? This question is not
about whether and how other players pursuing their own interests will impede
him; that is in the very nature of a game of strategic interaction. Rather, achieving a high payoff is based on how good each player is at calculating the strategy
that is in his own best interests and at following this strategy in the actual course
of play.
Much of game theory assumes that players are perfect calculators and flawless followers of their best strategies. This is the assumption of rational behavior. Observe the precise sense in which the term rational is being used. It
means that each player has a consistent set of rankings (values or payoffs) over
all the logically possible outcomes and calculates the strategy that best serves
these interests. Thus rationality has two essential ingredients: complete knowledge of one’s own interests, and flawless calculation of what actions will best
serve those interests.
It is equally important to understand what is not included in this concept
of rational behavior. It does not mean that players are selfish; a player may rate
highly the well-being of some other player(s) and incorporate this high rating
into his payoffs. It does not mean that players are short-term thinkers; in fact,
calculation of future consequences is an important part of strategic thinking,
and actions that seem irrational from an immediate perspective may have valuable long-term strategic roles. Most important, being rational does not mean
having the same value system as other players, or sensible people, or ethical or
moral people would use; it means merely pursuing one’s own value system consistently. Therefore, when one player carries out an analysis of how other players will respond (in a game with sequential moves) or of the successive rounds
of thinking about thinking (in a game with simultaneous moves), he must recognize that the other players calculate the consequences of their choices by
using their own value or rating system. You must not impute your own value
systems or standards of rationality to others and assume that they would act as
you would in that situation. Thus, many “experts” commenting on the Persian
Gulf conflict in late 1990 and again in 2002–2003 predicted that Saddam Hussein would back down “because he is rational”; they failed to recognize that
Saddam’s value system was different from the one held by most Western governments and by the Western experts.
In general, each player does not really know the other players’ value systems;
this is part of the reason that in reality many games have incomplete and asymmetric information. In such games, trying to find out the values of others and trying to conceal or convey one’s own become important components of strategy.
Game theory assumes that all players are rational. How good is this assumption, and therefore how good is the theory that employs it? At one level,
it is obvious that the assumption cannot be literally true. People often don’t
even have full advance knowledge of their own value systems; they don’t think
ahead about how they would rank hypothetical alternatives and then remember these rankings until they are actually confronted with a concrete choice.
Therefore they find it very difficult to perform the logical feat of tracing all possible consequences of their and other players’ conceivable strategic choices and
ranking the outcomes in advance in order to choose which strategy to follow.
Even if they knew their preferences, the calculation would remain far from easy.
Most games in real life are very complex, and most real players are limited in
their thinking and computational abilities. In games such as chess, it is known
that the calculation for the best strategy can be performed in a finite number of
steps, but that number is so large that no one has succeeded in performing it,
and good play remains largely an art.
The assumption of rationality may be closer to reality when the players are
regulars who play the game quite often. Then they benefit from having experienced the different possible outcomes. They understand how the strategic
choices of various players lead to the outcomes and how well or badly they
themselves fare. Then we can hope that their choices, even if not made with full
and conscious computations, closely approximate the results of such computations. We can think of the players as implicitly choosing the optimal strategy
or behaving as if they were perfect calculators. We will offer some experimental
evidence in Chapter 5 that the experience of playing the game generates more
rational behavior.
The advantage of making a complete calculation of your best strategy, taking into account the corresponding calculations of a similar strategically calculating rival, is that then you are not making mistakes that the rival can exploit.
In many actual situations, you may have specific knowledge of the way in which
the other players fall short of this standard of rationality, and you can exploit this
in devising your own strategy. We will say something about such calculations,
but very often this is a part of the “art” of game playing, not easily codifiable in
rules to be followed. You must always beware of the danger that the others are
merely pretending to have poor skills or strategy, losing small sums through bad
play and hoping that you will then raise the stakes, when they can raise the level
of their play and exploit your gullibility. When this risk is real, the safer advice
to a player may be to assume that the rivals are perfect and rational calculators
and to choose his own best response to them. In other words, you should play to
your opponents’ capabilities instead of their limitations.
D. Common Knowledge of Rules
We suppose that, at some level, the players have a common understanding of
the rules of the game. In a Peanuts cartoon, Lucy thought that body checking
was allowed in golf and decked Charlie Brown just as he was about to take his
swing. We do not allow this.
The qualification “at some level” is important. We saw how the rules of the
immediate game could be manipulated. But this merely admits that there is another game being played at a deeper level—namely, where the players choose
the rules of the superficial game. Then the question is whether the rules of this
deeper game are fixed. For example, in the legislative context, what are the
rules of the agenda-setting game? They may be that the committee chairs have
the power. Then how are the committees and their chairs elected? And so on.
At some basic level, the rules are fixed by the constitution, by the technology
of campaigning, or by general social norms of behavior. We ask that all players
recognize the given rules of this basic game, and that is the focus of the analysis.
Of course, that is an ideal; in practice, we may not be able to proceed to a deep
enough level of analysis.
Strictly speaking, the rules of the game consist of (1) the list of players,
(2) the strategies available to each player, (3) the payoffs of each player for all
possible combinations of strategies pursued by all the players, and (4) the assumption that each player is a rational maximizer.
Game theory cannot properly analyze a situation where one player does not
know whether another player is participating in the game, what the entire sets
of actions available to the other players are from which they can choose, what
their value systems are, or whether they are conscious maximizers of their own
payoffs. But in actual strategic interactions, some of the biggest gains are to be
made by taking advantage of the element of surprise and doing something that
your rivals never thought you capable of. Several vivid examples can be found
in historic military conflicts. For example, in 1967 Israel launched a preemptive
attack that destroyed the Egyptian air force on the ground; in 1973 it was Egypt’s
turn to spring a surprise by launching a tank attack across the Suez Canal.
It would seem, then, that the strict definition of game theory leaves out a
very important aspect of strategic behavior, but in fact matters are not that bad.
The theory can be reformulated so that each player attaches some small probability to the situation where such dramatically different strategies are available
to the other players. Of course, each player knows his own set of available strategies. Therefore, the game becomes one of asymmetric information and can be
handled by using the methods developed in Chapter 8.
The concept of common knowledge itself requires some explanation. For
some fact or situation X to be common knowledge between two people, A and
B, it is not enough for each of them separately to know X. Each should also know
that the other knows X; otherwise, for example, A might think that B does not
know X and might act under this misapprehension in the midst of a game. But
then A should also know that B knows that A knows X, and the other way around,
otherwise A might mistakenly try to exploit B’s supposed ignorance of A’s knowledge. Of course, it doesn’t even stop there. A should know that B knows that A
knows that B knows, and so on ad infinitum. Philosophers have a lot of fun exploring the fine points of this infinite regress and the intellectual paradoxes that
it can generate. For us, the general notion that the players have a common understanding of the rules of their game will suffice.
E. Equilibrium
Finally, what happens when rational players’ strategies interact? Our answer
will generally be in the framework of equilibrium. This simply means that each
player is using the strategy that is the best response to the strategies of the other
players. We will develop game-theoretic concepts of equilibrium in Chapters 3
through 7 and then use them in subsequent chapters.
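The best-response idea can be checked mechanically in a payoff table: a cell is an equilibrium if neither player can gain by deviating alone. A minimal sketch (ours), using illustrative prisoners’-dilemma payoffs:

    # Find all cells of a two-player payoff table in which each player's
    # strategy is a best response to the other's.
    def pure_equilibria(table):
        rows, cols = len(table), len(table[0])
        result = []
        for r in range(rows):
            for c in range(cols):
                p1, p2 = table[r][c]
                best_for_1 = all(table[i][c][0] <= p1 for i in range(rows))
                best_for_2 = all(table[r][j][1] <= p2 for j in range(cols))
                if best_for_1 and best_for_2:
                    result.append((r, c))
        return result

    # Rows and columns: 0 = cooperate, 1 = defect (payoffs are illustrative).
    pd = [[(3, 3), (0, 5)],
          [(5, 0), (1, 1)]]
    print(pure_equilibria(pd))  # [(1, 1)]: both defect, a bad outcome for both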
Equilibrium does not mean that things don’t change; in sequential-move
games the players’ strategies are the complete plans of action and reaction,
and the position evolves all the time as the successive moves are made and responded to. Nor does equilibrium mean that everything is for the best; the interaction of rational strategic choices by all players can lead to bad outcomes
for all, as in the prisoners’ dilemma. But we will generally find that the idea of
equilibrium is a useful descriptive tool and organizing concept for our analysis. We will consider this idea in greater detail later, in connection with specific
equilibrium concepts. We will also see how the concept of equilibrium can be
augmented or modified to remove some of its flaws and to incorporate behavior
that falls short of full calculating rationality.
Just as the rational behavior of individual players can be the result of experience in playing the game, fitting of their choices into an overall equilibrium can
come about after some plays that involve trial and error and nonequilibrium
outcomes. We will look at this matter in Chapter 5.
Defining an equilibrium is not hard, but finding an equilibrium in a particular
game—that is, solving the game—can be a lot harder. Throughout this book, we
will solve many simple games in which there are two or three players, each of them
having two or three strategies or one move each in turn. Many people believe this
to be the limit of the reach of game theory and therefore believe that the theory is
useless for the more complex games that take place in reality. That is not true.
Humans are severely limited in their speed of calculation and in their patience for performing long calculations. Therefore, humans can easily solve only
the simple games with two or three players and strategies. But computers are
very good at speedy and lengthy calculations. Many games that are far beyond
the power of human calculators are easy for computers. The level of complexity
in many games in business and politics is already within the power of computers. Even in games such as chess that are far too complex to solve completely,
computers have reached a level of ability comparable to that of the best humans; we consider chess in more detail in Chapter 3.
Computer programs for solving quite complex games exist, and more are appearing rapidly. Mathematica and similar program packages contain routines for
finding mixed-strategy equilibria in simultaneous-move games. Gambit, a National Science Foundation project led by Professors Richard D. McKelvey of the
California Institute of Technology and Andrew McLennan of the University of
Minnesota, is producing a comprehensive set of routines for finding equilibria in sequential- and simultaneous-move games, in pure and mixed strategies,
and with varying degrees of uncertainty and incomplete information. We will
refer to this project again in several places in the next several chapters. The biggest
advantage of the project is that its programs are open source and can easily be
obtained from its Web site www.gambit-project.org.
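To give a flavor of what such routines compute, here is a minimal sketch—our own illustration, not Gambit's code—of the indifference-condition calculation that finds a fully mixed equilibrium in a two-by-two simultaneous-move game, an idea developed properly in Chapter 7. The function name and payoff matrices are illustrative choices, not anything from the Gambit project:

```python
# A minimal sketch (ours, not Gambit's) of the indifference-condition method
# for finding a fully mixed equilibrium in a 2 x 2 simultaneous-move game.

def mixed_equilibrium_2x2(A, B):
    """A[i][j], B[i][j]: row and column players' payoffs when the row player
    picks row i and the column player picks column j. Returns (p, q): the
    probability of row 0 and of column 0, assuming an interior equilibrium."""
    # p makes the COLUMN player indifferent between her two columns.
    p = (B[1][1] - B[1][0]) / (B[0][0] - B[1][0] - B[0][1] + B[1][1])
    # q makes the ROW player indifferent between his two rows.
    q = (A[1][1] - A[0][1]) / (A[0][0] - A[0][1] - A[1][0] + A[1][1])
    return p, q

# Matching pennies, a zero-sum game whose only equilibrium is 50-50 mixing.
A = [[1, -1], [-1, 1]]    # row player's payoffs
B = [[-1, 1], [1, -1]]    # column player's payoffs
print(mixed_equilibrium_2x2(A, B))   # (0.5, 0.5)
```

Running the sketch on matching pennies returns (0.5, 0.5)—each player mixes fifty-fifty—which is the textbook answer for that game; production packages such as Gambit handle far larger games and many equilibrium concepts.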
Why then do we set up and solve several simple games in detail in this book?
The reason is that understanding the concepts is an important prerequisite for
making good use of the mechanical solutions that computers can deliver, and
understanding comes from doing simple cases yourself. This is exactly how you
learned and now use arithmetic. You came to understand the ideas of addition,
subtraction, multiplication, and division by doing many simple problems mentally or using paper and pencil. With this grasp of basic concepts, you can now
use calculators and computers to do far more complicated sums than you would
ever have the time or patience to do manually. But if you did not understand the
concepts, you would make errors in using calculators; for example, you might
solve 3 + 4 × 5 by grouping additions and multiplications incorrectly as (3 + 4) × 5 = 35 instead of correctly as 3 + (4 × 5) = 23.
Thus, the first step of understanding the concepts and tools is essential.
Without it, you would never learn to set up correctly the games that you ask the
computer to solve. You would not be able to inspect the solution with any feeling for whether it was reasonable, and if it was not, you would not be able to go
back to your original specification, improve it, and solve it again until the specification and the calculation correctly captured the strategic situation that you
wanted to study. Therefore, please pay serious attention to the simple examples
that we solve and the drill exercises that we ask you to solve, especially in Chapters 3 through 7.
F. Dynamics and Evolutionary Games
The theory of games based on assumptions of rationality and equilibrium has
proved very useful, but it would be a mistake to rely on it totally. When games
are played by novices who do not have the necessary experience to perform the
calculations to choose their best strategies, explicitly or implicitly, their choices,
and therefore the outcome of the game, can differ significantly from the predictions of analysis based on the concept of equilibrium.
However, we should not abandon all notions of good choice; we should recognize the fact that even poor calculators are motivated to do better for their
own sakes and will learn from experience and by observing others. We should
allow for a dynamic process in which strategies that proved to be better in previous plays of the game are more likely to be chosen in later plays.
The evolutionary approach to games does just this. It is derived from the
idea of evolution in biology. Any individual animal’s genes strongly influence its
behavior. Some behaviors succeed better in the prevailing environment, in the
sense that the animals exhibiting those behaviors are more likely to reproduce
successfully and pass their genes to their progeny. An evolutionary stable state,
relative to a given environment, is the ultimate outcome of this process over several generations.
The analogy in games would be to suppose that strategies are not chosen by
conscious rational maximizers, but instead that each player comes to the game
with a particular strategy “hardwired” or “programmed” in. The players then
confront other players who may be programmed to apply the same or different
strategies. The payoffs to all the players in such games are then obtained. The
strategies that fare better—in the sense that the players programmed to play
them get higher payoffs in the games—multiply faster, whereas the strategies that
fare worse decline. In biology, the mechanism of this growth or decay is purely
genetic transmission through reproduction. In the context of strategic games
in business and society, the mechanism is much more likely to be social or
cultural—observation and imitation, teaching and learning, greater availability
of capital for the more successful ventures, and so on.
The object of study is the dynamics of this process. Does it converge to an
evolutionary stable state? Does just one strategy prevail over all others in the
end, or can a few strategies coexist? Interestingly, in many games, the evolutionary stable limit is the same as the equilibrium that would result if the players
were consciously rational calculators. Therefore, the evolutionary approach
gives us a backdoor justification for equilibrium analysis.
The concept of evolutionary games has thus imported biological ideas
into game theory; there has been an influence in the opposite direction, too.
Biologists have recognized that significant parts of animal behavior consist of
strategic interactions with other animals. Members of a given species compete with one another for space or mates; members of different species relate
to one another as predators and prey along a food chain. The payoff in such
games in turn contributes to reproductive success and therefore to biological
evolution. Just as game theory has benefited by importing ideas from biological evolution for its analysis of choice and dynamics, biology has benefited by
importing game‑theoretic ideas of strategies and payoffs for its characterization of basic interactions between animals. We have a true instance of synergy
or symbiosis. We provide an introduction to the study of evolutionary games in
Chapter 12.
G. Observation and Experiment
All of Section 3 to this point has concerned how to think about games or how
to analyze strategic interactions. This constitutes theory. This book will cover
an extremely simple level of theory, developed through cases and illustrations
instead of formal mathematics or theorems, but it will be theory just the same.
All theory should relate to reality in two ways. Reality should help structure the
theory, and reality should provide a check on the results of the theory.
We can find out the reality of strategic interactions in two ways: (1) by observing them as they occur naturally and (2) by conducting special experiments
that help us pin down the effects of particular conditions. Both methods have
been used, and we will mention several examples of each in the proper contexts.
Many people have studied strategic interactions—the participants’ behavior
and the outcomes—under experimental conditions, in classrooms among “captive” players, or in special laboratories with volunteers. Auctions, bargaining,
prisoners’ dilemmas, and several other games have been studied in this way.
The results are a mixture. Some conclusions of the theoretical analysis are borne
out; for example, in games of buying and selling, the participants generally settle quickly on the economic equilibrium. In other contexts, the outcomes differ
significantly from the theoretical predictions; for example, prisoners’ dilemmas
and bargaining games show more cooperation than theory based on the assumption of selfish, maximizing behavior would lead us to expect, whereas auctions show some gross overbidding.
At several points in the chapters that follow, we will review the knowledge
that has been gained by observation and experiments, discuss how it relates to
the theory, and consider what reinterpretations, extensions, and modifications
of the theory have been made or should be made in light of this knowledge.
4 THE USES OF GAME THEORY
We began Chapter 1 by saying that games of strategy are everywhere—in your
personal and working life; in the functioning of the economy, society, and polity
around you; in sports and other serious pursuits; in war and in peace. This should
be motivation enough to study such games systematically, and that is what game
theory is about. But your study can be better directed if you have a clearer idea of
just how you can put game theory to use. We suggest a threefold perspective.
The first use is in explanation. Many events and outcomes prompt us to ask:
Why did that happen? When the situation requires the interaction of decision
makers with different aims, game theory often supplies the key to understanding the situation. For example, cutthroat competition in business is the result of
the rivals being trapped in a prisoners’ dilemma. At several points in the book,
we will mention actual cases where game theory helps us to understand how
and why the events unfolded as they did. This includes Chapter 14’s detailed
case study of the Cuban missile crisis from the perspective of game theory.
The other two uses evolve naturally from the first. The second is in prediction. When looking ahead to situations where multiple decision makers will
interact strategically, we can use game theory to foresee what actions they will
take and what outcomes will result. Of course, prediction for a particular context depends on its details, but we will prepare you to use prediction by analyzing several broad classes of games that arise in many applications.
The third use is in advice or prescription. We can act in the service of one participant in the future interaction and tell him which strategies are likely to yield
good results and which ones are likely to lead to disaster. Once again such work is
context specific, and we can equip you with several general principles and techniques and show you how to apply them to some general types of contexts. For
example, in Chapter 7, we will show how to mix moves; in Chapter 9, we will examine how to make your commitments, threats, and promises credible; in Chapter 10, we will examine alternative ways of overcoming prisoners’ dilemmas.
The theory is far from perfect in performing any of the three functions. To
explain an outcome, one must first have a correct understanding of the motives
and behavior of the participants. As we saw earlier, most of game theory takes a
specific approach to these matters—namely, the framework of rational choice
of individual players and the equilibrium of their interaction. Actual players and
interactions in a game might not conform to this framework. But the proof of
the pudding is in the eating. Game-theoretic analysis has greatly improved our
understanding of many phenomena, as reading this book should convince you.
The theory continues to evolve and improve as the result of ongoing research.
This book will equip you with the basics so that you can more easily learn and
profit from the new advances as they appear.
When explaining a past event, we can often use historical records to get a
good idea of the motives and the behavior of the players in the game. When attempting prediction or advice, there is the additional problem of determining
what motives will drive the players’ actions, what informational or other limitations they will face, and sometimes even who the players will be. Most important, if game-theoretic analysis assumes that the other player is a rational
maximizer of his own objectives when in fact he is unable to do the calculations
or is a clueless person acting at random, the advice based on that analysis may
prove wrong. This risk is reduced as more and more players recognize the importance of strategic interaction and think through their strategic choices or get
expert advice on the matter, but some risk remains. Even then, the systematic
thinking made possible by the framework of game theory helps keep the errors
down to this irreducible minimum, by eliminating the errors that arise from
faulty logical thinking about the strategic interaction. Also, game theory can
take into account many kinds of uncertainty and incomplete information, including that about the strategic possibilities and rationality of the opponent. We
will consider a few examples in the chapters to come.
5 THE STRUCTURE OF THE CHAPTERS TO FOLLOW
In this chapter, we introduced several considerations that arise in almost every
game in reality. To understand or predict the outcome of any game, we must
know in greater detail all of these ideas. We also introduced some basic concepts
that will prove useful in such analysis. However, trying to cope with all of the
concepts at once merely leads to confusion and a failure to grasp any of them.
Therefore, we will build up the theory one concept at a time. We will develop the
appropriate technique for analyzing that concept and illustrate it.
In the first group of chapters, from Chapters 3 to 7, we will construct and
illustrate the most important of these concepts and techniques. We will examine purely sequential-move games in Chapter 3 and introduce the techniques—
game trees and rollback reasoning—that are used to analyze and solve such
games. In Chapters 4 and 5, we will turn to games with simultaneous moves and
develop for them another set of concepts—payoff tables, dominance, and Nash
equilibrium. Both chapters will focus on games where players use pure strategies; in Chapter 4, we will restrict players to a finite set of pure strategies, and in
Chapter 5, we will allow strategies that are continuous variables. Chapter 5 will
also examine some mixed empirical evidence and conceptual criticisms and
counterarguments on Nash equilibrium, and a prominent alternative to Nash
equilibrium—namely, rationalizability. In Chapter 6, we will show how games
that have some sequential moves and some simultaneous moves can be studied
by combining the techniques developed in Chapters 3 through 5. In Chapter 7,
we will turn to simultaneous-move games that require the use of randomization
or mixed strategies. We will start by introducing the basic ideas about mixing in
two-by-two games, develop the simplest techniques for finding mixed-strategy
Nash equilibria, and then consider more complex examples along with the empirical evidence on mixing.
The ideas and techniques developed in Chapters 3 through 7 are the most
basic ones: (1) correct forward-looking reasoning for sequential-move games,
and (2) equilibrium strategies—pure and mixed—for simultaneous-move
games. Equipped with these concepts and tools, we can apply them to study
some broad classes of games and strategies in Chapters 8 through 12.
Chapter 8 studies what happens in games when players are subject to uncertainty or when they have asymmetric information. We will examine strategies for coping with risk and even for using risk strategically. We will also study
the important strategies of signaling and screening that are used for manipulating and eliciting information. We will develop the appropriate generalization of Nash equilibrium in the context of uncertainty, namely Bayesian Nash
equilibrium, and show the different kinds of equilibria that can arise. In Chapter 9, we will continue to examine the role of player manipulation in games as we
consider strategies that players use to manipulate the rules of a game, by seizing a
first‑mover advantage and making a strategic move. Such moves are of three
kinds—commitments, threats, and promises. In each case, credibility is essential
to the success of the move, and we will outline some ways of making such moves
credible.
In Chapter 10, we will move on to study the best-known game of them
all—the prisoners’ dilemma. We will study whether and how cooperation can
be sustained, most importantly in a repeated or ongoing relationship. Then, in
Chapter 11, we will turn to situations where large populations, rather than pairs
or small groups of players, interact strategically, games that concern problems
of collective action. Each person’s actions have an effect—in some instances
beneficial, in others, harmful—on the others. The outcomes are generally not
the best from the aggregate perspective of the society as a whole. We will clarify
the nature of these outcomes and describe some simple policies that can lead
to better outcomes.
All these theories and applications are based on the supposition that the
players in a game fully understand the nature of the game and deploy calculated strategies that best serve their objectives in the game. Such rationally
optimal behavior is sometimes too demanding of information and calculating
power to be believable as a good description of how people really act. Therefore, Chapter 12 will look at games from a very different perspective. Here,
the players are not calculating and do not pursue optimal strategies. Instead,
each player is tied, as if genetically preordained, to a particular strategy. The
population is diverse, and different players have different predetermined
strategies. When players from such a population meet and act out their strategies, which strategies perform better? And if the more successful strategies
proliferate better in the population, whether through inheritance or imitation, then what will the eventual structure of the population look like? It turns
out that such evolutionary dynamics often favor exactly those strategies that
would be used by rational optimizing players. Thus, our study of evolutionary games lends useful indirect support to the theories of optimal strategic
choice and equilibrium that we will have studied in the previous chapters.
In the final group, Chapters 13 through 17, we will take up specific applications to situations of strategic interactions. Here, we will use as needed the
ideas and methods from all the earlier chapters. Chapter 13 uses the methods
developed in Chapter 8 to analyze the strategies that people and firms have to
use when dealing with others who have some private information. We will illustrate the screening mechanisms that are used for eliciting information—for
example, the multiple fares with different restrictions that airlines use for separating the business travelers who are willing to pay more from the tourists who are
more price sensitive. We will also develop the methods for designing incentive
payments to elicit effort from workers when direct monitoring is difficult or too
costly. Chapter 14 then applies the ideas from Chapter 9 to examine a particularly interesting dynamic version of a threat, known as the strategy of brinkmanship. We will elucidate its nature and apply the idea to study the Cuban missile
crisis of 1962. Chapter 15 is about voting in committees and elections. We will
look at the variety of voting rules available and some paradoxical results that
can arise. In addition, we will address the potential for strategic behavior not
only by voters but also by candidates in a variety of election types.
Chapters 16 and 17 will look at mechanisms for the allocation of valuable
economic resources: Chapter 16 will treat auctions and Chapter 17 will consider
bargaining processes. In our discussion of auctions, we will emphasize the roles
of information and attitudes toward risk in the formulation of optimal strategies
for both bidders and sellers. We will also take the opportunity to apply the theory to the newest type of auctions, those that take place online. Finally, Chapter
17 will present bargaining in both cooperative and noncooperative settings.
All of these chapters together provide a lot of material; how might readers
or teachers with more specialized interests choose from it? Chapters 3 through
7 constitute the core theoretical ideas that are needed throughout the rest of the
book. Chapters 9 and 10 are likewise important for the general classes of games
and strategies considered therein. Beyond that, there is a lot from which to pick
and choose. Section 1 of Chapter 5, Section 7 of Chapter 7, Section 5 of Chapter
10, and Section 7 of Chapter 12, for example, all consider more advanced topics. These sections will appeal to those with more scientific and quantitative
backgrounds and interests, but those who come from the social sciences or humanities and have less quantitative background can omit them without loss of
continuity. Chapter 8 deals with an important topic in that most games in practice have incomplete and asymmetric information, and the players’ attempts
to manipulate information are a critical aspect of many strategic interactions.
However, the concepts and techniques for analyzing information games are inherently somewhat more complex. Therefore, some readers and teachers may
choose to study just the examples that convey the basic ideas of signaling and
screening and leave out the rest. We have placed this chapter early in Part Three,
however, in view of the importance of the subject. Chapters 9 and 10 are key to
understanding many phenomena in the real world, and most teachers will want
to include them in their courses, but Section 5 of Chapter 10 is mathematically a
little more advanced and can be omitted. Chapters 11 and 12 both look at games
with large numbers of players. In Chapter 11, the focus is on social interactions;
in Chapter 12, the focus is on evolutionary biology. The topics in Chapter 12
will be of greatest interest to those with interests in biology, but similar themes
are emerging in the social sciences, and students from that background should
aim to get the gist of the ideas even if they skip the details. Chapter 13 is most
important for students of business and organization theories. Chapters 14 and
15 present topics from political science (international diplomacy and elections,
respectively), and Chapters 16 and 17 cover topics from economics (auctions
and bargaining). Those teaching courses with more specialized audiences may
choose a subset from Chapters 11 through 17, and indeed expand on the ideas
considered therein.
Whether you come from mathematics, biology, economics, politics, other
sciences, or from history or sociology, the theory and examples of strategic
games will stimulate and challenge your intellect. We urge you to enjoy the subject even as you are studying or teaching it.
SUMMARY
Strategic game situations are distinguished from individual decision-making
situations by the presence of significant interactions among the players. Games
can be classified according to a variety of categories including the timing of play,
the common or conflicting interests of players, the number of times an interaction occurs, the amount of information available to the players, the type of rules,
and the feasibility of coordinated action.
Learning the terminology for a game’s structure is crucial for analysis. Players have strategies that lead to different outcomes with different associated payoffs.
Payoffs incorporate everything that is important to a player about a game and are
calculated by using probabilistic averages or expectations if outcomes are random
or include some risk. Rationality, or consistent behavior, is assumed of all players, who must also be aware of all of the relevant rules of conduct. Equilibrium
arises when all players use strategies that are best responses to others’ strategies;
some classes of games allow learning from experience and the study of dynamic
movements toward equilibrium. The study of behavior in actual game situations
provides additional information about the performance of the theory.
Game theory may be used for explanation, prediction, or prescription in
various circumstances. Although not perfect in any of these roles, the theory
continues to evolve; the importance of strategic interaction and strategic thinking has also become more widely understood and accepted.
KEY TERMS
asymmetric information (23)
constant-sum game (21)
cooperative game (26)
decision (18)
equilibrium (32)
evolutionary game (34)
expected payoff (29)
external uncertainty (23)
Note: The number in parentheses after each key term is the page on which that term is defined or discussed.
game (18)
imperfect information (23)
noncooperative game (26)
payoff (28)
perfect information (23)
rational behavior (30)
screening (24)
screening device (24)
sequential moves (20)
signal (24)
signaling (24)
simultaneous moves (20)
strategic game (18)
strategic uncertainty (23)
strategy (27)
zero-sum game (21)
SOLVED EXERCISES
S1. Determine which of the following situations describe games and which
describe decisions. In each case, indicate what specific features of the situation caused you to classify it as you did.
(a) A group of grocery shoppers in the dairy section, with each shopper
choosing a flavor of yogurt to purchase
(b) A pair of teenage girls choosing dresses for their prom
(c) A college student considering what type of postgraduate education
to pursue
(d) The New York Times and the Wall Street Journal choosing the prices
for their online subscriptions this year
(e) A presidential candidate picking a running mate
S2. Consider the strategic games described below. In each case, state how
you would classify the game according to the six dimensions outlined in
the text. (i) Are moves sequential or simultaneous? (ii) Is the game zero-sum or not? (iii) Is the game repeated? (iv) Is there imperfect information, and if so, is there incomplete (asymmetric) information? (v) Are the
rules fixed or not? (vi) Are cooperative agreements possible or not? If you
do not have enough information to classify a game in a particular dimension, explain why not.
(a) Rock-Paper-Scissors : On the count of three, each player makes the
shape of one of the three items with his hand. Rock beats Scissors,
Scissors beats Paper, and Paper beats Rock.
(b) Roll-call voting: Voters cast their votes orally as their names are
called. The choice with the most votes wins.
(c) Sealed-bid auction: Bidders on a bottle of wine seal their bids in envelopes. The highest bidder wins the item and pays the amount of
his bid.
Note to Students: The solutions to the Solved Exercises are found on the Web site wwnorton.com/books/games_of_strategy, which is free and open to all.
S3. “A game player would never prefer an outcome in which every player
gets a little profit to an outcome in which he gets all the available profit.”
Is this statement true or false? Explain why in one or two sentences.
S4. You and a rival are engaged in a game in which there are three possible
outcomes: you win, your rival wins (you lose), or the two of you tie. You
get a payoff of 50 if you win, a payoff of 20 if you tie, and a payoff of 0 if
you lose. What is your expected payoff in each of the following situations?
(a) There is a 50% chance that the game ends in a tie, but only a 10%
chance that you win. (There is thus a 40% chance that you lose.)
(b) There is a 50–50 chance that you win or lose. There are no ties.
(c) There is an 80% chance that you lose, a 10% chance that you win,
and a 10% chance that you tie.
S5. Explain the difference between game theory’s use as a predictive tool and
its use as a prescriptive tool. In what types of real‑world settings might
these two uses be most important?
UNSOLVED EXERCISES
U1. Determine which of the following situations describe games and which
describe decisions. In each case, indicate what specific features of the situation caused you to classify it as you did.
(a) A party nominee for president of the United States must choose
whether to use private financing or public financing for her
campaign.
(b) Frugal Fred receives a $20 gift card for downloadable music and
must choose whether to purchase individual songs or whole albums.
(c) Beautiful Belle receives 100 replies to her online dating profile and
must choose whether to reply to each of them.
(d) NBC chooses how to distribute its television shows online this season. The executives consider Amazon.com, iTunes, and/or NBC.com. The fee they might pay to Amazon or to iTunes is open to
negotiation.
(e) China chooses a level of tariffs to apply to American imports.
U2. Consider the strategic games described below. In each case, state how
you would classify the game according to the six dimensions outlined in
the text. (i) Are moves sequential or simultaneous? (ii) Is the game zero-sum or not? (iii) Is the game repeated? (iv) Is there imperfect information, and if so, is there incomplete (asymmetric) information? (v) Are the
rules fixed or not? (vi) Are cooperative agreements possible or not? If you
do not have enough information to classify a game in a particular dimension, explain why not.
(a) Garry and Ross are sales representatives for the same company.
Their manager informs them that of the two of them, whoever sells
more this year wins a Cadillac.
(b) On the game show The Price Is Right, four contestants are asked
to guess the price of a television set. Play starts with the leftmost
player, and each player’s guess must be different from the guesses of
the previous players. The person who comes closest to the real price,
without going over it, wins the television set.
(c) Six thousand players each pay $10,000 to enter the World Series of
Poker. Each starts the tournament with $10,000 in chips, and they
play No-Limit Texas Hold ’Em (a type of poker) until someone wins
all the chips. The top 600 players each receive prize money according to the order of finish, with the winner receiving more than
$8,000,000.
(d) Passengers on Desert Airlines are not assigned seats; passengers
choose seats once they board. The airline assigns the order of boarding according to the time the passenger checks in, either on the Web
site up to 24 hours before takeoff or in person at the airport.
U3. “Any gain by the winner must harm the loser.” Is this statement true or
false? Explain your reasoning in one or two sentences.
U4. Alice, Bob, and Confucius are bored during recess, so they decide to play
a new game. Each of them puts a dollar in the pot, and each tosses a
quarter. Alice wins if the coins land all heads or all tails. Bob wins if two
heads and one tail land, and Confucius wins if one head and two tails
land. The quarters are fair, and the winner receives a net payment of
$2 ($3 − $1 = $2), and the losers lose their $1.
(a) What is the probability that Alice will win and the probability that
she will lose?
(b) What is Alice’s expected payoff?
(c) What is the probability that Confucius will win and the probability
that he will lose?
(d) What is Confucius’ expected payoff?
(e) Is this a zero‑sum game? Please explain your answer.
U5. “When one player surprises another, this indicates that the players did
not have common knowledge of the rules.” Give an example that illustrates this statement, and give a counterexample that shows that the
statement is not always true.
Part Two
Concepts and Techniques
3
Games with Sequential Moves
Sequential-move games entail strategic situations in which there is a strict
order of play. Players take turns making their moves, and they know what
the players who have gone before them have done. To play well in such
a game, participants must use a particular type of interactive thinking.
Each player must consider how her opponent will respond if she makes a particular move. Whenever actions are taken, players need to think about how their
current actions will influence future actions, both for their rivals and for themselves. Players thus decide their current moves on the basis of calculations of
future consequences.
Most actual games combine aspects of both sequential- and simultaneous‑move
situations. But the concepts and methods of analysis are more easily understood
if they are first developed separately for the two pure cases. Therefore, in this
chapter, we study purely sequential games. Chapters 4 and 5 deal with purely simultaneous games, and Chapter 6 and parts of Chapter 7 show how to combine
the two types of analysis in more realistic mixed situations. The analysis presented here can be used whenever a game includes sequential decision making. Analysis of sequential games also provides information about when it is to
a player’s advantage to move first and when it is better to move second. Players
can then devise ways, called strategic moves, to manipulate the order of play to
their advantage. The analysis of such moves is the focus of Chapter 9.
1 GAME TREES
We begin by developing a graphical technique for displaying and analyzing
sequential-move games, called a game tree. This tree is referred to as the extensive form of a game. It shows all the component parts of the game that we introduced in Chapter 2: players, actions, and payoffs.
You have probably come across decision trees in other contexts. Such trees
show all the successive decision points, or nodes, for a single decision maker in a
neutral environment. Decision trees also include branches corresponding to the
available choices emerging from each node. Game trees are just joint decision trees
for all of the players in a game. The trees illustrate all of the possible actions that can
be taken by all of the players and indicate all of the possible outcomes of the game.
A. Nodes, Branches, and Paths of Play
Figure 3.1 shows the tree for a particular sequential game. We do not supply a
story for this game, because we want to omit circumstantial details and help
you focus on general concepts. Our game has four players: Ann, Bob, Chris, and
Deb. The rules of the game give the first move to Ann; this is shown at the leftmost
point, or node, which is called the initial node or root of the game tree. At this
node, which may also be called an action node or decision node, Ann has two
choices available to her. Ann’s possible choices are labeled “Stop” and “Go” (remember that these labels are abstract and have no necessary significance) and
are shown as branches emerging from the initial node.
If Ann chooses “Stop,” then it will be Bob’s turn to move. At his action node,
he has three available choices labeled 1, 2, and 3. If Ann chooses “Go,” then Chris
gets the next move, with choices “Risky” and “Safe.” Other nodes and branches
follow successively, and rather than list them all in words, we draw your attention to a few prominent features.
If Ann chooses “Stop” and then Bob chooses 1, Ann gets another turn, with
new choices, “Up” and “Down.” It is quite common in actual sequential-move
games for a player to get to move several times and to have her available moves
differ at different turns. In chess, for example, two players make alternate moves;
each move changes the board and therefore the available moves are changed at
subsequent turns.
B. Uncertainty and “Nature’s Moves”
If Ann chooses “Go” and then Chris chooses “Risky,” something happens at
random—a fair coin is tossed and the outcome of the game is determined by
whether that coin comes up “heads” or “tails.” This aspect of the game is an
[Figure 3.1 An Illustrative Game Tree. Ann moves at the root (initial node), choosing “Stop” or “Go.” “Stop” leads to Bob, who chooses among 1, 2, and 3: after 1, Ann moves again (“Up,” payoffs (2, 7, 4, 1), or “Down,” payoffs (1, –2, 3, 0)); after 2, Deb moves (“High,” payoffs (1.3, 2, –11, 3), or “Low,” payoffs (0, –2.718, 0, 0)); 3 leads directly to a terminal node with payoffs (10, 7, 1, 1). “Go” leads to Chris, who chooses “Risky” or “Safe”: “Risky” leads to Nature, who picks “Good” (50%, payoffs (6, 3, 4, 0)) or “Bad” (50%, payoffs (2, 8, –1, 2)); “Safe” leads to a terminal node with payoffs (3, 5, 3, 1). Payoffs are listed in the order (Ann, Bob, Chris, Deb).]
example of external uncertainty and is handled in the tree by introducing an
outside player called “Nature.” Control over the random event is ceded to the
player known as Nature, who chooses, as it were, one of two branches, each with
50% probability. The probabilities here are fixed by the type of random event, a
coin toss, but could vary in other circumstances; for example, with the throw of
a die, Nature could specify six possible outcomes, each with 16⅔% probability.
Use of the player Nature allows us to introduce external uncertainty in a game
and gives us a mechanism to allow things to happen that are outside the control
of any of the actual players.
You can trace a number of different paths through the game tree by following successive branches. In Figure 3.1, each path leads you to an end point of
the game after a finite number of moves. An end point is not a necessary feature
of all games; some may in principle go on forever. But most applications that we
will consider are finite games.
C. Outcomes and Payoffs
At the last node along each path, called a terminal node, no player has another
move. (Note that terminal nodes are thus distinguished from action nodes.) Instead, we show the outcome of that particular sequence of actions, as measured
by the payoffs for the players. For our four players, we list the payoffs in order
(Ann, Bob, Chris, Deb). It is important to specify which payoff belongs to which
player. The usual convention is to list payoffs in the order in which the players
make the moves. But this method may sometimes be ambiguous; in our example, it is not clear whether Bob or Chris should be said to have the second move.
Thus, we have used alphabetical order. Further, we have color-coded everything so that Ann’s name, choices, and payoffs are all in black; Bob’s in dark blue;
Chris’s in gray; and Deb’s in light blue. When drawing trees for any games that
you analyze, you can choose any specific convention you like, but you should
state and explain it clearly for the reader.
The payoffs are numerical, and generally for each player a higher number
means a better outcome. Thus, for Ann, the outcome of the bottommost path
(payoff 3) is better than that of the topmost path (payoff 2) in Figure 3.1. But
there is no necessary comparability across players. Thus there is no necessary
sense in which, at the end of the topmost path, Bob (payoff 7) does better than
Ann (payoff 2). Sometimes, if payoffs are dollar amounts, for example, such interpersonal comparisons may be meaningful.
Players use information about payoffs when deciding among the various
actions available to them. The inclusion of a random event (a choice made by
Nature) means that players need to determine what they get on average when
Nature moves. For example, if Ann chooses “Go” at the game’s first move, Chris
may then choose “Risky,” giving rise to the coin toss and Nature’s “choice” of
“Good” or “Bad.” In this situation, Ann could anticipate a payoff of 6 half the
time and a payoff of 2 half the time, or a statistical average or expected payoff of
4 = (0.5 × 6) + (0.5 × 2).
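For readers who like to check such computations in code, here is a one-function sketch of the expected-payoff calculation; the helper name is our own illustrative choice:

```python
# Expected payoff: the probability-weighted average over Nature's moves.
def expected_payoff(lottery):
    """lottery: (probability, payoff) pairs whose probabilities sum to 1."""
    return sum(prob * payoff for prob, payoff in lottery)

# Ann's view of "Go" followed by Chris's "Risky" in Figure 3.1: payoff 6 if
# Nature picks Good, payoff 2 if Bad, each with probability 0.5.
print(expected_payoff([(0.5, 6), (0.5, 2)]))   # 4.0
```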
D. Strategies
Finally, we use the tree in Figure 3.1 to explain the concept of a strategy. A single action taken by a player at a node is called a move. But players can, do, and
should make plans for the succession of moves that they expect to make in all of
the various eventualities that might arise in the course of a game. Such a plan of
action is called a strategy.
In this tree, Bob, Chris, and Deb each get to move at most once; Chris, for example, gets a move only if Ann chooses “Go” on her first move. For them, there is
no distinction between a move and a strategy. We can qualify the move by specifying the contingency in which it gets made; thus, a strategy for Bob might be,
“Choose 1 if Ann has chosen Stop.” But Ann has two opportunities to move, so
her strategy needs a fuller specification. One strategy for her is, “Choose Stop,
and then if Bob chooses 1, choose Down.”
In more complex games such as chess, where there are long sequences of
moves with many choices available at each, descriptions of strategies get very
complicated; we consider this aspect in more detail later in this chapter. But the
general principle for constructing strategies is simple, except for one peculiarity.
If Ann chooses “Go” on her first move, she never gets to make a second move.
Should a strategy in which she chooses “Go” also specify what she would do in
the hypothetical case in which she somehow found herself at the node of her
second move? Your first instinct may be to say no, but formal game theory says
yes, and for two reasons.
First, Ann’s choice of “Go” at the first move may be influenced by her consideration of what she would have to do at her second move if she were to choose
“Stop” originally instead. For example, if she chooses “Stop,” Bob may then
choose 1; then Ann gets a second move, and her best choice would be “Up,” giving her a payoff of 2. If she chooses “Go” on her first move, Chris would choose
“Safe” (because his payoff of 3 from “Safe” is better than his expected payoff
of 1.5 from “Risky”), and that outcome would yield Ann a payoff of 3. To make
this thought process clearer, we state Ann’s strategy as, “Choose ‘Go’ at the first
move, and choose ‘Up’ if the next move arises.”
The second reason for this seemingly pedantic specification of strategies
has to do with the stability of equilibrium. When considering stability, we ask
what would happen if players’ choices were subjected to small disturbances.
One such disturbance is that players make small mistakes. If choices are made
by pressing a key, for example, Ann may intend to press the “Go” key, but there
is a small probability that her hand may tremble and she may press the “Stop”
key instead. In such a setting, it is important to specify how Ann will follow up
when she discovers her error because Bob chooses 1 and it is Ann’s turn to move
again. More advanced levels of game theory require such stability analyses, and
we want to prepare you for that by insisting on your specifying strategies as such
complete plans of action right from the beginning.
E. Tree Construction
Now we sum up the general concepts illustrated by the tree of Figure 3.1. Game
trees consist of nodes and branches. Nodes are connected to one another by the
branches and come in two types. The first node type is called a decision node.
Each decision node is associated with the player who chooses an action at that
node; every tree has one decision node that is the game’s initial node, the starting point of the game. The second type of node is called a terminal node. Each
terminal node has associated with it a set of outcomes for the players taking part
in the game; these outcomes are the payoffs received by each player if the game
has followed the branches that lead to this particular terminal node.
The branches of a game tree represent the possible actions that can be taken
from any decision node. Each branch leads from a decision node on the tree either to another decision node, generally for a different player, or to a terminal
node. The tree must account for all of the possible choices that could be made
by a player at each node; so some game trees include branches associated with
the choice “Do nothing.” There must be at least one branch leading from each
decision node, but there is no maximum. Every decision node can have only one
branch leading to it, however.
Game trees are often drawn from left to right across a page. However, game
trees can be drawn in any orientation that best suits the game at hand: bottom
up, sideways, top down, or even radially outward from a center. The tree is a
metaphor, and the important feature is the idea of successive branching, as decisions are made at the tree nodes.
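To make these conventions concrete, here is a minimal sketch of one way a game tree might be represented in code; the class names and layout are our own choices rather than any standard, and the example encodes the “Go” subtree of Figure 3.1, including Nature’s fifty-fifty move:

```python
# A minimal sketch of the node-and-branch structure described above; the
# class names and layout are our own choices, not a standard library.

class Terminal:
    """A terminal node: holds the payoffs, one per player, in a fixed order."""
    def __init__(self, payoffs):
        self.payoffs = payoffs

class Decision:
    """A decision node: who moves here, and a branch for each available action.
    When the mover is "Nature," probs gives each branch's probability."""
    def __init__(self, player, branches, probs=None):
        self.player = player
        self.branches = branches          # action label -> successor node
        self.probs = probs

# The "Go" subtree of Figure 3.1: Chris moves; "Risky" hands the move to
# Nature, who chooses Good or Bad with probability 0.5 each. Payoffs are
# listed in the order (Ann, Bob, Chris, Deb).
go_subtree = Decision("Chris", {
    "Risky": Decision("Nature",
                      {"Good": Terminal((6, 3, 4, 0)),
                       "Bad":  Terminal((2, 8, -1, 2))},
                      probs={"Good": 0.5, "Bad": 0.5}),
    "Safe":  Terminal((3, 5, 3, 1)),
})
```

Note how the structure enforces the rules stated above: every branch leads from a decision node to exactly one successor, and only terminal nodes carry payoffs.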
2 SOLVING GAMES BY USING TREES
We illustrate the use of trees in finding equilibrium outcomes of sequential‑move
games in a very simple context that many of you have probably confronted—
whether to smoke. This situation and many other similar one‑player strategic situations can be described as games if we recognize that future choices are made
by the player’s future self, who will be subject to different influences and will
have different views about the ideal outcome of the game.
Take, for example, a teenager named Carmen who is deciding whether to
smoke. First, she has to decide whether to try smoking at all. If she does try it,
she has the further decision of whether to continue. We illustrate this example
as a simple decision in the tree of Figure 3.2.
The nodes and the branches are labeled with Carmen’s available choices,
but we need to explain the payoffs. Choose the outcome of never smoking at
all as the standard of reference, and call its payoff 0. There is no special significance to the number 0 in this context; all that matters for comparing outcomes,
and thus for Carmen’s decision, is whether this payoff is bigger or smaller than
the others. Suppose Carmen best likes the outcome in which she tries smoking
for a while but does not continue. The reason may be that she just likes to have
experienced many things firsthand or so that she can more convincingly be able
to say “I have been there and know it to be a bad situation” when she tries in
the future to dissuade her children from smoking. Give this outcome the payoff
1. The outcome in which she tries smoking and then continues is the worst.
[Figure 3.2 The Smoking Decision. “Try” leads to a second node with choices “Continue” (payoff –1) and “Not” (payoff 1); choosing “Not” at the first node yields payoff 0.]
Leaving aside the long-term health hazards, there are immediate problems—her
hair and clothes will smell bad, and her friends will avoid her. Give this outcome
the payoff 1. Carmen’s best choice then seems clear—she should try smoking
but she should not continue.
However, this analysis ignores the problem of addiction. Once Carmen has
tried smoking for a while, she develops different tastes, as well as different payoffs. The decision of whether to continue will be made not by “Today’s Carmen”
with today’s assessment of outcomes as shown in Figure 3.2, but by “Future Carmen,” who makes a different ranking of the alternatives available in the future.
When she makes her choice today, she has to look ahead to this consequence
and factor it into her current decision, which she should make on the basis of
her current preferences. In other words, the choice problem concerning smoking is not really a decision in the sense explained in Chapter 2—a choice made
by a single person in a neutral environment—but a game in the technical sense
also explained in Chapter 2, where the other player is Carmen’s future self with
her own distinct preferences. When Today’s Carmen makes her decision, she has
to play against Future Carmen.
We convert the decision tree of Figure 3.2 into a game tree in Figure 3.3 by
distinguishing between the two players who make the choices at the two nodes.
At the initial node, Today’s Carmen decides whether to try smoking. If her decision is to try, then the addicted Future Carmen comes into being and chooses
whether to continue. We show the healthy, nonpolluting Today’s Carmen, her
actions, and her payoffs in blue and the addicted Future Carmen, her actions,
and her payoffs in black, the color that her lungs have become. The payoffs of
Today’s Carmen are as before. But Future Carmen will enjoy continuing to
smoke and will suffer terrible withdrawal symptoms if she does not continue.
Let Future Carmen’s payoff from “Continue” be 1 and that from “Not” be −1.
Given the preferences of the addicted Future Carmen, she will choose
“Continue” at her decision node. Today’s Carmen should look ahead to this
[Figure 3.3 The Smoking Game. Today’s Carmen chooses “Try” or “Not” (payoff 0); if she tries, Future Carmen chooses “Continue,” with payoffs (−1, 1), or “Not,” with payoffs (1, −1), listed in the order (Today’s Carmen, Future Carmen).]
prospect and fold it into her current decision, recognizing that the choice to
try smoking will inevitably lead to continuing to smoke. Even though Today’s
Carmen does not want to continue to smoke in the future, given her preferences
today, she will not be able to implement her currently preferred choice at the
future time because Future Carmen, who has different preferences, will make
that choice. So Today’s Carmen should foresee that the choice “Try” will lead to
“Continue” and get her the payoff −1 as judged by her today, whereas the choice
“Not” will get her the payoff 0. So she should choose the latter.
This argument is shown more formally and with greater visual effect in Figure 3.4. In Figure 3.4a, we cut off, or prune, the branch “Not” emerging from the
second node. This pruning corresponds to the fact that Future Carmen, who
makes the choice at that node, will not choose the action associated with that
branch, given her preferences as shown in black.
The tree that remains has two branches emerging from the first node where
Today’s Carmen makes her choice; each of these branches now leads directly to
a terminal node. The pruning allows Today’s Carmen to forecast completely the
eventual consequence of each of her choices. “Try” will be followed by “Continue” and yield a payoff −1, as measured in the preferences of Today’s Carmen,
while “Not” will yield 0. Carmen’s choice today should then be “Not” rather than
“Try.” Therefore, we can prune the “Try” branch emerging from the first node
(along with its foreseeable continuation). This pruning is done in Figure 3.4b.
The tree shown there is now “fully pruned,” leaving only one branch emerging
from the initial node and leading to a terminal node. Following the only remaining path through the tree shows what will happen in the game when all players
make their best choices with correct forecasting of all future consequences.
In pruning the tree in Figure 3.4, we crossed out the branches not chosen.
Another equivalent but alternative way of showing player choices is to “highlight”
the branches that are chosen. To do so, you can place check marks or arrowheads on these branches or show them as thicker lines. Any one method will
do; Figure 3.5 shows them all. You can choose whether to prune or to highlight,
[Figure 3.4 Pruning the Tree of the Smoking Game. Panel (a), pruning at the second node: Future Carmen’s “Not” branch is cut. Panel (b), full pruning: Today’s Carmen’s “Try” branch (with its foreseeable continuation) is also cut, leaving only “Not.”]
[Figure 3.5 Showing Branch Selection on the Tree of the Smoking Game. The chosen branches—“Continue” for Future Carmen and “Not” for Today’s Carmen—are marked with check marks, arrowheads, and thicker lines.]
but the latter, especially in its arrowhead form, has some advantages. First, it
produces a cleaner picture. Second, the mess of the pruning picture sometimes
does not clearly show the order in which various branches were cut. For example, in Figure 3.4b, a reader may get confused and incorrectly think that the
“Continue” branch at the second node was cut first and that the “Try” branch at
the first node followed by the “Not” branch at the second node were cut next. Finally, and most important, the arrowheads show the outcome of the sequence of
optimal choices most visibly as a continuous link of arrows from the initial node
to a terminal node. Therefore, in subsequent diagrams of this type, we generally
use arrows instead of pruning. When you draw game trees, you should practice
showing both methods for a while; when you are comfortable with trees, you
can choose either to suit your taste.
No matter how you display your thinking in a game tree, the logic of the
analysis is the same and is important. You must start your analysis by considering those action nodes that lead directly to terminal nodes. The optimal choices
for a player moving at such a node can be found immediately by comparing
her payoffs at the relevant terminal nodes. With the use of these end-of-game
choices to forecast consequences of earlier actions, the choices at nodes just
preceding the final decision nodes can be determined. Then the same can be
done for the nodes before them, and so on. By working backward along the tree
in this way, you can solve the whole game.
This method of looking ahead and reasoning back to determine behavior in
sequential-move games is known as rollback. As the name suggests, using rollback
requires starting to think about what will happen at all the terminal nodes and literally “rolling back” through the tree to the initial node as you do your analysis. Because this reasoning requires working backward one step at a time, the method is
also called backward induction. We use the term rollback because it is simpler and
becoming more widely used, but other sources on game theory will use the older
term backward induction. Just remember that the two are equivalent.
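The rollback procedure just described is mechanical enough to write as a short program. The sketch below is our own illustration, with the tree encoded as nested tuples, applied to the smoking game of Figure 3.3; Future Carmen’s payoff on the “Not” path is a placeholder of our own, since the text records only Today’s Carmen’s payoff of 0 there:

```python
# Rollback (backward induction): solve every subtree first, then keep, at
# each decision node, the branch giving the moving player the highest payoff.

def rollback(node):
    """node: a payoff tuple (terminal) or (player, branches), where branches
    maps an action label to a successor node and player indexes the payoffs.
    Returns (payoffs, path): the equilibrium payoffs and the moves made."""
    if isinstance(node, tuple) and not isinstance(node[1], dict):
        return node, []                       # terminal node: payoffs as given
    player, branches = node
    best_action, best_payoffs, best_path = None, None, None
    for action, successor in branches.items():
        payoffs, path = rollback(successor)   # solve the subtree first
        if best_payoffs is None or payoffs[player] > best_payoffs[player]:
            best_action, best_payoffs, best_path = action, payoffs, path
    return best_payoffs, [(player, best_action)] + best_path

# Figure 3.3: player 0 = Today's Carmen, player 1 = Future Carmen; payoffs
# are (Today's Carmen, Future Carmen). Future Carmen's 0 on the "Not" path
# is a placeholder, since she never comes into being there.
smoking_game = (0, {"Try": (1, {"Continue": (-1, 1), "Not": (1, -1)}),
                    "Not": (0, 0)})
print(rollback(smoking_game))   # ((0, 0), [(0, 'Not')])
```

Note that the sketch reports only the moves along the equilibrium path; a full rollback equilibrium also specifies Future Carmen’s choice of “Continue” at her unreached node, a point taken up just below.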
When all players do rollback analysis to choose their optimal strategies, we
call this set of strategies the rollback equilibrium of the game; the outcome that
arises from playing these strategies is the rollback equilibrium outcome. More
advanced game theory texts refer to this concept as subgame perfect equilibrium, and your instructor may prefer to use that term. We provide more formal
explanation and analysis of subgame perfect equilibrium in Chapter 6, but we
generally prefer the simpler and more intuitive term rollback equilibrium. Game
theory predicts this outcome as the equilibrium of a sequential game in which
all players are rational calculators in pursuit of their respective best payoffs.
Later in this chapter, we will address how well this prediction is borne out in
practice. For now, you should know that all finite sequential-move games presented in this book have at least one rollback equilibrium. In fact, most have exactly one. Only in exceptional cases where a player gets equal payoffs from two
or more different sets of moves, and is therefore indifferent between them, will
games have more than one rollback equilibrium.
In the smoking game, the rollback equilibrium is where Today’s Carmen
chooses the strategy “Not” and Future Carmen chooses the strategy “Continue.”
When Today’s Carmen takes her optimal action, the addicted Future Carmen
does not come into being at all and therefore gets no actual opportunity to
move. But Future Carmen’s shadowy presence and the strategy that she would
choose if Today’s Carmen chose “Try” and gave her an opportunity to move are
important parts of the game. In fact, they are instrumental in determining the
optimal move for Today’s Carmen.
We introduced the ideas of the game tree and rollback analysis in a very
simple example, where the solution was obvious from verbal argument. Now we
proceed to use the ideas in successively more complex situations, where verbal
analysis becomes harder to conduct and the visual analysis with the use of the
tree becomes more important.
3 ADDING MORE PLAYERS
The techniques developed in Section 2 in the simplest setting of two players
and two moves can be readily extended. The trees get more complex, with more
branches, nodes, and levels, but the basic concepts and the method of rollback
remain unchanged. In this section, we consider a game with three players, each
of whom has two choices; with slight variations, this game reappears in many
subsequent chapters.
The three players, Emily, Nina, and Talia, all live on the same small street.
Each has been asked to contribute toward the creation of a flower garden where
their small street intersects with the main highway. The ultimate size and splendor of the garden depends on how many of them contribute. Furthermore, although each player is happy to have the garden—and happier as its size and
splendor increase—each is reluctant to contribute because of the cost that she
must incur to do so.
Suppose that, if two or all three contribute, there will be sufficient resources
for the initial planting and subsequent maintenance of the garden; it will then
be quite attractive and pleasant. However, if one or none contribute, it will be
too sparse and poorly maintained to be pleasant. From each player’s perspective, there are thus four distinguishable outcomes:
• She does not contribute, but both of the others do (resulting in a pleasant
garden and saving the cost of her own contribution).
• She contributes, and one or both of the others do as well (resulting in a
pleasant garden, but incurring the cost of her own contribution).
• She does not contribute, and only one or neither of the others does (resulting in a sparse garden, but saving the cost of her own contribution).
• She contributes, but neither of the others does (resulting in a sparse garden and incurring the cost of her own contribution).
Of these outcomes, the one listed at the top is clearly the best and the one
listed at the bottom is clearly the worst. We want higher payoff numbers to indicate outcomes that are more highly regarded, so we give the top outcome
the payoff 4 and the bottom one the payoff 1. (Sometimes payoffs are associated with an outcome’s rank order, so, with four outcomes, 1 would be best and
4 worst, and smaller numbers would denote more preferred outcomes. When
reading, you should carefully note which convention the author is using; when
writing, you should carefully state which convention you are using.)
There is some ambiguity about the two middle outcomes. Let us suppose
that each player regards a pleasant garden more highly than her own contribution. Then the outcome listed second gets payoff 3, and the outcome listed third
gets payoff 2.
Suppose the players move sequentially. Emily has the first move, and
chooses whether to contribute. Then, after observing what Emily has chosen,
Nina chooses between contributing and not contributing. Finally, having observed what Emily and Nina have chosen, Talia makes a similar choice.1
Figure 3.6 shows the tree for this game. We have labeled the action nodes for
easy reference. Emily moves at the initial node, a, and the branches corresponding
to her two choices, Contribute and Don’t, respectively, lead to nodes b and c. At
each of these nodes, Nina gets to move and to choose between Contribute and
Don’t. Her choices lead to nodes d, e, f, and g, at each of which Talia gets to move.
Her choices lead to eight terminal nodes, where we show the payoffs in order
(Emily, Nina, Talia).2 For example, if Emily contributes, then Nina does not, and
finally Talia does, then the garden is pleasant, and the two contributors each get
payoffs 3, while the noncontributor gets her top outcome with payoff 4; in this
case, the payoff list is (3, 4, 3).
To apply rollback analysis to this game, we begin with the action nodes
that come immediately before the terminal nodes—namely, d, e, f, and g. Talia
moves at each of these nodes. At d, she faces the situation where both Emily and
Nina have contributed. The garden is already assured to be pleasant; so, if Talia
chooses Don’t, she gets her best outcome, 4, whereas, if she chooses Contribute, she gets the next best, 3. Her preferred choice at this node is Don’t. We show
1 In later chapters, we vary the rules of this game—the order of moves and payoffs—and examine how such variation changes the outcomes.
2 Recall from the discussion of the general tree in Section 1 that the usual convention for sequential-move games is to list payoffs in the order in which the players move; however, in case of ambiguity or simply for clarity, it is good practice to specify the order explicitly.
[FIGURE 3.6 The Street–Garden Game: the game tree. Emily moves at the initial node, a; Nina moves at nodes b and c; Talia moves at nodes d, e, f, and g. The eight terminal nodes show payoffs in the order (Emily, Nina, Talia)—for example, (3, 4, 3) when Emily and Talia contribute but Nina does not. Thickened branches and arrowheads mark each player's preferred choice.]
this preference both by thickening the branch for Don’t and by adding an arrowhead; either one would suffice to illustrate Talia’s choice. At node e, Emily
has contributed and Nina has not; so Talia’s contribution is crucial for a pleasant
garden. Talia gets the payoff 3 if she chooses Contribute and 2 if she chooses
Don’t. Her preferred choice at e is Contribute. You can check Talia’s choices at
the other two nodes similarly.
Now we roll back the analysis to the preceding stage—namely, nodes b and
c, where it is Nina’s turn to choose. At b, Emily has contributed. Nina’s reasoning now goes as follows: “If I choose Contribute, that will take the game to node
d, where I know that Talia will choose Don’t, and my payoff will be 3. (The garden will be pleasant, but I will have incurred the cost of my contribution.) If I
choose Don’t, the game will go to node e, where I know that Talia will choose
Contribute, and I will get a payoff of 4. (The garden will be pleasant, and I will
have saved the cost of my contribution.) Therefore I should choose Don’t.” Similar reasoning shows that at c, Nina will choose Contribute.
Finally, consider Emily’s choice at the initial node, a. She can foresee the
subsequent choices of both Nina and Talia. Emily knows that, if she chooses
Contribute, these later choices will be Don’t for Nina and Contribute for Talia.
With two contributors, the garden will be pleasant but Emily will have incurred a
cost; so her payoff will be 3. If Emily chooses Don’t, then the subsequent choices
will both be Contribute, and, with a pleasant garden and no cost of her own contribution, Emily’s payoff will be 4. So her preferred choice at a is Don’t.
The result of rollback analysis for this street–garden game is now easily
summarized. Emily will choose Don’t, then Nina will choose Contribute, and
finally Talia will choose Contribute. These choices trace a particular path of
play through the tree—along the lower branch from the initial node, a, and then
along the upper branches at each of the two subsequent nodes reached, c and
f. In Figure 3.6, the path of play is easily seen as the continuous sequence of arrowheads joined tail to tip from the initial node to the terminal node fifth from
the top of the tree. The payoffs that accrue to the players are shown at this terminal node.
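For readers who like to check such reasoning mechanically, here is a minimal Python sketch of rollback on this tree. The payoff table is transcribed from Figure 3.6; the function and variable names are our own, purely for illustration.

```python
# Rollback on the street-garden game of Figure 3.6 (a sketch).
# Payoffs are listed in the order (Emily, Nina, Talia);
# 'C' = Contribute, 'D' = Don't.
PAYOFFS = {
    ('C', 'C', 'C'): (3, 3, 3), ('C', 'C', 'D'): (3, 3, 4),
    ('C', 'D', 'C'): (3, 4, 3), ('C', 'D', 'D'): (1, 2, 2),
    ('D', 'C', 'C'): (4, 3, 3), ('D', 'C', 'D'): (2, 1, 2),
    ('D', 'D', 'C'): (2, 2, 1), ('D', 'D', 'D'): (2, 2, 2),
}

def rollback(history=()):
    """Return (payoffs, path) of the rollback equilibrium from here."""
    player = len(history)              # 0 = Emily, 1 = Nina, 2 = Talia
    if player == 3:                    # terminal node: look up payoffs
        return PAYOFFS[history], history
    # The mover compares the rolled-back results of her two choices
    # and takes the branch that maximizes her own payoff.
    return max((rollback(history + (move,)) for move in ('C', 'D')),
               key=lambda result: result[0][player])

payoffs, path = rollback()
print(path, payoffs)                   # ('D', 'C', 'C') (4, 3, 3)
```

The printed path reproduces the equilibrium found above: Emily plays Don't, and Nina and Talia both play Contribute.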
Rollback analysis is simple and appealing. Here, we emphasize some features that emerge from it. First, notice that the equilibrium path of play of a
sequential-move game misses most of the branches and nodes. Calculating the
best actions that would be taken if these other nodes were reached, however, is
an important part of determining the ultimate equilibrium. Choices made early
in the game are affected by players’ expectations of what would happen if they
chose to do something other than their best actions and by what would happen if any opposing player chose to do something other than what was best for
her. These expectations, based on predicted actions at out-of-equilibrium nodes
(nodes associated with branches pruned in the process of rollback), keep players choosing optimal actions at each node. For instance, Emily’s optimal choice
of Don’t at the first move is governed by the knowledge that, if she chose Contribute, then Nina would choose Don’t, followed by Talia choosing Contribute;
this sequence would give Emily the payoff 3, instead of the 4 that she can get by
choosing Don’t at the first move.
The rollback equilibrium gives a complete statement of all this analysis by
specifying the optimal strategy for each player. Recall that a strategy is a complete plan of action. Emily moves first and has just two choices, so her strategy
is quite simple and is effectively the same thing as her move. But Nina, moving second, acts at one of two nodes, at one if Emily has chosen Contribute and
at the other if Emily has chosen Don’t. Nina’s complete plan of action has to
specify what she would do in either case. One such plan, or strategy, might be
“choose Contribute if Emily has chosen Contribute, choose Don’t if Emily has
chosen Don’t.” We know from our rollback analysis that Nina will not choose
this strategy, but our interest at this point is in describing all the available strategies from which Nina can choose within the rules of the game. We can abbreviate and write C for Contribute and D for Don’t; then this strategy can be written
as “C if Emily chooses C so that the game is at node b, D if Emily chooses D so
that the game is at node c,” or, more simply, “C at b, D at c,” or even “CD” if
the circumstances in which each of the stated actions is taken are evident or
previously explained. Now it is easy to see that, because Nina has two choices
available at each of the two nodes where she might be acting, she has available
to her four plans, or strategies—“C at b, C at c,” “C at b, D at c,” “D at b, C at c,”
and “D at b, D at c,” or “CC,” “CD,” “DC,” and “DD.” Of these strategies, the
rollback analysis and the arrows at nodes b and c of Figure 3.6 show that her
optimal strategy is “DC.”
Matters are even more complicated for Talia. When her turn comes, the history of play can, according to the rules of the game, be any one of four possibilities.
Talia’s turn to act comes at one of four nodes in the tree, one after Emily has chosen
C and Nina has chosen C (node d), the second after Emily’s C and Nina’s D (node
e), the third after Emily’s D and Nina’s C (node f ), and the fourth after both Emily
and Nina choose D (node g). Each of Talia’s strategies, or complete plans of action,
must specify one of her two actions for each of these four scenarios, or one of her
two actions at each of her four possible action nodes. With four nodes at which to
specify an action and with two actions from which to choose at each node, there
are 2 × 2 × 2 × 2, or 16 possible combinations of actions. So Talia has available to
her 16 possible strategies. One of them could be written as
“C at d, D at e, D at f, C at g” or “CDDC” for short,
where we have fixed the order of the four scenarios (the histories of moves by
Emily and Nina) in the order of nodes d, e, f, and g. Then, with the use of the
same abbreviation, the full list of 16 strategies available to Talia is
CCCC, CCCD, CCDC, CCDD, CDCC, CDCD, CDDC, CDDD,
DCCC, DCCD, DCDC, DCDD, DDCC, DDCD, DDDC, DDDD.
Of these strategies, the rollback analysis of Figure 3.6 and the arrows at nodes d,
e, f, and g show that Talia’s optimal strategy is DCCD.
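If you prefer to generate such lists mechanically rather than by hand, a short sketch using Python's itertools (the variable names are ours) enumerates the complete plans:

```python
from itertools import product

# Each strategy names an action at every node where the player might move:
# Nina acts at nodes b and c; Talia acts at nodes d, e, f, and g.
nina_strategies  = [''.join(plan) for plan in product('CD', repeat=2)]
talia_strategies = [''.join(plan) for plan in product('CD', repeat=4)]

print(nina_strategies)         # ['CC', 'CD', 'DC', 'DD']
print(len(talia_strategies))   # 16, running from 'CCCC' to 'DDDD'
```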
Now we can express the findings of our rollback analysis by stating the strategy choices of each player—Emily chooses D from the two strategies available to
her, Nina chooses DC from the four strategies available to her, and Talia chooses
DCCD from the sixteen strategies available to her. When each player looks ahead
in the tree to forecast the eventual outcomes of her current choices, she is calculating the optimal strategies of the other players. This configuration of strategies, D for Emily, DC for Nina, and DCCD for Talia, then constitutes the rollback
equilibrium of the game.
We can put together the optimal strategies of the players to find the actual path of play that will result in the rollback equilibrium. Emily will begin by
choosing D. Nina, following her strategy DC, chooses the action C in response
to Emily’s D. (Remember that Nina’s DC means “choose D if Emily has played C,
and choose C if Emily has played D.”) According to the convention that we have
adopted, Talia’s actual action after Emily’s D and then Nina’s C—from node f—is
the third letter in the four-letter specification of her strategies. Because Talia’s
optimal strategy is DCCD, her action along the path of play is C. Thus the actual
path of play consists of Emily playing D, followed successively by Nina and Talia
playing C.
To sum up, we have three distinct concepts:
1. The lists of available strategies for each player. The list, especially for later
players, may be very long, because their actions in situations corresponding to all conceivable preceding moves by other players must be specified.
2. The optimal strategy, or complete plan of action for each player. This
strategy must specify the player’s best choices at each node where the
rules of the game specify that she moves, even though many of these
nodes will never be reached in the actual path of play. This specification
is in effect the preceding movers’ forecasting of what would happen if
they took different actions and is therefore an important part of their calculation of their own best actions at the earlier nodes. The optimal strategies of all players together yield the rollback equilibrium.
3. The actual path of play in the rollback equilibrium, found by putting together the optimal strategies for all the players.
4 ORDER ADVANTAGES
In the rollback equilibrium of the street–garden game, Emily gets her best outcome (payoff 4), because she can take advantage of the opportunity to make
the first move. When she chooses not to contribute, she puts the onus on the
other two players—each can get her next-best outcome if and only if both of
them choose to contribute. Most casual thinkers about strategic games have
the preconception that such a first-mover advantage should exist in all games.
However, that is not the case. It is easy to think of games in which an opportunity to move second is an advantage. Consider the strategic interaction between two firms that sell similar merchandise from catalogs—say, Land’s End
and L.L. Bean. If one firm had to release its catalog first, and then the second
firm could see what prices the first had set before printing its own catalog, then
the second mover could undercut its rival on all items and gain a tremendous
competitive edge.
First-mover advantage comes from the ability to commit oneself to an advantageous position and to force the other players to adapt to it; second-mover
advantage comes from the flexibility to adapt oneself to the others’ choices.
Whether commitment or flexibility is more important in a specific game depends on its particular configuration of strategies and payoffs; no generally valid
rule can be laid down. We will come across examples of both kinds of advantages throughout this book. The general point that there need not be first-mover
advantage, a point that runs against much common perception, is so important
that we felt it necessary to emphasize at the outset.
When a game has a first- or second-mover advantage, each player may try to
manipulate the order of play so as to secure for herself the advantageous position. Tactics for such manipulation are strategic moves, which we consider in
Chapter 9.
5 ADDING MORE MOVES
We saw in Section 3 that adding more players increases the complexity of the
analysis of sequential-play games. In this section, we consider another type of
complexity that arises from adding additional moves to the game. We can do so
most simply in a two-person game by allowing players to alternate moves more
than once. Then the tree is enlarged in the same fashion as a multiple-player
game tree would be, but later moves in the tree are made by the players who
have made decisions earlier in the same game.
Many common games, such as tic-tac-toe, checkers, and chess, are two‑person
strategic games with such alternating sequential moves. The use of game trees
and rollback should allow us to “solve” such games—to determine the rollback
equilibrium outcome and the equilibrium strategies leading to that outcome.
Unfortunately, as the complexity of the game grows and as strategies become
more and more intricate, the search for an optimal solution becomes more and
more difficult as well. In such cases, when manual solution is no longer really
feasible, computer routines such as Gambit, mentioned in Chapter 2, become
useful.
A. Tic-Tac-Toe
Start with the simplest of the three examples mentioned in the preceding
paragraph, tic-tac-toe, and consider an easier-than-usual version in which two
players (X and O) each try to be the first to get two of their symbols to fill any
row, column, or diagonal of a two-by-two game board. The first player has four
possible actions or positions in which to put her X. The second player then has
three possible actions at each of four decision nodes. When the first player gets
to her second turn, she has two possible actions at each of 12 (4 × 3) decision
nodes. As Figure 3.7 shows, even this mini-game of tic-tac-toe has a surprisingly elaborate game tree. The tree remains manageable only because the game is guaranteed to end after the first player moves a second time, but there are still 24 terminal nodes to consider.
We show this tree merely as an illustration of how complex game trees can
become in even simple (or simplified) games. As it turns out, using rollback on
the mini-game of tic-tac-toe leads us quickly to an equilibrium. Rollback shows
[FIGURE 3.7 The Complex Tree for Simple Two-by-Two Tic-Tac-Toe: Player X picks one of the four cells (top left, top right, bottom left, bottom right); Player O then picks one of the three remaining cells; Player X fills one of the last two. All 24 terminal nodes are labeled "X wins."]
that all of the choices for the first player at her second move lead to the same
outcome. There is no optimal action; any move is as good as any other move.
Thus, when the second player makes her first move, she also sees that each possible move yields the same outcome, and she, too, is indifferent among her three
choices at each of her four decision nodes. Finally, the same is true for the first
player on her first move; any choice is as good as any other, so she is guaranteed
to win the game.
Although this version of tic-tac-toe has an interesting tree, its solution is
not as interesting. The first player always wins, so choices made by either
player cannot affect the ultimate outcome. Most of us are more familiar with
the three-by-three version of tic-tac-toe. To illustrate that version with a game
tree, we would have to show that the first player has nine possible actions
at the initial node, the second player has eight possible actions at each of nine
decision nodes, and then the first player, on her second turn, has seven possible
actions at each of 8 × 9 = 72 nodes, while the second player, on her second turn, has six possible actions at each of 7 × 8 × 9 = 504 nodes. This pattern continues
until eventually the tree stops branching so rapidly because certain combinations of moves lead to a win for one player and the game ends. But no win is
possible until at least the fifth move. Drawing the complete tree for this game
requires a very large piece of paper or very tiny handwriting.
Most of you know, however, how to achieve at worst a tie when you play
three-by-three tic-tac-toe. So there is a simple solution to this game that can be
found by rollback, and a learned strategic thinker can reduce the complexity of
the game considerably in the quest for such a solution. It turns out that, as in the
two-by-two version, many of the possible paths through the game tree are strategically identical. Of the nine possible initial moves, there are only three types;
you put your X in either a corner position (of which there are four possibilities),
a side position (of which there are also four possibilities), or the (one) middle
position. Using this method to simplify the tree can help reduce the complexity
of the problem and lead you to a description of an optimal rollback equilibrium
strategy. Specifically, we could show that the player who moves second can always guarantee at least a tie with an appropriate first move and then by continually blocking the first player’s attempts to get three symbols in a row.3
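The claim is easy to verify computationally. Here is a minimal rollback (minimax) sketch for full three-by-three tic-tac-toe; the board encoding and names are our own illustrative choices, and the program confirms that the game's rollback value is a draw.

```python
from functools import lru_cache

# Rows, columns, and diagonals of the three-by-three board,
# with cells numbered 0-8 left to right, top to bottom.
LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
         (0, 3, 6), (1, 4, 7), (2, 5, 8),
         (0, 4, 8), (2, 4, 6)]

def winner(board):
    for a, b, c in LINES:
        if board[a] != '.' and board[a] == board[b] == board[c]:
            return board[a]
    return None

@lru_cache(maxsize=None)
def value(board, mover):
    """Rollback value to X: +1 if X wins, 0 for a tie, -1 if O wins."""
    w = winner(board)
    if w is not None:
        return 1 if w == 'X' else -1
    if '.' not in board:
        return 0                        # full board with no winner: tie
    nxt = 'O' if mover == 'X' else 'X'
    vals = [value(board[:i] + mover + board[i + 1:], nxt)
            for i, cell in enumerate(board) if cell == '.']
    return max(vals) if mover == 'X' else min(vals)

print(value('.' * 9, 'X'))              # 0: optimal play ends in a tie
```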
B. Chess
Although relatively small games, such as tic-tac-toe, can be solved using rollback,
we showed above how rapidly the complexity of game trees can increase even
in two-player games. Thus when we consider more complicated games, such as
chess, finding a complete solution becomes much more difficult.
In chess, the players, White and Black, have a collection of 16 pieces in six
distinct shapes, each of which is bound by specified rules of movement on the
eight-by-eight game board shown in Figure 3.8.4 White opens with a move, Black
responds with one, and so on, in turns. All the moves are visible to the other
player, and nothing is left to chance, as it would be in card games that include
shuffling and dealing. Moreover, a chess game must end in a finite number of
moves. The rules declare that a game is drawn if a given position on the board
is repeated three times in the course of play. Because there are a finite number
of ways to place the 32 (or fewer after captures) pieces on 64 squares, a game
could not go on infinitely long without running up against this rule. Therefore,
in principle, chess is amenable to full rollback analysis.
That rollback analysis has not been carried out, however. Chess has not
been “solved” as tic-tac-toe has been. And the reason is that, for all its simplicity
of rules, chess is a bewilderingly complex game. From the initial set position of
3 If the first player puts her first symbol in the middle position, the second player must put her first symbol in a corner position. Then the second player can guarantee a tie by taking the third position in any row, column, or diagonal that the first player tries to fill. If the first player goes to a corner or a side position first, the second player can guarantee a tie by going to the middle first and then following the same blocking technique. Note that if the first player picks a corner, the second player picks the middle, and the first player then picks the corner opposite from her original play, then the second player must not pick one of the remaining corners if she is to ensure at least a tie. For a beautifully detailed picture of the complete contingent strategy in tic-tac-toe, see the online comic strip at http://xkcd.com/832/.
4 An easily accessible statement of the rules of chess and much more is at Wikipedia, at http://en.wikipedia.org/wiki/Chess.
[FIGURE 3.8 Chessboard: the standard starting position on the eight-by-eight board, files a–h and ranks 1–8, with White's pieces on ranks 1–2 and Black's on ranks 7–8.]
the pieces illustrated in Figure 3.8, White can open with any one of 20 moves,5
and Black can respond with any of 20. Therefore, 20 branches emerge from the
first node of the game tree, each leading to a second node from each of which 20
more branches emerge. After only two moves, there are already 400 branches,
each leading to a node from which many more branches emerge. And the total
number of possible moves in chess has been estimated to be 10¹²⁰, or a "one" with 120 zeros after it. A supercomputer a thousand times as fast as your PC, making a trillion calculations a second, would need more than 10¹⁰⁰ years to check out all these moves.6 Astronomers offer us less than 10¹⁰ years before the
sun turns into a red giant and swallows the earth.
The general point is that, although a game may be amenable in principle to
a complete solution by rollback, its complete tree may be too complex to permit
such solution in practice. Faced with such a situation, what is a player to do? We
can learn a lot about this by reviewing the history of attempts to program computers to play chess.
When computers first started to prove their usefulness for complex calculations in science and business, many mathematicians and computer scientists
5 He can move one of eight pawns forward either one square or two, or he can move one of the two knights in one of two ways (to squares a3, c3, f3, or h3).
6 This would have to be done only once because, after the game has been solved, anyone can use the solution and no one will actually need to play. Everyone will know whether White has a win or whether Black can force a draw. Players will toss to decide who gets which color. They will then know the outcome, shake hands, and go home.
thought that a chess-playing computer program would soon beat the world
champion. It took a lot longer, even though computer technology improved dramatically while human thought progressed much more slowly. Finally, in December 1992, a German chess program called Fritz2 beat world champion Garry Kasparov in some blitz (high-speed) games. Under regular rules, where each player gets 2½ hours to make 40 moves, humans retained their superiority longer. A team sponsored by IBM put a lot of effort and resources into the development of a specialized chess-playing computer and its associated software. In February 1996, this package, called Deep Blue, was pitted against Garry Kasparov
in a best-of-six series. Deep Blue caused a sensation by winning the first game,
but Kasparov quickly figured out its weaknesses, improved his counterstrategies, and won the series handily. In the next 15 months, the IBM team improved
Deep Blue’s hardware and software, and the resulting Deeper Blue beat Kasparov
in another best-of-six series in May 1997.
To sum up, computers have progressed in a combination of slow patches
and some rapid spurts, while humans have held some superiority but have not
been able to improve sufficiently fast to keep ahead. Closer examination reveals
that the two use quite different approaches to think through the very complex
game tree of chess.
When contemplating a move in chess, looking ahead to the end of the whole
game may be too hard (for humans and computers both). How about looking
part of the way—say, 5 or 10 moves ahead—and working back from there? The
game need not end within this limited horizon; that is, the nodes that you reach
after 5 or 10 moves will not generally be terminal nodes. Only terminal nodes
have payoffs specified by the rules of the game. Therefore, you need some indirect way of assigning plausible payoffs to nonterminal nodes, because you are
not able to explicitly roll back from a full look‑ahead. A rule that assigns such
payoffs is called an intermediate valuation function.
In chess, humans and computer programs both use such partial look-ahead
in conjunction with an intermediate valuation function. The typical method assigns values to each piece and to positional and combinational advantages that
can arise during play. Quantification of values for different positions is made on
the basis of the whole chess-playing community’s experience of play in past games
starting from such positions or patterns; this is called “knowledge.” The sum of all
the numerical values attached to pieces and their combinations in a position is the
intermediate value of that position. A move is judged by the value of the position to
which it is expected to lead after an explicit forward-looking calculation for a certain number—say, five or six—of moves.
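To make the pattern concrete, here is a sketch of depth-limited look-ahead with an intermediate valuation function, using tic-tac-toe as the test bed because its rules fit in a few lines. The valuation here, counting lines still open to each player, is a deliberately crude stand-in for a chess program's elaborate material-and-position "knowledge"; this illustrates the general technique, not Deep Blue's actual code.

```python
# Depth-limited rollback: roll back exactly `depth` moves, then fall back
# on an intermediate valuation function at the horizon (a sketch).
LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8), (0, 3, 6),
         (1, 4, 7), (2, 5, 8), (0, 4, 8), (2, 4, 6)]

def winner(board):
    for a, b, c in LINES:
        if board[a] != '.' and board[a] == board[b] == board[c]:
            return board[a]
    return None

def evaluate(board):
    """Intermediate valuation: lines still winnable by X minus by O,
    scaled to stay strictly between the true win values of -1 and +1."""
    open_x = sum(all(board[i] != 'O' for i in line) for line in LINES)
    open_o = sum(all(board[i] != 'X' for i in line) for line in LINES)
    return (open_x - open_o) / 10

def lookahead(board, mover, depth):
    w = winner(board)
    if w is not None:                   # true payoff at a terminal node
        return 1 if w == 'X' else -1
    if '.' not in board:
        return 0
    if depth == 0:                      # horizon reached: use the valuation
        return evaluate(board)
    nxt = 'O' if mover == 'X' else 'X'
    vals = [lookahead(board[:i] + mover + board[i + 1:], nxt, depth - 1)
            for i, cell in enumerate(board) if cell == '.']
    return max(vals) if mover == 'X' else min(vals)

print(lookahead('.' * 9, 'X', 4))       # a heuristic estimate, not the true value
```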
The evaluation of intermediate positions has progressed furthest with respect to chess openings—that is, the first dozen or so moves of a game. Each
opening can lead to any one of a vast multitude of further moves and positions,
but experience enables players to sum up certain openings as being more or less
likely to favor one player or the other. This knowledge has been written down in
massive books of openings, and all top players and computer programs remember and use this information.
At the end stages of a game, when only a few pieces are left on the board,
backward reasoning on its own is often simple enough to be doable and complete enough to give the full answer. The midgame, when positions have evolved
into a level of complexity that will not simplify within a few moves, is the hardest
to analyze. To find a good move from a midgame position, a well-built intermediate valuation function is likely to be more valuable than the ability to calculate
another few moves further ahead.
This is where the art of chess playing comes into its own. The best human
players develop an intuition or instinct that enables them to sniff out good opportunities and avoid subtle traps in a way that computer programs find hard
to match. Computer scientists have found it generally very difficult to teach
their machines the skills of pattern recognition that humans acquire and use
instinctively—for example, recognizing faces and associating them with names.
The art of the midgame in chess also is an exercise in recognizing and evaluating patterns in the same, still mysterious way. This is where Kasparov has his
greatest advantage over Fritz2 or Deep Blue. It also explains why computer programs do better against humans at blitz or limited-time games: a human does
not have the time to marshal his art of the midgame.
In other words, the best human players have subtle “chess knowledge,”
based on experience or the ability to recognize patterns, which endows them
with a better intermediate valuation function. Computers have the advantage
when it comes to raw or brute-force calculation. Thus although both human and
computer players now use a mixture of look-ahead and intermediate valuation,
they use them in different proportions: humans do not look so many moves
ahead but have better intermediate valuations based on knowledge; computers
have less sophisticated valuation functions but look ahead further by using their
superior computational powers.
Recently, chess computers have begun to acquire more knowledge. When
modifying Deep Blue in 1996 and 1997, IBM enlisted the help of human experts
to improve the intermediate valuation function in its software. These consultants played repeatedly against the machine, noted its weaknesses, and suggested how the valuation function should be modified to correct the flaws.
Deep Blue benefited from the contributions of the experts and their subtle kind
of thinking, which results from long experience and an awareness of complex
interconnections among the pieces on the board.
If humans can gradually make explicit their subtle knowledge and transmit it to computers, what hope is there for human players who do not get reciprocal help from computers? At times in their 1997 encounter, Kasparov was
amazed by the human or even superhuman quality of Deep Blue’s play. He even
attributed one of the computer’s moves to “the hand of God.” And matters can
only get worse: the brute-force calculating power of computers is increasing
rapidly while they are simultaneously, but more slowly, gaining some of the subtlety that constitutes the advantage of humans.
The abstract theory of chess says that it is a finite game that can be solved
by rollback. The practice of chess requires a lot of “art” based on experience, intuition, and subtle judgment. Is this bad news for the use of rollback in
sequential‑move games? We think not. It is true that theory does not take us all
the way to an answer for chess. But it does take us a long way. Looking ahead a
few moves constitutes an important part of the approach that mixes brute-force
calculation of moves with a knowledge-based assessment of intermediate positions. And, as computational power increases, the role played by brute-force
calculation, and therefore the scope of the rollback theory, will also increase.
Evidence from the study of the game of checkers, as we describe below, suggests that a solution to chess may yet be feasible.
C. Checkers
An astonishing number of computer and person hours have been devoted to the
search for a solution to chess. Less famously, but just as doggedly, researchers
worked on solving the somewhat less complex game of checkers. And, indeed,
the game of checkers was declared “solved” in July 2007.7
Checkers is another two-player game played on an eight-by-eight board.
Each player has 12 round game pieces of different colors, as shown in Figure
3.9, and players take turns moving their pieces diagonally on the board, jumping (and capturing) the opponent’s pieces when possible. As in chess, the game
ends and Player A wins when Player B is either out of pieces or unable to move;
the game can also end in a draw if both players agree that neither can win.
Although the complexity of checkers pales somewhat in comparison to that
of chess—the number of possible positions in checkers is approximately the
square root of the number in chess—there are still 5 × 10²⁰ possible positions, so
drawing a game tree is out of the question. Conventional wisdom and evidence
from world championships for years suggested that good play should lead to
a draw, but there was no proof. Now a computer scientist in Canada has the
proof—a computer program named Chinook that can play to a guaranteed tie.
Chinook was first created in 1989. This computer program played the world
champion, Marion Tinsley, in 1992 (losing four to two with 33 draws) and again
in 1994 (when Tinsley’s health failed during a series of draws). It was put on
7 Our account is based on two reports in the journal Science. See Adrian Cho, "Program Proves That Checkers, Perfectly Played, Is a No-Win Situation," Science, vol. 317 (July 20, 2007), pp. 308–309, and Jonathan Schaeffer et al., "Checkers Is Solved," Science, vol. 317 (September 14, 2007), pp. 1518–22.
FIGURE 3.9 Checkers
hold between 1997 and 2001 while its creators waited for computer technology to improve. And it finally exhibited a loss-proof algorithm in the spring of
2007. That algorithm uses a combination of endgame rollback analysis and
starting position forward analysis along with the equivalent of an intermediate
valuation function to trace out the best moves within a database including all
possible positions on the board.
The creators of Chinook describe the full game of checkers as “weakly
solved”; they know that they can generate a tie, and they have a strategy for
reaching that tie from the start of the game. For all 39 × 10¹² possible positions
that include 10 or fewer pieces on the board, they describe checkers as “strongly
solved”; not only do they know they can play to a tie, they can reach that tie from
any of the possible positions that can arise once only 10 pieces remain. Their algorithm first solved the 10-piece endgames, then went back to the start to search
out paths of play in which both players make optimal choices. The search mechanism, involving a complex system of evaluating the value of each intermediate
position, invariably led to those 10-piece positions that generate a draw.
Thus, our hope for the future of rollback analysis may not be misplaced.
We know that for really simple games, we can find the rollback equilibrium by
verbal reasoning without having to draw the game tree explicitly. For games
having an intermediate range of complexity, verbal reasoning is too hard,
but a complete tree can be drawn and used for rollback. Sometimes we may
enlist the aid of a computer to draw and analyze a moderately complicated
game tree. For the most complex games, such as checkers and chess, we can
draw only a small part of the game tree, and we must use a combination of two
methods: (1) calculation based on the logic of rollback, and (2) rules of thumb
for valuing intermediate positions on the basis of experience. The computational power of current algorithms has shown that even some games in this
category are amenable to solution, provided one has the time and resources to
devote to the problem.
Thankfully, most of the strategic games that we encounter in economics,
politics, sports, business, and daily life are far less complex than chess or even
checkers. The games may have a number of players who move a number of
times; they may even have a large number of players or a large number of moves.
But we have a chance at being able to draw a reasonable-looking tree for those
games that are sequential in nature. The logic of rollback remains valid, and it is
also often the case that, once you understand the idea of rollback, you can carry
out the necessary logical thinking and solve the game without explicitly drawing
a tree. Moreover, it is precisely at this intermediate level of difficulty, between
the simple examples that we solved explicitly in this chapter and the insoluble
cases such as chess, that computer software such as Gambit is most likely to be
useful; this is indeed fortunate for the prospect of applying the theory to solve
many games in practice.
6 EVIDENCE CONCERNING ROLLBACK
How well do actual participants in sequential-move games perform the calculations of rollback reasoning? There is very little systematic evidence, but classroom and research experiments with some games have yielded outcomes that
appear to counter the predictions of the theory. Some of these experiments
and their outcomes have interesting implications for the strategic analysis of
­sequential-move games.
For instance, many experimenters have had subjects play a single-round
bargaining game in which two players, designated A and B, are chosen from
a class or a group of volunteers. The experimenter provides a dollar (or some
known total), which can be divided between them according to the following
procedure: Player A proposes a split—for example, “75 to me, 25 to B.” If player
B accepts this proposal, the dollar is divided as proposed by A. If B rejects the
proposal, neither player gets anything.
Rollback in this case predicts that B should accept any sum, no matter how
small, because the alternative is even worse—namely, 0—and, foreseeing this, A
should propose “99 to me, 1 to B.” This particular outcome almost never happens.
Most players assigned the A role propose a much more equal split. In fact, 50–50
is the single most common proposal. Furthermore, most players assigned the B
role turn down proposals that leave them 25% or less of the total and walk away
with nothing; some reject proposals that would give them 40% of the pie.8
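For comparison, the rollback benchmark itself takes only a few lines to compute. This sketch assumes a 100-cent pie, purely money-motivated players, and a responder who accepts only offers strictly better than the zero she gets from rejecting; the names are ours.

```python
# Rollback in the one-round ultimatum game (a sketch).
def ultimatum(pie=100):
    """Return (A's payoff, B's share) under A's best proposal."""
    outcomes = []
    for b_share in range(pie + 1):
        accepts = b_share > 0          # rejecting yields B exactly 0
        outcomes.append((pie - b_share if accepts else 0, b_share))
    return max(outcomes)               # A proposes whatever pays her most

print(ultimatum())                     # (99, 1): offer B a single cent
```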
Many game theorists remain unpersuaded that these findings undermine
the theory. They counter with some variant of the following argument: “The
sums are so small as to make the whole thing trivial in the players’ minds. The B
players lose 25 or 40 cents, which is almost nothing, and perhaps gain some private satisfaction that they walked away from a humiliatingly small award. If the
total were a thousand dollars, so that 25% of it amounted to real money, the B
players would accept.” But this argument does not seem to be valid. Experiments
with much larger stakes show similar results. The findings from experiments conducted in Indonesia, with sums that were small in dollars but amounted to as
much as three months’ earnings for the participants, showed no clear tendency
on the part of the A players to make less equal offers, although the B players
tended to accept somewhat smaller shares as the total increased; similar experiments conducted in the Slovak Republic found the behavior of inexperienced
players unaffected by large changes in payoffs.9
The participants in these experiments typically have no prior knowledge of
game theory and no special computational abilities. But the game is extremely
simple; surely even the most naive player can see through the reasoning, and
answers to direct questions after the experiment generally show that most participants do. The results show not so much the failure of rollback as the theorist’s
error in supposing that each player cares only about her own money earnings.
Most societies instill in their members a strong sense of fairness, which causes
the B players to reject anything that is grossly unfair. Anticipating this, the A
players offer relatively equal splits.
Supporting evidence comes from the new field of “neuroeconomics.” Alan
Sanfey and his colleagues took MRI readings of the players’ brains as they
made their choices in the ultimatum game. They found stimulation of “activity
in a region well known for its involvement in negative emotion” in the brains
of responders (B players) when they rejected "unfair" (less than 50:50) offers.
Thus, deep instincts or emotions of anger and disgust seem to be implicated
in these rejections. They also found that "unfair" (less than 50:50) offers were
8 Reiley first encountered this game as a graduate student; he was stunned that when he offered a 90:10 split of $100, the other economics graduate student rejected it. For a detailed account of this game and related ones, read Richard H. Thaler, "Anomalies: The Ultimatum Game," Journal of Economic Perspectives, vol. 2, no. 4 (Fall 1988), pp. 195–206; and Douglas D. Davis and Charles A. Holt, Experimental Economics (Princeton: Princeton University Press, 1993), pp. 263–69.
9 The results of the Indonesian experiment are reported in Lisa Cameron, "Raising the Stakes in the Ultimatum Game: Experimental Evidence from Indonesia," Economic Inquiry, vol. 37, no. 1 (January 1999), pp. 47–59. Robert Slonim and Alvin Roth report results similar to Cameron's, but they also found that offers (in all rounds of play) were rejected less often as the payoffs were raised. See Robert Slonim and Alvin Roth, "Learning in High Stakes Ultimatum Games: An Experiment in the Slovak Republic," Econometrica, vol. 66, no. 3 (May 1998), pp. 569–96.
[FIGURE 3.10 The Centipede Game: A and B alternate choosing between Take and Pass as the pile grows by a dime each round; the payoffs run (10, 0), (0, 20), (30, 0), (0, 40), and so on, up to (0, 100) if B takes the ten dimes at the last node and (0, 0) if both players pass to the very end.]
rejected less often when responders knew that the offerer was a computer than
when they knew that the offerer was human.10
Notably, A players have some tendency to be generous even without the
threat of retaliation. In a drastic variant called the dictator game, where the A
player decides on the split and the B player has no choice at all, many As still
give significant shares to the Bs, suggesting the players have some intrinsic
preference for relatively equal splits.11 However, the offers by the A players are
noticeably less generous in the dictator game than in the ultimatum game, suggesting that the credible fear of retaliation is also a strong motivator. Caring
about other people’s perceptions of us also appears to matter. When the experimental design is changed so that not even the experimenter can identify who
proposed (or accepted) the split, the extent of sharing drops noticeably.
Another experimental game with similarly paradoxical outcomes goes as
follows: two players are chosen and designated as A and B. The experimenter
puts a dime on the table. Player A can take it or pass. If A takes the dime, the
game is over, with A getting the 10 cents and B getting nothing. If A passes, the
experimenter adds a dime, and now B has the choice of taking the 20 cents or
passing. The turns alternate, and the pile of money grows until reaching some
limit—say, a dollar—that is known in advance by both players.
We show the tree for this game in Figure 3.10. Because of the appearance of
the tree, this type of game is often called the centipede game. You may not even
need the tree to use rollback on this game. Player B is sure to take the dollar at
the last stage, so A should take the 90 cents at the penultimate stage, and so on.
Thus, A should take the very first dime and end the game.
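The same backward reasoning can be written out mechanically. This sketch assumes a ten-dime (one-dollar) limit, that the last mover surely takes, and that an indifferent mover takes rather than passes; the names are ours.

```python
# Rollback on the dime-pile centipede game of Figure 3.10 (a sketch).
def centipede(round_no=1, limit=10):
    """Payoffs (A, B) in cents under rollback, starting at `round_no`,
    where the pile holds round_no * 10 cents."""
    pile = 10 * round_no
    mover = 0 if round_no % 2 == 1 else 1        # A moves on odd rounds
    take = (pile, 0) if mover == 0 else (0, pile)
    if round_no == limit:
        return take                              # last mover takes the pile
    passed = centipede(round_no + 1, limit)
    # Take now unless passing would strictly improve the mover's payoff.
    return take if take[mover] >= passed[mover] else passed

print(centipede())                               # (10, 0): A takes the first dime
```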
In experiments, however, such games typically go on for at least a few
rounds. Remarkably, by behaving “irrationally,” the players as a group make
10 See Alan Sanfey, James Rilling, Jessica Aronson, Leigh Nystrom, and Jonathan Cohen, "The Neural Basis of Economic Decision-Making in the Ultimatum Game," Science, vol. 300 (June 13, 2003), pp. 1755–58.
11 One could argue that this social norm of fairness may actually have value in the ongoing evolutionary game being played by the whole society. Players who are concerned with fairness reduce transaction costs and the costs of fights, which can be beneficial to society in the long run. These matters will be discussed in Chapters 10 and 11.
more money than they would if they followed the logic of backward reasoning.
Sometimes A does better and sometimes B, but sometimes they even solve this
conflict or bargaining problem. In a classroom experiment that one of us (Dixit)
conducted, one such game went all the way to the end. Player B collected the
dollar, and quite voluntarily gave 50 cents to player A. Dixit asked A, “Did you
two conspire? Is B a friend of yours?” and A replied, “No, we didn’t even know
each other before. But he is a friend now.” We will come across some similar evidence of cooperation that seems to contradict rollback reasoning when we look
at finitely repeated prisoners’ dilemma games in Chapter 10.
The centipede game points out a possible problem with the logic of rollback in non-zero-sum games, even for players whose decisions are only based
on money. Note that if Player A passes in the first round, he has already shown
himself not to be playing rollback. So what should Player B expect him to do in
round 3? Having passed once, he might pass again, which would make it rational
for Player B to pass in round 2. Eventually someone will take the pile of money,
but an initial deviation from rollback equilibrium makes it difficult to predict
exactly when this will happen. And because the size of the pie keeps growing,
if I see you deviate from rollback, I might want to deviate as well, at least for a
little while. A player might deliberately pass in an early round in order to signal
a willingness to pass in future rounds. This problem does not arise in zero-sum
games, where there is no incentive to cooperate by waiting.
Supporting this observation, Steven Levitt, John List, and Sally Sadoff conducted experiments with world-class chess players, finding more rollback
behavior in zero-sum sequential-move games than in the non-zero-sum centipede game. Their centipede game involved six nodes, with total payoffs increasing quite steeply across rounds.12 While there are considerable gains to players
who can manage to pass back and forth to each other, the rollback equilibrium
specifies playing Take at each node. In stark contrast to the theory, only 4% of
players played Take at node 1, providing little support for rollback equilibrium
even in this simple six-move game. (The fraction of players who played Take increased over the course of the game.13)
12 See Steven D. Levitt, John A. List, and Sally E. Sadoff, "Checkmate: Exploring Backward Induction Among Chess Players," American Economic Review, vol. 101, no. 2 (April 2011), pp. 975–90. The details of the game tree are as follows. If A plays Take at node 1, then A receives $4 while B receives $1. If A passes and B plays Take at node 2, then A receives $2 while B receives $8. This pattern of doubling continues until node 6, where if B plays Take, the payoffs are $32 for A and $128 for B, but if B plays Pass, the payoffs are $256 for A and $64 for B.
13 Different results were found in an earlier paper by Ignacio Palacios-Huerta and Oscar Volij, "Field Centipedes," American Economic Review, vol. 99, no. 4 (September 2009), pp. 1619–35. Of the chess players they studied, 69% played Take at the first node, with the more highly rated chess players being more likely to play Take at the first opportunity. These results indicated a surprisingly high ability of players to carry experience with them to a new game context, but these results have not been reproduced in the later paper discussed above.
By contrast, in a zero-sum sequential-move game whose rollback equilibrium involves 20 moves (you are invited to solve such a game in Exercise S7), the
chess players played the exact rollback equilibrium 10 times as often as in the
six-move centipede game.14
Levitt and his coauthors also experimented with a similar but more difficult zero-sum game (a version of which you are invited to solve in Exercise U5).
There the chess players played the complete rollback equilibrium only 10% of
the time (20% for the highest-ranked grandmasters), although by the last few
moves the agreement with rollback was nearly 100%. As world-class chess players spend tens of thousands of hours trying to win chess games by rolling back,
these results indicate that even highly experienced players usually cannot immediately carry their experience over to a new game: they need a little experience with the new game before they can figure out the optimal strategy. An
advantage of learning game theory is that you can more easily spot underlying
similarities between seemingly different situations and so devise good strategies
more quickly in any new games you may face.
The examples discussed here seem to indicate that apparent violations of
strategic logic can often be explained by recognizing that people do not care
merely about their own money payoffs; rather, they internalize concepts such
as fairness. But not all observed plays, contrary to the precepts of rollback, have
such an explanation. People do fail to look ahead far enough, and they do fail
to draw the appropriate conclusions from attempts to look ahead. For example,
when issuers of credit cards offer favorable initial interest rates or no fees for the
first year, many people fall for them without realizing that they may have to pay
much more later. Therefore the game-theoretic analysis of rollback and rollback
equilibria serves an advisory or prescriptive role as much as it does a descriptive role. People equipped with the theory of rollback are in a position to make
better strategic decisions and to get higher payoffs, no matter what they include
in their payoff calculations. And game theorists can use their expertise to give
valuable advice to those who are placed in complex strategic situations but lack
the skill to determine their own best strategies.
7 STRATEGIES IN SURVIVOR
The examples in the preceding sections were deliberately constructed to illustrate
and elucidate basic concepts such as nodes, branches, moves, and strategies,
14 As you will see in the exercises, another key distinction of this zero-sum game is that there is a way for one player to guarantee victory, regardless of what the other player does. By contrast, a player's best move in the centipede game depends on what she expects the other player to do.
as well as the technique of rollback. Now we show how all of them can be applied, by considering a real-life (or at least “reality-TV-life”) situation.
In the summer of 2000, CBS television broadcast the first of the Survivor
shows, which became an instant hit and helped launch the whole new genre
of “reality TV.” Leaving aside many complex details and some earlier stages not
relevant for our purpose, the concept was as follows: a group of contestants,
called a “tribe,” was put on an uninhabited island and left largely to fend for
themselves for food and shelter. Every three days, they had to vote one fellow
contestant out of the tribe. The person who had the most votes cast against him
or her at a meeting of the remaining players (called the “tribal council”) was
the victim of the day. However, before each meeting of the tribal council, the
survivors up to that point competed in a game of physical or mental skill that
was devised by the producers of the game for that occasion, and the winner of
this competition, called a “challenge,” was immune from being voted off at the
following meeting. Also, one could not vote against oneself. Finally, when two
people were left, the seven who had been voted off most recently returned as a
“jury” to pick one of the two remaining survivors as the million‑dollar winner
of the whole game.
The strategic problems facing all contestants were: (1) to be generally regarded as a productive contributor to the tribe’s search for food and other tasks
of survival, but to do so without being regarded as too strong a competitor and
therefore a target for elimination; (2) to form alliances to secure blocks of votes
to protect oneself from being voted off; (3) to betray these alliances when the
numbers got too small and one had to vote against someone; but (4) to do so
without seriously losing popularity with the other players, who would ultimately
have the power of the vote on the jury.
We pick up the story when just three contestants were left: Rudy, Kelly, and
Rich. Of them, Rudy was the oldest contestant, an honest and blunt person who
was very popular with the contestants who had been previously voted off. It
was generally agreed that, if he was one of the last two, then he would be voted
the million-dollar winner. So it was in the interests of both Kelly and Rich that
they should face each other, rather than face Rudy, in the final vote. But neither
wanted to be seen as instrumental in voting off Rudy. With just three contestants
left, the winner of the immunity challenge is effectively decisive in the cast-off
vote, because the other two must vote against each other. Thus, the jury would
know who was responsible for voting off Rudy and, given his popularity, would
regard the act of voting him off with disfavor. The person doing so would harm
his or her chances in the final vote. This was especially a problem for Rich, because he was known to have an alliance with Rudy.
The immunity challenge was one of stamina: each contestant had to stand
on an awkward support and lean to hold one hand in contact with a totem on a
central pole, called the “immunity idol.” Anyone whose hand lost contact with
the idol, even for an instant, lost the challenge; the one to hold on longest was
the winner.
An hour and a half into the challenge, Rich figured out that his best strategy
was to deliberately lose this immunity challenge. Then, if Rudy won immunity,
he would maintain his alliance and keep Rich—Rudy was known to be a man
who always kept his word. Rich would lose the final vote to Rudy in this case,
but that would make him no worse off than if he won the challenge and kept
Rudy. If Kelly won immunity, the much more likely outcome, then it would be
in her interest to vote off Rudy—she would have at least some chance against
Rich but none against Rudy. Then Rich’s chances of winning were quite good.
Whereas, if Rich himself held on, won immunity, and then voted off Rudy, his
chances against Kelly would be decreased by the fact that he voted off Rudy.
So Rich deliberately stepped off and later explained his reasons quite clearly
to the camera. His calculation was borne out. Kelly won that challenge and voted
off Rudy. And, in the final jury vote between Rich and Kelly, Rich won by one vote.
Rich’s thinking was essentially a rollback analysis along a game tree. He did
this analysis instinctively, without drawing the tree, while standing awkwardly
and holding on to the immunity idol, but it took him an hour and a half to come
to his conclusion. With all due credit to Rich, we show the tree explicitly, and can
reach the answer faster.
Figure 3.11 shows the tree. You can see that it is much more complex than
the trees encountered in earlier sections. It has more branches and moves; in
addition, there are uncertain outcomes, and the chances of winning or losing in
various alternative situations have to be estimated instead of being known precisely. But you will see how we can make reasonable assumptions about these
chances and proceed with the analysis.
At the initial node, Rich decides whether to continue or to give up in the immunity challenge. In either case, the winner of the challenge cannot be forecast
with certainty; this is indicated in the tree by letting “Nature” make the choice,
as we did with the coin-toss situation in Figure 3.1. If Rich continues, Nature
chooses the winner from the three contestants. We don’t know the actual probabilities, but we will assume particular values for exposition and point out the
crucial assumptions. The supposition is that Kelly has a lot of stamina and that
Rudy, being the oldest, is not likely to win. So we posit the following probabilities
of a win when Rich chooses to continue: 0.5 (50%) for Kelly, 0.45 for Rich, and
only 0.05 for Rudy. If Rich gives up on the challenge, Nature picks the winner of
the immunity challenge randomly between the two who remain; in this case, we
assume that Kelly wins with probability 0.9 and Rudy with probability 0.1.
The rest of the tree follows from each of the three possible winners of the
challenge. If Rudy wins, he keeps Rich as he promised, and the jury votes Rudy
[Figure 3.11 is a tree diagram. At the initial node, Rich chooses Continue or Give Up, and Nature then picks the immunity winner: after Continue, Rich with probability 0.45, Kelly 0.5, and Rudy 0.05; after Give Up, Kelly 0.9 and Rudy 0.1. The immunity winner chooses whom to keep, and the jury then picks the champion; Rich beats Kelly in the jury vote with probability 0.4 if Rich has voted off Rudy and 0.6 if Kelly has. The tree's annotations show Rich winning with probability 0.18 + 0.3 = 0.48 after Continue and 0.54 after Give Up.]
FIGURE 3.11 Survivor Immunity Game Tree
the winner.15 If Rich wins immunity, he has to decide whether to keep Kelly or
Rudy. If he keeps Rudy, the jury votes for Rudy. If he keeps Kelly, it is not certain
whom the jury chooses. We assume that Rich alienates some jurors by turning
on Rudy and that, despite being better liked than Kelly, he gets the jury’s vote
in this situation only with probability 0.4. Similarly, if Kelly wins immunity, she
can either keep Rudy and lose the jury’s vote, or keep Rich. If she keeps Rich, his
15
Technically, Rudy faces a choice between keeping Rich or Kelly at the action node after he wins
the immunity challenge. Because everyone placed 0 probability on his choosing Kelly (owing to the
Rich-Rudy alliance), we illustrate only Rudy’s choice of Rich. The jury, similarly, has a choice between Rich and Rudy at the last action node along this branch of play. Again, the foregone conclusion is that Rudy wins in this case.
probability of winning the jury’s vote is higher, at 0.6, because in this case he is
both better liked by the jury and hasn’t voted off Rudy.
What about the players’ actual payoffs? We can safely assume that both Rich
and Kelly want to maximize the probability of his or her emerging as the ultimate winner of the $1 million. Rudy similarly wants to get the prize, but keeping
his word to Rich is paramount. With these preferences of the various players in
mind, Rich can now do rollback analysis along the tree to determine his own
initial choice.
Rich knows that, if he wins the immunity challenge (the uppermost path
after his own first move and Nature’s move), he will have to keep Kelly to have
a 40% chance of eventual victory; keeping Rudy at this stage would mean a 0
probability of eventual victory. Rich can also calculate that, if Kelly wins the immunity challenge (which occurs once in each of the upper and lower halves of
the tree), she will choose to keep him for similar reasons, and then the probability of his eventual victory will be 0.6.
What are Rich’s chances as he calculates them at the initial node? If Rich
chooses Give Up at the initial node, then there is only one way for him to
emerge as the eventual winner—if Kelly wins immunity (probability 0.9), if
she then keeps Rich (probability 1), and if the jury votes for Rich (probability
0.6). Because all three things need to happen for Rich to win, his overall probability of victory is the product of the three probabilities—namely, 0.9 × 1 × 0.6 = 0.54.16 If Rich chooses Continue at the initial node, then there are two ways
in which he can win. First, he wins the game if he wins the immunity challenge
(probability 0.45), if he then eliminates Rudy (probability 1), and if he still wins
the jury’s vote against Kelly (probability 0.4); the total probability of winning in
this way is 0.45 × 0.4 = 0.18. Second, he wins the game if Kelly wins the challenge (probability 0.5), if she eliminates Rudy (probability 1), and if Rich gets the
jury's vote (probability 0.6); total probability here is 0.5 × 0.6 = 0.3. Rich's overall
probability of eventual victory if he chooses Continue is the sum of the probabilities of these two paths to victory—namely, 0.18 + 0.3 = 0.48.
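This comparison is easy to mechanize. The following short Python sketch is ours, not the book's; it simply chains together the branch probabilities assumed in Figure 3.11:

    # Rich's winning chances under each first move, using the
    # probabilities assumed in Figure 3.11.

    # Continue: Rich wins immunity (0.45) and then beats Kelly in the
    # jury vote (0.4), or Kelly wins immunity (0.5), keeps Rich, and
    # loses the jury vote to him (0.6).
    p_continue = 0.45 * 0.4 + 0.5 * 0.6   # 0.18 + 0.30 = 0.48

    # Give Up: Kelly wins immunity (0.9), keeps Rich (probability 1),
    # and loses the jury vote to him (0.6).
    p_give_up = 0.9 * 1 * 0.6             # 0.54

    print(p_continue, p_give_up)          # 0.48 0.54, so Give Up is better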
Rich can now compare his probability of winning the million dollars when
he chooses Give Up (0.54) with his probability of winning when he chooses Continue (0.48). Given the assumed values of the various probabilities in the tree, Rich
has a better chance of victory if he gives up. Thus, Give Up is his optimal strategy.
Although this result is based on assuming specific numbers for the probabilities,
Give Up remains Rich’s optimal strategy as long as (1) Kelly is very likely to win
the immunity challenge once Rich gives up, and (2) Rich wins the jury’s final vote
more often when Kelly has voted out Rudy than when Rich has done so.17
16
Readers who need instruction or a refresher course in the rules for combining probabilities will
find a quick tutorial in the appendix to Chapter 7.
17
Readers who can handle the algebra of probabilities can solve this game by using more general
symbols instead of specific numbers for the probabilities, as in Exercise U10 of this chapter.
This example serves several purposes. Most important, it shows how a
complex tree, with much external uncertainty and missing information
about precise probabilities, can still be solved by using rollback analysis. We
hope this gives you some confidence in using the method and some training
in converting a somewhat loose verbal account into a more precise logical argument. You might counter that Rich did this reasoning without drawing any
trees. But knowing the system or general framework greatly simplifies the task
even in new and unfamiliar circumstances. Therefore it is definitely worth the
effort to acquire the systematic skill.
A second purpose is to illustrate the seemingly paradoxical strategy of “losing to win.” Another instance of this strategy can be found in some sporting
competitions that are held in two rounds, such as the soccer World Cup. The first
round is played on a league basis in several groups of four teams each. The top
two teams from each group then go to the second round, where they play others
chosen according to a prespecified pattern; for example, the top-ranked team
in group A meets the second-ranked team in group B, and so on. In such a situation, it may be good strategy for a team to lose one of its first-round matches
if this loss causes it to be ranked second in its group; that ranking might earn it
a subsequent match against a team that, for some particular reason, it is more
likely to beat than the team that it would meet if it had placed first in its group in
the first round.
SUMMARY
Sequential-move games require players to consider the future consequences of their current moves before choosing their actions. Analysis of pure
sequential‑move games generally requires the creation of a game tree. The tree
is made up of nodes and branches that show all of the possible actions available
to each player at each of her opportunities to move, as well as the payoffs associated with all possible outcomes of the game. Strategies for each player are
complete plans that describe actions at each of the player’s decision nodes contingent on all possible combinations of actions made by players who acted at
earlier nodes. The equilibrium concept employed in sequential-move games is
that of rollback equilibrium, in which players’ equilibrium strategies are found
by looking ahead to subsequent nodes and the actions that would be taken there
and by using these forecasts to calculate one’s current best action. This process
is known as rollback, or backward induction.
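As a concrete illustration of this procedure, here is a minimal Python sketch of rollback on a small two-player tree. The sketch is ours, not the book's, and the example tree and payoffs are invented for illustration:

    # Rollback (backward induction) on a small game tree.
    # A terminal node is a tuple of payoffs, one per player; a decision
    # node records whose turn it is and the subtree after each action.

    def rollback(tree):
        # Returns the rollback payoffs and the equilibrium path of play.
        # (A complete strategy would also record choices at off-path nodes.)
        if isinstance(tree, tuple):
            return tree, []                      # terminal node
        p, moves = tree["player"], tree["moves"]
        best = None
        for action, subtree in moves.items():
            payoffs, path = rollback(subtree)    # look ahead first
            if best is None or payoffs[p] > best[0][p]:
                best = (payoffs, [action] + path)
        return best

    tree = {"player": 0, "moves": {
        "Up":   {"player": 1, "moves": {"Left": (2, 1), "Right": (0, 0)}},
        "Down": {"player": 1, "moves": {"Left": (1, 2), "Right": (3, 0)}}}}
    print(rollback(tree))                        # ((2, 1), ['Up', 'Left'])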
Different types of games entail advantages for different players, such as
first-mover advantages. The inclusion of many players or many moves enlarges
the game tree of a sequential-move game but does not change the solution
process. In some cases, drawing the full tree for a particular game may require more space or time than is feasible. Such games can often be solved by
identifying strategic similarities between actions that reduce the size of the tree
or by simple logical thinking.
When solving larger games, verbal reasoning can lead to the rollback equilibrium if the game is simple enough; otherwise, a complete tree may be drawn and analyzed. If the game is sufficiently complex that verbal reasoning is too difficult
and a complete tree is too large to draw, we may enlist the help of a computer
program. Checkers has been “solved” with the use of such a program, although
full solution of chess will remain beyond the powers of computers for a long
time. In actual play of these truly complex games, elements of both art (identification of patterns and of opportunities versus peril) and science (forward‑looking
calculations of the possible outcomes arising from certain moves) have a role in
determining player moves.
Tests of the theory of sequential-move games seem to suggest that actual
play shows the irrationality of the players or the failure of the theory to predict
behavior adequately. The counterargument points out the complexity of actual
preferences for different possible outcomes and the usefulness of strategic theory for identifying optimal actions when actual preferences are known.
KEY TERMS
action node (48)
backward induction (56)
branch (48)
decision node (48)
decision tree (48)
equilibrium path of play (60)
extensive form (48)
first-mover advantage (62)
game tree (48)
initial node (48)
intermediate valuation function (67)
move (50)
node (48)
path of play (60)
prune (54)
rollback (56)
rollback equilibrium (56)
root (48)
second-mover advantage (62)
terminal node (50)
SOLVED EXERCISES
S1.
Suppose two players, Hansel and Gretel, take part in a sequential-move
game. Hansel moves first, Gretel moves second, and each player moves
only once.
(a) Draw a game tree for a game in which Hansel has two possible actions (Up or Down) at each node and Gretel has three possible
actions (Top, Middle, or Bottom) at each node. How many of each
node type—decision and terminal—are there?
(b) Draw a game tree for a game in which Hansel and Gretel each have
three possible actions (Sit, Stand, or Jump) at each node. How many
of the two node types are there?
(c) Draw a game tree for a game in which Hansel has four possible actions (North, South, East, or West) at each node and Gretel has two
possible actions (Stay or Go) at each node. How many of the two
node types are there?
S2.
In each of the following games, how many pure strategies (complete
plans of action) are available to each player? List out all of the pure strategies for each player.
[The game trees for parts (a), (b), and (c) appear here as figures. Parts (a) and (b) are two-player games between SCARECROW and TINMAN; part (c) is a three-player game that also includes LION, with three payoffs listed at each terminal node. The branch structure and payoffs of the diagrams are not recoverable from this text extraction.]
S3.
For each of the games illustrated in Exercise S2, identify the rollback equilibrium outcome and the complete equilibrium strategy for each player.
S4.
Consider the rivalry between Airbus and Boeing to develop a new commercial jet aircraft. Suppose Boeing is ahead in the development process
and Airbus is considering whether to enter the competition. If Airbus
stays out, it earns 0 profit, whereas Boeing enjoys a monopoly and earns
a profit of $1 billion. If Airbus decides to enter and develop the rival
airplane, then Boeing has to decide whether to accommodate Airbus
peaceably or to wage a price war. In the event of peaceful competition,
each firm will make a profit of $300 million. If there is a price war, each
will lose $100 million because the prices of airplanes will fall so low that
neither firm will be able to recoup its development costs.
Draw the tree for this game. Find the rollback equilibrium and describe the firms’ equilibrium strategies.
S5.
Consider a game in which two players, Fred and Barney, take turns removing matchsticks from a pile. They start with 21 matchsticks, and Fred
goes first. On each turn, each player may remove either one, two, three, or
four matchsticks. The player to remove the last matchstick wins the game.
(a) Suppose there are only six matchsticks left, and it is Barney’s turn.
What move should Barney make to guarantee himself victory? Explain your reasoning.
(b) Suppose there are 12 matchsticks left, and it is Barney’s turn. What
move should Barney make to guarantee himself victory? (Hint: Use
your answer to part (a) and roll back.)
(c) Now start from the beginning of the game. If both players play optimally, who will win?
(d) What are the optimal strategies (complete plans of action) for each
player?
S6.
Consider the game in the previous exercise. Suppose the players have
reached a point where it is Fred’s move and there are just five matchsticks
left.
(a) Draw the game tree for the game starting with five matchsticks.
(b) Find the rollback equilibria for this game starting with five
matchsticks.
(c) Would you say this five-matchstick game has a first-mover advantage or a second-mover advantage?
(d) Explain why you found more than one rollback equilibrium. How is
your answer related to the optimal strategies you found in part (c) of
the previous exercise?
S7.
Elroy and Judy play a game that Elroy calls the “race to 100.” Elroy goes
first, and the players take turns choosing numbers between one and
nine. On each turn, they add the new number to a running total. The
player who brings the total exactly to 100 wins the game.
(a) If both players play optimally, who will win the game? Does this
game have a first-mover advantage? Explain your reasoning.
(b) What are the optimal strategies (complete plans of action) for each
player?
S8.
A slave has just been thrown to the lions in the Roman Colosseum. Three
lions are chained down in a line, with Lion 1 closest to the slave. Each
lion’s chain is short enough that he can only reach the two players immediately adjacent to him.
The game proceeds as follows. First, Lion 1 decides whether or not
to eat the slave.
If Lion 1 has eaten the slave, then Lion 2 decides whether or not to
eat Lion 1 (who is then too heavy to defend himself). If Lion 1 has not
eaten the slave, then Lion 2 has no choice: he cannot try to eat Lion 1,
because a fight would kill both lions.
Similarly, if Lion 2 has eaten Lion 1, then Lion 3 decides whether or
not to eat Lion 2.
Each lion’s preferences are fairly natural: best (4) is to eat and stay
alive, next best (3) is to stay alive but go hungry, next (2) is to eat and be
eaten, and worst (1) is to go hungry and be eaten.
(a) Draw the game tree, with payoffs, for this three-player game.
(b) What is the rollback equilibrium to this game? Make sure to describe the strategies, not just the payoffs.
(c) Is there a first-mover advantage to this game? Explain why or why
not.
(d) How many complete strategies does each lion have? List them.
S9.
Consider three major department stores—Big Giant, Titan, and Frieda’s—
contemplating opening a branch in one of two new Boston-area shopping malls. Urban Mall is located close to the large and rich population
center of the area; it is relatively small and can accommodate at most two
department stores as “anchors” for the mall. Rural Mall is farther out in a
rural and relatively poorer area; it can accommodate as many as three
anchor stores. None of the three stores wants to have branches in both
malls because there is sufficient overlap of customers between the malls
that locating in both would just mean competing with itself. Each store
prefers to be in a mall with one or more other department stores than
to be alone in the same mall, because a mall with multiple department
stores will attract sufficiently many more total customers that each
store’s profit will be higher. Further, each store prefers Urban Mall to
Rural Mall because of the richer customer base. Each store must choose
between trying to get a space in Urban Mall (knowing that if the attempt
fails, it will try for a space in Rural Mall) and trying to get a space in Rural
Mall right away (without even attempting to get into Urban Mall).
In this case, the stores rank the five possible outcomes as follows: 5
(best), in Urban Mall with one other department store; 4, in Rural Mall
with one or two other department stores; 3, alone in Urban Mall; 2, alone
in Rural Mall; and 1 (worst), alone in Rural Mall after having attempted
to get into Urban Mall and failed, by which time other nondepartment
stores have signed up the best anchor locations in Rural Mall.
The three stores are sufficiently different in their managerial structures that they experience different lags in doing the paperwork required
to request an expansion space in a new mall. Frieda’s moves quickly,
followed by Big Giant, and finally by Titan, which is the least efficient in
readying a location plan. When all three have made their requests, the
malls decide which stores to let in. Because of the name recognition that
both Big Giant and Titan have with the potential customers, a mall would
take either (or both) of those stores before it took Frieda’s. Thus, Frieda’s
does not get one of the two spaces in Urban Mall if all three stores request those spaces; this is true even though Frieda’s moves first.
(a) Draw the game tree for this mall location game.
(b) Illustrate the rollback pruning process on your game tree and use
the pruned tree to find the rollback equilibrium. Describe the equilibrium by using the (complete) strategies employed by each department store. What are the payoffs to each store at the rollback
equilibrium outcome?
S10.
(Optional) Consider the following ultimatum bargaining game, which
has been studied in laboratory experiments. The Proposer moves first,
and proposes a split of $10 between himself and the Responder. Any
whole‑dollar split may be proposed. For example, the Proposer may offer
to keep the whole $10 for himself, he may propose to keep $9 for himself
and give $1 to the Responder, $8 to himself and $2 to the Responder, and
so on. (Note that the Proposer therefore has eleven possible choices.)
After seeing the split, the Responder can choose to accept the split or reject it. If the Responder accepts, both players get the proposed amounts.
If she rejects, both players get $0.
(a) Write out the game tree for this game.
(b) How many complete strategies does each player have?
(c) What is the rollback equilibrium to this game, assuming the players
care only about their cash payoffs?
(d) Suppose Rachel the Responder would accept any offer of $3 or
more, and reject any offer of $2 or less. Suppose Pete the Proposer
knows Rachel’s strategy, and he wants to maximize his cash payoff.
What strategy should he use?
(e) Rachel’s true payoff (her “utility”) might not be the same as her cash
payoff. What other aspects of the game might she care about? Given
your answer, propose a set of payoffs for Rachel that would make
her strategy optimal.
(f) In laboratory experiments, players typically do not play the rollback
equilibrium. Proposers typically offer an amount between $2 and $5
to the Responder. Responders often reject offers of $3, $2, and especially $1. Explain why you think this might occur.
UNSOLVED EXERCISES
U1.
“In a sequential-move game, the player who moves first is sure to win.” Is
this statement true or false? State the reason for your answer in a few brief
sentences, and give an example of a game that illustrates your answer.
U2.
In each of the following games, how many pure strategies (complete
plans of action) are available to each player? List all of the pure strategies
for each player.
[The game trees for parts (a), (b), and (c) appear here as figures. Parts (a) and (b) are two-player games between ALBUS and MINERVA; part (c) is a three-player game that also includes SEVERUS, with three payoffs listed at each terminal node. The branch structure and payoffs of the diagrams are not recoverable from this text extraction.]
U3.
For each of the games illustrated in Exercise U2, identify the rollback
equilibrium outcome and the complete equilibrium strategy for each
player.
U4.
Two distinct proposals, A and B, are being debated in Washington. Congress likes proposal A, and the president likes proposal B. The proposals
are not mutually exclusive; either or both or neither may become law.
Thus there are four possible outcomes, and the rankings of the two sides
are as follows, where a larger number represents a more favored outcome:
Outcome                           Congress   President
A becomes law                         4          1
B becomes law                         1          4
Both A and B become law               3          3
Neither (status quo prevails)         2          2
(a) The moves in the game are as follows. First, Congress decides whether
to pass a bill and whether the bill is to contain A or B or both. Then
the president decides whether to sign or veto the bill. Congress does
not have enough votes to override a veto. Draw a tree for this game
and find the rollback equilibrium.
(b) Now suppose the rules of the game are changed in only one respect:
the president is given the extra power of a line-item veto. Thus, if
Congress passes a bill containing both A and B, the president may
choose not only to sign or veto the bill as a whole, but also to veto
just one of the two items. Show the new tree and find the rollback
equilibrium.
(c) Explain intuitively why the difference between the two equilibria
arises.
U5.
Two players, Amy and Beth, play the following game with a jar containing 100 pennies. The players take turns; Amy goes first. Each time it is
a player’s turn, she takes between 1 and 10 pennies out of the jar. The
player whose move empties the jar wins.
(a) If both players play optimally, who will win the game? Does this
game have a first-mover advantage? Explain your reasoning.
(b) What are the optimal strategies (complete plans of action) for each
player?
U6.
Consider a slight variant to the game in Exercise U5. Now the player
whose move empties the jar loses.
(a) Does this game have a first-mover advantage?
(b) What are the optimal strategies for each player?
U7.
Kermit and Fozzie play a game with two jars, each containing 100 pennies. The players take turns; Kermit goes first. Each time it is a player’s
turn, he chooses one of the jars and removes anywhere from 1 to 10 pennies from it. The player whose move leaves both jars empty wins. (Note
that when a player empties the second jar, the first jar must already have
been emptied in some previous move by one of the players.)
(a) Does this game have a first-mover advantage or a second-mover advantage? Explain which player can guarantee victory, and how he
can do it. (Hint: Simplify the game by starting with a smaller number
of pennies in each jar, and see if you can generalize your finding to
the actual game.)
(b) What are the optimal strategies (complete plans of action) for each
player? (Hint: First think of a starting situation in which both jars
have equal numbers of pennies. Then consider starting positions in
which the two jars differ by 1 to 10 pennies. Finally, consider starting
positions in which the jars differ by more than 10 pennies.)
U8.
Modify Exercise S8 so that there are now four lions.
(a) Draw the game tree, with payoffs, for this four-player game.
(b) What is the rollback equilibrium to this game? Make sure to describe the strategies, not just the payoffs.
(c) Is the additional lion good or bad for the slave? Explain.
U9.
To give Mom a day of rest, Dad plans to take his two children, Bart and
Cassie, on an outing on Sunday. Bart prefers to go to the amusement
park (A), whereas Cassie prefers to go to the science museum (S). Each
child gets 3 units of utility from his/her more preferred activity and only
2 units of utility from his/her less preferred activity. Dad gets 2 units of
utility for either of the two activities.
To choose their activity, Dad plans first to ask Bart for his preference,
then to ask Cassie after she hears Bart’s choice. Each child can choose either the amusement park (A) or the science museum (S). If both children
choose the same activity, then that is what they will all do. If the children
choose different activities, Dad will make a tie-breaking decision. As the
parent, Dad has an additional option: he can choose the amusement
park, the science museum, or his personal favorite, the mountain hike
(M). Bart and Cassie each get 1 unit of utility from the mountain hike,
and Dad gets 3 units of utility from the mountain hike.
Because Dad wants his children to cooperate with one another, he
gets 2 extra units of utility if the children choose the same activity (no
matter which one of the two it is).
(a) Draw the game tree, with payoffs, for this three-person game.
(b) What is the rollback equilibrium to this game? Make sure to describe the strategies, not just the payoffs.
(c) How many different complete strategies does Bart have? Explain.
(d) How many complete strategies does Cassie have? Explain.
U10.
(Optional—more difficult) Consider the Survivor game tree illustrated in
Figure 3.11. We might not have guessed exactly the values Rich estimated
for the various probabilities, so let’s generalize this tree by considering
other possible values. In particular, suppose that the probability of winning the immunity challenge when Rich chooses Continue is x for Rich,
y for Kelly, and 1 − x − y for Rudy; similarly, the probability of winning
when Rich gives up is z for Kelly and 1 − z for Rudy. Further, suppose
that Rich’s chance of being picked by the jury is p if he has won immunity and has voted Rudy off the island; his chance of being picked is q
if Kelly has won immunity and has voted Rudy off the island. Continue
to assume that if Rudy wins immunity, he keeps Rich with probability 1,
and that Rudy wins the game with probability 1 if he ends up in the final
two. Note that in the example of Figure 3.11, we had x = 0.45, y = 0.5,
z = 0.9, p = 0.4, and q = 0.6. (In general, the variables p and q need not
sum to 1, though this happened to be true in Figure 3.11.)
(a) Find an algebraic formula, in terms of x, y, z, p, and q, for the probability that Rich wins the million dollars if he chooses Continue.
(Note: Your formula might not contain all of these variables.)
(b) Find a similar algebraic formula for the probability that Rich wins
the million dollars if he chooses Give Up. (Again, your formula
might not contain all of the variables.)
(c) Use these results to find an algebraic inequality telling us under
what circumstances Rich should choose Give Up.
(d) Suppose all the values are the same as in Figure 3.11 except for z.
How high or low could z be so that Rich would still prefer to Give
Up? Explain intuitively why there are some values of z for which
Rich is better off choosing Continue.
(e) Suppose all the values are the same as in Figure 3.11 except for p
and q. Assume that since the jury is more likely to choose a “nice”
person who doesn’t vote Rudy off, we should have p > 0.5 > q. For
what values of the ratio p/q should Rich choose Give Up? Explain
intuitively why there are some values of p and q for which Rich is
better off choosing Continue.
4
■
Simultaneous-Move Games:
Discrete Strategies
Recall from Chapter 2 that games are said to have simultaneous moves
if players must move without knowledge of what their rivals have chosen to do. It is obviously so if players choose their actions at exactly
the same time. A game is also simultaneous when players choose their
actions in isolation, with no information about what other players have done or
will do, even if the choices are made at different hours of the clock. (For this reason, simultaneous-move games have imperfect information in the sense we defined in Chapter 2, Section 2.D.) This chapter focuses on games that have such
purely simultaneous interactions among players. We consider a variety of types
of simultaneous games, introduce a solution concept called Nash equilibrium
for these games, and study games with one equilibrium, many equilibria, or no
equilibrium at all.
Many familiar strategic situations can be described as simultaneous-move
games. The various producers of television sets, stereos, or automobiles make
decisions about product design and features without knowing what rival firms
are doing about their own products. Voters in U.S. elections simultaneously
cast their individual votes; no voter knows what the others have done when she
makes her own decision. The interaction between a soccer goalie and an opposing striker during a penalty kick requires both players to make their decisions
­simultaneously—the goalie cannot afford to wait until the ball has actually been
kicked to decide which way to go, because then it would be far too late.
When a player in a simultaneous-move game chooses her action, she obviously
does so without any knowledge of the choices made by other players. She also
cannot look ahead to how they will react to her choice, because they act simultaneously and do not know what she is choosing. Rather, each player must figure out what others are choosing to do at the same time that the others are
figuring out what she is choosing to do. This circularity makes the analysis
of simultaneous‑move games somewhat more intricate than the analysis of
sequential-move games, but the analysis is not difficult. In this chapter, we
will develop a simple concept of equilibrium for such games that has considerable explanatory and predictive power.
1 DEPICTING SIMULTANEOUS-MOVE GAMES
WITH DISCRETE STRATEGIES
In Chapters 2 and 3, we emphasized that a strategy is a complete plan of action. But in a purely simultaneous-move game, each player can have at most
one opportunity to act (although that action may have many component parts);
if a player had multiple opportunities to act, that would be an element of sequentiality. Therefore, there is no real distinction between strategy and action
in ­simultaneous-move games, and the terms are often used as synonyms in this
context. There is only one complication. A strategy can be a probabilistic choice
from the basic actions initially specified. For example, in sports, a player or team
may deliberately randomize its choice of action to keep the opponent guessing.
Such probabilistic strategies are called mixed strategies, and we consider them
in Chapter 7. In this chapter, we confine our attention to the basic initially specified actions, which are called pure strategies.
In many games, each player has available to her a finite number of discrete pure strategies—for example, Dribble, Pass, or Shoot in basketball. In
other games, each player’s pure strategy can be any number from a continuous
range—for example, the price charged for a product by a firm.1 This distinction
makes no difference to the general concept of equilibrium in simultaneous-move games, but the ideas are more easily conveyed with discrete strategies; solution of games with continuous strategies needs slightly more advanced tools.
Therefore, in this chapter, we restrict the analysis to the simpler case of discrete
pure strategies and then take up continuously variable strategies in Chapter 5.
Simultaneous-move games with discrete strategies are most often depicted with the use of a game table (also called a game matrix or payoff table).
The table is called the normal form or the strategic form of the game. Games
with any number of players can be illustrated by using a game table, but its
1
In fact, prices must be denominated in the minimum unit of coinage—for example, whole cents—
and can therefore take on only a finite number of discrete values. But this unit is usually so small
that it makes more sense to think of the price as a continuous variable.
                            COLUMN
                  Left      Middle     Right
        Top       3, 1      2, 3       10, 2
ROW     High      4, 5      3, 0       6, 4
        Low       2, 2      5, 4       12, 3
        Bottom    5, 6      4, 5       9, 7

FIGURE 4.1 Representing a Simultaneous-Move Game in a Table
dimensions must equal the number of players. For a two-player game, the table
is two-dimensional and appears similar to a spreadsheet. The row and column
headings of the table are the strategies available to the first and second players,
respectively. The size of the table, then, is determined by the numbers of strategies available to the players.2 Each cell within the table lists the payoffs to all
players that arise under the configuration of strategies that placed players into
that cell. Games with three players require three-dimensional tables; we consider
them later in this chapter.
We illustrate the concept of a payoff table for a simple game in Figure 4.1.
The game here has no special interpretation, so we can develop the concepts
without the distraction of a “story.” The players are named Row and Column.
Row has four choices (strategies or actions) labeled Top, High, Low, and Bottom;
Column has three choices labeled Left, Middle, and Right. Each selection of Row
and Column generates a potential outcome of the game. Payoffs associated with
each outcome are shown in the cell corresponding to that row and that column.
By convention, of the two payoff numbers, the first is Row’s payoff and the second is Column’s. For example, if Row chooses High and Column chooses Right,
the payoffs are 6 to Row and 4 to Column. For additional convenience, we show
everything pertaining to Row—player name, strategies, and payoffs—in black,
and everything pertaining to Column in blue.
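In code, such a table is naturally a mapping from strategy pairs to payoff pairs. The following minimal Python sketch is ours, not the book's; it simply encodes the payoffs of Figure 4.1:

    # Payoff table of Figure 4.1: (Row strategy, Column strategy) maps
    # to (Row payoff, Column payoff).
    payoffs = {
        ("Top", "Left"):    (3, 1), ("Top", "Middle"):    (2, 3), ("Top", "Right"):    (10, 2),
        ("High", "Left"):   (4, 5), ("High", "Middle"):   (3, 0), ("High", "Right"):   (6, 4),
        ("Low", "Left"):    (2, 2), ("Low", "Middle"):    (5, 4), ("Low", "Right"):    (12, 3),
        ("Bottom", "Left"): (5, 6), ("Bottom", "Middle"): (4, 5), ("Bottom", "Right"): (9, 7),
    }
    print(payoffs[("High", "Right")])   # (6, 4): 6 to Row, 4 to Column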
Next we consider a second example with more of a story attached. Figure
4.2 represents a very simplified version of a single play in American football. Offense attempts to move the ball forward to improve its chances of kicking a field
goal. It has four possible strategies: a run and one of three different-length passes
(short, medium, and long). Defense can adopt one of three strategies to try to
keep Offense at bay: a run defense, a pass defense, or a blitz of the quarterback.
2
If each firm can choose its price at any number of cents in a range that extends over a dollar, each
has 100 distinct discrete strategies, and the table becomes 100 by 100. That is surely too unwieldy to
analyze. Algebraic formulas with prices as continuous variables provide a simpler approach, not a
more complicated one as some readers might fear. We develop this “Algebra is our friend” method
in Chapter 5.
                                 DEFENSE
                        Run        Pass         Blitz
         Run            2, –2      5, –5        13, –13
OFFENSE  Short Pass     6, –6      5.6, –5.6    10.5, –10.5
         Medium Pass    6, –6      4.5, –4.5    1, –1
         Long Pass      10, –10    3, –3        –2, 2

FIGURE 4.2 A Single Play in American Football
Offense tries to gain yardage while Defense tries to prevent it from doing so. Suppose we have enough information about the underlying strengths of the two
teams to work out the probabilities of completing different plays and to determine the average gain in yardage that could be expected under each combination of strategies. For example, when Offense chooses the Medium Pass and
Defense counters with its Pass defense, we estimate Offense’s payoff to be a gain
of 4.5 yards, or +4.5.3 Defense’s “payoff” is a loss of 4.5 yards, or –4.5. The other
cells similarly show our estimates of each team’s gain or loss of yardage.
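As footnote 3 details, each entry in Figure 4.2 is an expected value over the possible results of the play. A quick check of the Medium Pass against Pass defense cell, in a sketch of ours using the footnote's numbers:

    # Expected yardage for Medium Pass vs. Pass defense (footnote 3):
    # 50% chance of a 15-yard completion, 40% incomplete (0 yards),
    # 10% interception losing 30 yards.
    outcomes = [(0.5, 15), (0.4, 0), (0.1, -30)]   # (probability, yards)
    print(sum(p * y for p, y in outcomes))          # 4.5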
Note that the payoffs sum to 0 in every cell of this table: when the offense
gains 5 yards, the defense loses 5 yards, and when the offense loses 2 yards, the
defense gains 2 yards. This pattern is quite common in sports contexts, where
the interests of the two sides are exactly the opposite of each other. As noted
in Chapter 2, we call this a zero-sum (or sometimes constant-sum) game. You
should remember that the definition of a zero-sum game is that the payoffs sum
to the same constant across cells, whether that number is 0, 6, or 1,000. (In Section 4.7, we describe a game where the two players’ payoffs sum to 100.) The key
feature of any zero-sum game is that one player’s loss is the other player’s gain.
2 NASH EQUILIBRIUM
To analyze simultaneous games, we need to consider how players choose their
actions. Return to the game table in Figure 4.1. Focus on one specific outcome—
3
Here is how the payoffs for this case were constructed. When Offense chooses the Medium Pass
and Defense counters with its Pass defense, our estimate is that with probability 50% the pass
will be completed for a gain of 15 yards, with probability 40% the pass will fall incomplete (0 yards),
and with probability 10% the pass will be intercepted with a loss of 30 yards; this makes an average
of 0.5 3 15 1 0.4 3 0 1 0.1 3 (230) 5 4.5 yards. The numbers in the table were constructed by a
small panel of expert neighbors and friends convened by Dixit on one fall Sunday afternoon. They
received a liquid consultancy fee.
namely, the one where Row chooses Low and Column chooses Middle; payoffs
there are 5 to Row and 4 to Column. Each player wants to pick an action that
yields her the highest payoff, and in this outcome each indeed makes such a
choice, given what her opponent chooses. Given that Row is choosing Low, can
Column do any better by choosing something other than Middle? No, because
Left would give her the payoff 2, and Right would give her 3, neither of which is
better than the 4 she gets from Middle. Thus, Middle is Column’s best response
to Row’s choice of Low. Conversely, given that Column is choosing Middle, can
Row do better by choosing something other than Low? Again no, because the
payoffs from switching to Top (2), High (3), or Bottom (4) would all be no better
than what Row gets with Low (5). Thus, Low is Row’s best response to Column’s
choice of Middle.
The two choices, Low for Row and Middle for Column, have the property
that each is the chooser’s best response to the other’s action. If they were making these choices, neither would want to switch to anything different on her
own. By the definition of a noncooperative game, the players are making their
choices independently; therefore such unilateral changes are all that each
player can contemplate. Because neither wants to make such a change, it is
natural to call this state of affairs an equilibrium. This is exactly the concept of
Nash equilibrium.
To state it a little more formally, a Nash equilibrium4 in a game is a list of
strategies, one for each player, such that no player can get a better payoff by
switching to some other strategy that is available to her while all the other players adhere to the strategies specified for them in the list.
A. Some Further Explanation of the Concept of Nash Equilibrium
To understand the concept of Nash equilibrium better, we take another look
at the game in Figure 4.1. Consider now a cell other than (Low, Middle)—say,
the one where Row chooses High and Column chooses Left. Can this be a Nash
equilibrium? No, because, if Column is choosing Left, Row does better to choose
Bottom and get the payoff 5 rather than to choose High, which gives her only 4.
Similarly, (Bottom, Left) is not a Nash equilibrium, because Column can do better by switching to Right, thereby improving her payoff from 6 to 7.
4
This concept is named for the mathematician and economist John Nash, who developed it in his
doctoral dissertation at Princeton in 1949. Nash also proposed a solution to cooperative games,
which we consider in Chapter 17. He shared the 1994 Nobel Prize in economics with two other game
theorists, Reinhard Selten and John Harsanyi; we will treat some aspects of their work in Chapters
8, 9, and 13. Sylvia Nasar’s biography of Nash, A Beautiful Mind (New York: Simon & Schuster, 1998),
was the (loose) basis for a movie starring Russell Crowe. Unfortunately, the movie’s attempt to explain the concept of Nash equilibrium fails. We explain this failure in Exercise S13 of this chapter
and in Exercise S14 of Chapter 7.
                            COLUMN
                  Left      Middle     Right
        Top       3, 1      2, 3       10, 2
ROW     High      4, 5      3, 0       6, 4
        Low       2, 2      5, 4       12, 3
        Bottom    5, 6      5, 5       9, 7

FIGURE 4.3 Variation on Game of Figure 4.1 with a Tie in Payoffs
The definition of Nash equilibrium does not require equilibrium choices to
be strictly better than other available choices. Figure 4.3 is the same as Figure
4.1 except that Row’s payoff from (Bottom, Middle) is changed to 5, the same as
that from (Low, Middle). It is still true that, given Column’s choice of Middle, Row
could not do any better than she does when choosing Low. So neither player has a
reason to change her action when the outcome is (Low, Middle), and that qualifies it for a Nash equilibrium.5
More important, a Nash equilibrium does not have to be jointly best for the
players. In Figure 4.1, the strategy pair (Bottom, Right) gives payoffs (9, 7), which
are better for both players than the (5, 4) of the Nash equilibrium. However,
playing independently, they cannot sustain (Bottom, Right). Given that Column
plays Right, Row would want to deviate from Bottom to Low and get 12 instead
of 9. Getting the jointly better payoffs of (9, 7) would require cooperative action
that made such “cheating” impossible. We examine this type of behavior later in
this chapter and in more detail in Chapter 10. For now, we merely point out the
fact that a Nash equilibrium may not be in the joint interests of the players.
To reinforce the concept of Nash equilibrium, look at the football game
of Figure 4.2. If Defense is choosing the Pass defense, then the best choice for
Offense is Short Pass (payoff of 5.6 versus 5, 4.5, or 3). Conversely, if Offense is
choosing the Short Pass, then Defense’s best choice is the Pass defense—it holds
Offense down to 5.6 yards, whereas the Run defense and the Blitz would be expected to concede 6 and 10.5 yards, respectively. (Remember that the entries in
each cell of a zero-sum game are the Row player’s payoffs; therefore the best choice
for the Column player is the one that yields the smallest number, not the largest.)
In this game, the strategy combination (Short Pass, Pass defense) is a Nash equilibrium, and the resulting payoff to Offense is 5.6 yards.
5
But note that (Bottom, Middle) with the payoffs of (5, 5) is not itself a Nash equilibrium. If Row was
choosing Bottom, Column’s own best choice would not be Middle; she could do better by choosing
Right. In fact, you can check all the other cells in the table to verify that none of them can be a Nash
equilibrium.
How does one find Nash equilibria in games? One can always check every
cell to see if the strategies that generate it satisfy the definition of a Nash equilibrium. Such a systematic analysis is foolproof, but tedious and unmanageable
except in simple games or with the use of a good computer program to check
cells for equilibria. Luckily, there are other methods, applicable to special types
of games, that not only find Nash equilibria more quickly when they apply, but
also give us a better understanding of the process of thinking by which beliefs
and then choices are formed. We develop such methods in later sections.
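That cell-by-cell check is exactly what such a computer program does. Here is a minimal Python sketch of the brute-force search (ours, not the book's), applied to the payoffs of Figure 4.1:

    # Brute-force search for Nash equilibria in the game of Figure 4.1.
    rows = ["Top", "High", "Low", "Bottom"]
    cols = ["Left", "Middle", "Right"]
    table = [[(3, 1), (2, 3), (10, 2)],    # Row's payoff listed first
             [(4, 5), (3, 0), (6, 4)],
             [(2, 2), (5, 4), (12, 3)],
             [(5, 6), (4, 5), (9, 7)]]

    def nash_equilibria(table):
        # A cell is a Nash equilibrium if neither player gains from a
        # unilateral switch: Row's payoff must be maximal within the
        # cell's column, and Column's maximal within the cell's row.
        eq = []
        for i, row in enumerate(table):
            for j, (r_pay, c_pay) in enumerate(row):
                row_best = all(r_pay >= table[k][j][0] for k in range(len(table)))
                col_best = all(c_pay >= cell[1] for cell in row)
                if row_best and col_best:
                    eq.append((rows[i], cols[j]))
        return eq

    print(nash_equilibria(table))   # [('Low', 'Middle')]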
B. Nash Equilibrium as a System of Beliefs and Choices
Before we proceed with further study and use of the Nash equilibrium concept,
we should try to clarify something that may have bothered some of you. We said
that, in a Nash equilibrium, each player chooses her “best response” to the other’s choice. But the two choices are made simultaneously. How can one respond
to something that has not yet happened, at least when one does not know what
has happened?
People play simultaneous-move games all the time and do make choices.
To do so, they must find a substitute for actual knowledge or observation of the
others’ actions. Players could make blind guesses and hope that they turn out
to be inspired ones, but luckily there are more systematic ways to try to figure
out what the others are doing. One method is experience and observation—if
the players play this game or similar games with similar players all the time,
they may develop a pretty good idea of what the others do. Then choices that
are not best will be unlikely to persist for long. Another method is the logical
process of thinking through the others’ thinking. You put yourself in the position of other players and think what they are thinking, which of course includes
their putting themselves in your position and thinking what you are thinking.
The logic seems circular, but there are several ways of breaking into the circle,
and we demonstrate these ways by using specific examples in the sections that
follow. Nash equilibrium can be thought of as a culmination of this process of
thinking about thinking, where each player has correctly figured out the others’
choice.
Whether by observation or logical deduction or some other method, you, the game
player, acquire some notion of what the others are choosing in simultaneous-move
games. It is not easy to find a word to describe the process or its outcome. It is not
anticipation, nor is it forecasting, because the others’ actions do not lie in the future
but occur simultaneously with your own. The word most frequently used by game
theorists is belief. This word is not perfect either, because it seems to connote more
confidence or certainty than is intended; in fact, in Chapter 7, we allow for the possibility that beliefs are held with some uncertainty. But for lack of a better word, it
will have to suffice.
This concept of belief also relates to our discussion of uncertainty in Chapter 2, Section 2.D. There we introduced the concept of strategic uncertainty.
Even when all the rules of a game—the strategies available to all players and the
payoffs for each as functions of the strategies of all—are known without any uncertainty external to the game, such as weather, each player may be uncertain
about what actions the others are taking at the same time. Similarly, if past actions are not observable, each player may be uncertain about what actions the
others took in the past. How can players choose in the face of this strategic uncertainty? They must form some subjective views or estimates about the others’
actions. That is exactly what the notion of belief captures.
Now think of Nash equilibrium in this light. We defined it as a configuration of strategies such that each player’s strategy is her best response to that of
the others. If she does not know the actual choices of the others but has beliefs
about them, in Nash equilibrium those beliefs would have to be correct—the
others’ actual actions should be just what you believe them to be. Thus, we can
define Nash equilibrium in an alternative and equivalent way: it is a set of strategies, one for each player, such that (1) each player has correct beliefs about the
strategies of the others and (2) the strategy of each is the best for herself, given
her beliefs about the strategies of the others.6
This way of thinking about Nash equilibrium has two advantages. First, the
concept of “best response” is no longer logically flawed. Each player is choosing
her best response, not to the as yet unobserved actions of the others, but only to
her own already formed beliefs about their actions. Second, in Chapter 7, where
we allow mixed strategies, the randomness in one player’s strategy may be better interpreted as uncertainty in the other players’ beliefs about this player’s action. For now, we proceed by using both interpretations of Nash equilibrium in
parallel.
You might think that formation of correct beliefs and calculation of best
responses is too daunting a task for mere humans. We discuss some criticisms
of this kind, as well as empirical and experimental evidence concerning Nash
equilibrium, in Chapter 5 for pure strategies and Chapter 7 for mixed strategies.
For now, we simply say that the proof of the pudding is in the eating. We develop
and illustrate the Nash equilibrium concept by applying it. We hope that seeing
it in use will prove a better way to understand its strengths and drawbacks than
would an abstract discussion at this point.
6
In this chapter we consider only Nash equilibria in pure strategies—namely, the ones initially
listed in the specification of the game, and not mixtures of two or more of them. Therefore, in such
an equilibrium, each player has certainty about the actions of the others; strategic uncertainty is removed. When we consider mixed strategy equilibria in Chapter 7, the strategic uncertainty for each
player will consist of the probabilities with which the various strategies are played in the other players’ equilibrium mixtures.
3 DOMINANCE
Some games have a special property that one strategy is uniformly better than
or worse than another. When this is the case, it provides one way in which the
search for Nash equilibrium and its interpretation can be simplified.
The well-known game of the prisoners’ dilemma illustrates this concept well.
Consider a story line of the type that appears regularly in the television program
Law and Order. Suppose that a Husband and Wife have been arrested under the
suspicion that they were conspirators in the murder of a young woman. Detectives Green and Lupo place the suspects in separate detention rooms and interrogate them one at a time. There is little concrete evidence linking the pair to the
murder, although there is some evidence that they were involved in kidnapping
the victim. The detectives explain to each suspect that they are both looking at
jail time for the kidnapping charge, probably 3 years, even if there is no confession from either of them. In addition, the Husband and Wife are told individually that the detectives “know” what happened and “know” how one had been
coerced by the other to participate in the crime; it is implied that jail time for a
solitary confessor will be significantly reduced if the whole story is committed
to paper. (In a scene common to many similar programs, a yellow legal pad and
a pencil are produced and placed on the table at this point.) Finally, they are told
that, if both confess, jail terms could be negotiated down but not as much as they
would be if there were one confession and one denial.
Both Husband and Wife are then players in a two-person, simultaneous-move
game in which each has to choose between confessing and not confessing to the
crime of murder. They both know that no confession leaves them each with a
3-year jail sentence for involvement with the kidnapping. They also know that, if
one of them confesses, he or she will get a short sentence of 1 year for cooperating
with the police, while the other will go to jail for a minimum of 25 years. If both
confess, they figure that they can negotiate for jail terms of 10 years each.
The choices and outcomes for this game are summarized by the game table in
Figure 4.4. The strategies Confess and Deny can also be called Defect and Cooperate to capture their roles in the relationship between the two players; thus Defect
                                   WIFE
                          Confess (Defect)   Deny (Cooperate)
HUSBAND  Confess (Defect)   10 yr, 10 yr        1 yr, 25 yr
         Deny (Cooperate)   25 yr, 1 yr         3 yr, 3 yr

FIGURE 4.4 Prisoners’ Dilemma
means to defect from any tacit arrangement with the spouse, and Cooperate
means to take the action that helps the spouse (not cooperate with the cops).
Payoffs here are the lengths of the jail sentences associated with each outcome, so low numbers are better for each player. In that sense, this example differs from those of most of the games that we analyze, in which large payoffs are
good rather than bad. We take this opportunity to alert you that “large is good” is
not always true. When payoff numbers indicate players’ rankings of outcomes,
people often use 1 for the best alternative and successively higher numbers for
successively worse ones. Also, in the table for a zero-sum game that shows only
one player’s bigger-is-better payoffs, smaller numbers are better for the other.
In the prisoners’ dilemma here, smaller numbers are better for both. Thus, if
you ever write a payoff table where large numbers are bad, you should alert the
reader by pointing it out clearly. And when reading someone else’s example, be
aware of the possibility.
Now consider the prisoners’ dilemma game in Figure 4.4 from the Husband’s perspective. He has to think about what the Wife will choose. Suppose
he believes that she will confess. Then his best choice is to confess; he gets a
sentence of only 10 years, while denial would have meant 25 years. What if he
believes the Wife will deny? Again, his own best choice is to confess; he gets only
1 year instead of the 3 that his own denial would bring in this case. Thus, in this
special game, Confess is better than Deny for the Husband regardless of his belief about the Wife’s choice. We say that, for the Husband, the strategy Confess is
a dominant strategy or that the strategy Deny is a dominated strategy. Equivalently, we could say that the strategy Confess dominates the strategy Deny or
that the strategy Deny is dominated by the strategy Confess.
If an action is clearly best for a player, no matter what the others might be
doing, then there is compelling reason to think that a rational player would
choose it. And if an action is clearly bad for a player, no matter what the others
might be doing, then there is equally compelling reason to think that a rational
player would avoid it. Therefore, dominance, when it exists, provides a compelling basis for the theory of solutions to simultaneous-move games.
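Dominance, too, can be verified mechanically. The small sketch below is ours, not the book's; it checks the Husband's strategies in Figure 4.4, where payoffs are jail years and smaller is better:

    # Husband's jail years in Figure 4.4, indexed by his move and then
    # the Wife's move.
    years = {"Confess": {"Confess": 10, "Deny": 1},
             "Deny":    {"Confess": 25, "Deny": 3}}

    def dominates(s1, s2, table):
        # s1 dominates s2 if it yields strictly fewer jail years against
        # every possible choice by the other player.
        return all(table[s1][w] < table[s2][w] for w in table[s1])

    print(dominates("Confess", "Deny", years))   # True: Confess is dominant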
A. Both Players Have Dominant Strategies
In the preceding prisoners’ dilemma, dominance should lead the Husband to
choose Confess. Exactly the same logic applies to the Wife’s choice. Her own
strategy Confess dominates her own strategy Deny; so she also should choose
Confess. Therefore, (Confess, Confess) is the outcome predicted for this game.
Note that it is a Nash equilibrium. (In fact it is the only Nash equilibrium.) Each
player is choosing his or her own best strategy.
In this special game, the best choice for each is independent of whether
their beliefs about the other are correct—this is the meaning of dominance—
but if each of them attributes to the other the same rationality as he or she practices, then both of them should be able to form correct beliefs. And the actual
action of each is the best response to the actual action of the other. Note that the
fact that Confess dominates Deny for both players is completely independent of
whether they are actually guilty, as in many episodes of Law and Order, or are
being framed, as happened in the movie L.A. Confidential. It only depends on
the pattern of payoffs dictated by the various sentence lengths.
Any game with the same general payoff pattern as that illustrated in Figure
4.4 is given the generic label “prisoners’ dilemma.” More specifically, a prisoners’ dilemma has three essential features. First, each player has two strategies: to
cooperate with one’s rival (deny any involvement in the crime, in our example)
or to defect from cooperation (confess to the crime, here). Second, each player
also has a dominant strategy (to confess or to defect from cooperation). Finally,
the dominance solution equilibrium is worse for both players than the nonequilibrium situation in which each plays the dominated strategy (to cooperate
with rivals).
Games of this type are particularly important in the study of game theory for
two reasons. The first is that the payoff structure associated with the prisoners’
dilemma arises in many quite varied strategic situations in economic, social,
political, and even biological competitions. This wide-ranging applicability makes
it an important game to study and to understand from a strategic standpoint. The
whole of Chapter 10 and sections in several other chapters deal with its study.
The second reason that prisoners’ dilemma games are integral to any discussion of games of strategy is the somewhat curious nature of the equilibrium
outcome achieved in such games. Both players choose their dominant strategies, but the resulting equilibrium outcome yields them payoffs that are lower
than they could have achieved if they had each chosen their dominated strategies. Thus, the equilibrium outcome in the prisoners’ dilemma is actually a bad
outcome for the players. There is another outcome that they both prefer to the
equilibrium outcome; the problem is how to guarantee that someone will not
cheat. This particular feature of the prisoners’ dilemma has received considerable attention from game theorists who have asked an obvious question: What
can players in a prisoners’ dilemma do to achieve the better outcome? We leave
this question to the reader momentarily, as we continue the discussion of simultaneous games, but return to it in detail in Chapter 10.
B. One Player Has a Dominant Strategy
When a rational player has a dominant strategy, she will use it, and the other
player can safely believe this. In the prisoners’ dilemma, it applies to both players. In some other games, it applies only to one of them. If you are playing in a
game in which you do not have a dominant strategy but your opponent does,
                                 FEDERAL RESERVE
                           Low interest rates    High interest rates
CONGRESS   Budget balance         3, 4                  1, 3
           Budget deficit         4, 1                  2, 2

FIGURE 4.5 Game of Fiscal and Monetary Policies
you can assume that she will use her dominant strategy, and so you can choose
your equilibrium action (your best response) accordingly.
We illustrate this case by using a game frequently played between Congress,
which is responsible for fiscal policy (taxes and government expenditures), and
the Federal Reserve (Fed), which is in charge of monetary policy (primarily, interest rates).7 In a version that simplifies the game to its essential features, the
Congress’s fiscal policy can have either a balanced budget or a deficit, and the
Fed can set interest rates either high or low. In reality, the game is not clearly
simultaneous, nor is who has the first move obvious if choices are sequential.
We consider the simultaneous-move version here, and in Chapter 6, we will
study how the outcomes differ for different rules of the game.
Almost everyone wants lower taxes. But there is no shortage of good claims
on government funds: defense, education, health care, and so on. There are also
various politically powerful special interest groups—including farmers and industries hurt by foreign competition—who want government subsidies. Therefore, Congress is under constant pressure both to lower taxes and to increase
spending. But such behavior runs the budget into deficit, which can lead to
higher inflation. The Fed’s primary task is to prevent inflation. However, it also
faces political pressure for lower interest rates from many important groups,
especially homeowners who benefit from lower mortgage rates. Lower interest
rates lead to higher demand for automobiles, housing, and capital investment
by firms, and all this demand can cause higher inflation. The Fed is generally
happy to lower interest rates, but only so long as inflation is not a threat. And
there is less threat of inflation when the government’s budget is in balance.
With all this in mind, we construct the payoff matrix for this game in Figure 4.5.
Congress likes best (payoff 4) the outcome with a budget deficit and low interest rates. This pleases all the immediate political constituents. It may entail
trouble for the future, but political time horizons are short. For the same reason,
Congress likes worst (payoff 1) the outcome with a balanced budget and high
7 Similar games are played in many other countries with central banks that have operational independence in the choice of monetary policy. Fiscal policies may be chosen by different political entities—the executive or the legislature—in different countries.
interest rates. Of the other two outcomes, it prefers (payoff 3) the outcome with
a balanced budget and low interest rates; this outcome pleases the important
home-owning middle classes, and with low interest rates, less expenditure is
needed to service the government debt, so the balanced budget still has room
for many other items of expenditure or for tax cuts.
The Fed likes worst (payoff 1) the outcome with a budget deficit and low interest rates, because this combination is the most inflationary. It likes best (payoff 4)
the outcome with a balanced budget and low interest rates, because this combination can sustain a high level of economic activity without much risk of inflation. Comparing the other two outcomes with high interest rates, the Fed prefers
the one with a balanced budget because it reduces the risk of inflation.
We look now for dominant strategies in this game. The Fed does better by
choosing low interest rates if it believes that Congress is opting for a balanced
budget (Fed’s payoff 4 rather than 3), but it does better choosing high interest
rates if it believes that Congress is choosing to run a budget deficit (Fed’s payoff
2 rather than 1). The Fed, then, does not have a dominant strategy. But Congress
does. If Congress believes that the Fed is choosing low interest rates, it does better for itself by choosing a budget deficit rather than a balanced budget (Congress’s payoff 4 instead of 3). If Congress believes that the Fed is choosing high
interest rates, again it does better for itself by choosing a budget deficit rather
than a balanced budget (Congress’s payoff 2 instead of 1). Choosing to run a
budget deficit is then Congress’s dominant strategy.
The choice for Congress is now clear. No matter what it believes the Fed is
doing, Congress will choose to run a budget deficit. The Fed can now take this
choice into account when making its own decision. The Fed should believe
that Congress will choose its dominant strategy (budget deficit) and therefore
choose the best strategy for itself, given this belief. That means that the Fed
should choose high interest rates.
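The two-step reasoning just described is easy to mechanize. Here is a small Python sketch (our own encoding of Figure 4.5, with ranking payoffs in which 4 is best): it first finds Congress's dominant strategy and then computes the Fed's best response to it.

# Solving the Congress-Fed game: one player has a dominant strategy.
payoffs = {  # payoffs[(congress, fed)] = (Congress's rank, Fed's rank); 4 = best
    ("Balance", "Low"): (3, 4), ("Balance", "High"): (1, 3),
    ("Deficit", "Low"): (4, 1), ("Deficit", "High"): (2, 2),
}
congress_moves, fed_moves = ["Balance", "Deficit"], ["Low", "High"]

def is_dominant(move):
    """True if `move` beats Congress's other strategy against every Fed choice."""
    return all(payoffs[(move, f)][0] > payoffs[(other, f)][0]
               for other in congress_moves if other != move
               for f in fed_moves)

dominant = [c for c in congress_moves if is_dominant(c)]
print(dominant)  # ['Deficit']

# Knowing Congress's choice, the Fed picks its best response to it.
fed_choice = max(fed_moves, key=lambda f: payoffs[(dominant[0], f)][1])
print(fed_choice)  # 'High' -- the equilibrium is (Deficit, High), payoffs (2, 2)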
In this outcome, each side gets payoff 2. But an inspection of Figure 4.5
shows that, just as in the prisoners’ dilemma, there is another outcome (namely,
a balanced budget and low interest rates) that can give both players higher
payoffs (namely, 3 for Congress and 4 for the Fed). Why is that outcome not
achievable as an equilibrium? The problem is that Congress would be tempted
to deviate from its stated strategy and sneakily run a budget deficit. The Fed,
knowing this temptation and that it would then get its worst outcome (payoff
1), deviates also to its high interest rate strategy. In Chapters 6 and 9, we consider how the two sides can get around this difficulty to achieve their mutually
preferred outcome. But we should note that, in most countries and at many
times, the two policy authorities are indeed stuck in the bad outcome; the fiscal policy is too loose, and the monetary policy has to be tightened to keep
inflation down.
C. Successive Elimination of Dominated Strategies
The games considered so far have had only two pure strategies available to each
player. In such games, if one strategy is dominant, the other is dominated; so
choosing the dominant strategy is equivalent to eliminating the dominated one.
In larger games, some of a player’s strategies may be dominated even though
no single strategy dominates all of the others. If players find themselves in a
game of this type, they may be able to reach an equilibrium by removing dominated strategies from consideration as possible choices. Removing dominated
strategies reduces the size of the game, and then the “new” game may have another dominated strategy for the same player or for her opponent that can also
be ­removed. Or the “new” game may even have a dominant strategy for one of
the players. Successive or iterated elimination of dominated strategies uses
this process of removal of dominated strategies and reduction in the size of a
game until no further reductions can be made. If this process ends in a unique
outcome, then the game is said to be dominance solvable; that outcome is the
Nash equilibrium of the game, and the strategies that yield it are the equilibrium
strategies for each player.
We can use the game of Figure 4.1 to provide an example of this process.
Consider first Row’s strategies. If any one of Row’s strategies always provides
worse payoffs for Row than another of her strategies, then that strategy is
dominated and can be eliminated from consideration for Row’s equilibrium
choice. Here, the only dominated strategy for Row is High, which is dominated
by Bottom; if Column plays Left, Row gets 5 from Bottom and only 4 from
High; if Column plays Middle, Row gets 4 from Bottom and only 3 from High;
and, if Column plays Right, Row gets 9 from Bottom and only 6 from High. So
we can eliminate High. We now turn to Column’s choices to see if any of them
can be eliminated. We find that Column’s Left is now dominated by Right (with
similar reasoning, 1 < 2, 2 < 3, and 6 < 7). Note that we could not say this before Row's High was eliminated; against Row's High, Column would get 5 from
Left but only 4 from Right. Thus, the first step of eliminating Row’s High makes
possible the second step of eliminating Column’s Left. Then, within the remaining set of strategies (Top, Low, and Bottom for Row, and Middle and Right
for Column), Row’s Top and Bottom are both dominated by his Low. When
Row is left with only Low, Column chooses his best response—namely, Middle.
The game is thus dominance solvable, and the outcome is (Low, Middle)
with payoffs (5, 4). We identified this outcome as a Nash equilibrium when we
first illustrated that concept by using this game. Now we see in better detail the
thought process of the players that leads to the formation of correct beliefs. A
rational Row will not choose High. A rational Column will recognize this, and
thinking about how her various strategies perform for her against Row’s remaining
                       COLIN
                  Left        Right
ROWENA   Up       0, 0        1, 1
         Down     1, 1        1, 1

FIGURE 4.6 Elimination of Weakly Dominated Strategies
strategies, will not choose Left. In turn, Row will recognize this, and therefore will
not choose either Top or Bottom. Finally, Column will see through all this, and
choose Middle.
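The elimination procedure just traced by hand lends itself naturally to a short program. Below is a Python sketch (our own encoding of the payoffs of Figure 4.1, reproduced later as Figure 4.7) that repeatedly deletes strictly dominated strategies until none remains; it ends with exactly the cell (Low, Middle) found above.

# Iterated elimination of strictly dominated strategies.
# P[(row_move, col_move)] = (Row's payoff, Column's payoff); higher is better.
row_strats = ["Top", "High", "Low", "Bottom"]
col_strats = ["Left", "Middle", "Right"]
P = {("Top", "Left"): (3, 1), ("Top", "Middle"): (2, 3), ("Top", "Right"): (10, 2),
     ("High", "Left"): (4, 5), ("High", "Middle"): (3, 0), ("High", "Right"): (6, 4),
     ("Low", "Left"): (2, 2), ("Low", "Middle"): (5, 4), ("Low", "Right"): (12, 3),
     ("Bottom", "Left"): (5, 6), ("Bottom", "Middle"): (4, 5), ("Bottom", "Right"): (9, 7)}

def eliminate(rows, cols):
    """Repeatedly drop any strictly dominated strategy until none remains."""
    changed = True
    while changed:
        changed = False
        for s in rows[:]:  # is some other row strategy strictly better everywhere?
            if any(all(P[(t, c)][0] > P[(s, c)][0] for c in cols)
                   for t in rows if t != s):
                rows.remove(s)
                changed = True
        for s in cols[:]:  # same test for Column's strategies
            if any(all(P[(r, t)][1] > P[(r, s)][1] for r in rows)
                   for t in cols if t != s):
                cols.remove(s)
                changed = True
    return rows, cols

print(eliminate(row_strats, col_strats))  # (['Low'], ['Middle'])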
Other games may not be dominance solvable, or successive elimination
of dominated strategies may not yield a unique outcome. Even in such cases,
some elimination may reduce the size of the game and make it easier to solve by
using one or more of the techniques described in the following sections. Thus
eliminating dominated strategies can be a useful step toward solving a large
simultaneous-play game, even when their elimination does not completely solve the game.
Thus far in our consideration of iterated elimination of dominated strategies, all the payoff comparisons have been unambiguous. What if there are some
ties? Consider the variation on the preceding game that is shown in Figure 4.3.
In that version of the game, High (for Row) and Left (for Column) also are eliminated. And, at the next step, Low still dominates Top. But the dominance of Low
over Bottom is now less clear-cut. The two strategies give Row equal payoffs
when played against Column’s Middle, although Low does give Row a higher
payoff than Bottom when played against Column’s Right. We say that, from
Row’s perspective at this point, Low weakly dominates Bottom. In contrast, Low
strictly dominates Top, because it gives strictly higher payoffs than does Top
when played against both of Column’s strategies, Middle and Right, under consideration at this point.
And now, a word of warning. Successive elimination of weakly dominated
strategies can get rid of some Nash equilibria. Consider the game illustrated in
Figure 4.6, where we introduce Rowena as the row player and Colin as the column player.8 For Rowena, Up is weakly dominated by Down; if Colin plays Left,
then Rowena gets a better payoff by playing Down than by playing Up, and,
if Colin plays Right, then Rowena gets the same payoff from her two strategies.
8 We use these names in the hope that they will aid you in remembering which player chooses the row and which chooses the column. We acknowledge Robert Aumann, who shared the Nobel Prize with Thomas Schelling in 2005 (and whose ideas will be prominent in Chapter 9), for inventing this clever naming idea.
Similarly, for Colin, Right weakly dominates Left. Dominance solvability then
tells us that (Down, Right) is a Nash equilibrium. That is true, but (Down, Left)
and (Up, Right) also are Nash equilibria. Consider (Down, Left). When Rowena
is playing Down, Colin cannot improve his payoff by switching to Right, and,
when Colin is playing Left, Rowena’s best response is clearly to play Down. A
similar reasoning verifies that (Up, Right) also is a Nash equilibrium.
Therefore, if you use weak dominance to eliminate some strategies, it is a
good idea to use other methods (such as the one described in the next section)
to see if you have missed any other equilibria. The iterated dominance solution seems to be a reasonable outcome to predict as the likely Nash equilibrium of this simultaneous-play game, but it is also important to consider the
significance of multiple equilibria as well as of the other equilibria themselves.
We will address these issues in later chapters, taking up a discussion of multiple equilibria in Chapter 5 and the interconnections between sequential- and
simultaneous-move games in Chapter 6.
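One easy safeguard is to enumerate every cell directly and test the Nash condition. The Python sketch below (our encoding of Figure 4.6) does this and recovers all three equilibria, including the two that elimination of weakly dominated strategies discards.

# Enumerate every pure-strategy Nash equilibrium of Figure 4.6.
P = {("Up", "Left"): (0, 0), ("Up", "Right"): (1, 1),
     ("Down", "Left"): (1, 1), ("Down", "Right"): (1, 1)}
rows, cols = ["Up", "Down"], ["Left", "Right"]

def is_nash(r, c):
    """Neither player can strictly gain by deviating unilaterally."""
    row_ok = all(P[(r, c)][0] >= P[(r2, c)][0] for r2 in rows)
    col_ok = all(P[(r, c)][1] >= P[(r, c2)][1] for c2 in cols)
    return row_ok and col_ok

print([(r, c) for r in rows for c in cols if is_nash(r, c)])
# [('Up', 'Right'), ('Down', 'Left'), ('Down', 'Right')]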
4 BEST-RESPONSE ANALYSIS
Many simultaneous-move games have no dominant strategies and no dominated strategies. Others may have one or several dominated strategies, but iterated elimination of dominated strategies will not yield a unique outcome. In
such cases, we need a next step in the process of finding a solution to the game.
We are still looking for a Nash equilibrium in which every player does the best
she can, given the actions of the other player(s), but we must now rely on subtler
strategic thinking than the simple elimination of dominated strategies ­requires.
Here we develop another systematic method for finding Nash equilibria that
will prove very useful in later analysis. We begin without imposing a requirement of correctness of beliefs. We take each player’s perspective in turn and ask
the following question: For each of the choices that the other player(s) might be
making, what is the best choice for this player? Thus, we find the best responses
of each player to all available strategies of the others. In mathematical terms, we
find each player’s best-response strategy depending on, or as a function of, the
other players’ available strategies.
Let’s return to the game played by Row and Column and reproduce it as Figure
4.7. We’ll first consider Row’s responses. If Column chooses Left, Row’s best response is Bottom, yielding 5. We show this best response by circling that payoff in
the game table. If Column chooses Middle, Row’s best response is Low (also yielding 5). And if Column chooses Right, Row’s best choice is again Low (now yielding 12). Again, we show Row’s best choices by circling the appropriate payoffs.
                         COLUMN
               Left      Middle     Right
ROW   Top      3, 1      2, 3       10, 2
      High     4, 5      3, 0       6, 4
      Low      2, 2      5, 4       12, 3
      Bottom   5, 6      4, 5       9, 7

FIGURE 4.7 Best-Response Analysis
Similarly, Column’s best responses are shown by circling her payoffs: 3 (Middle as
best response to Row’s Top), 5 (Left to Row’s High), 4 (Middle to Row’s Low), and 7
(Right to Row’s Bottom).9 We see that one cell—namely, (Low, Middle)—has both
its payoffs circled. Therefore, the strategies Low for Row and Middle for Column
are simultaneously best responses to each other. We have found the Nash equilibrium of this game. (Again.)
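The circling procedure translates directly into code. Here is a Python sketch (our encoding of Figure 4.7; ties among best responses are ignored for brevity, and there are none in this game) that computes each player's best response to every strategy of the other and then looks for a cell where the two point at each other.

# Best-response analysis of Figure 4.7.
rows = ["Top", "High", "Low", "Bottom"]
cols = ["Left", "Middle", "Right"]
P = {("Top", "Left"): (3, 1), ("Top", "Middle"): (2, 3), ("Top", "Right"): (10, 2),
     ("High", "Left"): (4, 5), ("High", "Middle"): (3, 0), ("High", "Right"): (6, 4),
     ("Low", "Left"): (2, 2), ("Low", "Middle"): (5, 4), ("Low", "Right"): (12, 3),
     ("Bottom", "Left"): (5, 6), ("Bottom", "Middle"): (4, 5), ("Bottom", "Right"): (9, 7)}

# Row's best response to each column strategy, and vice versa (the "circles").
row_br = {c: max(rows, key=lambda r: P[(r, c)][0]) for c in cols}
col_br = {r: max(cols, key=lambda c: P[(r, c)][1]) for r in rows}
print(row_br)  # {'Left': 'Bottom', 'Middle': 'Low', 'Right': 'Low'}
print(col_br)  # {'Top': 'Middle', 'High': 'Left', 'Low': 'Middle', 'Bottom': 'Right'}

# A cell is a Nash equilibrium when the best responses point at each other.
nash = [(r, c) for r in rows for c in cols if row_br[c] == r and col_br[r] == c]
print(nash)  # [('Low', 'Middle')]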
Best-response analysis is a comprehensive way of locating all possible
Nash equilibria of a game. You should improve your understanding of it by trying it out on the other games that have been used in this chapter. The cases of
dominance are of particular interest. If Row has a dominant strategy, that same
strategy is her best response to all of Column’s strategies; therefore her best responses are all lined up horizontally in the same row. Similarly, if Column has a
dominant strategy, her best responses are all lined up vertically in the same column. You should see for yourself how the Nash equilibria in the Husband–Wife
prisoners’ dilemma shown in Figure 4.4 and the Congress–Federal Reserve game
depicted in Figure 4.5 emerge from such an analysis.
There will be some games for which best-response analysis does not find
a Nash equilibrium, just as dominance solvability sometimes fails. But in
this case we can say something more specific than can be said when dominance fails. When best-response analysis of a discrete strategy game does not
9 Alternatively and equivalently, one could mark in some way the choices that are not made. For example, in Figure 4.7, Row will not choose Top, High, or Bottom as responses to Column's Right; one could show this by drawing slashes through Row's payoffs in these cases, respectively, 10, 6, and 9. When this is done for all strategies of both players, (Low, Middle) has both of its payoffs unslashed; it is then the Nash equilibrium of the game. The alternatives of circling choices that are made and slashing choices that are not made stand in a conceptually similar relation to each other, as do the alternatives of showing chosen branches by arrows and pruning unchosen branches for sequential-move games. We prefer the first alternative in each case, because the resulting picture looks cleaner and tells the story better.
find a Nash equilibrium, then the game has no equilibrium in pure strategies.
We address games of this type in Section 7 of this chapter. In Chapter 5, we will
extend best-response analysis to games where the players' strategies are continuous variables—for example, prices or advertising expenditures. Moreover,
we will construct best-response curves to help us find Nash equilibria, and we
will see that such games are less likely—by virtue of the continuity of strategy
choices—to have no equilibrium.
5 THREE PLAYERS
So far, we have analyzed only games between two players. All of the methods of
analysis that have been discussed, however, can be used to find the pure‑strategy
Nash equilibria of any simultaneous-play game among any number of players.
When a game is played by more than two players, each of whom has a relatively
small number of pure strategies, the analysis can be done with a game table, as
we did in the first four sections of this chapter.
In Chapter 3, we described a game among three players, each of whom had
two pure strategies. The three players, Emily, Nina, and Talia, had to choose
whether to contribute toward the creation of a flower garden for their small
street. We assumed that the garden was no better when all three contributed
than when only two contributed and that a garden with just one contributor was so sparse that it was as bad as no garden at all. Now let us suppose instead that the three players make their choices simultaneously and that there is
a somewhat richer variety of possible outcomes and payoffs. In particular, the
size and splendor of the garden will now differ according to the exact number
of contributors; three contributors will produce the largest and best garden, two
contributors will produce a medium garden, and one contributor will produce a
small garden.
Suppose Emily is contemplating the possible outcomes of the street‑garden
game. There are six possible choices for her to consider. Emily can choose either to contribute or not to contribute when both Nina and Talia contribute
or when neither of them contributes or when just one of them contributes.
From her perspective, the best possible outcome, with a rating of 6, would be
to take advantage of her good-hearted neighbors and to have both Nina and
Talia contribute while she does not. Emily could then enjoy a medium-sized
garden without putting up her own hard-earned cash. If both of the others
contribute and Emily also contributes, she gets to enjoy a large, very splendid
garden but at the cost of her own contribution; she rates this outcome second
best, or 5.
At the other end of the spectrum are the outcomes that arise when neither
Nina nor Talia contributes to the garden. If that is the case, Emily would again
prefer not to contribute, because she would foot the bill for a public garden that
everyone could enjoy; she would rather have the flowers in her own yard. Thus,
when neither of the other players is contributing, Emily ranks the outcome in
which she contributes as a 1 and the outcome in which she does not as a 2.
In between these cases are the situations in which either Nina or Talia contributes to the flower garden but not both of them. When one of them contributes, Emily knows that she can enjoy a small garden without contributing; she
also feels that the cost of her contribution outweighs the increase in benefit that
she gets from being able to increase the size of the garden. Thus, she ranks the
outcome in which she does not contribute but still enjoys the small garden as a
4 and the outcome in which she does contribute, thereby providing a medium
garden, as a 3. Because Nina and Talia have the same views as Emily on the costs
and benefits of contributions and garden size, each of them orders the different outcomes in the same way—the worst outcome being the one in which each
contributes and the other two do not, and so on.
If all three women decide whether to contribute to the garden without knowing what their neighbors will do, we have a three-person simultaneous‑move
game. To find the Nash equilibrium of the game, we then need a game table. For
a three-player game, the table must be three-dimensional, and the third player’s
strategies must correspond to the new dimension. The easiest way to add a third
dimension to a two-dimensional game table is to add pages. The first page of the
table shows payoffs for the third player’s first strategy, the second page shows
payoffs for the third player’s second strategy, and so on.
We show the three-dimensional table for the street-garden game in Figure
4.8. It has two rows for Emily’s two strategies, two columns for Nina’s two strategies, and two pages for Talia’s two strategies. We show the pages side by side so
that you can see everything at the same time. In each cell, payoffs are listed for
TALIA chooses:        Contribute                           Don't Contribute

                          NINA                                   NINA
                  Contribute    Don't                    Contribute    Don't
EMILY  Contribute   5, 5, 5    3, 6, 3       Contribute    3, 3, 6    1, 4, 4
       Don't        6, 3, 3    4, 4, 1       Don't         4, 1, 4    2, 2, 2

FIGURE 4.8 Street–Garden Game
the row player first, the column player second, and the page player third; in this
case, the order is Emily, Nina, Talia.
Our first test should be to determine whether there are dominant strategies
for any of the players. In one-page game tables, we found this test to be simple;
we just compared the outcomes associated with one of a player’s strategies with
the outcomes associated with another of her strategies. In practice this comparison required, for the row player, a simple check within columns of the single page of the table and vice versa for the column player. Here we must check
in both pages of the table to determine whether any player has a dominant
strategy.
For Emily, we compare the two rows of both pages of the table and note
that, when Talia contributes, Emily has a dominant strategy not to contribute,
and, when Talia does not contribute, Emily also has a dominant strategy not
to contribute. Thus, the best thing for Emily to do, regardless of what either of
the other players does, is not to contribute. Similarly, we see that Nina’s dominant strategy—in both pages of the table—is not to contribute. When we check
for a dominant strategy for Talia, we have to be a bit more careful. We must
compare outcomes that keep Emily’s and Nina’s behavior constant, checking
Talia’s payoffs from choosing Contribute versus Don’t Contribute. That is, we
compare cells across pages of the table—the top-left cell in the first page (on
the left) with the top-left cell in the second page (on the right), and so on. As
for the first two players, this process indicates that Talia also has a dominant
strategy not to contribute.
Each player in this game has a dominant strategy, which must therefore be
her equilibrium pure strategy. The Nash equilibrium of the street-garden game
entails all three players choosing not to contribute to the street garden and getting their second-worst payoffs; the garden is not planted, but no one has to
contribute either.
Notice that this game is yet another example of a prisoners’ dilemma.
There is a unique Nash equilibrium in which all players receive a payoff of 2.
Yet, there is another outcome in the game—in which all three neighbors contribute to the garden—that for all three players yields higher payoffs of 5. Even
though it would be beneficial to each of them for all to pitch in to build the
garden, no one has the individual incentive to do so. As a result, gardens of
this type are either not planted at all or paid for through tax dollars—because
the town government can require its citizens to pay such taxes. In Chapter 11,
we will encounter more such dilemmas of collective action and study some
methods for resolving them.
The Nash equilibrium of the game can also be found using best-response
analysis, as shown in Figure 4.9. Because each player has Don’t Contribute as
her dominant strategy, all of Emily’s best responses are on her Don’t Contribute
row, all of Nina’s best responses are on her Don’t Contribute column, and all of
TALIA chooses:        Contribute                           Don't Contribute

                          NINA                                   NINA
                  Contribute    Don't                    Contribute     Don't
EMILY  Contribute   5, 5, 5    3, 6*, 3      Contribute    3, 3, 6*    1, 4*, 4*
       Don't        6*, 3, 3   4*, 4*, 1     Don't         4*, 1, 4*   2*, 2*, 2*

FIGURE 4.9 Best-Response Analysis in the Street–Garden Game (best-response payoffs, circled in the original, are marked here with *)
Talia’s best responses are on her Don’t Contribute page. The cell at the bottom
right has all three best responses; therefore, it gives us the Nash equilibrium.
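Nothing in the best-response test is special to two players. The following Python sketch (the rank table encodes the verbal payoff description above; all names are ours) checks every strategy profile of the three-player street-garden game and confirms that (Don't, Don't, Don't) is its only pure-strategy Nash equilibrium.

# Pure-strategy Nash equilibria of the three-player street-garden game.
from itertools import product

C, D = "Contribute", "Don't"
moves = [C, D]

def rank(me, others_contributing):
    """A player's ranking (6 = best) given her move and how many OTHERS contribute."""
    table = {(D, 2): 6, (C, 2): 5, (D, 1): 4, (C, 1): 3, (D, 0): 2, (C, 0): 1}
    return table[(me, others_contributing)]

def payoffs(e, n, t):
    """Payoffs listed in the order (Emily, Nina, Talia)."""
    return (rank(e, (n == C) + (t == C)),
            rank(n, (e == C) + (t == C)),
            rank(t, (e == C) + (n == C)))

def is_nash(profile):
    """No single player can strictly improve by switching her own move."""
    for i in range(3):
        for alt in moves:
            dev = list(profile)
            dev[i] = alt
            if payoffs(*dev)[i] > payoffs(*profile)[i]:
                return False
    return True

print([p for p in product(moves, repeat=3) if is_nash(p)])
# [("Don't", "Don't", "Don't")]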
6 MULTIPLE EQUILIBRIA IN PURE STRATEGIES
Each of the games considered in preceding sections has had a unique pure‑strategy
Nash equilibrium. In general, however, games need not have unique Nash equilibria. We illustrate this result by using a class of games that have many applications.
As a group, they may be labeled coordination games. The players in such games
have some (but not always completely) common interests. But, because they act
independently (by virtue of the nature of noncooperative games), the coordination
of actions needed to achieve a jointly preferred outcome is problematic.
A. Will Harry Meet Sally? Pure Coordination
To illustrate this idea, picture two undergraduates, Harry and Sally, who meet
in their college library.10 They are attracted to each other and would like to continue the conversation, but they have to go off to their separate classes. They
arrange to meet for coffee after the classes are over at 4:30. Sitting separately in
class, each realizes that in the excitement they forgot to fix the place to meet.
There are two possible choices: Starbucks and Local Latte. Unfortunately, these
locations are on opposite sides of the large campus, so it is not possible to try
both. And Harry and Sally have not exchanged cell-phone numbers, so they
can’t send messages. What should each do?
Figure 4.10 illustrates this situation as a game and shows the payoff matrix.
Each player has two choices: Starbucks and Local Latte. The payoffs for each are
1 if they meet and 0 if they do not. Best-response analysis quickly reveals that the
10
The names come from the 1989 movie When Harry Met Sally, starring Meg Ryan and Billy Crystal,
with its classic line “I’ll have what she’s having.”
                          SALLY
                   Starbucks    Local Latte
HARRY  Starbucks      1, 1         0, 0
       Local Latte    0, 0         1, 1

FIGURE 4.10 Pure Coordination
game has two Nash equilibria, one where both choose Starbucks and the other
where both choose Local Latte. It is important for both that they achieve one of
the equilibria, but which one is immaterial because the two yield equal payoffs.
All that matters is that they coordinate on the same action; it does not matter
which action. That is why the game is said to be one of pure coordination.
But will they coordinate successfully? Or will they end up in different cafés,
each thinking that the other has let him or her down? Alas, that risk exists. Harry
might think that Sally will go to Starbucks because she said something about the
class to which she was going and that class is on the Starbucks side of the campus. But Sally may have the opposite belief about what Harry will do. When there
are multiple Nash equilibria, if the players are to select one successfully, they need
some way to coordinate their beliefs or expectations about each other’s actions.
The situation is similar to that of the heroes of the “Which tire?” game in
Chapter 1, where we labeled the coordination device a focal point. In the present context, one of the two cafés may be generally known as the student
hangout. But it is not enough that Harry knows this to be the case. He must
know that Sally knows, and that she knows that he knows, and so on. In other
words, their expectations must converge on the focal point. Otherwise Harry
might be doubtful about where Sally will go because he does not know what she
is thinking about where he will go, and similar doubts may arise at the third or
fourth or higher level of thinking about thinking.11
When one of us (Dixit) posed this question to students in his class, the freshmen generally chose Starbucks and the juniors and seniors generally chose the
local café in the campus student center. These responses are understandable—
freshmen, who have not been on campus long, focus their expectations on a
11 Thomas Schelling presented the classic treatment of coordination games and developed the concept of a focal point in his book The Strategy of Conflict (Cambridge: Harvard University Press, 1960); see pp. 54–58, 89–118. His explanation of focal points included the results garnered when he posed several questions to his students and colleagues. The best-remembered of these is "Suppose you have arranged to meet someone in New York City on a particular day, but have failed to arrange a specific place or time, and have no way of communicating with the other person. Where will you go and at what time?" Fifty years ago when the question was first posed, the clock at Grand Central Station was the usual focal place; now it might be the stairs at TKTS in Times Square. The focal time remains twelve noon.
nationwide chain that is known to everyone, whereas juniors and seniors know the
local hangouts, which they now regard as superior, and they expect their peers
to believe likewise.
If one café had an orange decor and the other a crimson decor, then in
Princeton the former may serve as a focal point because orange is the Princeton
color, whereas at Harvard crimson may be a focal point for the same reason. If
one person is a Princeton student and the other a Harvard student, they may
fail to meet at all, either because each thinks that his or her color “should” get
priority or because each thinks that the other will be inflexible and so tries to accommodate him or her. More generally, whether players in coordination games
can find a focal point depends on their having some commonly known point of
contact, whether historical, cultural, or linguistic.
B. Will Harry Meet Sally? And Where? Assurance
Now let’s change the game payoffs a little. The behavior of juniors and seniors
suggests that our pair may not be quite indifferent about which café they both
choose. The coffee may be better at one or the ambiance better at one. Or they
may want to choose the one that is not the general student hangout, to avoid
the risk of running into former boyfriends or girlfriends. Suppose they both prefer Local Latte; so the payoff of each is 2 when they meet there versus 1 when
they meet at Starbucks. The new payoff matrix is shown in Figure 4.11.
Again, there are two Nash equilibria. But in this version of the game, each
prefers the equilibrium where both choose Local Latte. Unfortunately, their
mere liking of that outcome is not guaranteed to bring it about. First of all (and
as always in our analysis), the payoffs have to be common knowledge—both
have to know the entire payoff matrix, both have to know that both know, and so
on. Such detailed knowledge about the game can arise if the two discussed and
agreed on the relative merits of the two cafés but simply forgot to decide definitely to meet at Local Latte. Even then, Harry might think that Sally has some
other reason for choosing Starbucks, or he may think that she thinks that he
does, and so on. Without genuine convergence of expectations about actions,
they may choose the worse equilibrium or, worse still, they may fail to coordinate actions and get 0 each.
                          SALLY
                   Starbucks    Local Latte
HARRY  Starbucks      1, 1         0, 0
       Local Latte    0, 0         2, 2

FIGURE 4.11 Assurance
To repeat, players in the game illustrated in Figure 4.11 can get the preferred
equilibrium outcome only if each has enough certainty or assurance that the
other is choosing the appropriate action. For this reason, such games are called
assurance games.12
In many real-life situations of this kind, such assurance is easily obtained,
given even a small amount of communication between the players. Their interests are perfectly aligned; if one of them says to the other, “I am going to Local
Latte,” the other has no reason to doubt the truth of this statement and will follow to get the mutually preferred outcome. That is why we had to construct the
story with the two students isolated in different classes with no means of communication. If the players’ interests conflict, truthful communication becomes
more problematic. We examine this problem further when we consider strategic
manipulation of information in games in Chapter 8.
In larger groups, communication can be achieved by scheduling meetings
or by making announcements. These devices work only if everyone knows that
everyone else is paying attention to them, because successful coordination requires the desired equilibrium to be a focal point. The players’ expectations must
converge on it; everyone should know that everyone knows that . . . everyone is
choosing it. Many social institutions and arrangements play this role. Meetings
where the participants sit in a circle facing inward ensure that everyone sees everyone else paying attention. Advertisements during the Super Bowl, especially
when they are proclaimed in advance as major attractions, assure each viewer
that many others are viewing them also. That makes such ads especially attractive to companies making products that are more desirable for any one buyer
when many others are buying them, too; such products include those produced
by the computer, telecommunication, and Internet industries.13
C. Will Harry Meet Sally? And Where? Battle of the Sexes
Now let’s introduce another complication to the café-choice game. Both players
want to meet but prefer different cafés. So Harry might get a payoff of 2 and Sally
12 The classic example of an assurance game usually offered is the stag hunt described by the eighteenth-century French philosopher Jean-Jacques Rousseau. Several people can successfully hunt a stag, thereby getting a large quantity of meat, if they collaborate. If any one of them is sure that all of the others will collaborate, he also stands to benefit by joining the group. But if he is unsure whether the group will be large enough, he will do better to hunt for a smaller animal, a hare, on his own. However, it can be argued that Rousseau believed that each hunter would prefer to go after a hare regardless of what the others were doing, which would make the stag hunt a multiperson prisoners' dilemma, not an assurance game. We discuss this example in the context of collective action in Chapter 11.
13 Michael Chwe develops this theme in Rational Ritual: Culture, Coordination, and Common Knowledge (Princeton: Princeton University Press, 2001).
                          SALLY
                   Starbucks    Local Latte
HARRY  Starbucks      2, 1         0, 0
       Local Latte    0, 0         1, 2

FIGURE 4.12 Battle of the Sexes
a payoff of 1 from meeting at Starbucks, and the other way around from meeting
at Local Latte. This payoff matrix is shown in Figure 4.12.
This game is called the battle of the sexes. The name derives from the story
concocted for this payoff structure by game theorists in the sexist 1950s. A husband and wife were supposed to choose between going to a boxing match and a
ballet, and (presumably for evolutionary genetic reasons) the husband was supposed to prefer the boxing match and the wife the ballet. The name has stuck
and we will keep it, but our example—where either player could easily have
some non-gender-based reason to prefer either of the two cafés—should make
it clear that it does not necessarily have sexist connotations.
What will happen in this game? There are still two Nash equilibria. If Harry
believes that Sally will choose Starbucks, it is best for him to do likewise, and the
other way around. For similar reasons, Local Latte also is a Nash equilibrium. To
achieve either of these equilibria and avoid the outcomes where the two go to
different cafés, the players need a focal point, or convergence of expectations,
exactly as in the pure-coordination and assurance games. But the risk of coordination failure is greater in the battle of the sexes. The players are initially in quite
symmetric situations, but each of the two Nash equilibria gives them asymmetric payoffs, and their preferences between the two outcomes are in conflict.
Harry prefers the outcome where they meet in Starbucks, and Sally prefers to
meet in Local Latte. They must find some way of breaking the symmetry.
In an attempt to achieve his or her preferred equilibrium, each player may
try to act tough and follow the strategy leading to the better equilibrium. In
Chapter 9, we consider in detail such advance devices, called strategic moves,
that players in such games can adopt to try to achieve their preferred outcomes.
Or each may try to be nice, leading to the unfortunate situation where Harry
goes to Local Latte because he wants to please Sally, only to find that she has
chosen to please him and gone to Starbucks, like the couple choosing Christmas
presents for each other in O. Henry’s short story titled “The Gift of the Magi.” Alternatively, if the game is repeated, successful coordination may be negotiated
and maintained as an equilibrium. For example, the two can arrange to alternate between the cafés. In Chapter 10, we examine such tacit cooperation in repeated games in the context of a prisoners’ dilemma.
D. Will James Meet Dean? Chicken
Our final example in this section is a slightly different kind of coordination
game. In this game, the players want to avoid, not choose, actions with the same
labels. Further, the consequences of one kind of coordination failure are far
more drastic than those of the other kind.
The story comes from a game that was supposedly played by American
teenagers in the 1950s. Two teenagers take their cars to opposite ends of
Main Street, Middle-of-Nowhere, USA, at midnight and start to drive toward
each other. The one who swerves to prevent a collision is the “chicken,” and
the one who keeps going straight is the winner. If both maintain a straight
course, there is a collision in which both cars are damaged and both players
injured.14
The payoffs for chicken depend on how negatively one rates the “bad”
outcome—being hurt and damaging your car in this case—against being labeled
chicken. As long as words hurt less than crunching metal, a reasonable payoff
table for the 1950s version of chicken is found in Figure 4.13. Each player most
prefers to win, having the other be chicken, and each least prefers the crash of
the two cars. In between these two extremes, it is better to have your rival be
chicken with you (to save face) than to be chicken by yourself.
This story has four essential features that define any game of chicken.
First, each player has one strategy that is the “tough” strategy and one that is
the “weak” strategy. Second, there are two pure-strategy Nash equilibria. These
are the outcomes in which exactly one of the players is chicken, or weak. Third,
each player strictly prefers that equilibrium in which the other player chooses
chicken, or weak. Fourth, the payoffs when both players are tough are very bad
for both players. In games such as this one, the real game becomes a test of how
to achieve one’s preferred equilibrium.
We are now back in a situation similar to that discussed for the
battle‑of‑the‑sexes game. One expects most real-life chicken games to be
even worse as battles than most battles of the sexes—the benefit of winning is larger, as is the cost of the crash, and so all the problems of conflict of
14 A slight variant was made famous by the 1955 James Dean movie Rebel Without a Cause. There, two players drove their cars in parallel, very fast, toward a cliff. The first to jump out of his car before it went over the cliff was the chicken. The other, if he left too late, risked going over the cliff in his car to his death. The characters in the film referred to this as a "chicky game." In the mid-1960s, the British philosopher Bertrand Russell and other peace activists used this game as an analogy for the nuclear arms race between the United States and the USSR, and the game theorist Anatol Rapoport gave a formal game-theoretic statement. Other game theorists have chosen to interpret the arms race as a prisoners' dilemma or as an assurance game. For a review and interesting discussion, see Barry O'Neill, "Game Theory Models of Peace and War," in The Handbook of Game Theory, vol. 2, ed. Robert J. Aumann and Sergiu Hart (Amsterdam: North Holland, 1994), pp. 995–1053.
                              DEAN
                   Swerve (Chicken)   Straight (Tough)
JAMES  Swerve (Chicken)    0, 0           –1, 1
       Straight (Tough)    1, –1          –2, –2

FIGURE 4.13 Chicken
interest and asymmetry between the players are aggravated. Each player will want
to try to influence the outcome. It may be the case that one player will try to create an aura of toughness that everyone recognizes so as to intimidate all rivals.15
Another possibility is to come up with some other way to convince your rival
that you will not be chicken, by making a visible and irreversible commitment
to going straight. (In Chapter 9, we consider just how to make such commitment
moves.) In addition, both players also want to try to prevent the bad (crash) outcome if at all possible.
As with the battle of the sexes, if the game is repeated, tacit coordination is
a better route to a solution. That is, if the teenagers played the game every Saturday night at midnight, they would have the benefit of knowing that the game
had both a history and a future when deciding their equilibrium strategies.
In such a situation, they might logically choose to alternate between the two
equilibria, taking turns being the winner every other week. (But if the others
found out about this deal, both players would lose face.)
There is one final point, arising from these coordination games, that must
be addressed. The concept of Nash equilibrium requires each player to have the
correct belief about the other’s choice of strategy. When we look for Nash equilibria in pure strategies, the concept requires each to be confident about the other’s choice. But our analysis of coordination games shows that thinking about
the other’s choice in such games is fraught with strategic uncertainty. How can
we incorporate such uncertainty in our analysis? In Chapter 7, we introduce the
concept of a mixed strategy, where actual choices are made randomly among
the available actions. This approach generalizes the concept of Nash equilibrium to situations where the players may be unsure about each other’s actions.
15 Why would a potential rival play chicken against someone with a reputation for never giving in? The problem is that participation in chicken, as in lawsuits, is not really voluntary. Put another way, choosing whether to play chicken is itself a game of chicken. As Thomas Schelling says, "If you are publicly invited to play chicken and say you would rather not, then you have just played [and lost]" (Arms and Influence, New Haven: Yale University Press, 1965, p. 118).
7 NO EQUILIBRIUM IN PURE STRATEGIES
Each of the games considered so far has had at least one Nash equilibrium in pure
strategies. Some of these games, such as those in Section 6, had more than one
equilibrium, whereas games in earlier sections had exactly one. Unfortunately,
not all games that we come across in the study of strategy and game theory will
have such easily definable outcomes in which players always choose one particular action as an equilibrium strategy. In this section, we will look at games in
which there is not even one pure-strategy Nash equilibrium—games in which
none of the players would consistently choose one strategy as that player’s equilibrium action.
A simple example of a game with no equilibrium in pure strategies is that
of a single point in a tennis match. Imagine a match between the two all-time
best women players—Martina Navratilova and Chris Evert.16 Navratilova at the
net has just volleyed a ball to Evert on the baseline, and Evert is about to attempt
a passing shot. She can try to send the ball either down the line (DL; a hard,
straight shot) or crosscourt (CC; a softer, diagonal shot). Navratilova must likewise prepare to cover one side or the other. Each player is aware that she must
not give any indication of her planned action to her opponent, knowing that
such information will be used against her. Navratilova would move to cover the
side to which Evert is planning to hit or Evert would hit to the side that Navratilova is not planning to cover. Both must act in a fraction of a second, and both
are equally good at concealing their intentions until the last possible moment;
therefore their actions are effectively simultaneous, and we can analyze the
point as a two-player simultaneous-move game.
Payoffs in this tennis-point game are given by the fraction of times a player
wins the point in any particular combination of passing shot and covering play.
Given that a down-the-line passing shot is stronger than a crosscourt shot and
that Evert is more likely to win the point when Navratilova moves to cover the
wrong side of the court, we can work out a reasonable set of payoffs. Suppose
Evert is successful with a down-the-line passing shot 80% of the time if Navratilova covers crosscourt; Evert is successful with the down-the-line shot only 50%
of the time if Navratilova covers down the line. Similarly, Evert is successful with
16 For those among you who remember only the latest phenom who shines for a couple of years and then burns out, here are some amazing facts about these two women, who were at the top levels of the game for almost two decades and ran a memorable rivalry all that time. Navratilova was a left-handed serve-and-volley player. In grand-slam tournaments, she won 18 singles titles, 31 doubles, and 7 mixed doubles. In all tournaments, she won 167, a record. Evert, a right-handed baseliner, had a record win-loss percentage (90% wins) in her career and 150 titles, of which 18 were for singles in grand-slam tournaments. She probably invented (and certainly popularized) the two-handed backhand that is now so common. From 1973 to 1988, the two played each other 80 times, and Navratilova ended up with a slight edge, 43–37.
                   NAVRATILOVA
                  DL         CC
EVERT   DL      50, 50     80, 20
        CC      90, 10     20, 80

FIGURE 4.14 No Equilibrium in Pure Strategies
her crosscourt passing shot 90% of the time if Navratilova covers down the line.
This success rate is higher than when Navratilova covers crosscourt, in which
case Evert wins only 20% of the time.
Clearly, the fraction of times that Navratilova wins this tennis point is just
the difference between 100% and the fraction of time that Evert wins. Thus, the
game is zero-sum (even though the two payoffs technically sum to 100), and we
can represent all the necessary information in the payoff table with just the payoff to Evert in each cell. Figure 4.14 shows the payoff table and the fraction of
time that Evert wins the point against Navratilova in each of the four possible
combinations of their strategy choices.
The rules for solving simultaneous-move games tell us to look first for dominant or dominated strategies and then to use best-response analysis to find a
Nash equilibrium. It is a useful exercise to verify that no dominant strategies
exist here. Going on to best-response analysis, we find that Evert’s best response
to DL is CC, and her best response to CC is DL. By contrast, Navratilova’s best
response to DL is DL, and her best response to CC is CC. None of the cells in
the table is a Nash equilibrium, because someone always prefers to change her
strategy. For example, if we start in the upper-left cell of the table, we find that
Evert prefers to deviate from DL to CC, increasing her own payoff from 50% to
90%. But in the lower-left cell of the table, we find that Navratilova prefers to
switch from DL to CC, raising her payoff from 10% to 80%. As you can verify,
Evert similarly prefers to deviate from the lower-right cell, and Navratilova prefers to deviate from the upper-right cell. In every cell, one player always wants
to change her play, and we cycle through the table endlessly without finding an
equilibrium.
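The endless cycling can be confirmed cell by cell. This Python sketch (our encoding of Figure 4.14, with Navratilova's payoff taken as 100 minus Evert's) shows that in every cell at least one player profits from a unilateral switch, so no cell is a pure-strategy Nash equilibrium.

# Verify that the tennis game of Figure 4.14 has no pure-strategy equilibrium.
evert = {("DL", "DL"): 50, ("DL", "CC"): 80,
         ("CC", "DL"): 90, ("CC", "CC"): 20}  # Evert's win percentage
moves = ["DL", "CC"]

for e in moves:
    for n in moves:
        # Navratilova's payoff is 100 minus Evert's, so the game is zero-sum.
        evert_gains = any(evert[(e2, n)] > evert[(e, n)] for e2 in moves)
        nav_gains = any((100 - evert[(e, n2)]) > (100 - evert[(e, n)])
                        for n2 in moves)
        print((e, n), "deviation profitable for",
              "Evert" if evert_gains else "Navratilova")
        assert evert_gains or nav_gains  # no cell is an equilibrium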
An important message is contained in the absence of a Nash equilibrium in
this game and similar ones. What is important in games of this type is not what
players should do, but what players should not do. In particular, each player
should neither always nor systematically pick the same shot when faced with
this situation. If either player engages in any determinate behavior of that type,
the other can take advantage of it. (So if Evert consistently went crosscourt with
her passing shot, Navratilova would learn to cover crosscourt every time and
would thereby reduce Evert’s chances of success with her crosscourt shot.) The
most reasonable thing for players to do here is to act somewhat unsystematically,
hoping for the element of surprise in defeating their opponents. An unsystematic approach entails choosing each strategy part of the time. (Evert should be
using her weaker shot with enough frequency to guarantee that Navratilova cannot predict which shot will come her way. She should not, however, use the two
shots in any set pattern, because that, too, would cause her to lose the element
of surprise.) This approach, in which players randomize their actions, is known
as mixing strategies and is the focus of Chapter 7. The game illustrated in Figure
4.14 may not have an equilibrium in pure strategies, but it can still be solved by
looking for an equilibrium in mixed strategies, as we do in Chapter 7, Section 1.
SUMMARY
In simultaneous-move games, players make their strategy choices without knowledge of the choices being made by other players. Such games are illustrated by
game tables, where cells show payoffs to each player and the dimensionality of
the table equals the number of players. Two-person zero-sum games may be illustrated in shorthand with only one player’s payoff in each cell of the game table.
Nash equilibrium is the solution concept used to solve simultaneous-move
games; such an equilibrium consists of a set of strategies, one for each player,
such that each player has chosen her best response to the other’s choice. Nash
equilibrium can also be defined as a set of strategies such that each player has
correct beliefs about the others’ strategies, and certain strategies are best for
each player given beliefs about the other’s strategies. Nash equilibria can be
found by searching for dominant strategies, by successive elimination of dominated strategies, or with best-response analysis.
There are many classes of simultaneous games. Prisoners’ dilemma games
appear in many contexts. Coordination games, such as assurance, chicken, and
battle of the sexes, have multiple equilibria, and the solution of such games
­requires players to achieve coordination by some means. If a game has no equilibrium in pure strategies, we must look for an equilibrium in mixed strategies,
the analysis of which is presented in Chapter 7.
KEY TERMS

assurance game (114)
battle of the sexes (115)
belief (97)
best response (95)
best-response analysis (107)
chicken (116)
convergence of expectations (113)
coordination game (111)
dominance solvable (104)
dominant strategy (100)
dominated strategy (100)
focal point (112)
game matrix (92)
game table (92)
iterated elimination of dominated strategies (104)
mixed strategy (92)
Nash equilibrium (95)
normal form (92)
payoff table (92)
prisoners' dilemma (99)
pure coordination game (112)
pure strategy (92)
strategic form (92)
successive elimination of dominated strategies (104)
SOLVED EXERCISES

S1.
Find all Nash equilibria in pure strategies for the following games. First check for dominant strategies. If there are none, solve using iterated elimination of dominated strategies. Explain your reasoning.
(a)
                            COLIN
                       Left      Right
    ROWENA   Up        4, 0      3, 1
             Down      2, 2      1, 3
(b)
                            COLIN
                       Left      Right
    ROWENA   Up        2, 4      1, 0
             Down      6, 5      4, 2
(c)
                            COLIN
                       Left      Middle    Right
    ROWENA   Up        1, 5      2, 4      5, 1
             Straight  2, 4      4, 2      3, 3
             Down      1, 5      3, 3      3, 3
(d)
                            COLIN
                       Left      Middle    Right
    ROWENA   Up        5, 2      1, 6      3, 4
             Straight  6, 1      1, 6      2, 5
             Down      1, 6      0, 7      0, 7

S2.
For each of the four games in Exercise S1, identify whether the game is zero-sum or non-zero-sum. Explain your reasoning.
S3.
Another method for solving zero-sum games, important because it was developed long before Nash developed his concept of equilibrium for non-zero-sum games, is the minimax method. To use this method, assume that no matter which strategy a player chooses, her rival will choose to give her the worst possible payoff from that strategy. For each zero-sum game identified in Exercise S2, use the minimax method to find the game's equilibrium strategies by doing the following:
(a) For each row strategy, write down the minimum possible payoff to Rowena (the worst that Colin can do to her in each case). For each column strategy, write down the minimum possible payoff to Colin (the worst that Rowena can do to him in each case).
(b) For each player, determine the strategy (or strategies) that gives each player the best of these worst payoffs. This is called a "minimax" strategy for each player.
(Because this is a zero-sum game, players' best responses do indeed involve minimizing each other's payoff, so these minimax strategies are the same as the Nash equilibrium strategies. John von Neumann proved the existence of a minimax equilibrium in zero-sum games in 1928, more than 20 years before Nash generalized the theory to include non-zero-sum games.)
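(A small illustration of the minimax method in Python, added by us; it uses the Rowena payoffs of Exercise S1(a), a constant-sum table in which every cell totals 4, so Colin's payoff is simply 4 minus Rowena's.)

    # Minimax method on the Rowena-payoff matrix of Exercise S1(a).
    rowena = {"Up": {"Left": 4, "Right": 3},
              "Down": {"Left": 2, "Right": 1}}
    cols = ["Left", "Right"]

    # Rowena: the worst Colin can do to her in each row; pick the best row.
    row_worst = {r: min(cells.values()) for r, cells in rowena.items()}
    rowena_minimax = max(row_worst, key=row_worst.get)

    # Colin: his payoff is (4 - Rowena's); his worst case in each column
    # is 4 minus Rowena's maximum there; pick the best column.
    col_worst = {c: min(4 - rowena[r][c] for r in rowena) for c in cols}
    colin_minimax = max(col_worst, key=col_worst.get)

    print(rowena_minimax, colin_minimax)  # expected: Up Right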
S4.
Find all Nash equilibria in pure strategies in the following non-zero-sum games. Describe the steps that you used in finding the equilibria.
(a)
                            COLIN
                       Left      Right
    ROWENA   Up        3, 2      2, 3
             Down      4, 1      1, 4
(b)
                            COLIN
                       Left      Right
    ROWENA   Up        1, 1      0, 1
             Down      1, 0      1, 1
(c)
                            COLIN
                       Left      Middle    Right
    ROWENA   Up        0, 1      9, 0      2, 3
             Straight  5, 9      7, 3      1, 7
             Down      7, 5      10, 10    3, 5
(d)
                            COLIN
                       West      Center    East
    ROWENA   North     2, 3      8, 2      7, 4
             Up        3, 0      4, 5      6, 4
             Down      10, 4     6, 1      3, 9
             South     4, 5      2, 3      5, 2
S5.
Consider the following game table:

                            COLIN
                       North     South     East      West
    ROWENA   Earth     1, 3      3, 1      0, 2      1, 1
             Water     1, 2      1, 2      2, 3      1, 1
             Wind      3, 2      2, 1      1, 3      0, 3
             Fire      2, 0      3, 0      1, 1      2, 2
(a) Does either Rowena or Colin have a dominant strategy? Explain why
or why not.
(b) Use iterated elimination of dominated strategies to reduce the game
as much as possible. Give the order in which the eliminations occur
and give the reduced form of the game.
(c) Is this game dominance solvable? Explain why or why not.
(d) State the Nash equilibrium (or equilibria) of this game.
S6.
“If a player has a dominant strategy in a simultaneous-move game, then
she is sure to get her best possible outcome.” True or false? Explain and
give an example of a game that illustrates your answer.
S7.
An old lady is looking for help crossing the street. Only one person is
needed to help her; if more people help her, this is no better. You and I
are the two people in the vicinity who can help; we have to choose simultaneously whether to do so. Each of us will get pleasure worth a 3 from
her success (no matter who helps her). But each one who goes to help
will bear a cost of 1, this being the value of our time taken up in helping.
If neither player helps, the payoff for each player is zero. Set this up as a
game. Write the payoff table, and find all pure-strategy Nash equilibria.
S8.
A university is contemplating whether to build a new lab or a new theater
on campus. The science faculty would rather see a new lab built, and the
humanities faculty would prefer a new theater. However, the funding for
the project (whichever it may turn out to be) is contingent on unanimous
support from the faculty. If there is disagreement, neither project will go
forward, leaving each group with no new building and their worst payoff.
The meetings of the two separate faculty groups on which proposal to
support occur simultaneously, with payoffs given in the following table:
                            HUMANITIES FACULTY
                          Lab       Theater
    SCIENCE    Lab        4, 2      0, 0
    FACULTY    Theater    0, 0      1, 5
(a) What are the pure-strategy Nash equilibria of this game?
(b) Which game described in this chapter is most similar to this game?
Explain your reasoning.
S9.
Suppose two game-show contestants, Alex and Bob, each separately
select one of three doors numbered 1, 2, and 3. Both players get dollar
prizes if their choices match, as indicated in the following table:
                            BOB
                       1         2         3
    ALEX     1       10, 10     0, 0      0, 0
             2        0, 0     15, 15     0, 0
             3        0, 0      0, 0     15, 15
(a) What are the Nash equilibria of this game? Which, if any, is likely to
emerge as the (focal) outcome? Explain.
(b) Consider a slightly changed game in which the choices are again
just numbers, but the two cells with (15, 15) in the table become (25,
25). What is the expected (average) payoff to each player if each flips
a coin to decide whether to play 2 or 3? Is this better than focusing
on both of them choosing 1 as a focal equilibrium? How should you
account for the risk that Alex might do one thing while Bob does the
other?
S10.
Marta has three sons: Arturo, Bernardo, and Carlos. She discovers a broken lamp in her living room and knows that one of her sons must have
broken it at play. Carlos was actually the culprit, but Marta doesn’t know
this. She cares more about finding out the truth than she does about
punishing the child who broke the lamp, so Marta announces that her
sons are to play the following game.
Each child will write down his name on a piece of paper and write
down either “Yes, I broke the lamp,” or “No, I didn’t break the lamp.” If at
least one child claims to have broken the lamp, she will give the normal allowance of $2 to each child who claims to have broken the lamp, and $5 to
each child who claims not to have broken the lamp. If all three children
claim not to have broken the lamp, none of them receives any allowance
(each receives $0).
(a) Write down the game table. Make Arturo the row player, Bernardo
the column player, and Carlos the page player.
(b) Find all the Nash equilibria of this game.
(c) There are multiple Nash equilibria of this game. Which one would
you consider to be a focal point?
S11.
Consider a game in which there is a prize worth $30. There are three contestants, Larry, Curly, and Moe. Each can buy a ticket worth $15 or $30 or
not buy a ticket at all. They make these choices simultaneously and independently. Then, knowing the ticket‑purchase decisions, the game organizer awards the prize. If no one has bought a ticket, the prize is not
awarded. Otherwise, the prize is awarded to the buyer of the highest‑cost
ticket if there is only one such player or is split equally between two or
three if there are ties among the highest‑cost ticket buyers. Show this
game in strategic form, using Larry as the row player, Curly as the column
player, and Moe as the page player. Find all pure-strategy Nash equilibria.
S12.
Anne and Bruce would like to rent a movie, but they can’t decide what
kind of movie to choose. Anne wants to rent a comedy, and Bruce wants
to rent a drama. They decide to choose randomly by playing “Evens or
Odds.” On the count of three, each of them shows one or two fingers. If
the sum is even, Anne wins and they rent the comedy; if the sum is odd,
Bruce wins and they rent the drama. Each of them earns a payoff of 1 for
winning and 0 for losing “Evens or Odds.”
(a) Draw the game table for “Evens or Odds.”
(b) Demonstrate that this game has no Nash equilibrium in pure
strategies.
S13.
In the film A Beautiful Mind, John Nash and three of his graduate‑school
colleagues find themselves faced with a dilemma while at a bar. There are
four brunettes and a single blonde available for them to approach. Each
young man wants to approach and win the attention of one of the young
women. The payoff to each of winning the blonde is 10; the payoff of
winning a brunette is 5; the payoff from ending up with no girl is 0. The
catch is that if two or more young men go for the blonde, she rejects all
of them, and then the brunettes also reject the men because they don’t
want to be second choice. Thus, each player gets a payoff of 10 only if he
is the sole suitor for the blonde.
(a) First consider a simpler situation in which there are only two young
men instead of four. (There are two brunettes and one blonde, but
these women merely respond in the manner just described and are
not active players in the game.) Show the payoff table for the game,
and find all of the pure‑strategy Nash equilibria of the game.
(b) Now show the (three-dimensional) table for the case in which there
are three young men (and three brunettes and one blonde who are
not active players). Again, find all of the Nash equilibria of the game.
(c) Without the use of a table, give all of the Nash equilibria for the case
in which there are four young men (as well as four brunettes and a
blonde).
(d) (Optional) Use your results to parts (a), (b), and (c) to generalize your analysis to the case in which there are n young men. Do not attempt to write down an n-dimensional payoff table; merely find the payoff to one player when k of the others choose Blonde and (n − k − 1) choose Brunette, for k = 0, 1, . . . , (n − 1). Can the outcome specified in the movie as the Nash equilibrium of the game—that all of the young men choose to go for brunettes—ever really be a Nash equilibrium of the game?
UNSOLVED EXERCISES
U1.
Find all Nash equilibria in pure strategies for the following games. First check for dominant strategies. If there are none, solve using iterated elimination of dominated strategies.
(a)
                            COLIN
                       Left      Right
    ROWENA   Up        3, 1      4, 2
             Down      5, 2      2, 3
(b)
                            COLIN
                       Left      Middle    Right
    ROWENA   Up        2, 9      5, 5      6, 2
             Straight  6, 4      9, 2      5, 3
             Down      4, 3      2, 7      7, 1
(c)
                            COLIN
                       Left      Middle    Right
    ROWENA   Up        5, 3      3, 5      2, 6
             Straight  6, 2      4, 4      3, 5
             Down      1, 7      6, 2      2, 6
(d)
                            COLIN
                       North     South     East      West
    ROWENA   Up        6, 4      7, 3      5, 5      6, 4
             High      7, 3      3, 7      4, 6      5, 5
             Low       8, 2      6, 4      3, 7      2, 8
             Down      3, 7      5, 5      4, 6      5, 5
U2.
For each of the four games in Exercise U1, identify whether the game is
zero-sum or non-zero-sum. Explain your reasoning.
U3.
As in Exercise S3 above, use the minimax method to find the Nash equilibria for the zero-sum games identified in Exercise U2.
U4.
Find all Nash equilibria in pure strategies in the following games. Describe the steps that you used in finding the equilibria.
(a)
                            COLIN
                       Left      Right
    ROWENA   Up        1, –1     4, –4
             Down      2, –2     3, –3
(b)
                            COLIN
                       Left      Right
    ROWENA   Up        0, 0      0, 0
             Down      0, 0      1, 1
(c)
                            COLIN
                       Left      Right
    ROWENA   Up        1, 3      2, 2
             Down      4, 0      3, 1
(d)
                            COLIN
                       Left      Middle    Right
    ROWENA   Up        5, 3      7, 2      2, 1
             Straight  1, 2      6, 3      1, 4
             Down      4, 2      6, 4      3, 5

U5.
Use successive elimination of dominated strategies to solve the following game. Explain the steps you followed. Show that your solution is a Nash equilibrium.
                            COLIN
                       Left      Middle    Right
    ROWENA   Up        4, 3      2, 7      0, 4
             Down      5, 0      5, –1     –4, –2
U6.
Find all of the pure-strategy Nash equilibria for the following game. Describe the process that you used to find the equilibria. Use this game
to explain why it is important to describe an equilibrium by using the
strategies employed by the players, not merely by the payoffs received in
equilibrium.
                            COLIN
                       Left      Center    Right
    ROWENA   Up        1, 2      2, 1      1, 0
             Level     0, 5      1, 2      7, 4
             Down     –1, 1      3, 0      5, 2

U7.
Consider the following game table:
                            COLIN
                       Left      Center    Right
    ROWENA   Top       4, __     __, 2     3, 1
             Middle    3, 5      2, __     2, 3
             Bottom    __, 3     3, 4      4, 2
(a) Complete the payoffs of the game table above so that Colin has a
dominant strategy. State which strategy is dominant and explain
why. (Note: There are many equally correct answers.)
(b) Complete the payoffs of the game table above so that neither player
has a dominant strategy, but also so that each player does have a
dominated strategy. State which strategies are dominated and explain why. (Again, there are many equally correct answers.)
U8.
The Battle of the Bismarck Sea (named for that part of the southwestern Pacific Ocean separating the Bismarck Archipelago from Papua New
Guinea) was a naval engagement played between the United States and
Japan during World War II. In 1943, a Japanese admiral was ordered to
move a convoy of ships to New Guinea; he had to choose between a rainy
northern route and a sunnier southern route, both of which required
three days’ sailing time. The Americans knew that the convoy would sail
and wanted to send bombers after it, but they did not know which route
it would take. The Americans had to send reconnaissance planes to scout
for the convoy, but they had only enough reconnaissance planes to explore one route at a time. Both the Japanese and the Americans had to
make their decisions with no knowledge of the plans being made by the
other side.
If the convoy was on the route that the Americans explored first,
they could send bombers right away; if not, they lost a day of bombing. Poor weather on the northern route would also hamper bombing.
If the Americans explored the northern route and found the Japanese
right away, they could expect only two (of three) good bombing days; if
they explored the northern route and found that the Japanese had gone
south, they could also expect two days of bombing. If the Americans
chose to explore the southern route first, they could expect three full
days of bombing if they found the Japanese right away but only one day
of bombing if they found that the Japanese had gone north.
(a) Illustrate this game in a game table.
(b) Identify any dominant strategies in the game and solve for the Nash
equilibrium.
U9.
Two players, Jack and Jill, are put in separate rooms. Then each is told the
rules of the game. Each is to pick one of six letters: G, K, L, Q, R, or W. If
the two happen to choose the same letter, both get prizes as follows:
    Letter          G     K     L     Q     R     W
    Jack's Prize    3     2     6     3     4     5
    Jill's Prize    6     5     4     3     2     1
If they choose different letters, each gets 0. This whole schedule is revealed
to both players, and both are told that both know the schedules, and so on.
(a) Draw the table for this game. What are the Nash equilibria in pure
strategies?
(b) Can one of the equilibria be a focal point? Which one? Why?
U10.
Three friends (Julie, Kristin, and Larissa) independently go shopping for
dresses for their high-school prom. On reaching the store, each girl sees
only three dresses worth considering: one black, one lavender, and one
yellow. Each girl furthermore can tell that her two friends would consider
the same set of three dresses, because all three have somewhat similar
tastes.
Each girl would prefer to have a unique dress, so a girl’s utility is 0 if
she ends up purchasing the same dress as at least one of her friends. All
three know that Julie strongly prefers black to both lavender and yellow,
so she would get a utility of 3 if she were the only one wearing the black
dress, and a utility of 1 if she were either the only one wearing the lavender dress or the only one wearing the yellow dress. Similarly, all know
that Kristin prefers lavender and secondarily prefers yellow, so her utility
would be 3 for uniquely wearing lavender, 2 for uniquely wearing yellow,
and 1 for uniquely wearing black. Finally, all know that Larissa prefers
yellow and secondarily prefers black, so she would get 3 for uniquely
wearing yellow, 2 for uniquely wearing black, and 1 for uniquely wearing
lavender.
(a) Provide the game table for this three-player game. Make Julie the
row player, Kristin the column player, and Larissa the page player.
(b) Identify any dominated strategies in this game, or explain why there
are none.
(c) What are the pure-strategy Nash equilibria in this game?
U11.
Bruce, Colleen, and David are all getting together at Bruce’s house on Friday evening to play their favorite game, Monopoly. They all love to eat
sushi while they play. They all know from previous experience that two
orders of sushi are just the right amount to satisfy their hunger. If they
wind up with less than two orders, they all end up going hungry and don’t
enjoy the evening. More than two orders would be a waste, because they
can’t manage to eat a third order and the extra sushi just goes bad. Their
favorite restaurant, Fishes in the Raw, packages its sushi in such large
containers that each individual person can feasibly purchase at most one
order of sushi. Fishes in the Raw offers takeout, but unfortunately doesn’t
deliver.
Suppose that each player enjoys $20 worth of utility from having
enough sushi to eat on Friday evening, and $0 from not having enough to
eat. The cost to each player of picking up an order of sushi is $10.
Unfortunately, the players have forgotten to communicate about
who should be buying sushi this Friday, and none of the players has a
cell phone, so they must each make independent decisions of whether to
buy (B) or not buy (N) an order of sushi.
(a) Write down this game in strategic form.
(b) Find all the Nash equilibria in pure strategies.
(c) Which equilibrium would you consider to be a focal point? Explain
your reasoning.
U12.
Roxanne, Sara, and Ted all love to eat cookies, but there’s only one left
in the package. No one wants to split the cookie, so Sara proposes the
following extension of “Evens or Odds” (see Exercise S12) to determine
who gets to eat it. On the count of three, each of them will show one or
two fingers, they’ll add them up, and then divide the sum by 3. If the remainder is 0, Roxanne gets the cookie, if the remainder is 1, Sara gets it,
and if it is 2, Ted gets it. Each of them receives a payoff of 1 for winning
(and eating the cookie) and 0 otherwise.
(a) Represent this three-player game in normal form, with Roxanne
as the row player, Sara as the column player, and Ted as the page
player.
(b) Find all the pure-strategy Nash equilibria of this game. Is this game
a fair mechanism for allocating cookies? Explain why or why not.
U13.
(Optional) Construct the payoff matrix for your own two-player game
that satisfies the following requirements. First, each player should have
three strategies. Second, the game should not have any dominant strategies. Third, the game should not be solvable using minimax. Fourth,
the game should have exactly two pure-strategy Nash equilibria. Provide
your game matrix, and then demonstrate that all of the above conditions
are true.
5
■
Simultaneous-Move Games:
Continuous Strategies,
Discussion, and Evidence
The discussion of simultaneous-move games in Chapter 4 focused on
games in which each player had a discrete set of actions from which to
choose. Discrete strategy games of this type include sporting contests in
which a small number of well-defined plays can be used in a given situation—soccer penalty kicks, in which the kicker can choose to go high or low, to
a corner or the center, for example. Other examples include coordination and
prisoners’ dilemma games in which players have only two or three available
strategies. Such games are amenable to analysis with the use of a game table, at
least for situations with a reasonable number of players and available actions.
Many simultaneous-move games differ from those considered so far; they entail players choosing strategies from a wide range of possibilities. Games in which
manufacturers choose prices for their products, philanthropists choose charitable contribution amounts, or contractors choose project bid levels are examples
in which players have a virtually infinite set of choices. Technically, prices and
other dollar amounts do have a minimum unit, such as a cent, and so there is actually only a finite and discrete set of price strategies. But in practice the unit is
very small, and allowing for the discreteness would require us to give each player too many distinct strategies and make the game table too large; therefore, it is simpler and better to regard such choices as continuously variable real numbers. When players have such a large range of actions available, game tables become virtually useless as analytical tools; they become too unwieldy to be of practical use. For these games we need a different solution technique. We present the analytical tools for handling such continuous-strategy games in the first part of this chapter.
This chapter also takes up some broader matters relevant to behavior in
simultaneous-move games and to the concept of Nash equilibrium. We review
the empirical evidence on Nash equilibrium play that has been collected both
from the laboratory and from real-life situations. We also present some theoretical
criticisms of the Nash equilibrium concept and rebuttals of these criticisms. You
will see that game-theoretic predictions are often a reasonable starting point for
understanding actual behavior, with some caveats.
1 PURE STRATEGIES THAT ARE CONTINUOUS VARIABLES
In Chapter 4, we developed the method of best-response analysis for finding
all pure-strategy Nash equilibria of simultaneous-move games. Now we extend
that method to games in which each player has available a continuous range
of choices—for example, firms setting the prices of their products. To calculate
best responses in this type of game, we find, for each possible value of one firm’s
price, the value of the other firm’s price that is best for it (maximizes its payoff).
The continuity of the sets of strategies allows us to use algebraic formulas to
show how strategies generate payoffs and to show the best responses as curves
in a graph, with each player’s price (or any other continuous strategy) on one of
the axes. In such an illustration, the Nash equilibrium of the game occurs where
the two curves meet. We develop this idea and technique by using two stories.
A. Price Competition
Our first story is set in a small town, Yuppie Haven, which has two restaurants,
Xavier’s Tapas Bar and Yvonne’s Bistro. To keep the story simple, we assume that
each place has a set menu. Xavier and Yvonne have to set the prices of their respective menus. Prices are their strategic choices in the game of competing with
each other; each bistro’s goal is to set prices to maximize profit, the payoff in
this game. We suppose that they must get their menus printed separately without knowing the other’s prices, so the game has simultaneous moves.1 Because
prices can take any value within an (almost) infinite range, we start with general
or algebraic symbols for them. We then find best-response rules that we use to
solve the game and to determine equilibrium prices. Let us call Xavier’s price Px
and Yvonne’s price Py.
1
In reality, the competition extends over time, so each can observe the other’s past choices. This
repetition of the game introduces new considerations, which we cover in Chapter 10.
In setting its price, each restaurant has to calculate the consequences for
its profit. To keep things relatively simple, we put the two restaurants in a very
symmetric relationship, but readers with a little more mathematical skill can
do a similar analysis by using much more general numbers or even algebraic
symbols. Suppose the cost of serving each customer is $8 for each restaurateur. Suppose further that experience or market surveys have shown that, when
Xavier’s price is Px and Yvonne’s price is Py, the number of their respective customers, respectively Qx and Qy (measured in hundreds per month), are given by the
equations2
Qx  44  2Px  Py,
Qy  44  2Py  Px.
The key idea in these equations is that, if one restaurant raises its price by $1 (say, Yvonne increases Py by $1), its sales will go down by 200 per month (Qy changes by −2) and those of the other restaurant will go up by 100 per month (Qx changes by +1). Presumably, 100 of Yvonne's customers switch to Xavier's and another 100 stay at home.
Xavier’s profit per week (in hundreds of dollars per week), call it Px—the
Greek letter P (pi) is the traditional economic symbol for profit—is given by the
product of the net revenue per customer (price less cost or Px  8) and the number of customers served:
Px  (Px  8) Qx  (Px  8) (44  2Px  Py ).
By multiplying out and rearranging the terms on the right-hand side of the preceding expression, we can write profit as a function of increasing powers of Px:
Px  –8(44  Py)  (16  44  Py )Px – 2(Px )2
 –8(44 Py )  (60  Py )Px – 2(Px )2.
Xavier sets his price Px to maximize this payoff. Doing so for each possible level
of Yvonne’s price Py gives us Xavier’s best-response rule; we can then graph it.
Many simple illustrative examples where one real number (such as the
price) is chosen to maximize another real number that depends on it (such as
the profit or the payoff) have a similar form. (In mathematical jargon, we would
describe the second number as a function of the first.) In the appendix to this
chapter, we develop a simple general technique for performing such maximization; you will find many occasions to use it. Here we just state the formula.
2
Readers who know some economics will recognize that the equations linking quantities to prices
are demand functions for the two products X and Y. The quantity demanded of each product is decreasing in its own price (demands are downward sloping) and increasing in the price of the other
product (the two are substitutes).
The function we want to maximize takes the general form

Y = A + BX − CX²,

where we have used the descriptor Y for the number we want to maximize and X for the number we choose in order to maximize that Y. In our specific example, the profit Πx would be represented by Y, and the price Px by X. Similarly, although in any specific problem the terms A, B, and C in the equation above would be known numbers, we have denoted them by general algebraic symbols so that our formula can be applied across a wide variety of similar problems. (The technical term for A, B, and C is parameters, or algebraic constants.) Because most of our applications involve nonnegative X entities, such as prices, and the maximization of the Y entity, we require B > 0 and C > 0. Then the formula giving the choice of X that maximizes Y, in terms of the known parameters B and C, is simply X = B/(2C). Observe that A does not appear in the formula, although it will of course affect the value of Y that results.
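(For readers who want to check the formula symbolically, a short verification in Python using the sympy library, added by us:)

    import sympy as sp

    A, B, C, X = sp.symbols("A B C X", positive=True)
    Y = A + B * X - C * X**2
    # Setting dY/dX = 0 and solving for X recovers X = B/(2C);
    # the second derivative is -2C < 0, confirming a maximum.
    print(sp.solve(sp.diff(Y, X), X))  # expected: [B/(2*C)]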
Comparing the general function in the equation above with the specific example of the profit function in the pricing game on the previous page, we have3

B = 60 + Py and C = 2.

Therefore, Xavier's choice of price to maximize his profit will satisfy the formula X = B/(2C) and will be

Px = (60 + Py)/4 = 15 + 0.25Py.

This equation determines the value of Px that maximizes Xavier's profit, given a particular value of Yvonne's price, Py. In other words, it is exactly what we want: the rule for Xavier's best response.
Yvonne’s best-response rule can be found similarly. Because the costs and
sales of the two restaurants are entirely symmetric, the equation is obviously
going to be
Py  15  0.25Px.
Both rules are used in the same way to develop best-response graphs. If Xavier
sets a price of 16, for example, then Yvonne plugs this value into her best‑response
rule to find Py  15  0.25(16)  19; similarly, Xavier’s best response to Yvonne’s
Py  16 is Px  19, and each restaurant’s best response to the other’s price of 4 is
16, that to 8 is 17, and so on.
Figure 5.1 shows the graphs of these two best-response relations. Owing
to the special features of our example—namely, the linear relation between
3
Although Py , chosen by Yvonne, is a variable in the full game, here we are considering only a part
of the game, namely Xavier’s best response, where he regards Yvonne’s choice as outside his control
and therefore like a constant.
Yvonne’s
price Py
Joint
best
30
Yvonne’s
best
response
20
Nash equilibrium
Xavier’s
best
response
10
0
10
20
30
Xavier’s price Px
FIGURE 5.1 Best-Response Curves and Equilibrium in the Restaurant Pricing Game
quantity sold and prices charged, and the constant cost of producing each meal—
each of the two best-response curves is a straight line. For other specifications
of demands and costs, the curves can be other than straight, but the method
of obtaining them is the same—namely, first holding one restaurant’s price
(say, Py ) fixed and finding the value of the other’s price (say, Px ) that maximizes
the second restaurant’s profit, and then the other way around.
The point of intersection of the two best-response curves is the Nash equilibrium of the pricing game between the two restaurants. That point represents the pair of prices, one for each firm, that are best responses to each other. The specific values for each restaurant's pricing strategy in equilibrium can be found algebraically by solving the two best-response rules jointly for Px and Py. We deliberately chose our example to make the equations linear, and the solution is easy. In this case, we simply substitute the expression for Px into the expression for Py to find

Py = 15 + 0.25Px = 15 + 0.25(15 + 0.25Py) = 18.75 + 0.0625Py.

This last equation simplifies to Py = 20. Given the symmetry of the problem, it is simple to determine that Px = 20 also.4 Thus, in equilibrium, each restaurant charges $20 for its menu and makes a profit of $12 on each of the 2,400 customers [2,400 = (44 − 2 × 20 + 20) hundred] that it serves each month, for a total profit of $28,800 per month.
4
Without this symmetry, the two best-response equations will be different, but given our other
specifications, still linear. So it is not much harder to solve the nonsymmetric case. You will have a
chance to do so in Exercise S2 at the end of this chapter.
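(A quick computational cross-check of this equilibrium, added by us; it solves the two linear best-response rules as a system and then recomputes the profit figure.)

    import numpy as np

    # Best responses: Px = 15 + 0.25*Py and Py = 15 + 0.25*Px, rewritten
    # as  Px - 0.25*Py = 15  and  -0.25*Px + Py = 15.
    A = np.array([[1.0, -0.25],
                  [-0.25, 1.0]])
    b = np.array([15.0, 15.0])
    px, py = np.linalg.solve(A, b)
    print(px, py)                  # expected: 20.0 20.0

    qx = 44 - 2 * px + py          # 24 (hundred customers per month)
    print((px - 8) * qx * 100)     # expected: 28800.0 dollars per month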
B. Some Economics of Oligopoly
Our main purpose in presenting the restaurant pricing example was to illustrate how the Nash equilibrium can be found in a game where the strategies
are continuous variables, such as prices. But it is interesting to take a further
look into this situation and to explain some of the economics behind pricing
strategies and profits when a small number of firms (here just two) compete. In
the jargon of economics, such competition is referred to as oligopoly, from the
Greek words for “a small number of sellers.”
Begin by observing that each firm’s best-response curve slopes upward. Specifically, when one restaurant raises its price by $1, the other’s best response is
to raise its own price by 0.25, or 25 cents. When one restaurant raises its price,
some of its customers switch to the other restaurant, and its rival can then profit
from these new customers by raising its price part of the way. Thus, a restaurant
that raises its price is also helping to increase its rival’s profit. In Nash equilibrium, where each restaurant chooses its price independently and out of concern
for its own profit, it does not take into account this benefit that it conveys to
the other. Could they get together and cooperatively agree to raise their prices,
thereby raising both of their profits? Yes. Suppose the two restaurants charged
$24 each. Then each would make a profit of $16 on each of the 2,000 customers
[2,000 = (44 − 2 × 24 + 24) hundred] that it would serve each month, for a total
profit of $32,000 per month.
This pricing game is exactly like the prisoners’ dilemma game presented
in Chapter 4, but now the strategies are continuous variables. In the story in
Chapter 4, the Husband and Wife were each tempted to cheat the other and
confess to the police; but, when they both did so, both ended up with longer prison sentences (worse outcomes). In the same way, the more profitable
price of $24 is not a Nash equilibrium. The separate calculations of the two
restaurants will lead them to undercut such a price. Suppose that Yvonne somehow starts by charging $24. Using the best-response formula, we see that Xavier will then charge 15 + 0.25 × 24 = 21. Then Yvonne will come back with her best response to that: 15 + 0.25 × 21 = 20.25. Continuing this process, the prices of both will converge toward the Nash equilibrium price of $20.
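(A tiny simulation of this undercutting process, added by us, starting from the $24 price in the text:)

    # Iterated best responses, starting from a price of $24;
    # each step applies the rival's rule P -> 15 + 0.25*P.
    price = 24.0
    for step in range(6):
        price = 15 + 0.25 * price
        print(round(price, 4))  # 21.0, 20.25, 20.0625, ... -> 20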
But what price is jointly best for the two restaurants? Given the symmetry, suppose both charge the same price P. Then the profit of each will be

Πx = Πy = (P − 8)(44 − 2P + P) = (P − 8)(44 − P) = −352 + 52P − P².

The two can choose P to maximize this expression. Using the formula provided in Section 1.A (here B = 52 and C = 1), we see that the solution is P = 52/2 = 26. The resulting profit for each restaurant is $32,400 per month.
In the jargon of economics, such collusion to raise prices to the jointly optimal level is called a cartel. The high prices hurt consumers, and regulatory agencies of the U.S. government often try to prevent the formation of cartels and to
make firms compete with one another. Explicit collusion over price is illegal, but
it may be possible to maintain tacit collusion in a repeated prisoners’ dilemma;
we examine such repeated games in Chapter 10.5
Collusion need not always lead to higher prices. In the preceding example, if one restaurant lowers its price, its sales increase, in part because it
draws some customers away from its rival because the products (meals) of
the two restaurants are substitutes for each other. In other contexts, two firms
may be selling products that are complements to each other—for example,
hardware and software. In that case, if one firm lowers its price, the sales
of both firms increase. In a Nash equilibrium, where the firms act independently, they do not take into account the benefit that would accrue to each
of them if they both lowered their prices. Therefore, they keep prices higher
than they would if they were able to coordinate their actions. Allowing them
to cooperate would lead to lower prices and thus be beneficial to the consumers as well.
Competition need not always involve the use of prices as the strategic
variables. For example, fishing fleets may compete to bring a larger catch to
market; this is quantity competition as opposed to the price competition considered in this section. We consider quantity competition later in this chapter
and in several of the end-of-chapter exercises.
C. Political Campaign Advertising
Our second example is one drawn from politics. It requires just a little more
mathematics than we normally use, but we explain the intuition behind the calculations in words and with a graph.
Consider an election contested by two parties or candidates. Each is trying to win votes away from the other by advertising—either positive ads that
highlight the good things about oneself or negative ads that emphasize the bad
things about the opponent. To keep matters simple, suppose the voters start
out entirely ignorant and unconcerned and form opinions solely as a result of
the ads. (Many people would claim that this is a pretty accurate description
of U.S. politics, but more advanced analyses in political science do recognize
that there are informed and strategic voters. We address the behavior of such
5
Firms do try to achieve explicit collusion when they think they can get away with it. An entertaining and instructive story of one such episode is in The Informant, by Kurt Eichenwald (New York:
Broadway Books, 2000).
[FIGURE 5.2 Best Responses and Nash Equilibrium in the Campaign Advertising Game. Party L's ad spending x (horizontal axis) and Party R's spending y (vertical axis) each run from 0 to 100, in $ millions; the two hump-shaped best-response curves intersect at the Nash equilibrium, where x = y = 25.]
voters in detail in Chapter 15.) Even more simply, suppose the vote share of a
party equals its share of the total campaign advertising that is done. Call the
parties or candidates L and R; when L spends $x million on advertising and R
spends $y million, L will get a share x/(x + y) of the votes and R will get y/(x + y).
Once again, readers who get interested in this application can find more general
treatments in specialized political science writings.
Raising money to pay for these ads includes a cost: money to send letters
and make phone calls; time and effort of the candidates, party leaders, and activists; the future political payoff to large contributors; and possible future political costs if these payoffs are exposed and lead to scandals. For simplicity of
analysis, let us suppose all these costs are proportional to the direct campaign
expenditures x and y. Specifically, let us suppose that party L's payoff is measured by its vote percentage minus its advertising expenditure, 100x/(x + y) − x. Similarly, party R's payoff is 100y/(x + y) − y.
Now we can find the best responses. Because we cannot do so without calculus, we derive the formula mathematically and then explain in words its general meaning intuitively. For a given strategy x of party L, party R chooses y to maximize its payoff. The calculus first-order condition is found by holding x fixed and setting the derivative of 100y/(x + y) − y with respect to y equal to 0. It is 100x/(x + y)² − 1 = 0, or y = 10√x − x. Figure 5.2 shows its graph and that of the analogous best-response function of party L—namely, x = 10√y − y.
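(The intermediate steps of this derivative can be checked with sympy; the snippet below is our addition, not the book's.)

    import sympy as sp

    x, y = sp.symbols("x y", positive=True)
    payoff_R = 100 * y / (x + y) - y
    foc = sp.diff(payoff_R, y)   # algebraically equal to 100*x/(x+y)**2 - 1
    # Solving the first-order condition for y gives R's best response.
    print(sp.solve(sp.Eq(foc, 0), y))
    # expected: [-x + 10*sqrt(x)], i.e., y = 10*sqrt(x) - x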
Look at the best-response curve of party R. As the value of party L’s x increases, party R’s y increases for a while and then decreases. If the other party is
advertising very little, then one’s own ads have a high reward in the form of votes,
and it pays to respond to a small increase in the other party’s expenditures by
spending more oneself to compete harder. But if the other party already spends
a great deal on ads, then one’s own ads get only a small return in relation to their
cost, so it is better to respond to the other party’s increase in spending by scaling
back.
As it happens, the two parties’ best-response curves intersect at their
peak points. Again, some algebraic manipulation of the equations for the two
curves yields us exact values for the equilibrium values of x and y. You should
verify that here x and y are each equal to 25, or $25 million. (This is presumably
a congressional election; Senate and presidential elections cost much more
these days.)
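(You can carry out that verification by hand or, as we do here, let sympy exploit the symmetry by setting y = x; this sketch is our addition.)

    import sympy as sp

    x = sp.symbols("x", positive=True)
    # On the symmetric diagonal y = x, the best response requires
    # x = 10*sqrt(x) - x, so 2*x = 10*sqrt(x), sqrt(x) = 5, and x = 25.
    print(sp.solve(sp.Eq(x, 10 * sp.sqrt(x) - x), x))  # expected: [25]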
As in the pricing game, we have a prisoners’ dilemma. If both parties cut
back on their ads in equal proportions, their vote shares would be entirely unaffected, but both would save on their expenditures and so both would have a
larger payoff. Unlike a producers’ cartel for substitute products (which keeps
prices high and hurts consumers), a politicians’ cartel to advertise less would
probably benefit voters and society, like a producers’ cartel for complements
would lead to lower prices and benefit consumers. We could all benefit from
finding ways to resolve this particular prisoners’ dilemma. In fact, Congress
has been trying to do just that for several years and has imposed some partial curbs, but political competition seems too fierce to permit a full or lasting
resolution.
What if the parties are not symmetrically situated? Two kinds of asymmetries can arise. One party (say, R) may be able to advertise at a lower cost, because it has favored access to the media. Or R’s advertising dollars may be more
effective than L’s—for example, L’s vote share may be x(x  2y), while R’s vote
share is 2y(x  2y).
In the first of these cases, R exploits its cheaper access to advertising by
choosing a higher level of expenditures y for any given x for party L—that
is, R’s best-response curve in Figure 5.2 shifts upward. The Nash equilibrium
shifts to the northwest along L’s unchanged best-response curve. Thus, R ends
up advertising more and L ends up advertising less than before. It is as if the
advantaged party uses its muscle and the disadvantaged party gives up to some
extent in the face of this adversity.
In the second case, both parties' best-response curves shift in more complex ways. The outcome is that both spend equal amounts, but less than the 25 that they spent in the symmetric case. In our example where R's dollars are twice as effective as L's, it turns out that their common expenditure level is 200/9 ≈ 22.2 < 25. (Thus the symmetric case is the one of most intense competition.)
When R’s spending is more effective, it is also true that the best-response curves
are asymmetric in such a way that the new Nash equilibrium, rather than being
at the peak points of the two best-response curves, is on the downward part of L’s
best-response curve and on the upward part of R’s best-response curve. That is
to say, although both parties spend the same dollar amount, the favored party,
R, spends more than the amount that would bring forth the maximum response
from party L, and the underdog party, L, spends less than the amount that
would bring forth the maximum response from party R. We include an optional
exercise (Exercise U12) in this chapter that lets the mathematically advanced
students derive these results.
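(For the record, a sympy sketch of that derivation, added by us, using the asymmetric vote shares x/(x + 2y) and 2y/(x + 2y) given above:)

    import sympy as sp

    x, y = sp.symbols("x y", positive=True)
    payoff_L = 100 * x / (x + 2 * y) - x
    payoff_R = 100 * 2 * y / (x + 2 * y) - y
    # Each party's first-order condition in its own spending variable:
    focs = [sp.Eq(sp.diff(payoff_L, x), 0),
            sp.Eq(sp.diff(payoff_R, y), 0)]
    print(sp.solve(focs, [x, y]))  # expected: [(200/9, 200/9)]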
D. General Method for Finding Nash Equilibria
Although the strategies (prices or campaign expenditures) and payoffs (profits
or vote shares) in the two previous examples are specific to the context of competition between firms or political parties, the method for finding the Nash equilibrium of a game with continuous strategies is perfectly general. Here we state
its steps so that you can use it as a recipe for solving other games of this kind.
Suppose the players are numbered 1, 2, 3, . . . Label their strategies x, y, z, . . . in that order, and their payoffs by the corresponding uppercase letters X, Y, Z, . . . The payoff of each is in general a function of the choices of all; label the respective functions F, G, H, . . . Construct payoffs from the information about the game, and write them as

X = F(x, y, z, . . . ), Y = G(x, y, z, . . . ), Z = H(x, y, z, . . . ).
Using this general format to describe our example of price competition between two players (firms), the strategies x and y become the prices Px and Py, and the payoffs X and Y become the profits Πx and Πy. The functions F and G are the quadratic formulas

Πx = −8(44 + Py) + (16 + 44 + Py)Px − 2(Px)²,

and similarly for Πy.
In the general approach, player 1 regards the strategies of players 2, 3, . . . as
outside his control, and chooses his own strategy to maximize his own payoff.
Therefore, for each given set of values of y, z, . . . , player 1's choice of x maximizes X = F(x, y, z, . . . ). If you use calculus, the condition for this maximization is that the derivative of X with respect to x, holding y, z, . . . constant (the
partial derivative) equals 0. For special functions, simple formulas are available,
such as the one we stated and used above for the quadratic. And even if an algebra or calculus formulation is too difficult, computer programs can tabulate or
graph best‑response functions for you. Whatever method you use, you can find
an equation for player 1’s optimal choice of x for given y, z, . . . that is player 1’s
best‑response function. Similarly, you can find the best-response functions for
each of the other players.
The best-response functions are equal in number to the strategy variables in the game and can be solved simultaneously, regarding the strategy variables as the unknowns. The solution is the Nash equilibrium we seek. Some
games may have multiple solutions, yielding multiple Nash equilibria. Other
games may have no solution, requiring further analysis, such as inclusion of
mixed strategies.
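(As a sketch of this recipe in code, added by us: for differentiable payoffs one can stack the players' first-order conditions and hand them to a numerical root finder. The example below assumes Python with scipy and reuses the restaurant game of Section 1.A, whose first-order conditions follow from the profit formulas above.)

    from scipy.optimize import fsolve

    # First-order conditions d(profit_x)/d(Px) = 0 and d(profit_y)/d(Py) = 0
    # for the restaurant game: 60 + Py - 4*Px = 0 and 60 + Px - 4*Py = 0.
    def focs(prices):
        px, py = prices
        return [60 + py - 4 * px,
                60 + px - 4 * py]

    print(fsolve(focs, x0=[10.0, 10.0]))  # expected: [20. 20.]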
2 CRITICAL DISCUSSION OF THE NASH EQUILIBRIUM CONCEPT
Although Nash equilibrium is the primary solution concept for simultaneous
games, it has been subject to several theoretical criticisms. In this section, we
briefly review some of these criticisms and some rebuttals, in each case by using
an example.6 Some of the criticisms are mutually contradictory, and some can
be countered by thinking of the games themselves in a better way. Others tell
us that the Nash equilibrium concept by itself is not enough and suggest some
augmentations or relaxations of it that have better properties. We develop one
such alternative here and point to some others that appear in later chapters. We
believe our presentation will leave you with renewed but cautious confidence
in using the Nash equilibrium concept. But some serious doubts remain unresolved, indicating that game theory is not yet a settled science. Even this should
give encouragement to budding game theorists, because it shows that there is
a lot of room for new thinking and new research in the subject. A totally settled
science would be a dead science.
We begin by considering the basic appeal of the Nash equilibrium concept.
Most of the games in this book are noncooperative, in the sense that every player
takes her action independently. Therefore, it seems natural to suppose that, if
her action is not the best according to her own value system (payoff scale), given
what everyone else does, then she will change it. In other words, it is appealing
to suppose that every player’s action will be the best response to the actions of
all the others. Nash equilibrium has just this property of “simultaneous best responses”; indeed, that is its very definition. In any purported final outcome that
is not a Nash equilibrium, at least one player could have done better by switching to a different action.
This consideration led Nobel laureate Roger Myerson to rebut those criticisms of the Nash equilibrium that were based on the intuitive appeal of playing
a different strategy. His rebuttal simply shifted the burden of proof onto the critic.
“When asked why players in a game should behave as in some Nash equilibrium,”
he said, “my favorite response is to ask ‘Why not?’ and to let the challenger specify
what he thinks the players should do. If this specification is not a Nash equilibrium, then . . . we can show that it would destroy its own validity if the players believed it to be an accurate description of each other’s behavior.”7
6
David M. Kreps, Game Theory and Economic Modelling (Oxford: Clarendon Press, 1990), gives an
excellent in-depth discussion.
7
Roger Myerson, Game Theory (Cambridge, Mass.: Harvard University Press, 1991), p. 106.
A. The Treatment of Risk in Nash Equilibrium
Some critics argue that the Nash equilibrium concept does not pay due attention to risk. In some games, people might find strategies different from their Nash
equilibrium strategies to be safer and might therefore choose those strategies. We
offer two examples of this kind. The first comes from John Morgan, an economics professor at the University of California, Berkeley; Figure 5.3 shows the game
table.
Best-response analysis quickly reveals that this game has a unique Nash
equilibrium—namely, (A, A), yielding the payoffs (2, 2). But you may think, as
did several participants in an experiment conducted by Morgan, that playing C
has a lot of appeal, for the following reasons. It guarantees you the same payoff
as you would get in the Nash equilibrium—namely, 2; whereas if you play your
Nash equilibrium strategy A, you will get a 2 only if the other player also plays A.
Why take that chance? What is more, if you think the other player might use this
rationale for playing C, then you would be making a serious mistake by playing
A; you would get only a 0 when you could have gotten a 2 by playing C.
Myerson would respond, “Not so fast. If you really believe that the other
player would think this way and play C, then you should play B to get the payoff
3. And if you think the other person would think this way and play B, then your
best response to B should be A. And if you think the other person would figure
this out, too, you should be playing your best response to A—namely, A. Back to
the Nash equilibrium!” As you can see, criticizing Nash equilibrium and rebutting the criticisms is itself something of an intellectual game, and quite a fascinating one.
The second example comes from David Kreps, an economist at Stanford
Business School, and is even more dramatic. The payoff matrix is in Figure 5.4.
Before doing any theoretical analysis of this game, you should pretend that you
are actually playing the game and that you are player A. Which of the two actions would you choose?
Keep in mind your answer to the preceding question and let us proceed
to analyze the game. If we start by looking for dominant strategies, we see that
                            COLUMN
                       A         B         C
    ROW      A         2, 2      3, 1      0, 2
             B         1, 3      2, 2      3, 2
             C         2, 0      2, 3      2, 2

FIGURE 5.3 A Game with a Questionable Nash Equilibrium
                            B
                       Left        Right
    A        Up        9, 10       8, 9.9
             Down      10, 10      –1000, 9.9

FIGURE 5.4 Disastrous Nash Equilibrium?
player A has no dominant strategy but player B does. Playing Left guarantees
B a payoff of 10, no matter what A does, versus the payoff of 9.9 earned from
playing Right (also no matter what A does). Thus, player B should play Left.
Given that player B is going to go Left, player A does better to go Down. The
unique pure‑strategy Nash equilibrium of this game is (Down, Left); each player
achieves a payoff of 10 at this outcome.
The problem that arises here is that many, but not all, people assigned to
be Player A would not choose to play Down. (What did you choose?) This is
true for those who have been students of game theory for years as well as for
those who have never heard of the subject. If A has any doubts about either
B’s payoffs or B’s rationality, then it is a lot safer for A to play Up than to play
her Nash equilibrium strategy of Down. What if A thought the payoffs were as
illustrated in Figure 5.4 but in reality B’s payoffs were the reverse—the 9.9 payoff went with Left and the 10 payoff went with Right? What if the 9.9 payoff
were only an approximation and the exact payoff was actually 10.1? What if
B was a player with a substantially different value system or was not a truly
rational player and might choose the “wrong” action just for fun? Obviously,
our assumptions of perfect information and rationality can really be crucial to
the analysis that we use in the study of strategy. Doubts about players can alter
equilibria from those that we would normally predict and can call the reasonableness of the Nash equilibrium concept into question.
However, the real problem with many such examples is not that the Nash
equilibrium concept is inappropriate but that the examples illustrate it in an inappropriately simplistic way. In this example, if there are any doubts about B’s
payoffs, then this fact should be made an integral part of the analysis. If A does
not know B’s payoffs, the game is one of asymmetric information (which we won’t
have the tools to discuss until Chapter 8). But this particular example is a relatively simple game of that kind, and we can figure out its equilibrium very easily.
Suppose A thinks there is a probability p that B's payoffs from Left and Right are the reverse of those shown in Figure 5.4, so (1 − p) is the probability that B's payoffs are as stated in that figure. Because A must take her action without knowing what B's actual payoffs are, she must choose her strategy to be "best on average." In this game, the calculation is simple because in each case B has a dominant strategy; the only problem for A is that in the two different cases different strategies are dominant for B. With probability (1 − p), B's dominant strategy is Left (the case shown in the figure), and with probability p, it is Right (the opposite case). Therefore, if A chooses Up, then with probability (1 − p) she will meet B playing Left and so get a payoff of 9; with probability p, she will meet B playing Right and so get a payoff of 8. Thus, A's statistical or probability-weighted average payoff from playing Up is 9(1 − p) + 8p. Similarly, A's statistical average payoff from playing Down is 10(1 − p) − 1,000p. Therefore, it is better for A to choose Up if

9(1 − p) + 8p > 10(1 − p) − 1,000p, that is, if 9 − p > 10 − 1,010p, or p > 1/1,009.

Thus, even if there is only a very slight chance that B's payoffs are the opposite of those in Figure 5.4, it is optimal for A to play Up. In this case, analysis based on rational behavior, when done correctly, contradicts neither the intuitive suspicion nor the experimental evidence after all.
In the preceding calculation, we supposed that, facing an uncertain prospect of payoffs, player A would calculate the statistical average payoffs from her different actions and would choose that action which yields her the highest statistical average payoff. This implicit assumption, though it serves the purpose in this example, is not without its own problems. For example, it implies that a person faced with two situations, one having a 50–50 chance of winning or losing $10 and the other having a 50–50 chance of winning $10,001 and losing $10,000, should choose the second situation, because it yields a statistical average winning of 50 cents (½ × 10,001 − ½ × 10,000), whereas the first yields 0 (½ × 10 − ½ × 10). But most people would think that the second situation carries a much bigger risk and would therefore prefer the first situation. This difficulty is quite easy to resolve. In the appendix to Chapter 7, we show how the construction of a scale of payoffs that is suitably nonlinear in money amounts enables the decision maker to allow for risk as well as return. Then, in Chapter 8, we show how the concept can be used for understanding how people respond to the presence of risk in their lives—for example, by sharing the risk with others or by buying insurance.
B. Multiplicity of Nash Equilibria
Another criticism of the Nash equilibrium concept is based on the observation that many games have multiple Nash equilibria. Thus, the argument goes,
the concept fails to pin down outcomes of games sufficiently precisely to give
unique predictions. This argument does not automatically require us to abandon the Nash equilibrium concept. Rather, it suggests that if we want a unique
prediction from our theory, we must add some criterion for deciding which one
of the multiple Nash equilibria we want to select.
In Chapter 4, we studied many games of coordination with multiple equilibria. From among these equilibria, the players may be able to select one as a focal
point if they have some common social, cultural, or historical knowledge. Consider the following coordination game, played by students at Stanford University.
One player was assigned the city of Boston and the other was assigned San Francisco. Each was then given a list of nine other U.S. cities—Atlanta, Chicago, Dallas,
Denver, Houston, Los Angeles, New York, Philadelphia, and Seattle—and asked to
choose a subset of those cities. The two chose simultaneously and independently.
If and only if their choices divided up the nine cities completely and without any
overlap between them, both got a prize. Despite the existence of 512 different Nash
equilibria (one for each of the 2⁹ = 512 ways of assigning each of the nine cities to one player or the other),
when both players were Americans or long-time U.S. residents, more
equilibria, when both players were Americans or long-time U.S. residents, more
than 80% of the time they chose a unique equilibrium based on geography. The
student assigned Boston chose all the cities east of the Mississippi, and the student
assigned San Francisco chose all the cities west of the Mississippi. Such coordination was much less likely when one or both students were non-U.S. residents. In
such pairs, the choices were sometimes made alphabetically, but with much less
coordination on the same dividing point.8
The features of the game itself, combined with shared cultural background, can help player expectations to converge. As another example of
multiplicity of equilibria, consider a game where two players write down, simultaneously and independently, the share that each wants from a total prize
of $100. If the amounts that they write down add up to $100 or less, each
player receives what she wrote. If the two add up to more than $100, neither
gets anything. For any x, one player writing x and the other writing (100 − x)
is a Nash equilibrium: given the other's claim, asking for more would leave both with
nothing, while asking for less would yield less. Thus, the game has an (almost) infinite range of Nash
equilibria. But, in practice, 50–50 emerges as a focal point. This social norm of
equality or fairness seems so deeply ingrained as to be almost an instinct; players who choose 50 say that it is the obvious answer. To be a true focal point, not
only should it be obvious to each, but everyone should know that it is obvious
to each, and everyone should know that . . . ; in other words, its obviousness
should be common knowledge. That need not always be the case, as we see
when we consider a situation in which one player is a woman from an enlightened and egalitarian society who believes that 50–50 is obvious and the other is
a man from a patriarchal society who believes it is obvious that, in any matter of
division, a man should get three times as much as a woman. Then each will do
what is obvious to her or him, and they will end up with nothing, because neither’s obvious solution is obvious as common knowledge to both.
The existence of focal points is often a matter of coincidence, and creating them where none exist is basically an art that requires a lot of attention to
8. See David Kreps, A Course in Microeconomic Theory (Princeton: Princeton University Press, 1990),
pp. 392–93, 414–15.
the historical and cultural context of a game and not merely its mathematical
description. This bothers many game theorists, who would prefer the outcome
to depend only on an abstract specification of a game—players and their strategies should be identified by numbers without any external associations. We disagree. We think that historical and cultural contexts are just as important to a
game as is its purely mathematical description, and, if such context helps in selecting a unique outcome from multiple Nash equilibria, that is all to the better.
In Chapter 6, we will see that sequential-move games can have multiple
Nash equilibria. There, we will introduce the requirement of credibility that enables us to select a particular equilibrium; it turns out that this one is in fact
the rollback equilibrium of Chapter 3. In more complex games with information
asymmetries or additional complications, other restrictions called refinements
have been developed to identify and rule out Nash equilibria that are unreasonable in some way. In Chapter 8, we will consider one such refinement process
that selects an outcome called a perfect Bayesian equilibrium. The motivation
for each refinement is often specific to a particular type of game. A refinement
stipulates how players update their information when they observe what moves
other players made or failed to make. Each such stipulation is often perfectly reasonable in its context, and in many games it is not difficult to eliminate most of
the Nash equilibria and therefore to narrow down the ambiguity in prediction.
The opposite of the criticism that some games may have too many Nash
equilibria is that some games may have none at all. We saw an example of this
in Chapter 4 in Section 4.7 and said that, by extending the concept of strategy to random mixtures, Nash equilibrium could be restored. In Chapter 7, we
will explain and consider Nash equilibria in mixed strategies. In higher reaches
of game theory, there are more esoteric examples of games that have no Nash
equilibrium in mixed strategies either. However, this added complication is not
relevant for the types of analysis and applications that we deal with in this book,
so we do not attempt to address it here.
C. Requirements of Rationality for Nash Equilibrium
Remember that Nash equilibrium can be regarded as a system of the strategy
choices of each player and the belief that each player holds about the other
players’ choices. In equilibrium, (1) the choice of each should give her the best
payoff given her belief about the others’ choices, and (2) the belief of each player
should be correct—that is, her actual choices should be the same as what this
player believes them to be. These seem to be natural expressions of the requirements of the mutual consistency of individual rationality. If all players have
common knowledge that they are all rational, how can any one of them rationally believe something about others’ choices that would be inconsistent with a
rational response to her own actions?
                COLUMN
          C1      C2      C3
    R1   0, 7    2, 5    7, 0
ROW R2   5, 2    3, 3    5, 2
    R3   7, 0    2, 5    0, 7

FIGURE 5.5 Justifying Choices by Chains of Beliefs and Responses
To begin to address this question, we consider the three-by-three game in
Figure 5.5. Best-response analysis quickly reveals that it has only one Nash
equilibrium—namely, (R2, C2), leading to payoffs (3, 3). In this equilibrium, Row
plays R2 because she believes that Column is playing C2. Why does she believe
this? Because she knows Column to be rational, Row must simultaneously believe that Column believes that Row is choosing R2, because C2 would not be
Column’s best choice if she believed Row would be playing either R1 or R3. Thus,
the claim goes, in any rational process of formation of beliefs and responses, beliefs would have to be correct.
The trouble with this argument is that it stops after one round of thinking
about beliefs. If we allow it to go far enough, we can justify other choice combinations. We can, for example, rationally justify Row’s choice of R1. To do so, we
note that R1 is Row’s best choice if she believes Column is choosing C3. Why
does she believe this? Because she believes that Column believes that Row is
playing R3. Row justifies this belief by thinking that Column believes that Row
believes that Column is playing C1, believing that Row is playing R1, believing
in turn . . . This is a chain of beliefs, each link of which is perfectly rational.
Thus, rationality alone does not justify Nash equilibrium. There are more
sophisticated arguments of this kind that do justify a special form of Nash equilibrium in which players can condition their strategies on a publicly observable
randomization device. But we leave that to more advanced treatments. In the
next section, we develop a simpler concept that captures what is logically implied by the players’ common knowledge of their rationality alone.
3 RATIONALIZABILITY
What strategy choices in games can be justified on the basis of rationality
alone? In the matrix of Figure 5.5, we can justify any pair of strategies, one for
each player, by using the same type of logic that we used in Section 2.C. In other
words, we can justify any one of the nine logically conceivable combinations.
Thus, rationality alone does not give us any power to narrow down or predict
outcomes at all. Is this a general feature of all games? No. For example, if a strategy is dominated, rationality alone can rule it out of consideration. And when
players recognize that other players, being rational, will not play dominated
strategies, iterated elimination of dominated strategies can be performed on
the basis of common knowledge of rationality. Is this the best that can be done?
No. Some more ruling out of strategies can be done, by using a property slightly
stronger than being dominated in pure strategies. This property identifies strategies that are never a best response. The set of strategies that survive elimination on this ground are called rationalizable, and the concept itself is known as
rationalizability.
Why introduce this additional concept, and what does it do for us? As for
why, it is useful to know how far we can narrow down the possible outcomes of
a game based on the players’ rationality alone, without invoking correctness of
expectations about the other player’s actual choice. It is sometimes possible to
figure out that the other player will not choose some available action or actions,
even when it is not possible to pin down the single action that she will choose. As
for what it achieves, that depends on the context. In some cases rationalizability
may not narrow down the outcomes at all. This was so in the three‑by‑three example of Figure 5.5. In some cases, it narrows down the possibilities to some extent, but not all the way down to the Nash equilibrium if the game has a unique
one, or to the set of Nash equilibria if there are several. An example of such a
situation is the four‑by‑four enlargement of the previous example, considered
in Section 3.A below. In some other cases, the narrowing down may go all the
way to the Nash equilibrium; in these cases, we have a more powerful justification for the Nash equilibrium that relies on rationality alone, without assuming
correctness of expectations. The quantity competition example of Section 3.B
below is an example in which the rationalizability argument takes us all the way
to the game’s unique Nash equilibrium.
A. Applying the Concept of Rationalizability
Consider the game in Figure 5.6, which is the same as Figure 5.5 but with an
additional strategy for each player.9 We just indicated that nine of the strategy
combinations that pick one of the first three strategies for each of the players
can be justified by a chain of beliefs about each other’s beliefs. That remains
true in this enlarged matrix. But can R4 and C4 be justified in this way?
9. This example comes from Douglas Bernheim, "Rationalizable Strategic Behavior," Econometrica,
vol. 52, no. 4 (July 1984), pp. 1007–1028, an article that originally developed the concept of rationalizability. See also Andreu Mas-Colell, Michael Whinston, and Jerry Green, Microeconomic Theory
(New York: Oxford University Press, 1995), pp. 242–45.
                    COLUMN
          C1       C2       C3       C4
    R1   0, 7     2, 5     7, 0     0, 1
ROW R2   5, 2     3, 3     5, 2     0, 1
    R3   7, 0     2, 5     0, 7     0, 1
    R4   0, 0     0, −2    0, 0     10, −1

FIGURE 5.6 Rationalizable Strategies
Could Row ever believe that Column would play C4? Such a belief would
have to be justified by Column’s beliefs about Row’s choice. What might Column
believe about Row’s choice that would make C4 Column’s best response?
Nothing. If Column believes that Row would play R1, then Column’s best choice
is C1. If Column believes that Row will play R2, then Column’s best choice is C2.
If Column believes that Row will play R3, then C3 is Column’s best choice. And,
if Column believes that Row will play R4, then C1 and C3 are tied for her best
choice. Thus, C4 is never a best response for Column.10 This means that Row,
knowing Column to be rational, can never attribute to Column any belief about
Row’s choice that would justify Column’s choice of C4. Therefore, Row should
never believe that Column would choose C4.
Note that, although C4 is never a best response, it is not dominated by any
of C1, C2, and C3. For Column, C4 does better than C1 against Row’s R3, better
than C2 against Row’s R4, and better than C3 against Row’s R1. If a strategy is
dominated, it also can never be a best response. Thus, “never a best response” is
a more general concept than “dominated.” Eliminating strategies that are never
a best response may be possible even when eliminating dominated strategies is
not. So eliminating strategies that are never a best response can narrow down the
set of possible outcomes more than can elimination of dominated strategies.11
The elimination of “never best response” strategies can also be carried out
iteratively. Because a rational Row can never believe that a rational Column
will play C4, a rational Column should foresee this. Because R4 is Row’s best
response only against C4, Column should never believe that Row will play R4.
10. Note that in each case the best choice is strictly better than C4 for Column. Thus, C4 is never even
tied for a best response. We can distinguish between weak and strong senses of never being a best response just as we distinguished between weak and strong dominance. Here, we have the strong sense.
11. When one allows for mixed strategies, as we will do in Chapter 7, there arises the possibility of a
pure strategy being dominated by a mixture of other pure strategies. With such an expanded definition of a dominated strategy, iterated elimination of strictly dominated strategies turns out to be
equivalent to rationalizability. The details are best left for a more advanced course in game theory.
Thus, R4 and C4 can never figure in the set of rationalizable strategies. The
concept of rationalizability does allow us to narrow down the set of possible
outcomes of this game to this extent.
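The elimination logic just described is mechanical enough to automate. The sketch below is ours, not the book's (Python is used purely for illustration); it applies iterated elimination of never-best-response strategies to the Figure 5.6 matrix, restricting beliefs to the opponent's remaining pure strategies as in the argument above, and it reports that exactly R4 and C4 drop out.

```python
# Iterated elimination of never-best-response strategies for Figure 5.6.
# Beliefs are restricted to single (pure) opponent strategies, matching the
# argument in the text; footnote 11 covers the general mixed-belief case.

ROW_PAY = [[0, 2, 7, 0],   # Row's payoffs: R1 against C1..C4
           [5, 3, 5, 0],   # R2
           [7, 2, 0, 0],   # R3
           [0, 0, 0, 10]]  # R4

COL_PAY = [[7, 5, 0, 1],   # Column's payoffs, same (row, column) layout
           [2, 3, 2, 1],
           [0, 5, 7, 1],
           [0, -2, 0, -1]]

def survivors(own, opp, pay):
    """Strategies in `own` that are a best response to at least one
    strategy in `opp`; pay(i, j) is i's payoff when the opponent plays j."""
    keep = set()
    for j in opp:
        best = max(pay(i, j) for i in own)
        keep |= {i for i in own if pay(i, j) == best}
    return keep

rows, cols = set(range(4)), set(range(4))
while True:
    new_rows = survivors(rows, cols, lambda i, j: ROW_PAY[i][j])
    new_cols = survivors(cols, rows, lambda j, i: COL_PAY[i][j])
    if (new_rows, new_cols) == (rows, cols):
        break
    rows, cols = new_rows, new_cols

print(sorted(rows), sorted(cols))  # [0, 1, 2] [0, 1, 2]: R4 and C4 eliminated
```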
If a game has a Nash equilibrium, it is rationalizable and in fact can be sustained by a simple one-round system of beliefs, as we saw in Section 2.C above.
But, more generally, even if a game does not have a Nash equilibrium, it may
have rationalizable outcomes. Consider the two-by-two game obtained from
Figure 5.5 or Figure 5.6 by retaining just the strategies R1 and R3 for Row and C1
and C3 for Column. It is easy to see that it has no Nash equilibrium in pure strategies. But all four outcomes are rationalizable with the use of exactly the chain
of beliefs, constructed earlier, that went around and around these strategies.
Thus, the concept of rationalizability provides a possible way of solving
games that do not have a Nash equilibrium. And more important, the concept
tells us how far we can narrow down the possibilities in a game on the basis of
rationality alone.
B. Rationalizability Can Take Us All the Way to Nash Equilibrium
In some games, iterated elimination of never-best-response strategies can narrow things down all the way to Nash equilibrium. Note we said can, not must.
But if it does, that is useful because in these games we can strengthen the case
for Nash equilibrium by arguing that it follows purely from the players’ rational
thinking about each other’s thinking. Interestingly, one class of games that can
be solved in this way is very important in economics. This class consists of competition between firms that choose the quantities that they produce, knowing
that the total quantity that is put on the market will determine the price.
We illustrate a game of this type in the context of a small coastal town. It has
two fishing boats that go out every evening and return the following morning
to put their night’s catch on the market. The game is played out in an era before
modern refrigeration, so all the fish has to be sold and eaten the same day. Fish are
quite plentiful in the ocean near the town, so the owner of each boat can decide
how much to catch each night. But each knows that, if the total that is brought to
the market is too large, the glut of fish will mean a low price and low profits.
Specifically, we suppose that, if one boat brings R barrels and the other
brings S barrels of fish to the market, the price P (measured in ducats per barrel)
will be P = 60 − (R + S). We also suppose that the two boats and their crews
are somewhat different in their fishing efficiency. Fishing costs the first boat 30
ducats per barrel and the second boat 36 ducats per barrel.
Now we can write down the profits of the two boat owners, U and V, in terms
of their strategies R and S:
U  [(60  R  S )  30]R  (30  S ) R  R 2,
V  [(60  R  S )  36]S  (24  R ) S  S 2.
With these payoff expressions, we construct best-response curves and find the
Nash equilibrium. As in our price competition example from Section 1, each
player’s payoff is a quadratic function of his own strategy, holding the strategy
of the other player constant. Therefore, the same mathematical methods we develop there and in the appendix to this chapter can be applied.
The first boat’s best response R should maximize U for each given value of
the other boat’s S. With the use of calculus, this means that we should differentiate U with respect to R, holding S fixed, and set the derivative equal to 0, which
gives
(30 − S) − 2R = 0; so R = 15 − S/2.

The noncalculus approach uses the result that the U-maximizing value of
R is B/(2C), where in this case B = 30 − S and C = 1. This gives R = (30 − S)/2,
or R = 15 − S/2.
Similarly, the best-response equation of the second boat is found by choosing S to maximize V for each fixed R, yielding

(24 − R) − 2S = 0; so S = 12 − R/2.
The Nash equilibrium is found by solving the two best-response equations
jointly for R and S, which is easy to do.12 So we just state the results: quantities
are R  12 and S  6; price is P  42; and profits are U  144 and V  36.
Figure 5.7 shows the two fishermen’s best-response curves (labeled BR1 and
BR2 with the equations displayed) and the Nash equilibrium (labeled N with
its coordinates displayed) at the intersection of the two curves. Figure 5.7 also
shows how the players’ beliefs about each other’s choices can be narrowed down
by iteratively eliminating strategies that are never best responses.
What values of S can the first owner rationally believe the second owner will
choose? That depends on what the second owner thinks the first owner will produce. But no matter what this might be, the whole range of the second owner’s
best responses is between 0 and 12. So the first owner cannot rationally believe
that the second owner will choose anything else; all negative choices of S (obviously) and all choices of S greater than 12 (less obviously) are eliminated. Similarly, the second owner cannot rationally think that the first owner will produce
anything less than 0 or greater than 15.
Now take this to the second round. When the first owner has restricted the
second owner’s choices of S to the range between 0 and 12, her own choices of
R are restricted to the range of best responses to S’s range. The best response to
12. Although they are incidental to our purpose, some interesting properties of the solution are worth
pointing out. The quantities differ because the costs differ; the more efficient (lower-cost) boat gets
to sell more. The cost and quantity differences together imply even bigger differences in the resulting profits. The cost advantage of the first boat over the second is only 20%, but it makes four times
as much profit as the second boat.
[Figure 5.7 plots the two best-response curves in (R, S) space, BR1: R = 15 − S/2 and BR2: S = 12 − R/2, intersecting at the Nash equilibrium N = (12, 6). The successively narrowed ranges (9 to 12.75 on the R axis, 4.5 to 7.5 on the S axis) are marked on the axes.]

FIGURE 5.7 Nash Equilibrium through Rationalizability
S  0 is R  15, and the best response to S  12 is R  15  122  9. Because BR1
has a negative slope throughout, the whole range of R allowed at this round of
thinking is between 9 and 15. Similarly, the second owner’s choice of S is restricted
to the range of best responses to R between 0 and 15—namely, values between
S  12 and S  12  152  4.5. Figure 5.7 shows these restricted ranges on the axes.
The third round of thinking narrows the ranges further. Because R must be
at least 9 and BR2 has a negative slope, S can be at most the best response to
9—namely, S = 12 − 9/2 = 7.5. In the second round, S was already shown to be
at least 4.5. Thus, S is now restricted to be between 4.5 and 7.5. Similarly, because
S must be at least 4.5, R can be at most 15 − 4.5/2 = 12.75. In the second round,
R was shown to be at least 9, so now it is restricted to the range from 9 to 12.75.
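These rounds are easy to mechanize. The sketch below is ours, not the book's; it simply repeats the endpoint calculations just performed. Because both best-response curves slope downward, each new range's endpoints are the best responses to the opposite endpoints of the other player's previous range.

```python
# Iterative narrowing of rationalizable ranges in the fishing game.
br1 = lambda s: max(0.0, 15 - s / 2)  # boat 1's best response to S
br2 = lambda r: max(0.0, 12 - r / 2)  # boat 2's best response to R

r_lo, r_hi = 0.0, 60.0  # generous starting bounds (price hits 0 at 60 barrels)
s_lo, s_hi = 0.0, 60.0
for rnd in range(1, 9):
    # Negative slopes flip the endpoints: the largest R answers the smallest S.
    r_lo, r_hi, s_lo, s_hi = br1(s_hi), br1(s_lo), br2(r_hi), br2(r_lo)
    print(f"round {rnd}: R in [{r_lo:.4f}, {r_hi:.4f}], "
          f"S in [{s_lo:.4f}, {s_hi:.4f}]")
# Both ranges collapse onto the Nash equilibrium R = 12, S = 6.
```

Running it reproduces the rounds in the text: round 2 gives R in [9, 15] and S in [4.5, 12], round 3 gives R in [9, 12.75] and S in [4.5, 7.5], and the intervals keep shrinking toward (12, 6).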
This succession of rounds can be carried on as far as you like, but it is already evident that the successive narrowing of the two ranges is converging on
the Nash equilibrium, R = 12 and S = 6. Thus, the Nash equilibrium is the only
outcome that survives the iterated elimination of strategies that are never best
responses.13 We know that in general the rationalizability argument need not
narrow down the outcomes of a game to its Nash equilibria, so this is a special
feature of this example. Actually, the process works for an entire class of games;
13. This example can also be solved by iteratively eliminating dominated strategies, but proving
dominance is harder and needs more calculus, whereas the never-best-response property is obvious
from Figure 5.7, so we use the simpler argument.
it will work for any game that has a unique Nash equilibrium at the intersection
of downward-sloping best-response curves.14
This argument should be carefully distinguished from an older one based
on a succession of best responses. The old reasoning proceeded as follows. Start
at any strategy for one of the players—say, R = 18. Then the best response of
the other is S = 12 − 18/2 = 3. The best response of R to S = 3 is R = 15 − 3/2 =
13.5. In turn, the best response of S to R = 13.5 is 12 − 13.5/2 = 5.25. Then, in its
turn, the best R against this S is R = 15 − 5.25/2 = 12.375. And so on.
The chain of best responses in the old argument also converges to the Nash
equilibrium. But the argument is flawed. The game is played once with simultaneous moves. It is not possible for one player to respond to what the other
player has chosen, then have the first player respond back again, and so on. If
such dynamics of actual play were allowed, would the players not foresee that
the other is going to respond and so do something different in the first place?
The rationalizability argument is different. It clearly incorporates the fact
that the game is played only once and with simultaneous moves. All the thinking regarding the chain of best responses is done in advance, and all the successive rounds of thinking and responding are purely conceptual. Players are not
responding to actual choices but are merely calculating those choices that will
never be made. The dynamics are purely in the minds of the players.
4 EMPIRICAL EVIDENCE CONCERNING NASH EQUILIBRIUM
In Chapter 3, when we considered empirical evidence on sequential-move
games and rollback, we presented empirical evidence from observations on
games actually played in the world, as well as games deliberately constructed
for testing the theory in the laboratory. There we pointed out the different merits
and drawbacks of the two methods for assessing the validity of rollback equilibrium predictions. Similar issues arise in securing and interpreting the evidence
on Nash equilibrium play in simultaneous-move games.
Real-world games are played for substantial stakes, often by experienced
players who have the knowledge and the incentives to employ good strategies.
But these situations include many factors beyond those considered in the theory. In particular, in real-life situations, it is difficult to observe the quantitative
14. A similar argument works with upward-sloping best-response curves, such as those in the pricing
game of Figure 5.1, for narrowing the range of best responses starting at low prices. Narrowing from
the higher end is possible only if there is some obvious starting point. This starting point might be
a very high price that can never be exceeded for some externally enforced reason—if, for example,
people simply do not have the money to pay prices beyond a certain level.
payoffs that players would have earned for all possible combinations of strategies. Therefore, if their behavior does not bear out the predictions of the theory,
we cannot tell whether the theory is wrong or whether some other factors overwhelm the strategic considerations.
Laboratory experiments attempt to control for other factors in an attempt
to provide cleaner tests of the theory. But they bring in inexperienced players
and provide them with little time and relatively weak incentives to learn the
game and play it well. Confronted with a new game, most of us would initially
flounder and try things out at random. Thus, the first several plays of the game
in an experimental setting may represent this learning phase and not the equilibrium that experienced players would learn to play. Experiments often control
for inexperience and learning by discarding several initial plays from their data,
but the learning phase may last longer than the one morning or one afternoon
that is the typical limit of laboratory sessions.
A. Laboratory Experiments
Researchers have conducted numerous laboratory experiments in the past three
decades to test how people act when placed in certain interactive strategic situations. In particular, such research asks, “Do participants play their Nash equilibrium strategies?” Reviewing this work, Douglas Davis and Charles Holt conclude
that, in relatively simple single-move games with a unique Nash equilibrium,
the equilibrium “has considerable drawing power . . . after some repetitions with
different partners.”15 But the theory’s success is more mixed in more complex
situations, such as when multiple Nash equilibria exist, when emotional factors
modify payoffs beyond the stated cash amounts, when the calculations for finding a Nash equilibrium are more complex, or when the game is repeated with
fixed partners. We will briefly consider the performance of Nash equilibrium in
several of these circumstances.
I. Choosing among multiple equilibria In Section 2.B above, we presented examples
demonstrating that focal points sometimes emerge to help players choose
among multiple Nash equilibria. Players may not manage to coordinate 100%
of the time, but circumstances often enable players to achieve much more coordination than would be experienced by random choices across possible
equilibrium strategies. Here we present a coordination game designed with an
interesting trade-off: the equilibrium with the highest payoff to all players also
happens to be the riskiest one to play, in the sense of Section 2.A above.
John Van Huyck, Raymond Battalio, and Richard Beil describe a 16-player
game in which each player simultaneously chooses an “effort” level between 1
15. Douglas D. Davis and Charles A. Holt, Experimental Economics (Princeton: Princeton University
Press, 1993), Chapter 2.
and 7. Individual payoffs depend on group “output,” a function of the minimum
effort level chosen by anyone in the group, minus the cost of one’s individual effort. The game has exactly seven Nash equilibria in pure strategies; any outcome
in which all players choose the same effort level is an equilibrium. The highest possible payoff ($1.30 per player) occurs when all subjects choose an effort
level of 7, while the lowest equilibrium payoff ($0.70 per player) occurs when all
subjects choose an effort level of 1. The highest-payoff equilibrium is a natural
candidate for a focal point, but in this case there is a risk to choosing the highest
effort; if just one other player chooses a lower effort level than you, then your
extra effort is wasted. For example, if you play 7 and at least one other person
chooses 1, you get a payoff of just $0.10, far worse than the worst equilibrium
payoff of $0.70. This makes players nervous about whether others will choose
maximum effort, and as a result, large groups typically fail to coordinate on
the best equilibrium. A few players inevitably choose lower than the maximum effort, and in repeated rounds play converges toward the lowest-effort
equilibrium.16
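The trade-off is easy to see in a payoff function. The exact schedule below is our reconstruction, not taken from the published experiment: the constants are chosen only to reproduce the three dollar figures quoted above.

```python
def payoff(own_effort, group_min):
    """Dollar payoff: rises with the group minimum, falls with own effort.
    The constants 0.60, 0.20, and 0.10 are our reconstruction, chosen to
    reproduce the $1.30, $0.70, and $0.10 figures quoted in the text."""
    return 0.60 + 0.20 * group_min - 0.10 * own_effort

print(f"{payoff(7, 7):.2f}")  # 1.30: everyone chooses maximum effort
print(f"{payoff(1, 1):.2f}")  # 0.70: everyone chooses minimum effort
print(f"{payoff(7, 1):.2f}")  # 0.10: you choose 7, someone else chooses 1
```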
II. Emotions and social norms In Chapter 3, we saw several examples in sequential-
move games where players were more generous to each other than Nash equilibrium would predict. Similar observations occur in simultaneous-move games
such as the prisoners’ dilemma game. One reason may be that the players’
payoffs are different from those assumed by the experimenter: in addition to
cash, their payoffs may also include the experience of emotions such as empathy, anger, or guilt. In other words, the players’ value systems may have internalized some social norms of niceness and fairness that have proved useful in the
larger social context and that therefore carry over to their behavior in the experimental game.17 Seen through this lens, these observations do not show any
deficiency of the Nash equilibrium concept itself, but they do warn us against
using the concept under naive or mistaken assumptions about people’s payoffs.
16. See John B. Van Huyck, Raymond C. Battalio, and Richard O. Beil, "Tacit Coordination Games,
Strategic Uncertainty, and Coordination Failure,” American Economic Review, vol. 80, no. 1 (March
1990), pp. 234–48. Subsequent research has suggested methods that can promote coordination on
the best equilibrium. Subhasish Dugar, “Non-monetary Sanction and Behavior in an Experimental
Coordination Game,” Journal of Economic Behavior & Organization, vol. 73, no. 3 (March 2010), pp.
377–86, shows that players gradually manage to coordinate on the highest-payoff outcome merely
by allowing players, between rounds, to express the numeric strength of their disapproval for each
other player’s decision. Roberto A. Weber, “Managing Growth to Achieve Efficient Coordination in
Large Groups,” American Economic Review, vol. 96, no. 1 (March 2006), pp. 114–26, shows that starting with a small group and slowly adding additional players can sustain the highest-payoff equilibrium, suggesting that a firm may do well to expand slowly and make sure that employees understand
the corporate culture of cooperation.
17. The distinguished game theorist Jörgen Weibull argues this position in detail in "Testing Game
Theory,” in Advances in Understanding Strategic Behaviour: Game Theory, Experiments and Bounded
Rationality: Essays in Honour of Werner Güth, ed. Steffen Huck (Basingstoke, UK: Palgrave MacMillan, 2004), pp. 85–104.
It might be a mistake, for example, to assume that players are always driven by
the selfish pursuit of money.
III. Cognitive Errors As we saw in the experimental evidence on rollback equilib-
rium in Chapter 3, players do not always fully think through the entire game
before playing, nor do they always expect other players to do so. Behavior in a
game known as the travelers’ dilemma illustrates a similar limitation of Nash
equilibrium in simultaneous-move games. In this game, two travelers purchase
identical souvenirs while on vacation, and the airline loses both of their bags on
the return trip. The airline announces to the two players that it intends to reimburse them for their losses, but it does not know the exact amount to reimburse.
It knows the correct amount is between $80 and $200 per person, so it designs
a game as follows. Each player may submit a claim between $80 and $200. The
airline will reimburse both players at an amount equal to the lower of the two
claims submitted. In addition, if the two claims differ, the airline will pay a reward of $5 to the person making the smaller claim and deduct a penalty of $5
from the reimbursement of the person making the larger claim.
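A minimal sketch of the reimbursement rule (ours, not the book's) makes the undercutting incentive easy to verify:

```python
def reimbursement(my_claim, other_claim, adjustment=5):
    """Airline's payment to the player who claims `my_claim` dollars."""
    low = min(my_claim, other_claim)
    if my_claim < other_claim:
        return low + adjustment  # reward for the smaller claim
    if my_claim > other_claim:
        return low - adjustment  # penalty for the larger claim
    return low                   # equal claims: both paid the common amount

print(reimbursement(200, 200))  # 200
print(reimbursement(199, 200))  # 204: undercutting by $1 beats matching
```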
With these rules, irrespective of the actual value of the lost luggage, each
player has an incentive to undercut the other’s claim. In fact, it turns out that
the only Nash equilibrium, and indeed the only rationalizable outcome, is for
both players to report the minimum number of $80. However, in the laboratory,
players rarely claim $80; instead they claim amounts much closer to $200. (Real
payoff amounts in the laboratory are typically in cents rather than in dollars.) Interestingly, if the penalty/reward parameter is increased by a factor of 10, from $5
to $50, behavior conforms much more closely to the Nash equilibrium, with reported amounts generally near $80. Thus, behavior in this experiment varies tremendously with a parameter that does not affect the Nash equilibrium at all; the
unique equilibrium is $80, regardless of the size of the penalty/reward amount.
To explain these results from their laboratory, Monica Capra and her coauthors employed a theoretical model called quantal-response equilibrium
(QRE), originally proposed by Richard McKelvey and Thomas Palfrey. This
model’s mathematics are beyond the scope of this text, but its main contribution is that it allows for the possibility that players make errors, with the probability of a given error being much smaller for costly mistakes than for mistakes
that reduce one’s payoff by very little. Furthermore, the model incorporates
players who expect each other to make errors in this way. It turns out that
quantal-response analysis can explain the data quite well. Reporting a high
claim is not very costly when the penalty is only $5, so players are more willing to report values near $200—especially knowing that their rivals are likely to
behave similarly, so the payoff to reporting a high number can be quite large.
However, with a penalty/reward of $50 instead of $5, reporting a high claim becomes quite costly, so players are very unlikely to expect each other to make
such a mistake. This expectation pushes behavior toward the Nash equilibrium
claim of $80. Building on this success, quantal-response equilibrium has become a very active area of game-theoretic research.18
IV. Common Knowledge of Rationality We just saw that to better explain experimen-
tal results, QRE allows for the possibility that players may not believe that others are perfectly rational. Another way to explain data from experiments is to
allow for the possibility that different players engage in different levels of reasoning. A strategic guessing game that is often used in classrooms or laboratories asks each participant to choose a number between 0 and 100. Typically, the
players are handed cards on which to write their names and a choice, so this
game is a simultaneous-move game. When the cards are collected, the average
of the numbers is calculated. The person whose choice is closest to a specified
fraction—say two-thirds—of the average is the winner. The rules of the game
(this whole procedure) are announced in advance.
The Nash equilibrium of this game is for everyone to choose 0. In fact, the
game is dominance solvable. Even if everyone chooses 100, two-thirds of the average
can never exceed 67, so for each player, any choice above 67 is dominated by
67.19 But all players should rationally figure this out, so the average can never exceed 67 and two-thirds of it can never exceed 44, and so any choice above 44 is
dominated by 44. The iterated elimination of dominated strategies goes on until
only 0 is left.
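A quick sketch (ours) shows how fast the surviving upper bound shrinks, ignoring the small own-choice correction worked out in footnote 19:

```python
# Upper bound surviving each round of eliminating dominated strategies
# in the two-thirds-of-the-average guessing game.
bound = 100.0
for rnd in range(1, 13):
    bound *= 2.0 / 3.0
    print(f"after round {rnd:2d}: choices above {bound:6.2f} are dominated")
# The bound tends to 0, the unique Nash equilibrium.
```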
However, when a group actually plays this game for the first time, the winner is not a person who plays 0. Typically, the winning number is somewhere
around 15 or 20. The most commonly observed choices are 33 and 22, suggesting that a large number of players perform one or two rounds of iterated dominance without going further. That is, “level-1” players imagine that all other
players will choose randomly, with an average of 50, so they best-respond with
a choice of two-thirds of this amount, or 33. Similarly, “level-2” players imagine
that everyone else will be a “level-1” player, and so they best-respond by playing
two-thirds of 33, or 22. Note that all of these choices are far from the Nash equilibrium of 0. It appears that many players follow a limited number of steps of
18. See Kaushik Basu, "The Traveler's Dilemma," Scientific American, vol. 296, no. 6 (June 2007),
pp. 90–95. The experiments and modeling can be found in C. Monica Capra, Jacob K. Goeree,
Rosario Gomez, and Charles A. Holt, “Anomalous Behavior in a Traveler’s Dilemma?” American Economic Review, vol. 89, no. 3 (June 1999), pp. 678–90. Quantal-response equilibrium (QRE) was first
proposed by Richard D. McKelvey and Thomas R. Palfrey, “Quantal Response Equilibria for Normal
Form Games,” Games and Economic Behavior, vol. 10, no. 1 (July 1995), pp. 6–38.
19. If you factor in your own choice, the calculation is strengthened. Suppose there are N players.
In the "worst-case scenario," where all the other (N − 1) players choose 100 and you choose x, the
average is [x + (N − 1)100]/N. Then your best choice is two-thirds of this, so x = (2/3)[x + (N −
1)100]/N, or x = 100(2N − 2)/(3N − 2). If N = 10, then x = (18/28)100 ≈ 64. So any
choice above 64 is dominated by 64. The same reasoning applies to the successive rounds.
iterated elimination of dominated strategies, in some cases because they expect
others to be limited in their number of rounds of thinking.20
V. Learning and moving toward equilibrium What happens when the strategic guessing
game is repeated with the same group of players? In classroom experiments, we
find that the winning number can easily drop 50% in each subsequent round, as
the students expect all their classmates to play numbers as low as the previous
round’s winning number or lower. By the third round of play, winning numbers
tend to be as low as 5 or less.
How should one interpret this result? Critics would say that, unless the exact
Nash equilibrium is reached, the theory is refuted. Indeed, they would argue, if
you have good reason to believe that other players will not play their Nash equilibrium strategies, then your best choice is not your Nash equilibrium strategy
either. If you can figure out how others will deviate from their Nash equilibrium
strategies, then you should play your best response to what you believe they are
choosing. Others would argue that theories in social science can never hope for
the kind of precise prediction that we expect in sciences such as physics and
chemistry. If the observed outcomes are close to the Nash equilibrium, that is a
vindication of the theory. In this case, the experiment not only produces such a
vindication, but illustrates the process by which people gather experience and
learn to play strategies close to Nash equilibrium. We tend to agree with this latter viewpoint.
Interestingly, we have found that people learn somewhat faster by observing others play a game than by playing it themselves. This may be because, as
observers, they are free to focus on the game as a whole and think about it analytically. Players’ brains are occupied with the task of making their own choices,
and they are less able to take the broader perspective.
We should clarify the concept of gaining experience by playing the game.
The quotation from Davis and Holt at the start of this section spoke of “repetitions with different partners.” In other words, experience should be gained
by playing the game frequently, but with different opponents each time. However, for any learning process to generate outcomes increasingly closer to the
Nash equilibrium, the whole population of learners needs to be stable. If novices keep appearing on the scene and trying new experimental strategies, then
the original group may unlearn what they had learned by playing against one
another.
20. You will analyze similar games in Exercises S12 and U11. For a summary of results from large-scale experiments run in European newspapers with thousands of players, see Rosemarie Nagel,
Antoni Bosch-Domènech, Albert Satorra, and Juan Garcia-Montalvo, “One, Two, (Three), Infinity:
Newspaper and Lab Beauty-Contest Experiments,” American Economic Review, vol. 92, no. 5 (December 2002), pp. 1687–1701.
If a game is played repeatedly between two players or even among the same
small group of known players, then any pair is likely to play each other repeatedly. In such a situation, the whole repeated game becomes a game in its own
right. It can have very different Nash equilibria from those that simply repeat the
Nash equilibrium of a single play. For example, tacit cooperation may emerge
in repeated prisoners’ dilemmas, owing to the expectation that any temporary
gain from cheating will be more than offset by the subsequent loss of trust. If
games are repeated in this way, then learning about them must come from playing whole sets of the repetitions frequently, against different partners each time.
B. Real-World Evidence
While the field does not allow for as much direct observation as the laboratory
does, observations outside the laboratory can also provide valuable evidence
about the relevance of Nash equilibrium. Conversely, Nash equilibrium often provides a valuable starting point for social scientists to make sense of the real world.
I. Applications of Nash Equilibrium One of the earliest applications of the Nash
equilibrium concept to real-world behavior was in the area of international relations. Thomas Schelling pioneered the use of game theory to explain phenomena such as the escalation of arms races, even between countries that have no
intention of attacking each other, and the credibility of deterrent threats. Subsequent applications in this area have included the questions of when and how a
country can credibly signal its resolve in diplomatic negotiations or in the face
of a potential war. Game theory began to be used systematically in economics
and business in the mid-1970s, and such applications continue to proliferate.21
As we saw earlier in this chapter, price competition is one important application of Nash equilibrium. Other strategic choices by firms include product
quality, investment, R&D, and so on. The theory has also helped us to understand when and how the established firms in an industry can make credible
commitments to deter new competition—for example, to wage a destructive
price war against any new entrant. Game-theoretic models, based on the Nash
21. For those who would like to see more applications, here are some suggested sources. Thomas
Schelling's The Strategy of Conflict (Cambridge, Mass.: Harvard University Press, 1960) and Arms
and Influence (New Haven: Yale University Press, 1966) are still required for all students of game
theory. The classic textbook on game-theoretic treatment of industries is Jean Tirole, The Theory of
Industrial Organization (Cambridge, Mass.: MIT Press, 1988). In political science, an early classic is
William H. Riker, Liberalism Against Populism (San Francisco: W. H. Freeman, 1982). For advanced-level surveys of research, see several articles in The Handbook of Game Theory with Economic Applications, ed. Robert J. Aumann and Sergiu Hart (Amsterdam: North-Holland/Elsevier Science
B. V., 1992, 1994, 2002), particularly Barry O'Neill, "Game Theory Models of Peace and War," in volume 2, and Kyle Bagwell and Asher Wolinsky, "Game Theory and Industrial Organization," and Jeffrey Banks, "Strategic Aspects of Political Systems," both of which are in volume 3.
equilibrium concept and its dynamic generalizations, fit the data for many
major industries, such as automobile manufacturers, reasonably well. They
also give us a better understanding of the determinants of competition than the
older models, which assumed perfect competition and estimated supply and
demand curves.22
Pankaj Ghemawat, a professor at the IESE Business School in Barcelona, has
developed a number of case studies of individual firms or industries, supported
by statistical analysis of the data. His game-theoretic models are remarkably
successful in improving our understanding of several initially puzzling observed
business decisions on pricing, capacity, innovation, and so on. For example,
DuPont constructed an enormous amount of manufacturing capacity for titanium dioxide in the 1970s. It added capacity in excess of the projected growth in
worldwide demand over the next decade. At first glance, this choice looked like
a terrible strategy because the excess capacity could lead to lower market prices
for this commodity. However, DuPont successfully foresaw that, by having excess capacity in reserve, it could punish competitors who cut prices by increasing its production and driving prices even lower in the future. This ability made
it a price leader in the industry, and it enjoyed high profit margins. The strategy
worked quite well, with DuPont continuing to be a worldwide leader in titanium
dioxide 40 years later.23
More recently, game theory has become the tool of choice for the study of
political systems and institutions. As we shall see in Chapter 15, game theory
has shown how voting and agenda setting in committees and elections can be
strategically manipulated in pursuit of one’s ultimate objectives. Part Four of
this book will develop other applications of Nash equilibrium in auctions, voting, and bargaining. We also develop our own case study of the Cuban missile
crisis in Chapter 14.
Some critics remain unpersuaded of the value of Nash equilibrium, claiming that the same understanding of these phenomena can be obtained using
previously known general principles of economics, political science, and so on.
In one sense they are right. A few of these analyses existed before Nash equilibrium came along. For example, the equilibrium of the interaction between two
price-setting firms, which we developed in Section 1 of this chapter, was known
in economics for more than 100 years. One can think of Nash equilibrium as just
a general formulation of that equilibrium concept for all games. Some theories
22. For simultaneous-move models of price competition, see Timothy F. Bresnahan, "Empirical Studies of Industries with Market Power," in Handbook of Industrial Organization, vol. 2, ed.
Richard L. Schmalensee and Robert D. Willig (Amsterdam: North-Holland/Elsevier, 1989),
pp. 1011–57. For models of entry, see Steven Berry and Peter Reiss, “Empirical Models of Entry and
Market Structure,” in Handbook of Industrial Organization, vol. 3, ed. Mark Armstrong and Robert
Porter (Amsterdam: North-Holland/Elsevier, 2007), pp. 1845–86.
23. Pankaj Ghemawat, "Capacity Expansion in the Titanium Dioxide Industry," Journal of Industrial
Economics, vol. 33, no. 2 (December 1984), pp. 145–63. For more examples, see Pankaj Ghemawat,
Games Businesses Play: Cases and Models (Cambridge, Mass.: MIT Press, 1997).
of strategic voting date to the eighteenth century, and some notions of credibility can be found in history as far back as Thucydides’ Peloponnesian War. However, what Nash equilibrium does is to unify all these applications and thereby
facilitate the development of new ones.
Furthermore, the development of game theory has also led directly to a
wealth of new ideas and applications that did not exist before—for example,
how the existence of a second-strike capability reduces the fear of surprise attack, how different auction rules affect bidding behavior and seller revenues,
how governments can successfully manipulate fiscal and monetary policies to
achieve reelection even when voters are sophisticated and aware of such attempts, and so on. If these examples had all been amenable to previously known
approaches, they would have been discovered long ago.
II. Real-world examples of learning We conclude by offering an interesting illustra-
tion of equilibrium and the learning process in the real-world game of major-league baseball. In this game, the stakes are high and players play more than
100 games per year, creating strong motivation and good opportunities to learn.
Stephen Jay Gould discovered this beautiful example.24 The best batting averages recorded in a baseball season declined over most of the twentieth century.
In particular, instances of a player averaging .400 or better used
to be much more frequent than they are now. Devotees of baseball history often
explain this decline by invoking nostalgia: “There were giants in those days.” A
moment’s thought should make one wonder why there were no corresponding
pitching giants who would have kept batting averages low. But Gould demolishes such arguments in a more systematic way. He points out that we should
look at all batting averages, not just the top ones. The worst batting averages are
not as bad as they used to be; there are also many fewer .150 hitters in the major
leagues than there used to be. He argues that this overall decrease in variation is
a standardization or stabilization effect:
When baseball was very young, styles of play had not become sufficiently
regular to foil the antics of the very best. Wee Willie Keeler could “hit ’em
where they ain’t” (and compile an average of .432 in 1897) because fielders
didn’t yet know where they should be. Slowly, players moved toward optimal methods of positioning, fielding, pitching, and batting—and variation
inevitably declined. The best [players] now met an opposition too finely
honed to its own perfection to permit the extremes of achievement that
characterized a more casual age. [emphasis added]
In other words, through a succession of adjustments of strategies to counter one
another, the system settled down into its (Nash) equilibrium.
24. Stephen Jay Gould, "Losing the Edge," in The Flamingo's Smile: Reflections in Natural History
(New York: W. W. Norton & Company, 1985), pp. 215–29.
Gould marshals decades of hitting statistics to demonstrate that such a decrease in variation did indeed occur, except for occasional “blips.” And indeed
the blips confirm his thesis, because they occur soon after an equilibrium is disturbed by an externally imposed change. Whenever the rules of the game are altered (the strike zone is enlarged or reduced, the pitching mound is lowered, or
new teams and many new players enter when an expansion takes place) or the
technology changes (a livelier ball is used or perhaps, in the future, aluminum
bats are allowed), the preceding system of mutual best responses is thrown out
of equilibrium. Variation increases for a while as players experiment, and some
succeed while others fail. Finally, a new equilibrium is attained, and variation
goes down again. That is exactly what we should expect in the framework of
learning and adjustment to a Nash equilibrium.
Michael Lewis’s 2003 book Moneyball (later made into a movie starring Brad
Pitt) describes a related example of movement toward equilibrium in baseball. Instead of focusing on the strategies of individual players, it focuses on
the teams’ back-office strategies of which players to hire. The book documents
Oakland A’s general manager Billy Beane’s decision to use “sabermetrics” in hiring decisions—that is, paying close attention to baseball statistics based on the
theory of maximizing runs scored and minimizing runs given up to opponents.
These decisions involved paying more attention to attributes undervalued
by the market, such as a player’s documented ability to earn walks. Such decisions arguably led to the A’s becoming a very strong team, going to the playoffs
in five out of seven seasons, despite having less than half the payroll of larger-market teams such as the New York Yankees. Beane's innovative payroll strategies have subsequently been adopted by other teams, such as the Boston Red
Sox, who, under general manager Theo Epstein, managed to break the “curse
of the Bambino” in 2004 and win their first World Series in 86 years. Over the
course of a decade, nearly a dozen teams decided to hire full-time sabermetricians, with Beane noting in September 2011 that he was once again “fighting
uphill” against larger teams that had learned to best-respond to his strategies.
Real-world games often involve innovation followed by gradual convergence to
equilibrium; the two examples from baseball both give evidence of moving toward equilibrium, although full convergence may sometimes take years or even
decades to complete.25
We take up additional evidence about other game-theoretic predictions at
appropriate points in later chapters. For now, the experimental and empirical
evidence that we have presented should make you cautiously optimistic about
using Nash equilibrium, especially as a first approach. On the whole, we believe you should have considerable confidence in using the Nash equilibrium
25. Susan Slusser, "Michael Lewis on A's 'Moneyball' Legacy," San Francisco Chronicle, September 18,
2011, p. B-1. The original book is Michael Lewis, Moneyball: The Art of Winning an Unfair Game (New
York: W. W. Norton & Company, 2003).
concept when the game in question is played frequently by players from a reasonably stable population and under relatively unchanging rules and conditions.
When the game is new or is played just once and the players are inexperienced,
you should use the equilibrium concept more cautiously and should not be surprised if the outcome that you observe is not the equilibrium that you calculate. But even then, your first step in the analysis should be to look for a Nash
equilibrium; then you can judge whether it seems a plausible outcome and, if
not, proceed to the further step of asking why not.26 Often the reason will be
your misunderstanding of the players' objectives, not the players' failure to play
the game correctly given their true objectives.
SUMMARY
When players in a simultaneous-move game have a continuous range of actions
to choose, best-response analysis yields mathematical best-response rules that
can be solved simultaneously to obtain Nash equilibrium strategy choices. The
best-response rules can be shown on a diagram in which the intersection of the
two curves represents the Nash equilibrium. Firms choosing price or quantity
from a large range of possible values and political parties choosing campaign
advertising expenditure levels are examples of games with continuous strategies.
Theoretical criticisms of the Nash equilibrium concept have argued that
the concept does not adequately account for risk, that it is of limited use because
many games have multiple equilibria, and that it cannot be justified on the
basis of rationality alone. In many cases, a better description of the game and
its payoff structure or a refinement of the Nash equilibrium concept can lead to
better predictions or fewer potential equilibria. The concept of rationalizability
relies on the elimination of strategies that are never a best response to obtain a
set of rationalizable outcomes. When a game has a Nash equilibrium, that outcome will be rationalizable, but rationalizability also allows one to predict equilibrium outcomes in games that have no Nash equilibria.
The results of laboratory tests of the Nash equilibrium concept show that
a common cultural background is essential for coordinating in games with
multiple equilibria. Repeated play of some games shows that players can learn
from experience and begin to choose strategies that approach Nash equilibrium
choices. Further, predicted equilibria are accurate only when the experimenters’ assumptions match the true preferences of players. Real-world applications
26 In an article probing the weaknesses of Nash equilibrium in experimental data and proposing QRE-style alternative models for dealing with them, two prominent researchers write, "we will be the first to admit that we begin the analysis of a new strategic problem by considering the equilibria derived from standard game theory before considering" other possibilities. Jacob K. Goeree and Charles A. Holt, "Ten Little Treasures of Game Theory and Ten Intuitive Contradictions," American Economic Review, vol. 91, no. 5 (December 2001), pp. 1402–22.
of game theory have helped economists and political scientists, in particular,
to understand important consumer, firm, voter, legislative, and government
behaviors.
KEY TERMS
best-response curve (137)
best-response rule (134)
continuous strategy (133)
never a best response (150)
quantal-response equilibrium (QRE) (158)
rationalizability (150)
rationalizable (150)
refinement (148)
SOLVED EXERCISES
S1. In the political campaign advertising game in Section 1.B, party L chooses an advertising budget, x (millions of dollars), and party R similarly chooses an advertising budget, y (millions of dollars). We showed there that the best-response rules in that game are y = 10√x − x for party R and x = 10√y − y for party L.
(a) What is party R’s best response if party L spends $16 million?
(b) Use the specified best-response rules to verify that the Nash equilibrium advertising budgets are x = y = 25, or $25 million.
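Part (b) can also be checked numerically. The sketch below (our own, not required by the exercise) iterates the two best-response rules from an arbitrary positive starting budget; the iteration converges to the equilibrium spending levels.

```python
import math

def best_response(rival_budget):
    # Best-response rule from the exercise: spend 10*sqrt(z) - z (in
    # millions of dollars) against a rival who spends z.
    return 10 * math.sqrt(rival_budget) - rival_budget

x, y = 16.0, 16.0  # arbitrary positive starting guesses
for _ in range(50):
    x, y = best_response(y), best_response(x)

print(round(x, 4), round(y, 4))  # both converge to 25.0
```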
S2. The restaurant pricing game illustrated in Figure 5.1 defines customer demand functions for meals at Xavier's (Qx) and Yvonne's (Qy) as Qx = 44 − 2Px + Py, and Qy = 44 − 2Py + Px. Profits for each firm depend in addition on their costs of serving each customer. Suppose that Yvonne's is able to reduce its costs to a mere $2 per customer by completely eliminating the wait staff (customers pick up their orders at the counter, and a few remaining employees bus the tables). Xavier's continues to incur a cost of $8 per customer.
(a) Recalculate the best-response rules and the Nash equilibrium prices
for the two firms, given the change in the cost conditions.
(b) Graph the two best-response curves and describe the differences
between your graph and Figure 5.1. In particular, which curve has
moved and by how much? Explain why these changes occurred in
the diagram.
S3. Yuppietown has two food stores, La Boulangerie, which sells bread, and La Fromagerie, which sells cheese. It costs $1 to make a loaf of bread and
$2 to make a pound of cheese. If La Boulangerie’s price is P1 dollars per
loaf of bread and La Fromagerie’s price is P2 dollars per pound of cheese,
their respective weekly sales, Q1 thousand loaves of bread and Q2 thousand pounds of cheese, are given by the following equations:
Q1 = 14 − P1 − 0.5P2,
Q2 = 19 − 0.5P1 − P2.
(a) For each store, write its profit as a function of P1 and P2 (in the exercises that follow, we will call this “the profit function” for brevity). Then find their respective best-response rules. Graph the
best-response curves, and find the Nash equilibrium prices in this
game.
(b) Suppose that the two stores collude and set prices jointly to maximize the sum of their profits. Find the joint profit-maximizing
prices for the stores.
(c) Provide a short intuitive explanation for the differences between the
Nash equilibrium prices and those that maximize joint profit. Why
is joint profit maximization not a Nash equilibrium?
(d) In this problem, bread and cheese are mutual complements. They
are often consumed together; that is why a drop in the price of one increases the sales of the other. The products in our bistro example in
Section 1.A are substitutes for each other. How does this difference
explain the differences among your findings for the best-response
rules, the Nash equilibrium prices, and the joint profit-maximizing
prices in this question, and the corresponding entities in the bistro
example in the text?
S4. The game illustrated in Figure 5.3 has a unique Nash equilibrium in pure
strategies. However, all nine outcomes in that game are rationalizable.
Confirm this assertion, explaining your reasoning for each outcome.
S5. For the game presented in Exercise S5 in Chapter 4, what are the rationalizable strategies for each player? Explain your reasoning.
S6. Section 3.B of this chapter describes a fishing game played in a small
coastal town. When the response rules for the two boats have been derived, rationalizability can be used to justify the Nash equilibrium in the
game. In the description in the text, we take the process of narrowing
down strategies that can never be best responses through three rounds.
By the third round, we know that R (the number of barrels of fish brought
home by boat 1) must be at least 9, and that S (the number of barrels of
fish brought home by boat 2) must be at least 4.5. The narrowing process
in that round restricted R to the range between 9 and 12.75 while restricting S to the range between 4.5 and 7.5. Take this process of narrowing
through one additional (fourth) round and show the reduced ranges of R
and S that are obtained at the end of the round.
S7. Two carts selling coconut milk (from the coconut) are located at 0 and
1, 1 mile apart on the beach in Rio de Janeiro. (They are the only two
coconut-milk carts on the beach.) The carts—Cart 0 and Cart 1—charge
prices p0 and p1, respectively, for each coconut. One thousand beachgoers buy coconut milk, and these customers are uniformly distributed
along the beach between carts 0 and 1. Each beachgoer will purchase
one coconut milk in the course of her day at the beach, and in addition
to the price, each will incur a transport cost of 0.5 × d², where d is the
distance (in miles) from her beach blanket to the coconut cart. In this
system, Cart 0 sells to all of the beachgoers located between 0 and x, and
Cart 1 sells to all of the beachgoers located between x and 1, where x is
the location of the beachgoer who pays the same total price if she goes to
0 or 1. Location x is then defined by the expression:
p0 + 0.5x² = p1 + 0.5(1 − x)².
The two carts will set their prices to maximize their bottom-line profit
figures, B; profits are determined by revenue (the cart’s price times its
number of customers) and cost (each cart incurs a cost of $0.25 per coconut times the number of coconuts sold).
(a) For each cart, determine the expression for the number of customers served as a function of p0 and p1. (Recall that Cart 0 gets the customers between 0 and x, or just x, while Cart 1 gets the customers between x and 1, or 1 − x. That is, Cart 0 sells to x customers, where x is measured in thousands, and Cart 1 sells to (1 − x) thousand.)
(b) Write the profit functions for the two carts. Find the two best-response
rules for each cart as a function of their rival’s price.
(c) Graph the best-response rules, and then calculate (and show on
your graph) the Nash equilibrium price level for coconut milk on
the beach.
S8. Crude oil is transported across the globe in enormous tanker ships called
Very Large Crude Carriers (VLCCs). By 2001, more than 92% of all new
VLCCs were built in South Korea and Japan. Assume that the price of new
VLCCs (in millions of dollars) is determined by the function P = 180 − Q, where Q = qKorea + qJapan. (That is, assume that only Japan and Korea produce VLCCs, so they are a duopoly.) Assume that the cost of building each ship is $30 million in both Korea and Japan. That is, cKorea = cJapan = 30, where the per-ship cost is measured in millions of dollars.
(a) Write the profit functions for each country in terms of qKorea and qJapan
and either cKorea or cJapan. Find each country’s best-response function.
(b) Using the best-response functions found in part (a), solve for the
Nash equilibrium quantity of VLCCs produced by each country per
year. What is the price of a VLCC? How much profit is made in each
country?
(c) Labor costs in Korean shipyards are actually much lower than in
their Japanese counterparts. Assume now that the cost per ship in
Japan is $40 million and that in Korea it is only $20 million. Given cKorea = 20 and cJapan = 40, what is the market share of each country (that is, the percentage of ships that each country sells relative
to the total number of ships sold)? What are the profits for each
country?
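If you want to verify your algebra for quantity-setting exercises like this one, the following Python sketch (ours, not part of the exercise) solves the two best-response rules simultaneously. It assumes the rule implied by each country's first-order condition, q = (180 − c − qrival)/2.

```python
# Cournot-style duopoly with price P = 180 - Q and constant per-ship costs.
# Each country's first-order condition gives q_i = (180 - c_i - q_j) / 2;
# solving the two rules together yields q_i = (180 - 2*c_i + c_j) / 3.
def solve_duopoly(c1, c2):
    q1 = (180 - 2 * c1 + c2) / 3
    q2 = (180 - 2 * c2 + c1) / 3
    price = 180 - q1 - q2
    return q1, q2, price, (price - c1) * q1, (price - c2) * q2

print(solve_duopoly(30, 30))  # symmetric costs, parts (a) and (b)
print(solve_duopoly(20, 40))  # asymmetric costs, part (c)
```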
S9. Extending the previous problem, suppose China decides to enter the
VLCC construction market. The duopoly now becomes a triopoly, so that
although price is still P = 180 − Q, quantity is now given by Q = qKorea + qJapan + qChina. Assume that all three countries have a per-ship cost of $30 million: cKorea = cJapan = cChina = 30.
(a) Write the profit functions for each of the three countries in terms of
qKorea, qJapan, and qChina, and cKorea, cJapan, or cChina. Find each country’s
best-response rule.
(b) Using your answer to part (a), find the quantity produced, the market share captured [see Exercise S8, part (c)], and the profits earned
by each country. This will require the solution of three equations in
three unknowns.
(c) What happens to the price of a VLCC in the new triopoly relative to
the duopoly situation in Exercise S8, part (b)? Why?
S10. Monica and Nancy have formed a business partnership to provide consulting services in the golf industry. They each have to decide how much
effort to put into the business. Let m be the amount of effort put into the
business by Monica, and n be the amount of effort put in by Nancy.
The joint profits of the partnership are given by 4m + 4n + mn, in
tens of thousands of dollars, and the two partners split these profits
equally. However, they must each separately incur the costs of their own
effort; the cost to Monica of her effort is m², while the cost to Nancy of her effort is n² (both measured in tens of thousands of dollars). Each
partner must make her effort decision without knowing what effort decision the other player has made.
(a) If Monica and Nancy each put in effort of m = n = 1, then what are
their payoffs?
(b) If Monica puts in effort of m = 1, then what is Nancy's best
response?
(c) What is the Nash equilibrium to this game?
S11. Nash equilibrium through rationalizability can be achieved in games
with upward‑sloping best-response curves if the rounds of eliminating
never‑best‑response strategies begin with the smallest possible values.
Consider the pricing game between Xavier’s Tapas Bar and Yvonne’s Bistro that is illustrated in Figure 5.1. Use Figure 5.1 and the best-response
rules from which it is derived to begin rationalizing the Nash equilibrium in that game. Start with the lowest possible prices for the two firms
and describe (at least) two rounds of narrowing the set of rationalizable
prices toward the Nash equilibrium.
S12. A professor presents the following game to Elsa and her 49 classmates.
Each of them simultaneously and privately writes down a number between 0 and 100 on a piece of paper, and they all hand in their numbers.
The professor then computes the mean of these numbers and defines X
to be the mean of the students’ numbers. The student who submits the
number closest to one-half of X wins $50. If multiple students tie, they
split the prize equally.
(a) Show that choosing the number 80 is a dominated strategy.
(b) What would the set of best responses be for Elsa if she knew that all
of her classmates would submit the number 40? That is, what is the
range of numbers for which each number in the range is closer to
the winning number than 40?
(c) What would the set of best responses be for Elsa if she knew that all
of her classmates would submit the number 10?
(d) Find a symmetric Nash equilibrium to this game. That is, what
number is a best response to everyone else submitting that same
number?
(e) Which strategies are rationalizable in this game?
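To build intuition for this guessing game, it can help to trace the iterated reasoning on a computer. The sketch below is our own illustration (not part of the exercise); it assumes every player best-responds to a common expectation of the class average, so each round of reasoning halves the guesses.

```python
# Iterated reasoning in the "half the average" game of Exercise S12.
# If every player expects the class average to be X, the winning target
# is X/2, so a best responder guesses about X/2; iterate this reasoning.
X = 100.0  # start from the largest possible average
for round_number in range(1, 11):
    X /= 2
    print(f"after round {round_number}, guesses near {X:.3f}")
# The guesses shrink toward 0, suggesting the symmetric equilibrium.
```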
UNSOLVED EXERCISES
U1. Diamond Trading Company (DTC), a subsidiary of De Beers, is the dominant supplier of high-quality diamonds for the wholesale market. For simplicity, assume that DTC has a monopoly on wholesale diamonds. The
quantity that DTC chooses to sell thus has a direct impact on the wholesale price of diamonds. Let the wholesale price of diamonds (in hundreds of dollars) be given by the following inverse demand function: P = 120 − QDTC. Assume that DTC has a cost of 12 (hundred dollars) per high-quality diamond.
(a) Write DTC’s profit function in terms of QDTC, and solve for DTC’s
profit‑maximizing quantity. What will be the wholesale price of diamonds at that quantity? What will DTC’s profit be?
Frustrated with DTC’s monopoly, several diamond mining interests
and large retailers collectively set up a joint venture called Adamantia to
act as a competitor to DTC in the wholesale market for diamonds. The wholesale price is now given by P = 120 − QDTC − QADA. Assume that Adamantia has a cost of 12 (hundred dollars) per high-quality diamond.
(b) Write the best-response functions for both DTC and Adamantia.
What quantity does each wholesaler supply to the market in equilibrium? What wholesale price do these quantities imply? What will
the profit of each supplier be in this duopoly situation?
(c) Describe the differences in the market for wholesale diamonds
under the duopoly of DTC and Adamantia relative to the monopoly
of DTC. What happens to the quantity supplied in the market and
the market price when Adamantia enters? What happens to the collective profit of DTC and Adamantia?
U2. There are two movie theaters in the town of Harkinsville: Modern Multiplex, which shows first-run movies, and Sticky Shoe, which shows movies
that have been out for a while at a cheaper price. The demand for movies
at Modern Multiplex is given by QMM = 14 − PMM + PSS, while the demand for movies at Sticky Shoe is QSS = 8 − 2PSS + PMM, where prices are in dollars and quantities are measured in hundreds of moviegoers. Modern Multiplex has a per-customer cost of $4, while Sticky Shoe has a per-customer cost of only $2.
(a) From the demand equations alone, what indicates whether Modern Multiplex and Sticky Shoe offer services that are substitutes or
complements?
(b) Write the profit function for each theater in terms of PSS and PMM.
Find each theater’s best-response rule.
(c) Find the Nash equilibrium price, quantity, and profit for each
theater.
(d) What would each theater’s price, quantity, and profit be if the two
decided to collude to maximize joint profits in this market? Why
isn’t the collusive outcome a Nash equilibrium?
U3. Fast forward a decade beyond the situation in Exercise S3. Yuppietown’s
demand for bread and cheese has decreased, and the town’s two food
stores, La Boulangerie and La Fromagerie, have been bought out by a
third company: L’Épicerie. It still costs $1 to make a loaf of bread and $2
to make a pound of cheese, but the quantities of bread and cheese sold
(Q1 and Q2 respectively, measured in thousands) are now given by the equations:

Q1 = 8 − P1 − 0.5P2,
Q2 = 16 − 0.5P1 − P2.
Again, P1 is the price in dollars of a loaf of bread, and P2 is the price in
dollars of a pound of cheese.
(a) Initially, L’Épicerie runs La Boulangerie and La Fromagerie as if they
were separate firms, with independent managers who each try to
maximize their own profit. What are the Nash equilibrium quantities, prices, and profits for the two divisions of L’Épicerie, given the
new quantity equations?
(b) The owners of L’Épicerie think that they can make more total profit
by coordinating the pricing strategies of the two Yuppietown divisions of their company. What are the joint-profit-maximizing prices
for bread and cheese under collusion? What quantities do La Boulangerie and La Fromagerie sell of each good, and what is the profit
that each division earns separately?
(c) In general, why might companies sell some of their goods at prices
below cost? That is, explain a rationale of loss leaders, using your
answer from part (b) as an illustration.
U4. The coconut-milk carts from Exercise S7 set up again the next day. Nearly
everything is exactly the same as in Exercise S7: the carts are in the same
locations, the number and distribution of beachgoers is identical, and
the demand of the beachgoers for exactly one coconut milk each is unchanged. The only difference is that it is a particularly hot day, so that
now each beachgoer incurs a higher transport cost of 0.6d². Again, Cart 0
sells to all of the beachgoers located between 0 and x, and Cart 1 sells to
all of the beachgoers located between x and 1, where x is the location of
the beachgoer who pays the same total price if she goes to 0 or 1. However, now location x is defined by the expression:
p0 + 0.6x² = p1 + 0.6(1 − x)².
Again, each cart has a cost of $0.25 per coconut sold.
(a) For each cart, determine the expression for the number of customers served as a function of p0 and p1. (Recall that Cart 0 gets the customers between 0 and x, or just x, while Cart 1 gets the customers between x and 1, or 1 − x. That is, Cart 0 sells to x customers, where x is measured in thousands, and Cart 1 sells to (1 − x) thousand.)
(b) Write out profit functions for the two carts and find the two
best-response rules.
(c) Calculate the Nash equilibrium price level for coconuts on the
beach. How does this price compare with the price found in Exercise
S7? Why?
U5. The game illustrated in Figure 5.4 has a unique Nash equilibrium in pure
strategies. Find that Nash equilibrium, and then show that it is also the
unique rationalizable outcome in that game.
U6. What are the rationalizable strategies of the game “Evens or Odds” from
Exercise S12 in Chapter 4?
U7. In the fishing-boat game of Section 3.B, we showed how it is possible
for there to be a uniquely rationalizable outcome in continuous strategies that is also a Nash equilibrium. However, this is not always the case;
there may be many rationalizable strategies, and not all of them will necessarily be part of a Nash equilibrium.
Returning to the political advertising game of Exercise S1, find the set
of rationalizable strategies for party L. (Due to their symmetric payoffs,
the set of rationalizable strategies will be the same for party R.) Explain
your reasoning.
U8. Intel and AMD, the primary producers of computer central processing
units (CPUs), compete with one another in the mid-range chip category
(among other categories). Assume that global demand for mid‑range
chips depends on the quantity that the two firms make, so that the price (in dollars) for mid-range chips is given by P = 210 − Q, where Q = qIntel + qAMD and where the quantities are measured in millions. Each mid-range chip costs Intel $60 to produce. AMD’s production process is more
streamlined; each chip costs them only $48 to produce.
(a) Write the profit function for each firm in terms of qIntel and qAMD.
Find each firm’s best-response rule.
(b) Find the Nash equilibrium price, quantity, and profit for each firm.
(c) (Optional) Suppose Intel acquires AMD, so that it now has two separate divisions with two different production costs. The merged firm
wishes to maximize total profits from the two divisions. How many
chips should each division produce? (Hint: You may need to think
carefully about this problem, rather than blindly applying mathematical techniques.) What is the market price and the total profit to
the firm?
U9. Return to the VLCC triopoly game of Exercise S9. In reality, the three countries do not have identical production costs. China has been gradually
entering the VLCC construction market for several years, and its production costs started out rather high due to lack of experience.
(a) Solve for the triopoly quantities, market shares, price, and profits for
the case where the per-ship costs are $20 million for Korea, $40 million for Japan, and $60 million for China (cKorea = 20, cJapan = 40, and cChina = 60).
After it gains experience and adds production capacity, China’s per-ship
cost will decrease dramatically. Because labor is even cheaper in China
than in Korea, eventually the per-ship cost will be even lower in China
than it is in Korea.
(b) Repeat part (a) with the adjustment that China’s per-ship cost is $16
million (cKorea = 20, cJapan = 40, and cChina = 16).
U10. Return to the story of Monica and Nancy from Exercise S10. After some
additional professional training, Monica is more productive on the job,
so that the joint profits of their company are now given by 5m + 4n + mn,
in tens of thousands of dollars. Again, m is the amount of effort put into
the business by Monica, n is the amount of effort put in by Nancy, and
the costs are m² and n² to Monica and Nancy respectively (in tens of
thousands of dollars).
The terms of their partnership still require that the joint profits be
split equally, despite the fact that Monica is more productive. Assume
that their effort decisions are made simultaneously.
(a) What is Monica’s best response if she expects Nancy to put in an effort of n = 4/3?
(b) What is the Nash equilibrium to this game?
(c) Compared to the old Nash equilibrium found in Exercise S10, part
(c), does Monica now put in more, less, or the same amount of effort? What about Nancy?
(d) What are the final payoffs to Monica and Nancy in the new Nash
equilibrium (after splitting the joint profits and accounting for their
costs of effort)? How do they compare to the payoffs to each of them
under the old Nash equilibrium? In the end, who receives more benefit from Monica’s additional training?
U11. A professor presents a new game to Elsa and her 49 classmates (similar
to the situation in Exercise S12). As before, each of the students simultaneously and privately writes down a number between 0 and 100 on a
piece of paper, and the professor computes the mean of these numbers
and calls it X. This time the student who submits the number closest to
(2/3) × (X + 9) wins $50. Again, if multiple students tie, they split the prize
equally.
(a) Find a symmetric Nash equilibrium to this game. That is, what number is a best response to everyone else submitting the same number?
(b) Show that choosing the number 5 is a dominated strategy. (Hint:
What would class average X have to be for the target number to be 5?)
(c) Show that choosing the number 90 is a dominated strategy.
(d) What are all of the dominated strategies?
(e) Suppose Elsa believes that none of her classmates will play the
dominated strategies found in part (d). Given these beliefs, what
strategies are never a best response for Elsa?
(f) Which strategies do you think are rationalizable in this game? Explain your reasoning.

U12. (Optional—requires calculus) Recall the political campaign advertising
example from Section 1.C concerning parties L and R. In that example,
when L spends $x million on advertising and R spends $y million, L gets a share x/(x + y) of the votes and R gets a share y/(x + y). We also mentioned that two types of asymmetries can arise between the parties in
that model. One party—say, R—may be able to advertise at a lower cost
or R’s advertising dollars may be more effective in generating votes than
L’s. To allow for both possibilities, we can write the payoff functions of
the two parties as

VL = x/(x + ky) − x  and  VR = ky/(x + ky) − cy,

where k > 0 and c > 0.
These payoff functions show that R has an advantage in the relative effectiveness of its ads when k is high and that R has an advantage in the
cost of its ads when c is low.
(a) Use the payoff functions to derive the best-response functions for R
(which chooses y) and L (which chooses x).
(b) Use your calculator or your computer to graph these best-response functions when k = 1 and c = 1. Compare the graph with the one for the case in which k = 1 and c = 0.8. What is the effect of having an advantage in the cost of advertising?
(c) Compare the graph from part (b), when k = 1 and c = 1, with the one for the case in which k = 2 and c = 1. What is the effect of having an advantage in the effectiveness of advertising dollars?
(d) Solve the best-response functions that you found in part (a), jointly for x and y, to show that the campaign advertising expenditures in Nash equilibrium are

x = ck/(c + k)²  and  y = k/(c + k)².
(e) Let k = 1 in the equilibrium spending-level equations and show how the two equilibrium spending levels vary with changes in c (that is, interpret the signs of dx/dc and dy/dc). Then let c = 1 and show how the two equilibrium spending levels vary with changes in k (that is, interpret the signs of dx/dk and dy/dk). Do your answers support the effects that you observed in parts (b) and (c) of this exercise?
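For parts (b) and (c), a computer sketch such as the following (ours, in Python) will draw the requested graphs. It presumes you have already derived the best-response rules in part (a); written out, they take the form x = √(ky) − ky for L and y = [√(kx/c) − x]/k for R.

```python
import numpy as np
import matplotlib.pyplot as plt

def l_best_response(y, k):
    # Party L's rule from its first-order condition: x = sqrt(k*y) - k*y.
    return np.sqrt(k * y) - k * y

def r_best_response(x, k, c):
    # Party R's rule from its first-order condition: y = (sqrt(k*x/c) - x)/k.
    return (np.sqrt(k * x / c) - x) / k

grid = np.linspace(0.0, 0.5, 200)
for k, c in [(1, 1), (1, 0.8), (2, 1)]:
    plt.plot(l_best_response(grid, k), grid, label=f"L's rule, k={k}, c={c}")
    plt.plot(grid, r_best_response(grid, k, c), "--", label=f"R's rule, k={k}, c={c}")
plt.xlabel("x (party L's spending)")
plt.ylabel("y (party R's spending)")
plt.legend()
plt.show()
```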
Appendix: Finding a Value to Maximize a Function
Here we develop in a simple way the method for choosing a variable X to obtain the maximum value of a variable that is a function of it, say Y = F(X). Our applications will mostly be to cases where the function is quadratic, such as Y = A + BX − CX². For such functions we derive the formula X = B/(2C) that was stated and used in the chapter. We develop the general idea using calculus, and then offer an alternative approach that does not use calculus but applies only to the quadratic function.27
The calculus method tests a value of X for optimality by seeing what happens to the value of the function for other values on either side of X. If X does
indeed maximize Y = F(X), then the effect of increasing or decreasing X should
be a drop in the value of Y. Calculus gives us a quick way to perform such a test.
Figure 5A.1 illustrates the basic idea. It shows the graph of a function Y =
F(X), where we have used a function of the type that fits our application, even
though the idea is perfectly general. Start at any point P with coordinates (X, Y)
on the graph. Consider a slightly different value of X, say (X + h). Let k be the resulting change in Y = F(X), so the point Q with coordinates (X + h, Y + k) is also on the graph. The slope of the chord joining P to Q is the ratio k/h. If this ratio is positive, then h and k have the same sign; as X increases, so does Y. If the ratio is
negative, then h and k have opposite signs; as X increases, Y decreases.
If we now consider smaller and smaller changes h in X, and the corresponding smaller and smaller changes k in Y, the chord PQ will approach the tangent
to the graph at P. The slope of this tangent is the limiting value of the ratio k/h. It is called the derivative of the function Y = F(X) at the point X. Symbolically, it is written as F′(X) or dY/dX. Its sign tells us whether the function is increasing or
decreasing at precisely the point X.
For the quadratic function in our application, Y = A + BX − CX² and

Y + k = A + B(X + h) − C(X + h)².

Therefore, we can find an expression for k as follows:

k = [A + B(X + h) − C(X + h)²] − (A + BX − CX²)
  = Bh − C[(X + h)² − X²]
  = Bh − C(X² + 2Xh + h² − X²)
  = (B − 2CX)h − Ch².
27 Needless to say, we give only the briefest, quickest treatment, leaving out all issues of functions that don’t have derivatives, functions that are maximized at an extreme point of the interval over which they are defined, and so on. Some readers will know all we say here; some will know much more. Others who want to find out more should refer to any introductory calculus textbook.
[Figure: the graph of Y = F(X), showing a point P = (X, Y), a nearby point Q = (X + h, Y + k), the chord joining P and Q, and the tangent to the graph at P.]
FIGURE 5A.1 Derivative of a Function Illustrated
Then k h 5 (B 2 2CX ) 2 Ch. In the limit as h goes to zero, k h 5 (B 2 2CX ).
This last expression is then the derivative of our function.
Now we use the derivative to find a test for optimality. Figure 5A.2 illustrates the idea. The point M yields the highest value of Y = F(X). The function
increases as we approach the point M from the left and decreases after we have
passed to the right of M. Therefore the derivative F′(X) should be positive for
values of X smaller than M and negative for values of X larger than M. By continuity, the derivative precisely at M should be 0. In ordinary language, the graph
of the function should be flat where it peaks.
In our quadratic example, the derivative is F′(X) = B − 2CX. Our optimality test implies that the function is optimized when this is 0, or at X = B/(2C). This is exactly the formula given in the chapter.
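A quick numerical check of this formula (our own sketch, not in the original text) is to evaluate the quadratic on a fine grid of X values and confirm that the maximizing grid point sits at B/(2C).

```python
import numpy as np

# Numerically maximize Y = A + B*X - C*X**2 and compare with X = B/(2C).
A, B, C = 5.0, 12.0, 2.0  # arbitrary illustrative values with C > 0
X = np.linspace(0.0, 10.0, 100_001)
Y = A + B * X - C * X**2

print(X[np.argmax(Y)])  # grid point with the largest Y: 3.0
print(B / (2 * C))      # the formula from the first-order condition: 3.0
```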
One additional check needs to be performed. If we turn the whole figure
upside down, M is the minimum value of the upside-down function, and at this
trough the graph will also be flat. So for a general function F(X), setting F′(X) = 0 might yield an X that gives its minimum rather than the maximum. How do we
distinguish the two possibilities?
At a maximum, the function will be increasing to its left and decreasing to
its right. Therefore the derivative will be positive for values of X smaller than the
purported maximum, and negative for larger values. In other words, the derivative, itself regarded as a function of X, will be decreasing at this point. A decreasing function has a negative derivative. Therefore, the derivative of the derivative, what is called the second derivative of the original function, written as F″(X) or d²Y/dX², should be negative at a maximum. Similar logic shows that the second
[Figure: the graph of Y = F(X) with its peak at point M, where F′(X) = 0; the function rises to the left of M (region L) and falls to the right (region R).]
FIGURE 5A.2 Optimum of a Function
derivative should be positive at a minimum; that is what distinguishes the two
cases.
For the derivative F′(X) = B − 2CX of our quadratic example, applying the same h, k procedure to F′(X) as we did to F(X) shows F″(X) = −2C. This is negative so long as C is positive, which we assumed when stating the problem in the chapter. The test F′(X) = 0 is called the first-order condition for maximization of F(X), and F″(X) < 0 is the second-order condition.
To fix the idea further, let us apply it to the specific example of Xavier’s best response that we considered in the chapter. We had the expression

Πx = −8(44 + Py) + (16 + 44 + Py)Px − 2(Px)².

This is a quadratic function of Px (holding the other restaurant’s price, Py, fixed). Our method gives its derivative:

dΠx/dPx = (60 + Py) − 4Px.

The first-order condition for Px to maximize Πx is that this derivative should be 0. Setting it equal to 0 and solving for Px gives the same equation as derived in Section 1.A. (The second-order condition is d²Πx/dPx² < 0, which is satisfied because the second-order derivative is just −4.)
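If you would rather let software perform the differentiation, a symbolic sketch along these lines (ours, using the sympy library) reproduces the same first-order condition and best-response rule.

```python
import sympy as sp

Px, Py = sp.symbols("Px Py", positive=True)
# Xavier's profit as a quadratic in its own price Px, from the chapter:
profit = -8 * (44 + Py) + (16 + 44 + Py) * Px - 2 * Px**2

first_derivative = sp.diff(profit, Px)            # -4*Px + Py + 60
best_response = sp.solve(first_derivative, Px)[0]  # set the derivative to zero
print(first_derivative)
print(best_response)  # Py/4 + 15, the best-response rule of Section 1.A
```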
We hope you will regard the calculus method as simple enough and that you
will have occasion to use it again in a few places later, for example, in Chapter 11
on collective action. But if you find it too difficult, here is a noncalculus alternative method that works for quadratic functions. Rearrange terms to write the
function as
Y = A + BX − CX²
  = A + B²/(4C) − B²/(4C) + BX − CX²
  = A + B²/(4C) − C[X² − (B/C)X + B²/(4C²)]
  = A + B²/(4C) − C[X − B/(2C)]².
In the final form of the expression, X appears only in the last term, where a
square involving it is being subtracted (remember C > 0). The whole expression
is maximized when this subtracted term is made as small as possible, which
happens when X = B/(2C). Voila!
This method of “completing the square” works for quadratic functions and
therefore will suffice for most of our uses. It also avoids calculus. But we must
admit it smacks of magic. Calculus is more general and more methodical. It repays a little study many times over.
6
Combining Sequential and Simultaneous Moves
In Chapter 3, we considered games of purely sequential moves; Chapters 4
and 5 dealt with games of purely simultaneous moves. We developed concepts and techniques of analysis appropriate to the pure game types—trees
and rollback equilibrium for sequential moves, payoff tables and Nash equilibrium for simultaneous moves. In reality, however, many strategic situations
contain elements of both types of interaction. Also, although we used game
trees (extensive forms) as the sole method of illustrating sequential-move
games and game tables (strategic forms) as the sole method of illustrating
simultaneous-move games, we can use either form for any type of game.
In this chapter, we examine many of these possibilities. We begin by showing how games that combine sequential and simultaneous moves can be solved
by combining trees and payoff tables and by combining rollback and Nash equilibrium analysis in appropriate ways. Then we consider the effects of changing
the nature of the interaction in a particular game. Specifically, we look at the effects of changing the rules of a game to convert sequential play into simultaneous play and vice versa and of changing the order of moves in sequential play.
This topic gives us an opportunity to compare the equilibria found by using the
concept of rollback, in a sequential-move game, with those found by using the
Nash equilibrium concept, in the simultaneous version of the same game. From
this comparison, we extend the concept of Nash equilibria to sequential-play
games. It turns out that the rollback equilibrium is a special case, usually called
a refinement, of these Nash equilibria.
1 GAMES WITH BOTH SIMULTANEOUS AND SEQUENTIAL MOVES
As mentioned several times thus far, most real games that you will encounter will be made up of numerous smaller components. Each of these components may entail simultaneous play or sequential play, so the full game requires
you to be familiar with both. The most obvious examples of strategic interactions containing both sequential and simultaneous parts are those between
two (or more) players over an extended period of time. You may play a number of different simultaneous-play games against your roommate during your
year together: Your action in any one of these games is influenced by the history of your interactions up to then and by your expectations about the interactions to come. Also, many sporting events, interactions between competing
firms in an industry, and political relationships are sequentially linked series of
simultaneous-move games. Such games are analyzed by combining the tools
presented in Chapter 3 (trees and rollback) and in Chapters 4 and 5 (payoff
tables and Nash equilibria).1 The only difference is that the actual analysis becomes more complicated as the number of moves and interactions increases.
A. Two-Stage Games and Subgames
Our main illustrative example for such situations includes two would-be telecom
giants, CrossTalk and GlobalDialog. Each can choose whether to invest $10 billion in the purchase of a fiber-optic network. They make their investment decisions simultaneously. If neither chooses to make the investment, that is the end
of the game. If one invests and the other does not, then the investor has to make
a pricing decision for its telecom services. It can choose either a high price,
which will attract 60 million customers, from each of whom it will make an operating profit of $400, or a low price, which will attract 80 million customers,
from each of whom it will make an operating profit of $200. If both firms acquire
fiber-optic networks and enter the market, then their pricing choices become a
second simultaneous-move game. Each can choose either the high or the low
price. If both choose the high price, they will split the total market equally; so
each will get 30 million customers and an operating profit of $400 from each.
If both choose the low price, again they will split the total market equally; so
each will get 40 million customers and an operating profit of $200 from each.
If one chooses the high price and the other the low price, then the low-price
1 Sometimes the simultaneous part of the game will have equilibria in mixed strategies; then, the tools we develop in Chapter 7 will be required. We mention this possibility in this chapter where relevant and give you an opportunity to use such methods in exercises for the later chapters.
firm will get all the 80 million customers at that price, and the high‑price firm
will get nothing.
The interaction between CrossTalk and GlobalDialog forms a two-stage
game. Of the four combinations of the simultaneous-move choices at the first
(investment) stage, one ends the game, two lead to a second-stage (pricing) decision by just one player, and the fourth leads to a simultaneous-move (pricing)
game at the second stage. We show this game pictorially in Figure 6.1.
Regarded as a whole, Figure 6.1 illustrates a game tree, but one that is more
complex than the trees in Chapter 3. You can think of it as an elaborate “tree
house” with multiple levels. The levels are shown in different parts of the same
two-dimensional figure, as if you are looking down at the tree from a helicopter
positioned directly above it.
The first-stage game is represented by the payoff table in the top-left quadrant of Figure 6.1. You can think of it as the first floor of the tree house. It has
four “rooms.” The room in the northwest corner corresponds to the “Don’t invest” first-stage moves of both firms. If the firms’ decisions take the game to this
room, there are no further choices to be made, so we can think of it being like a
terminal node of a tree in Chapter 3 and show the payoffs in the cell of the table;
[Figure: the first-stage investment game table, in which CrossTalk and GlobalDialog each choose Don’t or Invest; branches lead from three of its cells to the second stage. A firm that invests alone faces a pricing decision with payoffs 14 (High) or 6 (Low). If both invest, they play the second-stage pricing game with payoffs (High, High) = (2, 2), (High, Low) = (−10, 6), (Low, High) = (6, −10), and (Low, Low) = (−2, −2).]
FIGURE 6.1 Two-Stage Game Combining Sequential and Simultaneous Moves
both firms get 0. However, all of the other combinations of actions for the two
firms lead to rooms that lead to further choices; so we cannot yet show the
payoffs in those cells. Instead, we show branches leading to the second floor.
The northeast and southwest rooms show only the payoff to the firm that has not
invested; the branches leading from each of these rooms take us to single‑firm
pricing decisions in the second stage. The southeast room leads to a multiroom
second-floor structure within the tree house, which represents the second-stage
pricing game that is played if both firms have invested in the first stage. This
second-floor structure has four rooms corresponding to the four combinations
of the two firms’ pricing moves.
All of the second-floor branches and rooms are like terminal nodes of a
game tree, so we can show the payoffs in each case. Payoffs here consist of each
firm’s operating profits minus the previous investment costs; payoff values are
written in billions of dollars.
Consider the branch leading to the southwest corner of Figure 6.1. The game
arrives in that corner if CrossTalk is the only firm that has invested. Then, if it
chooses the high price, its operating profit is $400 × 60 million = $24 billion; after subtracting the $10 billion investment cost, its payoff is $14 billion, which we write as 14. In the same corner, if CrossTalk chooses the low price, then its operating profit is $200 × 80 million = $16 billion, yielding the payoff 6 after accounting for its original investment. In this situation, GlobalDialog’s payoff is 0,
as shown in the southwest room of the first floor of our tree. Similar calculations
for the case in which GlobalDialog is the only firm to invest give us the payoffs
shown in the northeast corner of Figure 6.1; again, the payoff of 0 for CrossTalk
is shown in the northeast room of the first-stage game table.
If both firms invest, both play the second-stage pricing game illustrated in
the southeast corner of the figure. When both choose the high price in the second stage, each gets operating profit of $400 × 30 million (half of the market), or $12 billion; after subtracting the $10 billion investment cost, each is left with a net profit of $2 billion, or a payoff of 2. If both firms choose the low price in the second stage, each gets operating profit of $200 × 40 million = $8 billion, and, after subtracting the $10 billion investment cost, each is left with a net loss of $2 billion, or a payoff of −2. Finally, if one firm charges the high price and the other firm the low price, then the low-price firm has operating profit of $200 × 80 million = $16 billion, leading to the payoff 6, while the high-price firm gets no operating profit and simply loses its $10 billion investment, for a payoff of −10.
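The arithmetic behind these payoffs is easy to audit with a few lines of Python (our sketch, not part of the text): each entry is the per-customer operating profit times the number of customers, in billions, minus the $10 billion investment.

```python
# Payoffs (in $ billions) for the pricing subgame when both firms invested.
INVESTMENT = 10  # $ billions

def payoff(profit_per_customer, customers_in_millions):
    # Operating profit in $ billions, net of the investment cost.
    return profit_per_customer * customers_in_millions / 1000 - INVESTMENT

print(payoff(400, 30))  #  2.0: both High, market split
print(payoff(200, 40))  # -2.0: both Low, market split
print(payoff(200, 80))  #  6.0: Low against High, whole market
print(payoff(400, 0))   # -10.0: High against Low, no customers
```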
As with any multistage game in Chapter 3, we must solve this game backward,
starting with the second-stage game. In the two single-firm decision problems,
we see at once that the high-price policy yields the higher payoff. We highlight
this by showing that payoff in a larger-size type.
The second-stage pricing game has to be solved by using methods developed
in Chapter 4. It is immediately evident, however, that this game is a prisoners’
                            GLOBALDIALOG
                         Don’t       Invest
   CROSSTALK    Don’t    0, 0        0, 14
                Invest   14, 0       −2, −2
FIGURE 6.2 First-Stage Investment Game (After Substituting Rolled-Back Payoffs from the Equilibrium of the Second Stage)
dilemma. Low is the dominant strategy for each firm; so the outcome is the room in the southeast corner of the second-stage game table; each firm gets payoff −2.2 Again, we show these payoffs in a larger type size to highlight the fact that
they are the payoffs obtained in the second-stage equilibrium.
Rollback now tells us that each first-stage configuration of moves should
be evaluated by looking ahead to the equilibrium of the second-stage game (or
the optimum second-stage decision) and the resulting payoffs. We can therefore
substitute the payoffs that we have just calculated into the previously empty or
partly empty rooms on the first floor of our tree house. This substitution gives us
a first floor with known payoffs, shown in Figure 6.2.
Now we can use the methods of Chapter 4 to solve this simultaneous-move
game. You should immediately recognize the game in Figure 6.2 as a chicken
game. It has two Nash equilibria, each of which entails one firm choosing Invest
and the other choosing Don’t. The firm that invests makes a huge profit; so each
firm prefers the equilibrium in which it is the investor while the other firm stays
out. In Chapter 4, we briefly discussed the ways in which one of the two equilibria might get selected. We also pointed out the possibility that each firm might
try to get its preferred outcome, with the result that both of them invest and both
lose money. Indeed, this is what seems to have happened in the real-life play of
this game. In Chapter 7, we investigate this type of game further, showing that it
has a third Nash equilibrium, in mixed strategies.
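The claim that Figure 6.2 is a chicken game with two pure-strategy Nash equilibria can be verified mechanically. The sketch below (ours, not part of the text) tests every cell of the table by asking whether either firm could gain from a unilateral deviation.

```python
# Pure-strategy Nash equilibria of the first-stage game in Figure 6.2.
# payoffs[(row, col)] = (CrossTalk's payoff, GlobalDialog's payoff).
payoffs = {
    ("Don't", "Don't"): (0, 0),    ("Don't", "Invest"): (0, 14),
    ("Invest", "Don't"): (14, 0),  ("Invest", "Invest"): (-2, -2),
}
moves = ["Don't", "Invest"]

for r in moves:
    for c in moves:
        # A cell is a Nash equilibrium if neither player can do strictly
        # better by switching moves while the rival's move is held fixed.
        row_best = all(payoffs[(r, c)][0] >= payoffs[(r2, c)][0] for r2 in moves)
        col_best = all(payoffs[(r, c)][1] >= payoffs[(r, c2)][1] for c2 in moves)
        if row_best and col_best:
            print("Nash equilibrium:", r, c)  # the two asymmetric cells
```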
Analysis of Figure 6.2 shows that the first-stage game in our example does
not have a unique Nash equilibrium. This problem is not too serious, because
we can leave the solution ambiguous to the extent that was done in the preceding paragraph. Matters would be worse if the second-stage game did not have
a unique equilibrium. Then it would be essential to specify the precise process
by which an outcome gets selected so that we could figure out the second-stage
payoffs and use them to roll back to the first stage.
2 As is usual in a prisoners’ dilemma, if the firms could successfully collude and charge high prices, both could get the higher payoff of 2. But this outcome is not an equilibrium because each firm is tempted to cheat to try to get the much higher payoff of 6.
The second-stage pricing game shown in the table in the bottom-right quadrant of Figure 6.1 is one part of the complete two-stage game. However, it is also
a full-fledged game in its own right, with a fully specified structure of players,
strategies, and payoffs. To bring out this dual nature more explicitly, it is called a
subgame of the full game.
More generally, a subgame is the part of a multimove game that begins at
a particular node of the original game. The tree for a subgame is then just that
part of the tree for the full game that takes this node as its root, or initial, node. A
multimove game has as many subgames as it has decision nodes.
B. Configurations of Multistage Games
In the multilevel game illustrated in Figure 6.1, each stage consists of a
simultaneous-move game. However, that may not always be the case.
Simultaneous and sequential components may be mixed and matched in any
way. We give two more examples to clarify this point and to reinforce the ideas
introduced in the preceding section.
The first example is a slight variation of the CrossTalk–GlobalDialog game.
Suppose one of the firms—say, GlobalDialog—has already made the $10 billion
investment in the fiber-optic network. CrossTalk knows of this investment and
now has to decide whether to make its own investment. If CrossTalk does not
invest, then GlobalDialog will have a simple pricing decision to make. If CrossTalk invests, then the two firms will play the second-stage pricing game already
described. The tree for this multistage game has conventional branches at the
initial node and has a simultaneous-move subgame starting at one of the nodes
to which these initial branches lead. The complete tree is shown in Figure 6.3.
[Figure: CrossTalk chooses Invest or Don’t at the initial node. Invest leads to the second-stage pricing game, with payoffs (High, High) = (2, 2), (High, Low) = (−10, 6), (Low, High) = (6, −10), and (Low, Low) = (−2, −2); Don’t leads to GlobalDialog’s pricing decision, with payoffs (0, 14) for High and (0, 6) for Low.]
FIGURE 6.3 Two-Stage Game When One Firm Has Already Invested
When the tree has been set up, it is easy to analyze the game. We show the
rollback analysis in Figure 6.3 by using large type for the equilibrium payoffs
that result from the second-stage game or decision and a thicker branch for
CrossTalk’s first-stage choice. In words, CrossTalk figures out that, if it invests,
the ensuing prisoners’ dilemma of pricing will leave it with payoff −2, whereas staying out will get it 0. Thus, it prefers the latter. GlobalDialog gets 14 instead of the −2 that it would have gotten if CrossTalk had invested, but CrossTalk’s concern is to maximize its own payoff and not to ruin GlobalDialog deliberately.
This analysis does raise the possibility, though, that GlobalDialog may try
to get its investment done quickly before CrossTalk makes its decision so as
to ensure its most preferred outcome from the full game. And CrossTalk may
try to beat GlobalDialog to the punch in the same way. In Chapter 9, we study
some methods, called strategic moves, that may enable players to secure such
advantages.
Our second example comes from football. Before each play, the coach for
the offense chooses the play that his team will run; simultaneously, the coach
for the defense sends his team out with instructions on how they should align
themselves to counter the offense. Thus, these moves are simultaneous. Suppose the offense has just two alternatives, a safe play and a risky play, and the
defense may align itself to counter either of them. If the offense has planned
to run the risky play and the quarterback sees the defensive alignment that will
counter it, he can change the play at the line of scrimmage. And the defense,
hearing the change, can respond by changing its own alignment. Thus, we have
a simultaneous-move game at the first stage, and one of the combination of
choices of moves at this stage leads to a sequential-move subgame. Figure 6.4
shows the complete tree.
This is a zero-sum game in which the offense’s payoffs are measured in the
number of yards that it expects to gain, and the defense’s payoffs are exactly the
opposite, measured in the number of yards it expects to give up. The safe play
for the offense gets it 2 yards, even if the defense is ready for it; if the defense
is not ready for it, the safe play does not do much better, gaining 6 yards. The
risky play, if it catches the defense unready to cover it, gains 30 yards. But if the
defense is ready for the risky play, the offense loses 10 yards. We show this set of
payoffs of 10 for the offense and 10 for the defense at the terminal node where
the offense does not change the play. If the offense changes the play (back to
safe), the payoffs are (2, 2) if the defense responds and (6, 6) if it does not;
these payoffs are the same as those that arise when the offense plans the safe
play from the start.
We show the chosen branches in the sequential subgame as thick lines in
Figure 6.4. It is easy to see that, if the offense changes its play, the defense will respond to keep its payoff at −2 rather than −6 and that the offense should change the play to get 2 rather than −10. Rolling back, we should put the resulting set of
[Figure: the first-stage table in which the offense plans Safe or Risky and the defense aligns to cover Safe or Risky, with payoffs (Safe, Safe) = (2, −2), (Safe, Risky) = (6, −6), and (Risky, Safe) = (30, −30); the (Risky, Risky) cell leads to a sequential subgame in which the offense may change the play (the defense then responds, giving (2, −2), or does not, giving (6, −6)) or keep it (giving (−10, 10)).]
FIGURE 6.4 Simultaneous-Move First Stage Followed by Sequential Moves
payoffs, (2, 2), in the bottom-right cell of the simultaneous-move game of the
first stage. Then we see that this game has no Nash equilibrium in pure strategies. The reason is the same as that in the tennis game of Chapter 4, Section 7;
one player (defense) wants to match the moves (align to counter the play that the
offense is choosing) while the other (offense) wants to unmatch the moves (catch
the defense in the wrong alignment). In Chapter 7, we show how to calculate the
mixed-strategy equilibrium of such a game. It turns out that the offense should
choose the risky play with probability 18, or 12.5%.
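The same unilateral-deviation test used for Figure 6.2 confirms the no-equilibrium claim here. This sketch (ours, not from the text) applies it to the rolled-back first-stage table, with (2, −2) entered in the Risky/Risky cell.

```python
# Rolled-back first-stage football game: offense picks the row, defense
# the column; entries are (offense's yards, defense's yards).
payoffs = {
    ("Safe", "Safe"): (2, -2),     ("Safe", "Risky"): (6, -6),
    ("Risky", "Safe"): (30, -30),  ("Risky", "Risky"): (2, -2),
}
moves = ["Safe", "Risky"]

equilibria = [
    (r, c)
    for r in moves
    for c in moves
    if all(payoffs[(r, c)][0] >= payoffs[(r2, c)][0] for r2 in moves)
    and all(payoffs[(r, c)][1] >= payoffs[(r, c2)][1] for c2 in moves)
]
print(equilibria)  # [] -- no cell survives, so no pure-strategy equilibrium
```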
2 CHANGING THE ORDER OF MOVES IN A GAME
The games considered in preceding chapters were presented as either sequential or simultaneous in nature. We used the appropriate tools of analysis to predict equilibria in each type of game. In Section 1 of this chapter, we discussed
games with elements of both sequential and simultaneous play. These games
required both sets of tools to find solutions. But what about games that could be
played either sequentially or simultaneously? How would changing the play of
a particular game and thus changing the appropriate tools of analysis alter the
expected outcomes?
The task of turning a sequential-play game into a simultaneous one requires
changing only the timing or observability with which players make their choices
of moves. Sequential-move games become simultaneous if the players cannot
observe moves made by their rivals before making their own choices. In that
case, we would analyze the game by searching for a Nash equilibrium rather
than for a rollback equilibrium. Conversely, a simultaneous-move game could
become sequential if one player were able to observe the other’s move before
choosing her own.
Any changes to the rules of the game can also change its outcomes. Here, we
illustrate a variety of possibilities that arise owing to changes in different types
of games.
A. Changing Simultaneous-Move Games into Sequential-Move Games
I. FIRST-MOVER ADVANTAGE A first-mover advantage may emerge when the rules of a
game are changed from simultaneous to sequential play. At a minimum, if the
simultaneous-move version has multiple equilibria, the sequential-move version enables the first mover to choose his preferred outcome. We illustrate such
a situation with the use of chicken, the game in which two teenagers drive toward each other in their cars, both determined not to swerve. We reproduce the
strategic form of Figure 4.14 from Chapter 4 in Figure 6.5a and two extensive
forms, one for each possible ordering of play, in Figure 6.5b and c.
Under simultaneous play, the two outcomes in which one player swerves (is
“chicken”) and the other goes straight (is “tough”) are both pure-strategy Nash
equilibria. Without specification of some historical, cultural, or other convention, neither has a claim to be a focal point. Our analysis in Chapter 4 suggested
that coordinated play could help the players in this game, perhaps through an
agreement to alternate between the two equilibria.
When we alter the rules of the game to allow one of the players the opportunity to move first, there are no longer two equilibria. Rather, we see that the
second mover’s equilibrium strategy is to choose the action opposite that chosen by the first mover. Rollback then shows that the first mover’s equilibrium
strategy is Straight. We see in Figure 6.5b and c that allowing one person to move
first and to be observed making the move results in a single rollback equilibrium in which the first mover gets a payoff of 1, while the second mover gets a
payoff of 1. The actual play of the game becomes almost irrelevant under such
rules, which may make the sequential version uninteresting to many observers.
Although teenagers might not want to play such a game with the rule change,
the strategic consequences of the change are significant.
II. SECOND-MOVER ADVANTAGE In other games, a second-mover advantage may emerge
when simultaneous play is changed into sequential play. This result can be illustrated using the tennis game of Chapter 4. Recall that, in that game, Evert is
planning the location of her return while Navratilova considers where to cover.
The version considered earlier assumed that both players were skilled at disguising their intended moves until the very last moment so that they moved at
[Figure: (a) the simultaneous-play chicken table, with payoffs (Swerve, Swerve) = (0, 0), (Swerve, Straight) = (−1, 1), (Straight, Swerve) = (1, −1), and (Straight, Straight) = (−2, −2); (b) the tree for sequential play with James moving first; (c) the tree for sequential play with Dean moving first.]
FIGURE 6.5 Chicken in Simultaneous-Play and Sequential-Play Versions
essentially the same time. If Evert’s movement as she goes to hit the ball belies
her shot intentions, however, then Navratilova can react and move second in the
game. In the same way, if Navratilova leans toward the side that she intends to
cover before Evert actually hits her return, then Evert is the second mover.
The simultaneous-play version of this game has no equilibrium in pure
strategies. In each ordering of the sequential version, however, there is a unique
rollback equilibrium outcome; the equilibrium differs, depending on who
moves first. If Evert moves first, then Navratilova chooses to cover whichever direction Evert chooses and Evert opts for a down-the-line shot. Each player is expected to win the point half the time in this equilibrium. If the order is reversed,
Evert chooses to send her shot in the opposite direction from that which Navratilova covers; so Navratilova should move to cover crosscourt. In this case, Evert
is expected to win the point 80% of the time. The second mover does better by
being able to respond optimally to the opponent’s move. You should be able to
draw game trees similar to those in Figure 6.5b and c that illustrate exactly these
outcomes.
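Here, too, a few lines of Python (our illustration, using the success percentages of Figure 6.7) confirm the two rollback outcomes. Because the game is constant sum, only the order of the max and min operators changes with the order of moves:

```python
# Evert's success percentage for each (Evert's shot, Navratilova's cover),
# as in Figure 6.7; Navratilova's payoff is 100 minus Evert's.
evert = {("DL", "DL"): 50, ("DL", "CC"): 80, ("CC", "DL"): 90, ("CC", "CC"): 20}
choices = ["DL", "CC"]

# Evert first: Navratilova covers whichever side Evert has chosen.
evert_moves_first = max(min(evert[(e, n)] for n in choices) for e in choices)

# Navratilova first: Evert then aims at the side left uncovered.
nav_moves_first = min(max(evert[(e, n)] for e in choices) for n in choices)

print(evert_moves_first, nav_moves_first)  # 50 80
```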
We return to the simultaneous version of this game in Chapter 7. There
we show that it does have a Nash equilibrium in mixed strategies. In that equilibrium, Evert succeeds on average 62% of the time. Her success rate in the
mixed‑strategy equilibrium of the simultaneous game is thus better than the
50% that she gets by moving first but is worse than the 80% that she gets by
moving second in the two sequential-move versions.
iii. BOTH PLAYERS MAY DO BETTER That a game may have a first-mover or a second-mover
advantage, which is suppressed when moves have to be simultaneous but
emerges when an order of moves is imposed, is quite intuitive. Somewhat
more surprising is the possibility that both players may do better under one
set of rules of play than under another. We illustrate this possibility by using
the game of monetary and fiscal policies played by the Federal Reserve and
Congress. In Chapter 4, we studied this game with simultaneous moves;
we reproduce the payoff table (Figure 4.5) as Figure 6.6a and show the two sequential-move versions as Figure 6.6b and c. For brevity, we write the strategies as Balance and Deficit instead of Budget Balance and Budget Deficit for
Congress and as High and Low instead of High Interest Rates and Low Interest
Rates for the Fed.
In the simultaneous-move version, Congress has a dominant strategy (Deficit), and the Fed, knowing this, chooses High, yielding payoffs of 2 to both players. Almost the same thing happens in the sequential version where the Fed
moves first. The Fed foresees that, for each choice it might make, Congress
will respond with Deficit. Then High is the better choice for the Fed, yielding 2
instead of 1.
(a) Simultaneous moves

                                    FEDERAL RESERVE
                          Low interest rates    High interest rates
  CONGRESS  Budget balance       3, 4                  1, 3
            Budget deficit       4, 1                  2, 2

(b) Sequential moves: Fed moves first. [Game tree: the Fed chooses Low or High; Congress then chooses Balance or Deficit. Payoffs (FED, CONGRESS): Low, Balance: 4, 3; Low, Deficit: 1, 4; High, Balance: 3, 1; High, Deficit: 2, 2.]

(c) Sequential moves: Congress moves first. [Game tree: Congress chooses Balance or Deficit; the Fed then chooses Low or High. Payoffs (CONGRESS, FED): Balance, Low: 3, 4; Balance, High: 1, 3; Deficit, Low: 4, 1; Deficit, High: 2, 2.]

FIGURE 6.6 Three Versions of the Monetary–Fiscal Policy Game
But the sequential-move version where Congress moves first is different.
Now Congress foresees that, if it chooses Deficit, the Fed will respond with
High, whereas, if it chooses Balance, the Fed will respond with Low. Of these two
developments, Congress prefers the latter, where it gets payoff 3 instead of 2.
Therefore, the rollback equilibrium with this order of moves is for Congress to
choose a balanced budget and the Fed to respond with low interest rates. The
resulting payoffs, 3 for Congress and 4 for the Fed, are better for both players
than those of the other two versions.
The difference between the two outcomes is even more surprising because
the better outcome obtained in Figure 6.6c results from Congress choosing Balance, which is its dominated strategy in Figure 6.6a. To resolve the apparent
paradox, one must understand more precisely the meaning of dominance. For
Deficit to be a dominant strategy, it must be better than Balance from Congress’s
perspective for each given choice of the Fed. This type of comparison between
Deficit and Balance is relevant in the simultaneous-move game because there
Congress must make a decision without knowing the Fed’s choice. Congress
must think through, or formulate a belief about, the Fed’s action and choose
its best response to that. In our example, this best response is always Deficit for
Congress. The concept of dominance is also relevant with sequential moves if
Congress moves second, because then it knows what the Fed has already done
and merely picks its best response, which is always Deficit. However, if Congress
moves first, it cannot take the Fed’s choice as given. Instead, it must recognize
how the Fed’s second move will be affected by its own first move. Here it knows
that the Fed will respond to Deficit with High and to Balance with Low. Congress is then left to choose between these two alternatives; its most preferred
outcome of Deficit and Low becomes irrelevant because it is precluded by the
Fed’s response.
The idea that dominance may cease to be a relevant concept for the first
mover reemerges in Chapter 9. There we consider the possibility that one player
or the other may deliberately change the rules of a game to become the first
mover. Players can alter the outcome of the game in their favor in this way.
Suppose that the two players in our current example could choose the order
of moves in the game. In this case, they would agree that Congress should move
first. Indeed, when budget deficits and inflation threaten, the chairs of the Federal Reserve, in testimony before various congressional committees, often offer such deals; they promise to respond to fiscal discipline by lowering interest
rates. But it is often not enough to make a verbal deal with the other player. The
technical requirements of a first move—namely, that it be observable to the
second mover and not reversible thereafter—must be satisfied. In the context
of macroeconomic policies, it is fortunate that the legislative process of fiscal
policy in the United States is both very visible and very slow, whereas monetary
policy can be changed quite quickly in a meeting of the Federal Reserve Board.
Therefore, the sequential play where Congress moves first and the Fed moves
second is quite realistic.
iv. NO CHANGE IN OUTCOME So far, we have encountered only games that yield different outcomes when played sequentially instead of simultaneously. But certain
games have the same outcomes in both types of play and regardless of the order
of moves. This result generally arises only when both or all players have dominant strategies. We show that it holds for the prisoners’ dilemma.
Consider the prisoners’ dilemma game of Chapter 4, in which a husband
and wife are being questioned regarding their roles in a crime. The Nash equilibrium of that simultaneous-play game is for each player to confess (or to defect
from cooperating with the other). But how would play transpire if one spouse
made an observable choice before the other chose at all? Using rollback on a
game tree similar to that in Figure 6.5b (which you can draw on your own as a
check of our analysis) would show that the second player does best to confess if
the first has already confessed (10 years rather than 25 years in jail), and the second player also does best to confess if the first has denied (1 year rather than 3
years in jail). Given these choices by the second player, the first player does best
to confess (10 years rather than 25 years in jail). The equilibrium entails 10 years
of jail for both spouses regardless of which one moves first. Thus, the equilibrium is the same in all three versions of this game!
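A quick sketch (ours, not from the text; the jail terms are written as negative payoffs so that larger numbers are better) verifies the dominance that drives this invariance:

```python
# Husband-wife prisoners' dilemma. table[(row, col)] = (row payoff, col payoff),
# measured as minus the years spent in jail.
table = {
    ("Confess", "Confess"): (-10, -10),
    ("Confess", "Deny"): (-1, -25),
    ("Deny", "Confess"): (-25, -1),
    ("Deny", "Deny"): (-3, -3),
}
moves = ["Confess", "Deny"]

def row_dominant():
    for mine in moves:
        if all(table[(mine, their)][0] > table[(alt, their)][0]
               for their in moves for alt in moves if alt != mine):
            return mine

def col_dominant():
    for mine in moves:
        if all(table[(their, mine)][1] > table[(their, alt)][1]
               for their in moves for alt in moves if alt != mine):
            return mine

# Because each player has a dominant move, simultaneous play and rollback
# in either order of moves all select the same pair of actions.
print(row_dominant(), col_dominant())  # Confess Confess
```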
B. Other Changes in the Order of Moves
The preceding section presented various examples in which the rules of the
game were changed from simultaneous play to sequential play. We saw how
and why such rule changes can change the outcome of a game. The same examples also serve to show what happens if the rules are changed in the opposite direction, from sequential to simultaneous moves. Thus, if a first- or a
second‑mover advantage exists with sequential play, it can be lost under simultaneous play. And if a specific order benefits both players, then losing the order
can hurt both.
The same examples also show us what happens if the rules are changed
to reverse the order of play while keeping the sequential nature of a game unchanged. If there is a first-mover or a second-mover advantage, then the player
who shifts from moving first to moving second may benefit or lose accordingly,
with the opposite change for the other player. And if one order is in the common interests of both, then an externally imposed change of order can benefit
or hurt them both.
3 CHANGE IN THE METHOD OF ANALYSIS
Game trees are the natural way to display sequential-move games, and payoff
tables are the natural representation of simultaneous-move games. However,
each technique can be adapted to the other type of game. Here we show how to
translate the information contained in one illustration to an illustration of the
other type. In the process, we develop some new ideas that will prove useful in
subsequent analysis of games.
A. Illustrating Simultaneous-Move Games by Using Trees
Consider the game of the passing shot in tennis as originally described in Chapter 4, where the action is so quick that the moves are truly simultaneous. But suppose we want to show the game in extensive form—that is, by using a tree rather than in a table. We show how this can be done in Figure 6.7.
To draw the tree in the figure, we must choose one player—say, Evert—to
make her choice at the initial node of the tree. The branches for her two choices,
DL and CC, then end in two nodes, at each of which Navratilova makes her
choices. However, because the moves are actually simultaneous, Navratilova
must choose without knowing what Evert has picked. That is, she must choose
without knowing whether she is at the node following Evert’s DL branch or the
one following Evert’s CC branch. Our tree diagram must in some way show this
lack of information on Navratilova’s part.
[Game tree: Evert chooses DL or CC at the initial node. Navratilova’s two decision nodes—one following each of Evert’s branches—are enclosed in a single information set, within which she chooses DL or CC. Payoffs (EVERT, NAVRATILOVA): DL, DL: 50, 50; DL, CC: 80, 20; CC, DL: 90, 10; CC, CC: 20, 80.]

FIGURE 6.7 Simultaneous-Move Tennis Game Shown in Extensive Form
We illustrate Navratilova’s strategic uncertainty about the node from which
her decision is being made by drawing an oval to surround the two relevant
nodes. (An alternative is to connect them by a dotted line; the dotted line distinguishes this connection from the solid lines that represent the branches of the
tree.) The nodes within this oval or balloon are called an information set for
the player who moves there. Such a set indicates the presence of imperfect information for the player; she cannot distinguish between the nodes in the set,
given her available information (because she cannot observe the row player’s
move before making her own). As such, her strategy choice from within a single
information set must specify the same move at all the nodes contained in it.
That is, Navratilova must choose either DL at both the nodes in this information set or CC at both of them. She cannot choose DL at one and CC at the
other, as she could in Figure 6.5b, where the game had sequential moves and
she moved second.
Accordingly, we must adapt our definition of strategy. In Chapter 3, we defined a strategy as a complete plan of action, specifying the move that a player
would make at each node where the rules of the game specified that it was her
turn to move. We should now more accurately redefine a strategy as a complete
plan of action, specifying the move that a player would make at each information set at whose nodes the rules of the game specify that it is her turn to move.
The concept of an information set is also relevant when a player faces external uncertainty about some conditions that affect his decision, rather than
about another player’s moves. For example, a farmer planting a crop is uncertain
about the weather during the growing season, although he knows the probabilities of various alternative possibilities from past experience or meteorological
forecasts. We can regard the weather as a random choice of an outside player,
Nature, who has no payoffs but merely chooses according to known probabilities.3 We can then enclose the various nodes corresponding to Nature’s moves
into an information set for the farmer, constraining the farmer’s choice to be the
same at all of these nodes. Figure 6.8 illustrates this situation.
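To make the constraint concrete, here is a small sketch of our own. The weather probabilities are those of Figure 6.8, but the harvest payoffs are hypothetical numbers invented purely for illustration. Because the farmer cannot tell Nature’s nodes apart, his strategy reduces to a single crop choice, evaluated by its probability-weighted average payoff:

```python
# Nature's probabilities from Figure 6.8; the harvest payoffs below are
# hypothetical, chosen only to illustrate the computation.
weather = {"Dry": 0.2, "Mild": 0.5, "Wet": 0.3}
harvest = {  # harvest[crop][weather] = assumed payoff
    "Cacti": {"Dry": 5, "Mild": 3, "Wet": 1},
    "Rice":  {"Dry": 0, "Mild": 4, "Wet": 6},
}

# All three of the farmer's nodes lie in one information set, so a strategy
# must name the same crop at each node; he compares expected payoffs.
for crop in harvest:
    expected = sum(p * harvest[crop][w] for w, p in weather.items())
    print(crop, expected)  # Cacti 2.8, Rice 3.8 -> he plants rice
```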
Using the concept of an information set, we can formalize the concepts of
perfect and imperfect information in a game, which we introduced in Chapter 2
(Section 2.D). A game has perfect information if it has neither strategic nor external uncertainty, which will happen if it has no information sets enclosing two
or more nodes. Thus, a game has perfect information if all of its information sets
consist of singleton nodes.
Although this representation is conceptually simple, it does not provide any simpler way of solving the game. Therefore, we use it only occasionally, where it conveys some point more simply. Some examples of game illustrations using information sets can be found later in Chapters 8 and 14.
3 Some people believe that Nature is actually a malevolent player who plays a zero-sum game with us, so its payoffs are higher when ours are lower. For example, it is more likely to rain if we have forgotten to bring an umbrella. We understand such thinking, but it does not have real statistical support.
[Game tree: Nature moves first, choosing Dry (p = 0.2), Mild (p = 0.5), or Wet (p = 0.3). The farmer then chooses Cacti or Rice, but his three decision nodes are enclosed in a single information set—the farmer’s information set—so his choice must be the same at all three nodes.]

FIGURE 6.8 Nature and Information Sets
B. Showing and Analyzing Sequential-Move Games in Strategic Form
Consider now the sequential-move game of monetary and fiscal policy from
Figure 6.6c, in which Congress has the first move. Suppose we want to show it in
normal or strategic form—that is, by using a payoff table. The rows and the columns of the table are the strategies of the two players. We must therefore begin
by specifying the strategies.
For Congress, the first mover, listing its strategies is easy. There are just
two moves—Balance and Deficit—and they are also the two strategies. For the
second mover, matters are more complex. Remember that a strategy is a complete plan of action, specifying the moves to be made at each node where it is a
player’s turn to move. Because the Fed gets to move at two nodes (and because
we are supposing that this game actually has sequential moves and so the two
nodes are not confounded into one information set) and can choose either Low
or High at each node, there are four combinations of its choice patterns. These
combinations are (1) Low if Balance, High if Deficit (we write this as “L if B, H if
D” for short); (2) High if Balance, Low if Deficit (“H if B, L if D” for short); (3) Low
always; and (4) High always.
We show the resulting two-by-four payoff matrix in Figure 6.9.
                                          FED
               L if B, H if D   H if B, L if D   Low always   High always
  CONGRESS
    Balance         3, 4             1, 3            3, 4         1, 3
    Deficit         2, 2             4, 1            4, 1         2, 2

FIGURE 6.9 Sequential-Move Game of Monetary and Fiscal Policy in Strategic Form
The last two columns are no different from those for the two-by-two payoff matrix for the game under simultaneous-move rules (Figure 6.6a). This is because, if the Fed
is choosing a strategy in which it makes the same move always, it is just as if the
Fed were moving without taking into account what Congress had done; it is as
if their moves were simultaneous. But calculation of the payoffs for the first two
columns, where the Fed’s second move does depend on Congress’s first move,
needs some care.
To illustrate, consider the cell in the first row and the second column. Here
Congress is choosing Balance, and the Fed is choosing “H if B, L if D.” Given
Congress’s choice, the Fed’s actual choice under this strategy is High. Then the
payoffs are those for the Balance and High combination—namely, 1 for Congress and 3 for the Fed.
Best-response analysis quickly shows that the game has two pure-strategy
Nash equilibria, which we show by shading the cells gray. One is in the top-left
cell, where Congress’s strategy is Balance and the Fed’s is “L if B, H if D,” and so
the Fed’s actual choice is Low. This outcome is just the rollback equilibrium of the
sequential-move game. But there is another Nash equilibrium in the bottom-right
cell, where Congress chooses Deficit and the Fed chooses “High always.” As always in a Nash equilibrium, neither player has a clear reason to deviate from the
strategies that lead to this outcome. Congress would do worse by switching to Balance, and the Fed could do no better by switching to any of its other three strategies, although it could do just as well with “L if B, H if D.”
The sequential-move game, when analyzed in its extensive form, produced
just one rollback equilibrium. But when analyzed in its normal or strategic form,
it has two Nash equilibria. What is going on?
The answer lies in the different nature of the logic of Nash and rollback
analyses. Nash equilibrium requires that neither player have a reason to deviate,
given the strategy of the other player. However, rollback does not take the strategies of later movers as given. Instead, it asks what would be optimal to do if the
opportunity to move actually arises.
In our example, the Fed’s strategy of “High always” does not satisfy the criterion of being optimal if the opportunity to move actually arises. If Congress
chose Deficit, then High is indeed the Fed’s optimal response. However, if
Congress chose Balance and the Fed had to respond, it would want to choose
Low, not High. So “High always” does not describe the Fed’s optimal response
in all possible configurations of play and cannot be a rollback equilibrium.
But the logic of Nash equilibrium does not impose such a test, instead regarding the Fed’s “High always” as a strategy that Congress could legitimately take
as given. If it does so, then Deficit is Congress’s best response. And, conversely,
“High always” is one best response of the Fed to Congress’s Deficit (although it is
tied with “L if B, H if D”). Thus, the pair of strategies “Deficit” and “High always”
are mutual best responses and constitute a Nash equilibrium, although they do
not constitute a rollback equilibrium.
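The rollback test can also be written as an explicit check. The sketch below is ours (the helper name sequentially_rational is our own invention); it asks whether a Fed strategy prescribes a best response at each of the Fed’s two decision nodes, reached or not, and “High always” fails at the node following Balance:

```python
# Payoffs (Congress, Fed) from Figure 6.6a, indexed by (Congress's move,
# the Fed's response at the node that move creates).
payoff = {
    ("Balance", "Low"): (3, 4), ("Balance", "High"): (1, 3),
    ("Deficit", "Low"): (4, 1), ("Deficit", "High"): (2, 2),
}

def sequentially_rational(fed_strategy):
    # Require optimality at BOTH nodes, on and off the equilibrium path.
    return all(
        payoff[(c, fed_strategy[c])][1] == max(payoff[(c, r)][1] for r in ("Low", "High"))
        for c in ("Balance", "Deficit")
    )

print(sequentially_rational({"Balance": "Low", "Deficit": "High"}))   # True: 'L if B, H if D'
print(sequentially_rational({"Balance": "High", "Deficit": "High"}))  # False: 'High always'
```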
We can therefore think of rollback as a further test, supplementing the requirements of a Nash equilibrium and helping to select from among multiple
Nash equilibria of the strategic form. In other words, it is a refinement of the
Nash equilibrium concept.
To state this idea somewhat more precisely, recall the concept of a subgame.
At any one node of the full game tree, we can think of the part of the game that
begins there as a subgame. In fact, as successive players make their choices,
the play of the game moves along a succession of nodes, and each move can
be thought of as starting a subgame. The equilibrium derived by using rollback
corresponds to one particular succession of choices in each subgame and gives
rise to one particular path of play. Certainly, other paths of play are consistent
with the rules of the game. We call these other paths off-equilibrium paths, and
we call any subgames that arise along these paths off-equilibrium subgames,
for short.
With this terminology, we can now say that the equilibrium path of play
is itself determined by the players’ expectations of what would happen if they
chose a different action—if they moved the game to an off-equilibrium path
and started an off-equilibrium subgame. Rollback requires that all players
make their best choices in every subgame of the larger game, whether or not the
subgame lies along the path to the ultimate equilibrium outcome.
Strategies are complete plans of action. Thus, a player’s strategy must
specify what she will do in each eventuality, or each and every node of the
game, whether on or off the equilibrium path, where it is her turn to act.
When one such node arrives, only the plan of action starting there—namely,
the part of the full strategy that pertains to the subgame starting at that
node—is pertinent. This part is called the continuation of the strategy for
that subgame. Rollback requires that the equilibrium strategy be such that
its continuation in every subgame is optimal for the player whose turn it is to
act at that node, whether or not the node and the subgame lie on the equilibrium path of play.
Return to the monetary policy game with Congress moving first, and consider the second Nash equilibrium that arises in its strategic form. Here the
path of play is for Congress to choose Deficit and the Fed to choose High.
On the equilibrium path, High is indeed the Fed’s best response to Deficit.
Congress’s choice of Balance would be the start of an off-equilibrium path. It
leads to a node where a rather trivial subgame starts—namely, a decision by the
Fed. The Fed’s purported equilibrium strategy “High always” asks it to choose
High in this subgame. But that is not optimal; this second equilibrium is specifying a nonoptimal choice for an off-equilibrium subgame.
In contrast, the equilibrium path of play for the Nash equilibrium in the
upper-left corner of Figure 6.9 is for Congress to choose Balance and the Fed to
follow with Low. The Fed is responding optimally on the equilibrium path. The
off-equilibrium path would have Congress choosing Deficit, and the Fed, given
its strategy of “L if B, H if D,” would follow with High. It is optimal for the Fed
to respond to Deficit with High, so the strategy remains optimal off the equilibrium path, too.
The requirement that continuation of a strategy remain optimal under all
circumstances is important because the equilibrium path itself is the result of
players’ thinking strategically about what would happen if they did something
different. A later player may try to achieve an outcome that she would prefer by
threatening the first mover that certain actions would be met with dire responses
or by promising that certain other actions would be met with nice responses.
But the first mover will be skeptical of the credibility of such threats and promises. The only way to remove that doubt is to check if the stated responses would
actually be optimal if the need arose. If the responses are not optimal, then the
threats or promises are not credible, and the responses would not be observed
along the equilibrium path of play.
The equilibrium found by using rollback is called a subgame-perfect equilibrium (SPE). It is a set of strategies (complete plans of action), one for each
player, such that, at every node of the game tree, whether or not the node lies
along the equilibrium path of play, the continuation of the same strategy in the
subgame starting at that node is optimal for the player who takes the action
there. More simply, an SPE requires players to use strategies that constitute a
Nash equilibrium in every subgame of the larger game.
In fact, as a rule, in games with finite trees and perfect information, where
players can observe every previous action taken by all other players so that
there are no multiple nodes enclosed in one information set, rollback finds the
unique (except for trivial and exceptional cases of ties) subgame-perfect equilibrium of the game. Consider: If you look at any subgame that begins at the
last decision node for the last player who moves, the best choice for that player
is the one that gives her the highest payoff. But that is precisely the action chosen with the use of rollback. As players move backward through the game tree,
rollback eliminates all unreasonable strategies, including incredible threats
or promises, so that the collection of actions ultimately selected is the SPE.
Therefore, for the purposes of this book, subgame perfectness is just a fancy
name for rollback. At more advanced levels of game theory, where games include
complex information structures and information sets, subgame perfectness becomes a richer notion.
4 THREE-PLAYER GAMES
We have restricted the discussion so far in this chapter to games with two players and two moves each. But the same methods also work for some larger and
more general examples. We now illustrate this by using the street–garden game
of Chapter 3. Specifically, we (1) change the rules of the game from sequential to simultaneous moves and then (2) keep the moves sequential but show
and analyze the game in its strategic form. First we reproduce the tree of that
sequential-move game (Figure 3.6) as Figure 6.10 here and remind you of the
rollback equilibrium.
The equilibrium strategy of the first mover (Emily) is simply a move, “Don’t
contribute.” The second mover chooses from among four possible strategies
(choice of two responses at each of two nodes) and chooses the strategy “Don’t
contribute (D) if Emily has chosen her Contribute, and Contribute (C) if Emily
has chosen her Don’t contribute,” or, more simply, “D if C, C if D,” or even more simply “DC.”
[Game tree: Emily moves first at node a, choosing Contribute or Don’t. Nina moves second, at node b after Emily’s Contribute or at node c after Emily’s Don’t. Talia moves last, at nodes d through g. Payoffs are listed in the order Emily, Nina, Talia:
Contribute, Contribute, Contribute: 3, 3, 3
Contribute, Contribute, Don’t: 3, 3, 4
Contribute, Don’t, Contribute: 3, 4, 3
Contribute, Don’t, Don’t: 1, 2, 2
Don’t, Contribute, Contribute: 4, 3, 3
Don’t, Contribute, Don’t: 2, 1, 2
Don’t, Don’t, Contribute: 2, 2, 1
Don’t, Don’t, Don’t: 2, 2, 2]

FIGURE 6.10 The Street–Garden Game with Sequential Moves
Talia has 16 available strategies (choice of two responses at
each of four nodes), and her equilibrium strategy is “D following Emily’s C and
Nina’s C, C following their CD, C following their DC, and D following their DD,”
or “DCCD” for short.
Remember, too, the reason for these choices. The first mover has the opportunity to choose Don’t, knowing that the other two will recognize that the nice
garden won’t be forthcoming unless they contribute and that they like the nice
garden sufficiently strongly that they will contribute.
Now we change the rules of the game to make it a simultaneous-move game.
(In Chapter 4, we solved a simultaneous-move version with somewhat different
payoffs; here we keep the payoffs the same as in Chapter 3.) The payoff matrix is
in Figure 6.11. Best-response analysis shows very easily that there are four Nash
equilibria.
In three of the Nash equilibria of the simultaneous-move game, two players
contribute, while the third does not. These equilibria are similar to the rollback
equilibrium of the sequential-move game. In fact, each one corresponds to the
rollback equilibrium of the sequential game with a particular order of play. Further, any given order of play in the sequential-move version of this game leads
to the same simultaneous-move payoff table.
But there is also a fourth Nash equilibrium here, where no one contributes.
Given the specified strategies of the other two—namely, Don’t contribute—any
one player is powerless to bring about the nice garden and therefore chooses
not to contribute as well. Thus, in the change from sequential to simultaneous
moves, the first-mover advantage has been lost. Multiple equilibria arise, only
one of which retains the original first mover’s high payoff.
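Best-response analysis over the eight cells of Figure 6.11 can be automated in the same spirit. This sketch (ours) encodes the garden rule implied by the payoffs—contributors to a nice garden get 3, free riders on a nice garden get 4, and without a nice garden non-contributors get 2 and contributors get 1—and prints the four equilibria:

```python
from itertools import product

players = ["Emily", "Nina", "Talia"]

def payoffs(profile):  # profile[i] is True if player i contributes
    nice = sum(profile) >= 2  # the garden is nice if at least two contribute
    return tuple((3 if c else 4) if nice else (1 if c else 2) for c in profile)

def is_nash(profile):
    for i in range(3):  # no player gains by unilaterally flipping her choice
        flipped = list(profile)
        flipped[i] = not flipped[i]
        if payoffs(tuple(flipped))[i] > payoffs(profile)[i]:
            return False
    return True

for profile in product([True, False], repeat=3):
    if is_nash(profile):
        contributors = [p for p, c in zip(players, profile) if c]
        print(contributors or "no one contributes")
```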
Next we return to the sequential-move version—Emily first, Nina second, and Talia third—but show the game in its normal or strategic form. In the
sequential-move game, Emily has 2 pure strategies, Nina has 4, and Talia has 16;
so this means constructing a payoff table that is 2 by 4 by 16.
TALIA chooses Contribute:

                          NINA
                Contribute     Don’t
  EMILY  Contribute   3, 3, 3    3, 4, 3
         Don’t        4, 3, 3    2, 2, 1

TALIA chooses Don’t Contribute:

                          NINA
                Contribute     Don’t
  EMILY  Contribute   3, 3, 4    1, 2, 2
         Don’t        2, 1, 2    2, 2, 2

FIGURE 6.11 The Street–Garden Game with Simultaneous Moves
With the use of the same conventions as we used for three-player tables in Chapter 4, this particular game would require a table with 16 “pages” of two-by-four payoff tables.
That would look too messy; so we opt instead for a reshuffling of the players.
Let Talia be the row player, Nina be the column player, and Emily be the page
player. Then “all” that is required to illustrate this game is the 16 by 4 by 2 game
table shown in Figure 6.12. The order of payoffs still corresponds to our earlier
convention in that they are listed row, column, page player; in our example, that
means the payoffs are now listed in the order Talia, Nina, and Emily.
As in the monetary–fiscal policy game between the Fed and Congress, there are multiple Nash equilibria in the strategic form of the sequential street–garden game.
Payoffs are listed in the order Talia (row), Nina (column), Emily (page).

                      EMILY: Contribute                    EMILY: Don’t
          NINA:  CC      CD      DC      DD        CC      CD      DC      DD
 TALIA
  CCCC         3,3,3   3,3,3   3,4,3   3,4,3     3,3,4   1,2,2   3,3,4   1,2,2
  CCCD         3,3,3   3,3,3   3,4,3   3,4,3     3,3,4   2,2,2   3,3,4   2,2,2
  CCDC         3,3,3   3,3,3   3,4,3   3,4,3     2,1,2   1,2,2   2,1,2   1,2,2
  CDCC         3,3,3   3,3,3   2,2,1   2,2,1     3,3,4   1,2,2   3,3,4   1,2,2
  DCCC         4,3,3   4,3,3   3,4,3   3,4,3     3,3,4   1,2,2   3,3,4   1,2,2
  CCDD         3,3,3   3,3,3   3,4,3   3,4,3     2,1,2   2,2,2   2,1,2   2,2,2
  CDDC         3,3,3   3,3,3   2,2,1   2,2,1     2,1,2   1,2,2   2,1,2   1,2,2
  DDCC         4,3,3   4,3,3   2,2,1   2,2,1     3,3,4   1,2,2   3,3,4   1,2,2
  CDCD         3,3,3   3,3,3   2,2,1   2,2,1     3,3,4   2,2,2   3,3,4   2,2,2
  DCDC         4,3,3   4,3,3   3,4,3   3,4,3     2,1,2   1,2,2   2,1,2   1,2,2
  DCCD         4,3,3   4,3,3   3,4,3   3,4,3     3,3,4   2,2,2   3,3,4   2,2,2
  CDDD         3,3,3   3,3,3   2,2,1   2,2,1     2,1,2   2,2,2   2,1,2   2,2,2
  DCDD         4,3,3   4,3,3   3,4,3   3,4,3     2,1,2   2,2,2   2,1,2   2,2,2
  DDCD         4,3,3   4,3,3   2,2,1   2,2,1     3,3,4   2,2,2   3,3,4   2,2,2
  DDDC         4,3,3   4,3,3   2,2,1   2,2,1     2,1,2   1,2,2   2,1,2   1,2,2
  DDDD         4,3,3   4,3,3   2,2,1   2,2,1     2,1,2   2,2,2   2,1,2   2,2,2

FIGURE 6.12 Street–Garden Game in Strategic Form
(In Exercise S8, we ask you to find them all.) But there is only one
subgame-perfect equilibrium, corresponding to the rollback equilibrium found
in Figure 6.10. Although best-response analysis does find all of the Nash equilibria, iterated elimination of dominated strategies can reduce the number of reasonable equilibria for us here. This process works because elimination identifies
those strategies that include noncredible components (such as “High always”
for the Fed in Section 3.B). As it turns out, such elimination can take us all the
way to the unique subgame-perfect equilibrium.
In Figure 6.12, we start with Talia and eliminate all of her (weakly) dominated
strategies. This step eliminates all but the strategy listed in the eleventh row of
the table, DCCD, which we have already identified as Talia’s rollback equilibrium
strategy. Elimination can continue with Nina, for whom we must compare outcomes from strategies across both pages of the table. To compare her CC to CD,
for example, we look at the payoffs associated with CC in both pages of the table
and compare these payoffs with the similarly identified payoffs for CD. For Nina,
the elimination process leaves only her strategy DC; again, this is the rollback
equilibrium strategy found for her above. Finally, Emily has only to compare the
two remaining cells associated with her choice of Don’t and Contribute; she gets
the highest payoff when she chooses Don’t and so makes that choice. As before,
we have identified her rollback equilibrium strategy.
The unique subgame-perfect outcome in the game table in Figure 6.12
thus corresponds to the cell associated with the rollback equilibrium strategies
for each player. Note that the process of iterated elimination that leads us to this
subgame-perfect equilibrium is carried out by considering the players in reverse
order of the actual play of the game. This order conforms to the order in which
player actions are considered in rollback analysis and therefore allows us to eliminate exactly those strategies, for each player, that are not consistent with rollback.
In so doing, we eliminate all of the Nash equilibria that are not subgame-perfect.
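The process just described can be carried out mechanically. The sketch below (our construction, not from the text) generates the players’ contingent strategies, recomputes the payoffs of Figure 6.12 from the same garden rule, and prunes weakly dominated strategies player by player in reverse order of play; what survives is exactly the rollback profile Don’t, “DC,” and “DCCD”:

```python
from itertools import product

moves = "CD"  # Contribute, Don't
E = list(moves)                      # Emily: a single move
N = list(product(moves, repeat=2))   # Nina: responses to Emily's C, D
T = list(product(moves, repeat=4))   # Talia: responses to histories CC, CD, DC, DD

def payoff(e, ns, ts):
    n = ns[0] if e == "C" else ns[1]
    t = ts[{"CC": 0, "CD": 1, "DC": 2, "DD": 3}[e + n]]
    acts = (e, n, t)
    nice = acts.count("C") >= 2
    return tuple((3 if a == "C" else 4) if nice else (1 if a == "C" else 2)
                 for a in acts)      # payoffs in the order Emily, Nina, Talia

def prune(strats, others, idx, u):
    """Drop strategies weakly dominated by another strategy in the list."""
    return [s for s in strats
            if not any(all(u(a, o)[idx] >= u(s, o)[idx] for o in others) and
                       any(u(a, o)[idx] > u(s, o)[idx] for o in others)
                       for a in strats if a != s)]

# Eliminate in reverse order of play: Talia, then Nina, then Emily.
T = prune(T, list(product(E, N)), 2, lambda t, o: payoff(o[0], o[1], t))
N = prune(N, list(product(E, T)), 1, lambda n, o: payoff(o[0], n, o[1]))
E = prune(E, list(product(N, T)), 0, lambda e, o: payoff(e, o[0], o[1]))
print(E, N, T)  # ['D'] [('D', 'C')] [('D', 'C', 'C', 'D')]
```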
SUMMARY
Many games include multiple components, some of which entail simultaneous
play and others of which entail sequential play. In two-stage (and multistage)
games, a “tree house” can be used to illustrate the game; this construction allows the identification of the different stages of play and the ways in which
those stages are linked together. Full-fledged games that arise in later stages of
play are called subgames of the full game.
Changing the rules of a game to alter the timing of moves may or may not
alter the equilibrium outcome of a game. Simultaneous-move games that are
changed to make moves sequential may have the same outcome (if both players
have dominant strategies), may have a first-mover or second-mover advantage,
or may lead to an outcome in which both players are better off. The sequential
version of a simultaneous game will generally have a unique rollback equilibrium even if the simultaneous version has no equilibrium or multiple equilibria.
Similarly, a sequential-move game that has a unique rollback equilibrium may
have several Nash equilibria when the rules are changed to make the game a simultaneous-move game.
Simultaneous-move games can be illustrated in a game tree by collecting decision nodes in information sets when players make decisions without knowing
at which specific node they find themselves. Similarly, sequential-move games
can be illustrated by using a game table; in this case, each player’s full set of
strategies must be carefully identified. Solving a sequential-move game from its
strategic form may lead to many possible Nash equilibria. The number of potential equilibria can be reduced by using the criterion of credibility to eliminate some strategies as possible equilibrium strategies. This process leads to the subgame-perfect equilibrium (SPE) of the sequential-move game. These solution
processes also work for games with additional players.
KEY TERMS
continuation (198)
credibility (199)
information set (195)
off-equilibrium path (198)
off-equilibrium subgame (198)
subgame (185)
subgame-perfect equilibrium (SPE) (199)
SOLVED EXERCISES
S1.
Consider the simultaneous-move game with two players that has no Nash
equilibrium in pure strategies, illustrated in Figure 4.13 in Chapter 4.
If the game were transformed into a sequential-move game, would you
expect that game to exhibit a first-mover advantage, a second-mover advantage, or neither? Explain your reasoning.
S2.
Consider the game represented by the game tree below. The first mover,
Player 1, may move either Up or Down, after which Player 2 may move
either Left or Right. Payoffs for the possible outcomes appear below. Reexpress this game in strategic (table) form. Find all of the pure-strategy
Nash equilibria in the game. If there are multiple equilibria, indicate
which one is subgame-perfect. For those equilibria that are not subgame-perfect, identify the reason (the source of the lack of credibility).
[Game tree: Player 1 chooses Up or Down; Player 2 then chooses Left or Right at the node that follows. Payoffs (Player 1, Player 2): Up, Left: 2, 4; Up, Right: 4, 1; Down, Left: 3, 3; Down, Right: 1, 2.]
S3.
Consider the Airbus–Boeing game in Exercise S4 in Chapter 3. Show that
game in strategic form and locate all of the Nash equilibria. Which one
of the equilibria is subgame-perfect? For those equilibria that are not
subgame-perfect, identify the source of the lack of credibility.
S4.
Return to the two-player game tree in part (a) of Exercise S2 in Chapter 3.
(a) Write the game in strategic form, making Scarecrow the row player
and Tinman the column player.
(b) Find the Nash equilibrium.
S5.
Return to the two-player game tree in part (b) of Exercise S2 in Chapter 3.
(a) Write the game in strategic form. (Hint: Refer to your answer to Exercise S2 of Chapter 3.) Find all of the Nash equilibria. There will be
many.
(b) For the equilibria that you found in part (a) that are not subgame-perfect, identify the credibility problems.
S6.
Return to the three-player game tree in part (c) of Exercise S2 in Chapter 3.
(a) Draw the game table. Make Scarecrow the row player, Tinman the column player, and Lion the page player. (Hint: Refer to your answer to
Exercise S2 of Chapter 3.) Find all of the Nash equilibria. There will be
many.
(b) For the equilibria that you found in part (a) that are not subgame-perfect, identify the credibility problems.
S7.
Consider a simplified baseball game played between a pitcher and a batter. The pitcher chooses between throwing a fastball or a curve, while the
batter chooses which pitch to anticipate. The batter has an advantage
if he correctly anticipates the type of pitch. In this constant-sum game,
the batter’s payoff is the probability that the batter will get a base hit. The
pitcher’s payoff is the probability that the batter fails to get a base hit,
which is simply one minus the payoff of the batter. There are four potential outcomes:
(i) If a pitcher throws a fastball, and the batter guesses fastball, the
probability of a hit is 0.300.
(ii) If the pitcher throws a fastball, and the batter guesses curve, the
probability of a hit is 0.200.
(iii) If the pitcher throws a curve, and the batter guesses curve, the probability of a hit is 0.350.
(iv) If the pitcher throws a curve, and the batter guesses fastball, the
probability of a hit is 0.150.
Suppose that the pitcher is “tipping” his pitches. This means that the
pitcher is holding the ball, positioning his body, or doing something else
in a way that reveals to the batter which pitch he is going to throw. For
our purposes, this means that the pitcher-batter game is a sequential
game in which the pitcher announces his pitch choice before the batter
has to choose his strategy.
(a) Draw this situation, using a game tree.
(b) Suppose that the pitcher knows he is tipping his pitches but can’t
stop himself from doing so. Thus, the pitcher and batter are playing
the game you just drew. Find the rollback equilibrium of this game.
(c) Now change the timing of the game, so that the batter has to reveal
his action (perhaps by altering his batting stance) before the pitcher
chooses which pitch to throw. Draw the game tree for this situation,
and find the rollback equilibrium.
Now assume that the tips of each player occur so quickly that neither
opponent can react to them, so that the game is in fact simultaneous.
(d) Draw a game tree to represent this simultaneous game, indicating
information sets where appropriate.
(e) Draw the game table for the simultaneous game. Is there a Nash
equilibrium in pure strategies? If so, what is it?
S8.
The street–garden game analyzed in Section 4 of this chapter has a
16‑by‑4‑by‑2 game table when the sequential-move version of the game
is expressed in strategic form, as in Figure 6.12. There are many Nash
equilibria to be found in this table.
(a) Use best-response analysis to find all of the Nash equilibria in the
table in Figure 6.12.
(b) Identify the subgame-perfect equilibrium from among your set of all
Nash equilibria. Other equilibrium outcomes look identical to the
subgame-perfect one—they entail the same payoffs for each of the
three players—but they arise after different combinations of strategies. Explain how this can happen. Describe the credibility problems that arise in the nonsubgame-perfect equilibria.
S9.
As it appears in the text, Figure 6.1 represents the two-stage game between CrossTalk and GlobalDialog with a combination of tables and
trees. Instead, represent the entire two-stage game in a single, very large
game tree. Be careful to label which player makes the decision at each
node, and remember to draw information sets between nodes where
necessary.
S10.
Recall the mall location game in Exercise S9 in Chapter 3. That three-player sequential game has a game tree that is similar to the one for the
street–garden game, shown in Figure 6.10.
(a) Draw the tree for the mall location game. How many strategies does
each store have?
(b) Illustrate the game in strategic form and find all of the pure-strategy
Nash equilibria in the game.
(c) Use iterated dominance to find the subgame-perfect equilibrium.
(Hint: Reread the last two paragraphs of Section 4.)
S11.
The rules of the mall location game, analyzed in Exercise S10 above, specify that when all three stores request space in Urban Mall, the two bigger (more prestigious) stores get the available spaces. The original version
of the game also specifies that the firms move sequentially in requesting
mall space.
(a) Suppose that the three firms make their location requests simultaneously. Draw the payoff table for this version of the game and find
all of the Nash equilibria. Which one of these equilibria do you think
is most likely to be played in practice? Explain.
Now suppose that when all three stores simultaneously request
Urban Mall, the two spaces are allocated by lottery, giving each store an
equal chance of getting into Urban Mall. With such a system, each would
have a two-thirds probability (or a 66.67% chance) of getting into Urban
Mall when all three had requested space there, and a one-third probability (33.33% chance) of being alone in the Rural Mall.
(b) Draw the game table for this new version of the simultaneous-play
mall location game. Find all of the Nash equilibria of the game.
Which one of these equilibria do you think is most likely to be
played in practice? Explain.
(c) Compare and contrast the equilibria found in part (b) with the equilibria found in part (a). Do you get the same Nash equilibria? Why or
why not?
S12.
Return to the game of Monica and Nancy in Exercise S10 of Chapter 5.
Assume that Monica and Nancy choose their effort levels sequentially instead of simultaneously. Monica commits to her choice of effort first, and
on observing this decision, Nancy commits to her own effort.
(a) What is the subgame-perfect equilibrium to the game where the
joint profits are 4m + 4n + mn, the effort costs to Monica and Nancy are m² and n², respectively, and Monica commits to an effort
level first?
(b) Compare the payoffs of Monica and Nancy with those found in
Exercise S10 of Chapter 5. Does this game have a first-mover or a
second-mover advantage? Explain.
S13.
Extending Exercise S12, Monica and Nancy need to decide which (if either) of them will commit to an effort level first. To do this, each of them
simultaneously writes on a separate slip of paper whether or not she will
commit first. If they both write “yes” or they both write “no,” they choose
effort levels simultaneously, as in Exercise S10 in Chapter 5. If Monica
writes “yes” and Nancy writes “no,” then Monica commits to her move
first, as in Exercise S12. If Monica writes “no” and Nancy writes “yes,”
then Nancy commits to her move first.
(a) Use the payoffs to Monica and Nancy in Exercise S12 above as well
as in Exercise S10 in Chapter 5 to construct the game table for the
first-stage paper-slip decision game. (Hint: Note the symmetry of
the game.)
(b) Find the pure-strategy Nash equilibria of this first-stage game.
UNSOLVED EXERCISES
U1.
Consider a game in which there are two players, A and B. Player A moves
first and chooses either Up or Down. If A chooses Up, the game is over,
and each player gets a payoff of 2. If A moves Down, then B gets a turn
and chooses between Left and Right. If B chooses Left, both players get 0;
if B chooses Right, A gets 3 and B gets 1.
(a) Draw the tree for this game, and find the subgame-perfect
equilibrium.
(b) Show this sequential-play game in strategic form, and find all of the
Nash equilibria. Which is or are subgame-perfect? Which is or are
not? If any are not, explain why.
(c) What method of solution could be used to find the subgame-perfect
equilibrium from the strategic form of the game? (Hint: Refer to the
last two paragraphs of Section 4.)
U2.
Return to the two-player game tree in part (a) of Exercise U2 in Chapter 3.
(a) Write the game in strategic form, making Albus the row player and
Minerva the column player. Find all of the Nash equilibria.
(b) For the equilibria you found in part (a) of this exercise that are not
subgame-perfect, identify the credibility problems.
U3.
Return to the two-player game tree in part (b) of Exercise U2 in Chapter 3.
(a) Write the game in strategic form. Find all of the Nash equilibria.
(b) For the equilibria you found in part (a) that are not subgame-perfect,
identify the credibility problems.
U4.
Return to the two-player game tree in part (c) of Exercise U2 in Chapter 3.
(a) Draw the game table. Make Albus the row player, Minerva the
column player, and Severus the page player. Find all of the Nash
equilibria.
(b) For the equilibria you found in part (a) that are not subgame-perfect,
identify the credibility problems.
U5.
Consider the cola industry, in which Coke and Pepsi are the two dominant
firms. (To keep the analysis simple, just forget about all the others.) The
market size is $8 billion. Each firm can choose whether to advertise. Advertising costs $1 billion for each firm that chooses it. If one firm advertises
and the other doesn’t, then the former captures the whole market. If both
firms advertise, they split the market 50:50 and pay for the advertising. If
neither advertises, they split the market 50:50 but without the expense of
advertising.
(a) Write the payoff table for this game, and find the equilibrium when
the two firms move simultaneously.
(b) Write the game tree for this game (assume that it is played sequentially), with Coke moving first and Pepsi following.
(c) Is either equilibrium in parts (a) and (b) better from the joint perspective of Coke and Pepsi? How could the two firms do better?
U6.
Along a stretch of a beach are 500 children in five clusters of 100 each.
(Label the clusters A, B, C, D, and E in that order.) Two ice-cream vendors
are deciding simultaneously where to locate. They must choose the exact
location of one of the clusters.
If there is a vendor in a cluster, all 100 children in that cluster will buy
an ice cream. For clusters without a vendor, 50 of the 100 children are
willing to walk to a vendor who is one cluster away, only 20 are willing to
walk to a vendor two clusters away, and no children are willing to walk
the distance of three or more clusters. The ice cream melts quickly, so the
walkers cannot buy for the nonwalkers.
If the two vendors choose the same cluster, each will get a 50% share
of the total demand for ice cream. If they choose different clusters, then
those children (locals or walkers) for whom one vendor is closer than the
other will go to the closer one, and those for whom the two are equidistant will split 50% each. Each vendor seeks to maximize her sales.
(a) Construct the five-by-five payoff table for the vendor location
game; the entries stated here will give you a start and a check on
your calculations:
If both vendors choose to locate at A, each sells 85 units.
If the first vendor chooses B and the second chooses C, the first
sells 150 and the second sells 170.
If the first vendor chooses E and the second chooses B, the first
sells 150 and the second sells 200.
(b) Eliminate dominated strategies as far as possible.
(c) In the remaining table, locate all pure-strategy Nash equilibria.
(d) If the game is altered to one with sequential moves, where the first
vendor chooses her location first and the second vendor follows,
what are the locations and the sales that result from the subgameperfect equilibrium? How does the change in the timing of moves
here help players resolve the coordination problem in part (c)?
U7.
Return to the game among the three lions in the Roman Colosseum in
Exercise S8 in Chapter 3.
(a) Write out this game in strategic form. Make Lion 1 the row player,
Lion 2 the column player, and Lion 3 the page player.
(b) Find the Nash equilibria for the game. How many did you find?
(c) You should have found Nash equilibria that are not subgame-perfect.
For each of those equilibria, which lion is making a noncredible
threat? Explain.
U8.
Now assume that the mall location game (from Exercises S9 in Chapter 3
and S10 in this chapter) is played sequentially but with a different order
of play: Big Giant, then Titan, then Frieda’s.
(a) Draw the new game tree.
(b) What is the subgame-perfect equilibrium of the game? How does
it compare to the subgame-perfect equilibrium for Exercise S9 in
Chapter 3?
(c) Now write the strategic form for this new version of the game.
(d) Find all of the Nash equilibria of the game. How many are there?
How does this compare with the number of equilibria from Exercise
S10 in this chapter?
U9.
Return to the game of Monica and Nancy in Exercise U10 of Chapter 5.
Assume that Monica and Nancy choose their effort levels sequentially
instead of simultaneously. Monica commits to her choice of effort first.
On observing this decision, Nancy commits to her own effort.
(a) What is the subgame-perfect equilibrium to the game where the
joint profits are 5m + 4n + mn, the effort costs to Monica and Nancy are m² and n², respectively, and Monica commits to an effort
level first?
(b) Compare the payoffs of Monica and Nancy with those found in
Exercise U10 of Chapter 5. Does this game have a first-mover or
second-mover advantage?
(c) Using the same joint profit function as in part (a), find the subgameperfect equilibrium for the game where Nancy must commit first to
an effort level.
U10.
In an extension of Exercise U9, Monica and Nancy need to decide which (if
either) of them will commit to an effort level first. To do this, each of them
simultaneously writes on a separate slip of paper whether or not she will
commit first. If they both write “yes” or they both write “no,” they choose
effort levels simultaneously, as in Exercise U10 in Chapter 5. If Monica
writes “yes” and Nancy writes “no,” they play the game in part (a) of Exercise U9 above. If Monica writes “no” and Nancy writes “yes,” they play the
game in part (c).
(a) Use the payoffs to Monica and Nancy in parts (b) and (c) in Exercise
U9 above, as well as those in Exercise U10 in Chapter 5, to construct
the game table for the first-stage paper-slip decision game.
(b) Find the pure-strategy Nash equilibria of this first-stage game.
U11.
In the faraway town of Saint James two firms, Bilge and Chem, compete
in the soft-drink market (Coke and Pepsi aren’t in this market yet). They
sell identical products, and since their good is a liquid, they can easily
choose to produce fractions of units. Since they are the only two firms
in this market, the price of the good (in dollars), P, is determined by P = (30 – QB – QC), where QB is the quantity produced by Bilge and QC is the
quantity produced by Chem (each measured in liters). At this time both
firms are considering whether to invest in new bottling equipment that
will lower their variable costs.
(i) If firm j decides not to invest, its cost will be Cj = Qj²/2, where j
stands for either B (Bilge) or C (Chem).
(ii) If a firm decides to invest, its cost will be Cj = 20 + Qj²/6, where j
stands for either B (Bilge) or C (Chem). This new cost function reflects the fixed cost of the new machines (20) as well as the lower
variable costs.
The two firms make their investment choices simultaneously, but
the payoffs in this investment game will depend on the subsequent duopoly games that arise. The game is thus really a two-stage game: decide
to invest, and then play a duopoly game.
(a) Suppose both firms decide to invest. Write the profit functions in
terms of QB and QC for the two firms. Use these to find the Nash
equilibrium of the quantity-setting game. What are the equilibrium
quantities and profits for both firms? What is the market price?
(b) Now suppose both firms decide not to invest. What are the equilibrium quantities and profits for both firms? What is the market price?
(c) Now suppose that Bilge decides to invest, and Chem decides not
to invest. What are the equilibrium quantities and profits for both
firms? What is the market price?
(d) Write out the two-by-two game table of the investment game between the two firms. Each firm has two strategies: Investment and No
Investment. The payoffs are simply the profits found in parts (a),
(b), and (c). (Hint: Note the symmetry of the game.)
(e) What is the subgame-perfect equilibrium of the overall two-stage
game?
U12.
Two French aristocrats, Chevalier Chagrin and Marquis de Renard, fight
a duel. Each has a pistol loaded with one bullet. They start 10 steps apart
and walk toward each other at the same pace, 1 step at a time. After each
step, either may fire his gun. When one shoots, the probability of scoring
a hit depends on the distance. After k steps it is k/5, and so it rises from
0.2 after the first step to 1 (certainty) after 5 steps, at which point they
are right up against one another. If one player fires and misses while the
other has yet to fire, the walk must continue even though the bulletless
one now faces certain death; this rule is dictated by the code of the aristocracy. Each gets a payoff of –1 if he himself is killed and 1 if the other is
killed. If neither or both are killed, each gets 0.
This is a game with five sequential steps and simultaneous moves
(shoot or not shoot) at each step. Find the rollback (subgame-perfect)
equilibrium of this game.
Hint: Begin at step 5, when the duelists are right up against one another. Set up the two-by-two table for the simultaneous-move game at this
step, and find its Nash equilibrium. Now move back to step 4, where the
probability of scoring a hit is 45, or 0.8, for each. Set up the two-by-two
table for the simultaneous-move game at this step, correctly specifying
in the appropriate cell what happens in the future. For example, if one
shoots and misses, but the other does not shoot, then the other will wait
until step 5 and score a sure hit. If neither shoots, then the game will go
to the next step, for which you have already found the equilibrium. Using
all this information, find the payoffs in the two-by-two table of step 4,
and find the Nash equilibrium at this step. Work backward in the same
way through the rest of the steps to find the Nash equilibrium strategies
of the full game.
U13.
Describe an example of business competition that is similar in structure
to the duel in Exercise U12.
7
■
Simultaneous-Move Games:
Mixed Strategies

In our study of simultaneous-move games in Chapter 4, we came across a
class of games that the solution methods described there could not solve; in
fact, games in that class have no Nash equilibria in pure strategies. To predict
outcomes for such games, we need an extension of our concepts of strategies
and equilibria. This is to be found in the randomization of moves, which is the
focus of this chapter.
Consider the tennis-point game from the end of Chapter 4. This game is zero
sum; the interests of the two tennis players are exactly opposite. Evert wants to
hit her passing shot to whichever side—down the line (DL) or crosscourt (CC)—
is not covered by Navratilova, whereas Navratilova wants to cover the side to
which Evert hits her shot. In Chapter 4, we pointed out that in such a situation,
any systematic choice by Evert will be exploited by Navratilova to her own advantage and therefore to Evert’s disadvantage. Conversely, Evert can exploit any
systematic choice by Navratilova. To avoid being thus exploited, each player
wants to keep the other guessing, which can be done by acting unsystematically
or randomly.
However, randomness doesn’t mean choosing each shot half the time or alternating between the two. The latter would itself be a systematic action open
to exploitation, and a 60–40 or 75–25 random mix may be better than 50–50 depending on the situation. In this chapter, we develop methods for calculating
the best mix and discuss how well this theory helps us understand actual play in
such games.
Our method for calculating the best mix can also be applied to non-zero-sum
games. However, in such games the players’ interests can partially coincide,
so when player B exploits A’s systematic choice to her own advantage, it is not
necessarily to A’s disadvantage. Therefore, the logic of keeping the other player
guessing is weaker or even absent altogether in non-zero-sum games. We will
discuss whether and when mixed-strategy equilibria make sense in such games.
We start this chapter with a discussion of mixing in two-by-two games
and with the most direct method for calculating best responses and finding a
mixed-strategy equilibrium. Many of the concepts and methods we develop in
Section 2 continue to be valid in more general games, and Sections 6 and 7 extend these methods to games where players may have more than two pure strategies. We conclude with some general observations about how to mix strategies
in practice and with some evidence on whether mixing is observed in reality.
1 WHAT IS A MIXED STRATEGY?
When players choose to act unsystematically, they pick from among their pure
strategies in some random way. In the tennis-point game, Navratilova and Evert
each choose from two initially given pure strategies, DL and CC. We call a random mixture of these two pure strategies a mixed strategy.
Such mixed strategies cover a whole continuous range. At one extreme, DL
could be chosen with probability 1 (for sure), meaning that CC is never chosen
(probability 0); this “mixture” is just the pure strategy DL. At the other extreme,
DL could be chosen with probability 0 and CC with probability 1; this “mixture”
is the same as pure CC. In between is the whole set of possibilities: DL chosen
with probability 75% (0.75) and CC with probability 25% (0.25); or both chosen
with probabilities 50% (0.5) each; or DL with probability 1/3 (33.33 . . . %) and
CC with probability 2/3 (66.66 . . . %); and so on.1
The payoffs from a mixed strategy are defined as the corresponding
probability-weighted averages of the payoffs from its constituent pure strategies.
For example, in the tennis game of Section 7 of Chapter 4, against Navratilova’s
DL, Evert’s payoff from DL is 50 and from CC is 90. Therefore, the payoff of Evert’s
1
When a chance event has just two possible outcomes, people often speak of the odds in favor of or
against one of the outcomes. If the two possible outcomes are labeled A and B, and the probability of
A is p so that the probability of B is (1 − p), then the ratio p/(1 − p) gives the odds in favor of A, and
the reverse ratio (1 − p)/p gives the odds against A. Thus, when Evert chooses CC with probability
0.25 (25%), the odds against her choosing CC are 3 to 1, and the odds in favor of it are 1 to 3. This
terminology is often used in betting contexts, so those of you who misspent your youth in that way
will be more familiar with it. However, this usage does not readily extend to situations in which three
or more outcomes are possible, so we avoid its use here.
mixture (0.75 DL, 0.25 CC) against Navratilova's DL is 0.75 × 50 + 0.25 × 90 =
37.5 + 22.5 = 60. This is Evert's expected payoff from this particular mixed
strategy.2
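The arithmetic of such probability-weighted averages is easy to automate. Here is a minimal Python sketch (our own illustration, with names of our choosing) that reproduces the calculation just made:

    # Expected payoff of a mixed strategy against one fixed choice by the opponent:
    # the probability-weighted average of the pure-strategy payoffs.
    def expected_payoff(mix, payoffs):
        """mix: probabilities over own pure strategies; payoffs: the payoff of
        each pure strategy against the opponent's fixed choice."""
        return sum(prob * u for prob, u in zip(mix, payoffs))

    # Evert's (0.75 DL, 0.25 CC) mixture against Navratilova covering DL:
    print(expected_payoff([0.75, 0.25], [50, 90]))  # -> 60.0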
The probability of choosing one or the other pure strategy is a continuous
variable that ranges from 0 to 1. Therefore, mixed strategies are just special kinds
of continuously variable strategies like those we studied in Chapter 5. Each pure
strategy is an extreme special case where the probability of choosing that pure
strategy equals 1.
The notion of Nash equilibrium also extends easily to include mixed strategies. Nash equilibrium is defined as a list of mixed strategies, one for each player,
such that the choice of each is her best choice, in the sense of yielding the highest expected payoff for her, given the mixed strategies of the others. Allowing
for mixed strategies in a game solves the problem of possible nonexistence of
Nash equilibrium, which we encountered for pure strategies, automatically and
almost entirely. Nash’s celebrated theorem shows that, under very general circumstances (which are broad enough to cover all the games that we meet in this
book and many more besides), a Nash equilibrium in mixed strategies exists.
At this broadest level, therefore, incorporating mixed strategies into our
analysis does not entail anything different from the general theory of continuous strategies developed in Chapter 5. However, the special case of mixed strategies does bring with it several special conceptual as well as methodological matters and therefore deserves separate study.
2 MIXING MOVES
We begin with the tennis example of Section 7 of Chapter 4, which did not have
a Nash equilibrium in pure strategies. We show how the extension to mixed
strategies remedies this deficiency, and we interpret the resulting equilibrium as
one in which each player keeps the other guessing.
A. The Benefit of Mixing
We reproduce in Figure 7.1 the payoff matrix of Figure 4.14. In this game, if Evert
always chooses DL, Navratilova will then cover DL and hold Evert’s payoff down
2
Game theory assumes that players will calculate and try to maximize their expected payoffs when
probabilistic mixtures of strategies or outcomes are included. We consider this further in the appendix to this chapter, but for now we proceed to use it, with just one important note. The word
expected in “expected payoff” is a technical term from probability and statistics. It merely denotes
a probability-weighted average. It does not mean this is the payoff that the player should expect in
the sense of regarding it as her right or entitlement.
                          NAVRATILOVA
                        DL          CC
    EVERT    DL       50, 50      80, 20
             CC       90, 10      20, 80

FIGURE 7.1 No Equilibrium in Pure Strategies
to 50. Similarly, if Evert always chooses CC, Navratilova will choose to cover CC
and hold Evert down to 20. If Evert can only choose one of her two basic (pure)
strategies and Navratilova can predict that choice, Evert’s better (or less bad)
pure strategy will be DL, yielding her a payoff of 50.
But suppose Evert is not restricted to using only pure strategies and can
choose a mixed strategy, perhaps one in which the probability of playing DL on
any one occasion is 75%, or 0.75; this makes her probability of playing CC 25%,
or 0.25. Using the method outlined in Section 1, we can calculate Navratilova’s
expected payoff against this mixture as
0.75 × 50 + 0.25 × 10 = 37.5 + 2.5 = 40 if she covers DL, and
0.75 × 20 + 0.25 × 80 = 15 + 20 = 35 if she covers CC.
If Evert chooses this 75–25 mixture, the expected payoffs show that Navratilova
can best exploit it by covering DL.
When Navratilova chooses DL to best exploit Evert’s 75–25 mix, her choice
works to Evert’s disadvantage because this is a zero-sum game. Evert’s expected
payoffs are
0.75 × 50 + 0.25 × 90 = 37.5 + 22.5 = 60 if Navratilova covers DL, and
0.75 × 80 + 0.25 × 20 = 60 + 5 = 65 if Navratilova covers CC.
By choosing DL, Navratilova holds Evert down to 60 rather than 65. But notice
that Evert’s payoff with the mixture is still better than the 50 she would get by
playing purely DL or the 20 she would get by playing purely CC.3
The 75–25 mix, while improving Evert’s expected payoff relative to her pure
strategies, does leave Evert’s strategy open to some exploitation by Navratilova.
By choosing to cover DL she can hold Evert down to a lower expected payoff
than when she chooses CC. Ideally, Evert would like to find a mix that would
3
Not every mixed strategy will perform better than the pure strategies. For example, if Evert mixes
50–50 between DL and CC, Navratilova can hold Evert’s expected payoff down to 50, exactly the
same as from pure DL. And a mixture that attaches a probability of less than 30% to DL will be worse
for Evert than pure DL. We ask you to verify these statements as a useful exercise to acquire the skill
of calculating expected payoffs and comparing strategies.
be exploitation proof—a mix that would leave Navratilova no obvious choice of
pure strategy to use against it. Evert’s exploitation-proof mixture must have the
property that Navratilova gets the same expected payoff against it by covering
DL or CC; it must keep Navratilova indifferent between her two pure strategies.
We call this the opponent’s indifference property; it is the key to mixed-strategy
equilibria in non-zero-sum games, as we see later in this chapter.
To find the exploitation-proof mix requires taking a more general approach
to describing Evert’s mixed strategy so that we can solve algebraically for the appropriate mixture probabilities. For this approach, we denote the probability of
Evert choosing DL by the algebraic symbol p, so the probability of choosing CC
is 1 − p. We refer to this mixture as Evert's p-mix for short.
Against the p-mix, Navratilova’s expected payoffs are
50p + 10(1 − p) if she covers DL, and
20p + 80(1 − p) if she covers CC.
For Evert’s strategy, her p-mix, to be exploitation proof, these two expected
payoffs for Navratilova should be equal. That implies 50p + 10(1 − p) =
20p + 80(1 − p); or 30p = 70(1 − p); or 100p = 70; or p = 0.7. Thus, Evert's
exploitation-proof mix uses DL with probability 70% and CC with probability 30%. With these mixture probabilities, Navratilova gets the same expected
payoff from each of her pure strategies and therefore cannot exploit any one
of them to her advantage (or Evert’s disadvantage in this zero-sum game). And
Evert’s expected payoff from this mixed strategy is
50 × 0.7 + 90 × 0.3 = 35 + 27 = 62 if Navratilova covers DL, and also
80 × 0.7 + 20 × 0.3 = 56 + 6 = 62 if Navratilova covers CC.
This expected payoff is better than the 50 that Evert would get if she used
the pure strategy DL and better than the 60 from the 75–25 mixture. We now
know this mixture is exploitation proof, but is it Evert’s optimal or equilibrium
mixture?
B. Best Responses and Equilibrium
To find the equilibrium mixtures in this game, we return to the method of best-response analysis originally described in Chapter 4 and extended to games with
continuous strategies in Chapter 5. Our first task is to identify Evert’s best response to—her best choice of p for—each of Navratilova’s possible strategies.
Since those strategies can also be mixed, they are similarly described by the
probability with which she covers DL. Label this q, so 1 − q is the probability
that Navratilova covers CC. We refer to Navratilova’s mixed strategy as her q-mix
and now look for Evert’s best choice of p at each of Navratilova’s possible choices
of q.
Using Figure 7.1, we see that Evert’s p-mix gets her the expected payoff
50p + 90(1 − p) if Navratilova chooses DL, and
80p + 20(1 − p) if Navratilova chooses CC.
Therefore, against Navratilova's q-mix, Evert's expected payoff is
[50p + 90(1 − p)]q + [80p + 20(1 − p)](1 − q).
Rearranging the terms, Evert's expected payoff becomes
[50q + 80(1 − q)]p + [90q + 20(1 − q)](1 − p)
= [90q + 20(1 − q)] + [50q + 80(1 − q) − 90q − 20(1 − q)]p
= [20 + 70q] + [60 − 100q]p,
and we use this expected payoff to help us find Evert's best-response values of p.
We are trying to identify the p that maximizes Evert’s payoff at each value of
q, so the key question is how her expected payoff expression varies with p. What
matters is the coefficient on p: [60 − 100q]. Specifically, it matters whether that
coefficient is positive (in which case Evert's expected payoff increases as p increases) or negative (in which case Evert's expected payoff decreases as p increases). Clearly, the sign of the coefficient depends on q, the critical value of q
being the one that makes 60 − 100q = 0. That q value is 0.6.
When Navratilova's q < 0.6, [60 − 100q] is positive, Evert's expected payoff
increases as p increases, and her best choice is p = 1, or the pure strategy DL.
Similarly, when Navratilova's q > 0.6, Evert's best choice is p = 0, or the pure
strategy CC. If Navratilova's q = 0.6, Evert gets the same expected payoff regardless of p, and any mixture between DL and CC is just as good as any other; any p
from 0 to 1 can be a best response. We summarize this for future reference:
If q < 0.6, best response is p = 1 (pure DL).
If q = 0.6, any p-mix is a best response.
If q > 0.6, best response is p = 0 (pure CC).
As a quick confirmation of intuition, observe that when q is low (Navratilova is
sufficiently unlikely to cover DL), Evert should choose DL, and when q is high
(Navratilova is sufficiently likely to cover DL), Evert should choose CC. The exact
sense of "sufficiently," and therefore the switching point q = 0.6, of course depends on the specific payoffs in the example.4
We said earlier that mixed strategies are just a special kind of continuous
strategy, with the probability being the continuous variable. Now we have found
Evert’s best p corresponding to each of Navratilova’s choices of q. In other words,
4
If, in some numerical problem you are trying to solve, the expected payoff lines for the pure strategies do not intersect, that would indicate that one pure strategy was best for all of the opponent’s
mixtures. Then this player’s best response would always be that pure strategy.
[FIGURE 7.2 Best Responses and Equilibrium in the Tennis Point: three panels on the unit square. The left panel graphs Evert's best-response p against q (p = 1 for q < 0.6, any p at q = 0.6, p = 0 for q > 0.6); the middle panel graphs Navratilova's best-response q against p (q = 0 for p < 0.7, any q at p = 0.7, q = 1 for p > 0.7); the right panel superimposes the two curves, which cross at p = 0.7, q = 0.6.]
we have found Evert’s best-response rule, and we can graph it exactly as we did
in Chapter 5.
We show this graph in the left-hand panel of Figure 7.2, with q on the horizontal axis and p on the vertical axis. Both are probabilities, limited to the range
from 0 to 1. For q less than 0.6, p is at its upper limit of 1; for q greater than 0.6,
p is at its lower limit of 0. At q = 0.6, all values of p between 0 and 1 are equally
“best” for Evert; therefore the best response is the vertical line between 0 and 1.
This is a new flavor of best-response graph; unlike the steadily rising or falling
lines or curves of Chapter 5, it is flat over two intervals of q and jumps down in a
step at the point where the two intervals meet. But conceptually it is just like any
other best-response graph.
Similarly, Navratilova’s best-response rule—her best q-mix corresponding to
each of Evert’s p-mixes—can be calculated; we leave this for you to do so you
can consolidate your understanding of the idea and the algebra. You should also
check the intuition of Navratilova’s choices as we did for Evert. We just state the
result:
If p < 0.7, best response is q = 0 (pure CC).
If p = 0.7, any q-mix is a best response.
If p > 0.7, best response is q = 1 (pure DL).
This best-response rule for Navratilova is graphed in the middle panel of Figure 7.2.
The right-hand panel in Figure 7.2 combines the other two panels by reflecting the left graph across the diagonal (the p = q line) so that p is on the horizontal
axis and q on the vertical axis and then superimposing this graph on the middle
graph. Now the blue and black curves meet at exactly one point, where p = 0.7
and q = 0.6. Here each player's mixture choice is a best response to the other's
choice, so the pair constitutes a Nash equilibrium in mixed strategies.
This representation of best-response rules includes pure strategies as special cases corresponding to the extreme values of p and q. So we can see that the
best-response curves do not have any points in common at any of the sides of
the square where each value of p and q equals either 0 or 1; this shows us that
the game does not have any pure-strategy equilibria, as we checked directly in
Section 7 of Chapter 4. The mixed-strategy equilibrium in this example is the
unique Nash equilibrium in the game.
You can also calculate Navratilova’s exploitation-proof choice of q using the
same method as we used in Section 2.A for finding Evert’s exploitation-proof p.
You will get the answer q = 0.6. Thus, the two exploitation-proof choices are indeed best responses to each other, and they are the Nash equilibrium mixtures
for the two players.
In fact, if all you want to do is to find a mixed-strategy equilibrium of a
zero-sum game where each player has just two pure strategies, you don’t have to
go through the detailed construction of best-response curves, graph them, and
look for their intersection. You can write down the exploitation-proofness equations from Section 2.A for each player’s mixture and solve them. If the solution
has both probabilities between 0 and 1, you have found what you want. If the
solution includes a probability that is negative, or greater than 1, then the game
does not have a mixed-strategy equilibrium; you should go back and look for a
pure-strategy equilibrium. For games where a player has more than two pure
strategies, we examine solution techniques in Sections 6 and 7.
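For readers who like to verify such calculations by computer, here is a minimal Python sketch of this shortcut (our own illustration, with function and variable names of our choosing). It solves the two opponent-indifference equations for any two-by-two game, assuming the denominators below are nonzero, and reports failure when a solution falls outside the range from 0 to 1:

    def mixed_equilibrium_2x2(A, B):
        """A[i][j], B[i][j]: Row's and Column's payoffs when Row plays i and
        Column plays j. Returns (p, q): Row's probability of row 0 and Column's
        probability of column 0, from the opponent's indifference property."""
        p = (B[1][1] - B[1][0]) / (B[0][0] - B[1][0] - B[0][1] + B[1][1])
        q = (A[1][1] - A[0][1]) / (A[0][0] - A[0][1] - A[1][0] + A[1][1])
        if not (0 <= p <= 1 and 0 <= q <= 1):
            return None  # no interior mixed equilibrium; look for a pure one
        return p, q

    # The tennis point of Figure 7.1 (zero-sum: Navratilova's percentages
    # are 100 minus Evert's):
    A = [[50, 80], [90, 20]]                   # Evert's success percentages
    B = [[100 - a for a in row] for row in A]  # Navratilova's percentages
    print(mixed_equilibrium_2x2(A, B))         # -> (0.7, 0.6)

For the tennis point it returns Evert's p = 0.7 and Navratilova's q = 0.6, matching the calculations above.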
3 NASH EQUILIBRIUM AS A SYSTEM OF BELIEFS AND RESPONSES
When the moves in a game are simultaneous, neither player can respond to
the other’s actual choice. Instead, each takes her best action in light of what
she thinks the other might be choosing at that instant. In Chapter 4, we called
such thinking a player’s belief about the other’s strategy choice. We then interpreted Nash equilibrium as a configuration where such beliefs are correct, so
each chooses her best response to the actual actions of the other. This concept
proved useful for understanding the structures and outcomes of many important types of games, most notably the prisoners’ dilemma, coordination games,
and chicken.
However, in Chapter 4 we considered only pure-strategy Nash equilibria.
Therefore, a hidden assumption went almost unremarked—namely, that each
player was sure or confident in her belief that the other would choose a particular pure strategy. Now that we are considering more general mixed strategies,
the concept of belief requires a corresponding reinterpretation.
Players may be unsure about what others might be doing. In the coordination game in Chapter 4, in which Harry wanted to meet Sally, he might be unsure whether she would go to Starbucks or Local Latte, and his belief might
be that there was a 50–50 chance that she would go to either one. And in the
tennis example, Evert might recognize that Navratilova was trying to keep her
(Evert) guessing and would therefore be unsure of which of the available actions
Navratilova would play. In Chapter 2, Section 4, we labeled this as strategic uncertainty, and in Chapter 4 we mentioned that such uncertainty can give rise to
mixed-strategy equilibria. Now we develop this idea more fully.
It is important, however, to distinguish between being unsure and having
incorrect beliefs. For example, in the tennis example, Navratilova cannot be sure
of what Evert is choosing on any one occasion. But she can still have correct
beliefs about Evert’s mixture—namely, about the probabilities with which Evert
chooses between her two pure strategies. Having correct beliefs about mixed actions means knowing or calculating or guessing the correct probabilities with
which the other player chooses from among her underlying basic or pure actions. In the equilibrium of our example, it turned out that Evert’s equilibrium
mixture was 70% DL and 30% CC. If Navratilova believes that Evert will play DL
with 70% probability and CC with 30% probability, then her belief, although uncertain, will be correct in equilibrium.
Thus, we have an alternative and mathematically equivalent way to define
Nash equilibrium in terms of beliefs: each player forms beliefs about the probabilities of the mixture that the other is choosing and chooses her own best response to this. A Nash equilibrium in mixed strategies occurs when the beliefs
are correct, in the sense just explained.
In the next section, we consider mixed strategies and their Nash equilibria
in non-zero-sum games. In such games, there is no general reason that the other
player’s pursuit of her own interests should work against your interests. Therefore, it is not in general the case that you would want to conceal your intentions
from the other player, and there is no general argument in favor of keeping the
other player guessing. However, because moves are simultaneous, each player
may still be subjectively unsure of what action the other is taking and therefore
may have uncertain beliefs that in turn lead her to be unsure about how she
should act. This can lead to mixed-strategy equilibria, and their interpretation in
terms of subjectively uncertain but correct beliefs proves particularly important.
4 MIXING IN NON-ZERO-SUM GAMES
The same mathematical method used to find mixed-strategy equilibria in zero-sum games—namely, exploitation-proofness or the opponent's indifference
property—can be applied to non-zero-sum games as well, and it can reveal
mixed-strategy equilibria in some of them. However, in such games the players’
interests may coincide to some extent. Therefore, the fact that the other player
will exploit your systematic choice of strategy to her advantage need not work
out to your disadvantage, as was the case with zero-sum interactions. In a coordination game of the kind we studied in Chapter 4, for example, the players
are better able to coordinate if each can rely on the other’s acting systematically; random actions only increase the risk of coordination failure. As a result,
mixed-strategy equilibria have a weaker rationale, and sometimes no rationale at
all, in non-zero-sum games. Here we examine mixed-strategy equilibria in some
prominent non-zero-sum games and discuss their relevance or lack thereof.
A. Will Harry Meet Sally? Assurance, Pure Coordination, and Battle of the Sexes
We illustrate mixing in non-zero-sum games by using the assurance version of
the meeting game. For your convenience, we reproduce its table (Figure 4.11) as
Figure 7.3. We consider the game from Sally’s perspective first. If she is confident
that Harry will go to Starbucks, she also should go to Starbucks. If she is confident that Harry will go to Local Latte, so should she. But if she is unsure about
Harry’s choice, what is her own best choice?
To answer this question, we must give a more precise meaning to the uncertainty in Sally’s mind. (The technical term for this uncertainty, in the theory
of probability and statistics, is her subjective uncertainty. In the context where
the uncertainty is about another player’s action in a game, it is also strategic
uncertainty; recall the distinctions we discussed in Chapter 2, Section 2.D.) We
gain precision by stipulating the probability with which Sally thinks Harry will
choose one café or the other. The probability of Harry’s choosing Local Latte can
be any real number between 0 and 1 (that is, between 0% and 100%). We cover
all possible cases by using algebra, letting the symbol p denote the probability
(in Sally’s mind) that Harry chooses Starbucks; the variable p can take on any
real value between 0 and 1. Then (1 − p) is the probability (again in Sally's mind)
that Harry chooses Local Latte. In other words, we describe Sally’s strategic uncertainty as follows: she thinks that Harry is using a mixed strategy, mixing the
two pure strategies, Starbucks and Local Latte, in proportions or probabilities p
and (1 − p), respectively. We call this mixed strategy Harry's p-mix, even though
for the moment it is purely an idea in Sally’s mind.
                             SALLY
                     Starbucks    Local Latte
    HARRY  Starbucks    1, 1         0, 0
           Local Latte  0, 0         2, 2

FIGURE 7.3 Assurance
Given her uncertainty, Sally can calculate the expected payoffs from her actions when they are played against her belief about Harry’s p-mix. If she chooses
Starbucks, it will yield her 1 × p + 0 × (1 − p) = p. If she chooses Local Latte, it
will yield her 0 × p + 2 × (1 − p) = 2 × (1 − p). When p is high, p > 2(1 − p); so
if Sally is fairly sure that Harry is going to Starbucks, then she does better by also
going to Starbucks. Similarly, when p is low, p < 2(1 − p); if Sally is fairly sure
that Harry is going to Local Latte, then she does better by going to Local Latte. If
p = 2(1 − p), or 3p = 2, or p = 2/3, the two choices give Sally the same expected
payoff. Therefore, if she believes that p = 2/3, she might be unsure about her
own choice, so she might dither between the two.
Harry can figure this out, and that makes him unsure about Sally’s choice.
Thus, Harry also faces subjective strategic uncertainty. Suppose in his mind
Sally will choose Starbucks with probability q and Local Latte with probability
(1 − q). Similar reasoning shows that Harry should choose Starbucks if q > 2/3
and Local Latte if q < 2/3. If q = 2/3, he will be indifferent between the two actions and unsure about his own choice.
Now we have the basis for a mixed-strategy equilibrium with p = 2/3 and
q = 2/3. In such an equilibrium, these p and q values are simultaneously the
actual mixture probabilities and the subjective beliefs of each player about the
other’s mixture probabilities. The correct beliefs sustain each player’s own indifference between the two pure strategies and therefore each player’s willingness
to mix between the two. This matches exactly the concept of a Nash equilibrium
as a system of self-fulfilling beliefs and responses described in Section 3.
The key to finding the mixed-strategy equilibrium is that Sally is willing to
mix between her two pure strategies only if her subjective uncertainty about
Harry’s choice is just right—that is, if the value of p in Harry’s p-mix is just right.
Algebraically, this idea is borne out by solving for the equilibrium value of p
by using the equation p = 2(1 − p), which ensures that Sally gets the same expected payoff from her two pure strategies when each is matched against Harry's p-mix. When the equation holds in equilibrium, it is as if Harry's mixture
probabilities are doing the job of keeping Sally indifferent. We emphasize the
“as if” because in this game, Harry has no reason to keep Sally indifferent; the
outcome is merely a property of the equilibrium. Still, the general idea is worth
remembering: in a mixed-strategy Nash equilibrium, each person’s mixture
probabilities keep the other player indifferent between her pure strategies. We
derived this opponent’s indifference property in the zero-sum discussion above,
and now we see that it remains valid even in non-zero-sum games.
However, the mixed-strategy equilibrium has some very undesirable properties in the assurance game. First, it yields both players rather low expected
payoffs. The formulas for Sally’s expected payoffs from her two actions, p and
2(1 2 p), both equal 23 when p 5 23. Similarly, Harry’s expected payoffs
against Sally’s equilibrium q-mix for q 5 23 are also both 23. Thus, each player
gets 23 in the mixed-strategy equilibrium. In Chapter 4, we found two purestrategy equilibria for this game; even the worse of them (both choosing Starbucks) yields the players 1 each, and the better one (both choosing Local Latte)
yields them 2 each.
The reason the two players fare so badly in the mixed-strategy equilibrium
is that when they choose their actions independently and randomly, they create
a significant probability of going to different places; when that happens, they
do not meet, and each gets a payoff of 0. Harry and Sally fail to meet if one goes
to Starbucks and the other goes to Local Latte or vice versa. The probability of
this happening when both are using their equilibrium mixtures is 2 × (2/3) ×
(1/3) = 4/9.5 Similar problems exist in the mixed-strategy equilibria of most
non-zero-sum games.
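A quick simulation makes this cost of independent randomization concrete. The Python sketch below (again our own illustration) draws both players' equilibrium mixes many times and estimates the meeting probability and the expected payoff:

    import random

    # Simulate the mixed-strategy equilibrium of the assurance game:
    # both players choose Starbucks with probability 2/3, independently.
    def simulate(trials=100_000, p=2/3, q=2/3, seed=0):
        rng = random.Random(seed)
        meetings, total_payoff = 0, 0
        for _ in range(trials):
            harry = "S" if rng.random() < p else "L"
            sally = "S" if rng.random() < q else "L"
            if harry == sally:
                meetings += 1
                total_payoff += 1 if harry == "S" else 2  # Starbucks 1, Local Latte 2
        return meetings / trials, total_payoff / trials

    print(simulate())  # roughly (0.556, 0.667): they fail to meet about 4/9
                       # of the time, and each expects only about 2/3.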
A second undesirable property of the mixed-strategy equilibrium here
is that it is very fragile. If either player departs ever so slightly from the exact
values p 5 23 or q 5 23, the best choice of the other tips to one pure strategy. Once one player chooses a pure strategy, then the other also does better by
choosing the same pure strategy, and play moves to one of the two pure-strategy
equilibria. This instability of mixed-strategy equilibria is also common to many
non-zero-sum games. However, some important non-zero-sum games do have
mixed-strategy equilibria that are not so fragile. One example considered later
in this chapter and in Chapter 12 is the mixed-strategy equilibrium in the game
chicken, which has an interesting evolutionary interpretation.
Given the analysis of the mixed-strategy equilibrium in the assurance version of the meeting game, you can now probably guess the mixed-strategy
equilibria for the related non-zero-sum meeting games. In the pure-coordination
version (see Figure 4.10), the payoffs from meeting in the two cafés are the
same, so the mixed-strategy equilibrium will have p = 1/2 and q = 1/2. In the
battle-of-the-sexes variant (see Figure 4.12), Sally prefers to meet at Local Latte
because her payoff is 2 rather than the 1 that she gets from meeting at Starbucks.
Her decision hinges on whether her subjective probability of Harry’s going to
Starbucks is greater than or less than 2/3. (Sally's payoffs here are similar to
those in the assurance version, so the critical p is the same.) Harry prefers to
meet at Starbucks, so his decision hinges on whether his subjective probability of Sally's going to Starbucks is greater than or less than 1/3. Therefore, the
mixed-strategy Nash equilibrium has p = 2/3 and q = 1/3.
5
The probability that each chooses Starbucks in equilibrium is 2/3. The probability that each chooses
Local Latte is 1/3. The probability that one chooses Starbucks while the other chooses Local Latte is
(2/3) × (1/3). But that can happen two different ways (once when Harry chooses Starbucks and Sally
chooses Local Latte, and again when the choices are reversed), so the total probability of not meeting
is 2 × (2/3) × (1/3). See the appendix to this chapter for more details on the algebra of probabilities.
B. Will James Meet Dean? Chicken
The non-zero-sum game of chicken also has a mixed-strategy equilibrium that
can be found using the same method developed above, although its interpretations are slightly different. Recall that this is a game between James and Dean,
who are trying to avoid a meeting; the game table, originally introduced in
Figure 4.13, is reproduced here as Figure 7.4.
If we introduce mixed strategies, James’s p-mix will entail a probability p of
swerving and a probability 1 − p of going straight. Against that p-mix, Dean gets
0 × p − 1 × (1 − p) = p − 1 if he chooses Swerve and 1 × p − 2 × (1 − p) = 3p − 2
if he chooses Straight. Comparing the two, we see that Dean does better by
choosing Swerve when p − 1 > 3p − 2, or when 2p < 1, or when p < 1/2, that is,
when p is low and James is more likely to choose Straight. Conversely, when p is
high and James is more likely to choose Swerve, then Dean does better by choosing Straight. If James's p-mix has p exactly equal to 1/2, then Dean is indifferent
between his two pure actions; he is therefore equally willing to mix between the
two. Similar analysis of the game from James's perspective when considering
his options against Dean's q-mix yields the same results. Therefore, p = 1/2 and
q = 1/2 is a mixed-strategy equilibrium of this game.
The properties of this equilibrium have some similarities but also some differences when compared with the mixed-strategy equilibria of the meeting game.
Here, each player’s expected payoff in the mixed-strategy equilibrium is low
(−1/2). This is bad, as was the case in the meeting game, but unlike in that game,
the mixed-strategy equilibrium payoff is not worse for both players than either of
the two pure-strategy equilibria. In fact, because player interests are somewhat
opposed here, each player will do strictly better in the mixed-strategy equilibrium
than in the pure-strategy equilibrium that entails his choosing Swerve.
This mixed-strategy equilibrium is again unstable, however. If James increases his probability of choosing Straight to just slightly above 1/2, this
change tips Dean’s choice to pure Swerve. Then (Straight, Swerve) becomes the
pure-strategy equilibrium. If James instead lowers his probability of choosing
                                    DEAN
                         Swerve (Chicken)   Straight (Tough)
    JAMES  Swerve (Chicken)     0, 0             –1, 1
           Straight (Tough)     1, –1            –2, –2

FIGURE 7.4 Chicken
Straight slightly below 12, Dean chooses Straight, and the game goes to the
other pure-strategy equilibrium.6
In this section, we found mixed-strategy equilibria in several non-zero-sum
games by solving the equations that come from the opponent’s indifference
property. We already know from Chapter 4 that these games also have other equilibria in pure strategies. Best-response curves can give a comprehensive picture,
displaying all Nash equilibria at once. As you already know all of the equilibria
from the two separate analyses, we do not spend time and space graphing the
best-response curves here. We merely note that when there are two pure-strategy
equilibria and one mixed-strategy equilibrium, as in the examples above, you
will find that the best-response curves cross in three different places, one for
each of the Nash equilibria. We also invite you to graph best-response curves for
similar games at the end of this chapter, with full analyses presented (as usual) in
the solutions to the solved exercises.
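If you do want to check the full set of equilibria by computer rather than by graphing, the short Python sketch below (ours, not the text's formal apparatus) tests every cell of chicken for mutual best responses and then adds the mixed-strategy equilibrium from the opponent's indifference property:

    # Find all Nash equilibria of the 2x2 chicken game of Figure 7.4.
    A = [[0, -1], [1, -2]]   # James's payoffs (rows: Swerve, Straight)
    B = [[0, 1], [-1, -2]]   # Dean's payoffs (columns: Swerve, Straight)
    labels = ("Swerve", "Straight")

    # Pure-strategy equilibria: cells where each player is best-responding.
    for i in range(2):
        for j in range(2):
            row_best = A[i][j] >= A[1 - i][j]
            col_best = B[i][j] >= B[i][1 - j]
            if row_best and col_best:
                print("pure:", labels[i], "/", labels[j])

    # Mixed equilibrium from the opponent's indifference property.
    p = (B[1][1] - B[1][0]) / (B[0][0] - B[1][0] - B[0][1] + B[1][1])
    q = (A[1][1] - A[0][1]) / (A[0][0] - A[0][1] - A[1][0] + A[1][1])
    print("mixed: p =", p, " q =", q)
    # -> pure: Swerve / Straight, pure: Straight / Swerve, mixed: p = 0.5, q = 0.5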
5 GENERAL DISCUSSION OF MIXED-STRATEGY EQUILIBRIA
Now that we have seen how to find mixed-strategy equilibria in both zero-sum
and non-zero-sum games, it is worthwhile to consider some additional features
of these equilibria. In particular, we highlight in this section some general properties of mixed-strategy equilibria. We also introduce you to some results that
seem counterintuitive at first, until you fully analyze the game in question.
A. Weak Sense of Equilibrium
The opponent’s indifference property described in Section 2 implies that in a
mixed-strategy equilibrium, each player gets the same expected payoff from
each of her two pure strategies, and therefore also gets the same expected payoff
from any mixture between them. Thus, mixed-strategy equilibria are Nash equilibria only in a weak sense. When one player is choosing her equilibrium mix, the
other has no positive reason to deviate from her own equilibrium mix. But she
would not do any worse if she chose another mix or even one of her pure strategies. Each player is indifferent between her pure strategies, or indeed between
any mixture of them, so long as the other player is playing her correct (equilibrium) mix.
6
In Chapter 12, we consider a different kind of stability, namely evolutionary stability. The question
in the evolutionary context is whether a stable mix of Straight and Swerve choosers can arise and
persist in a population of chicken players. The answer is yes, and the proportions of the two types
are exactly equal to the probabilities of playing each action in the mixed-strategy equilibrium. Thus,
we derive a new and different motivation for that equilibrium in this game.
This seems to undermine the basis for mixed-strategy Nash equilibria as the
solution concept for games. Why should a player choose her appropriate mixture
when the other player is choosing her own? Why not just do the simpler thing by
choosing one of her pure strategies? After all, the expected payoff is the same.
The answer is that to do so would not be a Nash equilibrium; it would not be a
stable outcome, because then the other player would deviate from her mixture.
If Evert says to herself "When Navratilova is choosing her best mix (q = 0.6), I get
the same payoff from DL, CC, or any mixture. So why bother to mix; why don't I
just play DL?" then Navratilova can do better by switching to her pure strategy of
covering DL. Similarly, if Harry chooses pure Starbucks in the assurance meeting game, then Sally can get a higher payoff in equilibrium (1 instead of 2/3) by
switching from her 2/3–1/3 mix to her pure Starbucks as well.
B. Counterintuitive Changes in Mixture Probabilities in Zero-Sum Games
Games with mixed-strategy equilibria may exhibit some features that seem
counterintuitive at first glance. The most interesting of them is the change in the
equilibrium mixes that follow a change in the structure of a game’s payoffs. To illustrate, we return to Evert and Navratilova and their tennis point.
Suppose that Navratilova works on improving her skills covering down the
line to the point where Evert’s success using her DL strategy against Navratilova’s
covering DL drops to 30% from 50%. This improvement in Navratilova’s skill alters the payoff table, including the mixed strategies for each player, from that
illustrated in Figure 7.1. We present the new table in Figure 7.5.
The only change from the table in Figure 7.1 has occurred in the upper-left-hand cell, where our earlier 50 for Evert is now a 30 and the 50 for Navratilova is
now a 70. This change in the payoff table does not lead to a game with a pure-strategy equilibrium because the players still have opposing interests; Navratilova still wants their choices to coincide, and Evert still wants their choices to
differ. We still have a game in which mixing will occur.
But how will the equilibrium mixes in this new game differ from those calculated in Section 2? At first glance, many people would argue that Navratilova
should cover DL more often now that she has gotten so much better at doing so.
Thus, the assumption is that her equilibrium q-mix should be more heavily
                          NAVRATILOVA
                        DL          CC
    EVERT    DL       30, 70      80, 20
             CC       90, 10      20, 80

FIGURE 7.5 Changed Payoffs in the Tennis Point
weighted toward DL, and her equilibrium q should be higher than the 0.6 calculated before.
But when we calculate Navratilova’s q-mix by using the condition of Evert’s
indifference between her two pure strategies, we get 30q + 80(1 − q) = 90q +
20(1 − q), or q = 0.5. The actual equilibrium value for q, 50%, is lower than the original 60%, exactly the opposite of what many people's intuition predicts.
Although the intuition seems reasonable, it misses an important aspect of
the theory of strategy: the interaction between the two players. Evert will also
be reassessing her equilibrium mix after the change in payoffs, and Navratilova
must take the new payoff structure and Evert’s behavior into account when determining her new mix. Specifically, because Navratilova is now so much better
at covering DL, Evert uses CC more often in her mix. To counter that, Navratilova covers CC more often, too.
We can see this more explicitly by calculating Evert's new mixture. Her equilibrium p must keep Navratilova indifferent, equating Evert's success rate when Navratilova covers DL, 30p + 90(1 − p), with her success rate when Navratilova covers CC, 80p + 20(1 − p). So we have
30p + 90(1 − p) = 80p + 20(1 − p), or 90 − 60p = 20 + 60p, or 120p = 70. Thus,
Evert's p must be 7/12, which is 0.583, or 58.3%. Comparing this new equilibrium
p with the original 70% calculated in Section 2 shows that Evert has significantly
decreased the number of times she sends her shot DL in response to Navratilova’s
improved skills. Evert has taken into account the fact that she is now facing an opponent with better DL coverage, and so she does better to play DL less frequently
in her mixture. By virtue of this behavior, Evert makes it better for Navratilova
also to decrease the frequency of her DL play. Evert would now exploit any other
choice of mix by Navratilova, in particular a mix heavily favoring DL.
So is Navratilova’s skill improvement wasted? No, but we must judge it
properly—not by how often one strategy or the other gets used but by the resulting payoffs. When Navratilova uses her new equilibrium mix with q = 0.5, Evert's
success percentage from either of her pure strategies is (30 × 0.5) + (80 × 0.5) =
(90 × 0.5) + (20 × 0.5) = 55. This is less than Evert's success percentage of 62 in
the original example. Thus, Navratilova’s average payoff also rises from 38 to 45,
and she does benefit by improving her DL coverage.
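Both versions of the game can be solved side by side with a few lines of code. The Python sketch below (our own illustration) applies the two indifference conditions to the original and the changed payoff tables:

    # Compare equilibrium mixes before and after Navratilova improves her
    # down-the-line coverage.
    def tennis_mixes(dl_dl):
        """dl_dl: Evert's success percentage when she plays DL and Navratilova
        covers DL (50 in Figure 7.1, 30 in Figure 7.5); other payoffs as in the text."""
        # Evert's indifference, dl_dl*q + 80(1-q) = 90q + 20(1-q), gives Navratilova's q:
        q = 60 / (150 - dl_dl)
        # Navratilova's indifference, (100-dl_dl)p + 10(1-p) = 20p + 80(1-p), gives Evert's p:
        p = 70 / (150 - dl_dl)
        value = dl_dl * q + 80 * (1 - q)   # Evert's equilibrium success percentage
        return p, q, value

    print(tennis_mixes(50))  # -> (0.7, 0.6, 62.0)
    print(tennis_mixes(30))  # -> (0.583..., 0.5, 55.0): both mixes shift away
                             # from DL, and Evert's success rate falls to 55.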
Unlike the counterintuitive result that we saw when we considered Navratilova’s strategic response to the change in payoffs, we see here that her response is
absolutely intuitive when considered in light of her expected payoff. In fact, players’ expected payoff responses to changed payoffs can never be counterintuitive,
although strategic responses, as we have seen, can be.7 The most interesting
7
For a general theory of the effect that changing the payoff in a particular cell has on the equilibrium mixture and the expected payoffs in equilibrium, see Vincent Crawford and Dennis Smallwood, “Comparative Statics of Mixed-Strategy Equilibria in Noncooperative Games,” Theory and
Decision, vol. 16 (May 1984), pp. 225–32.
aspect of this counterintuitive outcome in players’ strategic responses is the
message that it sends to tennis players and to strategic game players more generally. The result here is equivalent to saying that Navratilova should improve
her down-the-line coverage so that she does not have to use it so often.
Next, we present an even more general and more surprising result about
changes in mixture probabilities. The opponent’s indifference condition means
that each player’s equilibrium mixture probabilities depend only on the other
player’s payoffs, not on her own. Consider the assurance game of Figure 7.3.
Suppose Sally’s payoff from meeting in Local Latte increases from 2 to 3, while
all other payoffs remain unchanged. Now, against Harry's p-mix, Sally gets 1 ×
p + 0 × (1 − p) = p if she chooses Starbucks, and 0 × p + 3 × (1 − p) = 3 − 3p
if she chooses Local Latte. Her indifference condition is p = 3 − 3p, or 4p = 3,
or p = 3/4, compared with the value of 2/3 we found earlier for Harry's p-mix
in the original game. The calculation of Harry's indifference condition is unchanged and yields q = 2/3 for Sally's equilibrium strategy. The change in Sally's
payoffs changes Harry’s mixture probabilities, not Sally’s! In Exercise S13, you
will have the opportunity to prove that this is true quite generally: my equilibrium mixing proportions do not change with my own payoffs, only with my opponent’s payoffs.
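A short calculation confirms the claim (again our own sketch): varying Sally's Local Latte payoff moves only Harry's equilibrium mix.

    # Changing Sally's Local Latte payoff moves Harry's mix, not Sally's.
    def meeting_mixes(sally_local_latte):
        """sally_local_latte: Sally's payoff from meeting at Local Latte
        (2 in the original game, 3 after the change)."""
        # Sally's indifference, p = sally_local_latte * (1 - p), pins down Harry's mix:
        p = sally_local_latte / (1 + sally_local_latte)
        # Harry's indifference, q = 2 * (1 - q), is unchanged and pins down Sally's mix:
        q = 2 / 3
        return p, q

    print(meeting_mixes(2))  # -> (0.666..., 0.666...)
    print(meeting_mixes(3))  # -> (0.75, 0.666...): only Harry's mix moves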
C. Risky and Safe Choices in Zero-Sum Games
In sports, some strategies are relatively safe; they do not fail disastrously even if
anticipated by the opponent but do not do very much better even if unanticipated. Other strategies are risky; they do brilliantly if the other side is not prepared for them but fail miserably if the other side is ready. In American football, on third down with a yard to go, a run up the middle is safe and a long pass
is risky. An interesting question arises because some third-and-one situations
have more at stake than others. For example, making the play from your opponent’s 10-yard line has a much greater impact on a possible score than making
the play from your own 20-yard line. The question is, when the stakes are higher,
should you play the risky strategy more or less often than when the stakes are
lower?
To make this concrete, consider the success probabilities shown in Figure
7.6. (Note that, while in the tennis game we used percentages between 0 and
100, here we use probabilities between 0 and 1.) The offense’s safe play is the
run; the probability of a successful first down is 60% if the defense anticipates
a run versus 70% if the defense anticipates a pass. The offense’s risky play is the
pass because the success probability depends much more on what the defense
does; the probability of success is 80% if the defense anticipates a run and only
30% if it anticipates a pass.
                      DEFENSE EXPECTS
                      Run       Pass
    OFFENSE   Run     0.6       0.7
    PLAYS     Pass    0.8       0.3

FIGURE 7.6 Probability of Offense's Success on Third Down with One Yard to Go
Suppose that when the offense succeeds with its play, it earns a payoff equal
to V, and if the play fails the payoff is 0. The payoff V could be some number of
points, such as three for a field-goal situation or seven for a touchdown situation. Alternatively, it could represent some amount of status or money that the
team earns, perhaps V = 100 for succeeding in a game-winning play in an ordinary game or V = 1,000,000 for clinching victory in the Super Bowl.8
The actual game table between Offense and Defense, illustrated in Figure
7.7, contains expected payoffs to each player. Those expected payoffs average
between the success payoff of V and the failure payoff of 0. For example, the expected payoff to the Offense of playing Run when the Defense expects Run is:
0.6 × V + 0.4 × 0 = 0.6V. The zero-sum nature of the game means the Defense's
payoff in that cell is −0.6V. You can similarly compute the expected payoffs for
each other cell of the table to verify that the payoffs shown below are correct.
In the mixed-strategy equilibrium, Offense’s probability p of choosing Run
is determined by the opponent’s indifference property. The correct p therefore
satisfies:
p[−0.6V] + (1 − p)[−0.8V] = p[−0.7V] + (1 − p)[−0.3V].
Notice that we can divide both sides of this equation by V to eliminate V entirely
from the calculation for p.9 Then the simplified equation becomes −0.6p − 0.8(1 − p) = −0.7p − 0.3(1 − p), or 0.1p = 0.5(1 − p). Solving this reduced equation yields p = 5/6, so Offense will play Run with high probability in its optimal
mixture. This safer play is often called the “percentage play” because it is the
normal play in such situations. The risky play (Pass) is played only occasionally
to keep the opponent guessing or, in football commentators’ terminology, “to
keep the defense honest.”
8
Note that V is not necessarily a monetary amount; it can be an amount of utility that captures aversion to risk. We investigate issues pertaining to risk in great detail in Chapter 8 and attitudes toward
risk and expected utility in the appendix to that chapter.
9
This result comes from the fact that we can eliminate V entirely from the opponent’s indifference
equation, so it does not depend on the particular success probabilities specified in Figure 7.6. The
result is therefore quite general for mixed-strategy games where each payoff equals a success probability times a success value.
                        DEFENSE
                     Run            Pass
    OFFENSE   Run    0.6V, –0.6V    0.7V, –0.7V
              Pass   0.8V, –0.8V    0.3V, –0.3V

FIGURE 7.7 The Third-and-One Game
The interesting part of this result is that the expression for p is completely
independent of V. That is, the theory says that you should mix the percentage
play and the risky play in exactly the same proportions on a big occasion as you
would on a minor occasion. This result runs against the intuition of many people. They think that the risky play should be engaged in less often when the occasion is more important. Throwing a long pass on third down with a yard to go
may be fine on an ordinary Sunday afternoon in October, but doing so in the
Super Bowl is too risky.
So which is right: theory or intuition? We suspect that readers will be divided
on this issue. Some will think that the sports commentators are wrong and will
be glad to have found a theoretical argument to refute their claims. Others will
side with the commentators and argue that bigger occasions call for safer play.
Still others may think that bigger risks should be taken when the prizes are bigger, but even they will find no support in the theory, which says that the size of
the prize or the loss should make no difference to the mixture probabilities.
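The independence of the mix from V is easy to confirm numerically. In the Python sketch below (ours), the stakes V cancel out of the indifference condition, so the Run probability is the same whether V is 3 points or a million dollars:

    # The percentage-play mix does not depend on the stakes V.
    def offense_run_probability(V):
        # Defense's indifference between expecting Run and expecting Pass:
        # p(-0.6V) + (1 - p)(-0.8V) = p(-0.7V) + (1 - p)(-0.3V).
        # Every term carries a factor of V, so V cancels from the solution.
        return (0.8 * V - 0.3 * V) / ((0.8 * V - 0.3 * V) + (0.7 * V - 0.6 * V))

    print(offense_run_probability(3))           # -> 0.8333... (p = 5/6)
    print(offense_run_probability(1_000_000))   # -> 0.8333... (the same p = 5/6)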
On many previous occasions when discrepancies between theory and intuition arose, we argued that the discrepancies were only apparent, that they
were the result of failing to make the theory sufficiently general or rich enough
to capture all the features of the situation that created the intuition, and that
improving the theory removed the discrepancy. This one is different: the problem is fundamental to the calculation of payoffs from mixed strategies as
probability-weighted averages or expected payoffs. And almost all of existing
game theory has this starting point.10
10
Vincent P. Crawford, “Equilibrium Without Independence,” Journal of Economic Theory, vol. 50,
no. 1 (February 1990), pp. 127–54; and James Dow and Sergio Werlang, “Nash Equilibrium Under
Knightian Uncertainty,” Journal of Economic Theory, vol. 64, no. 2 (December 1994), pp. 305–24, are
among the few research papers that suggest alternative foundations for game theory. And our exposition of this problem in the first edition of this book inspired an article that uses such new methods on it: Simon Grant, Atsushi Kaji, and Ben Polak, “Third Down and a Yard to Go: Recursive Expected Utility and the Dixit-Skeath Conundrum,” Economic Letters, vol. 73, no. 3 (December 2001),
pp. 275–86. Unfortunately, it uses more advanced concepts than those available at the introductory
level of this book.
6 MIXING WHEN ONE PLAYER HAS THREE OR MORE PURE STRATEGIES
Our discussion of mixed strategies to this point has been confined to games in
which each player has only two pure strategies, as well as mixes between them.
In many strategic situations, each player has available a larger number of pure
strategies, and we should be ready to calculate equilibrium mixes for those cases
as well. However, these calculations get complicated quite quickly. For truly complex games, we would turn to a computer to find the mixed-strategy equilibrium.
But for some small games, it is possible to calculate equilibria by hand quite
easily. The calculation process gives us a better understanding of how the equilibrium works than can be obtained just from looking at a computer-generated
solution. Therefore, in this section and the next one, we solve some larger
games.
Here we consider zero-sum games in which one of the players has only two
pure strategies, whereas the other has more. In such games, we find that the
player who has three (or more) pure strategies typically uses only two of them in
equilibrium. The others do not figure in his mix; they get zero probabilities. We
must determine which ones are used and which ones are not.11
Our example is that of the tennis-point game augmented by giving Evert a
third type of return. In addition to going down the line or crosscourt, she now
can consider using a lob (a slower but higher and longer return). The equilibrium depends on the payoffs of the lob against each of Navratilova’s two defensive stances. We begin with the case that is most likely to arise and then consider
a coincidental or exceptional case.
A. A General Case
Evert now has three pure strategies in her repertoire: DL, CC, and Lob. We leave
Navratilova with just two pure strategies, Cover DL or Cover CC. The payoff table
for this new game can be obtained by adding a Lob row to the table in Figure 7.1.
The result is shown in Figure 7.8. We have assumed that Evert’s payoffs from the
Lob are between the best and the worst she can get with DL and CC, and not too
different against Navratilova’s covering DL or CC. We have shown not only the payoffs from the pure strategies, but also those for Evert’s three pure strategies against
11
Even when a player has only two pure strategies, he may not use one of them in equilibrium. The
other player then generally finds one of his strategies to be better against the one that the first player
does use. In other words, the equilibrium “mixtures” collapse to the special case of pure strategies.
But when one or both players have three or more strategies, we can have a genuinely mixed-strategy
equilibrium where some of the pure strategies go unused.
                                 NAVRATILOVA
                   DL          CC          q-mix
    EVERT   DL   50, 50      80, 20      50q + 80(1 − q), 50q + 20(1 − q)
            CC   90, 10      20, 80      90q + 20(1 − q), 10q + 80(1 − q)
            Lob  70, 30      60, 40      70q + 60(1 − q), 30q + 40(1 − q)

FIGURE 7.8 Payoff Table for Tennis Point with Lob
Navratilova’s q-mix. [We do not show a row for Evert’s p-mix because we don’t need it.
It would require two probabilities, say p1 for DL and p2 for CC, and then that for the
Lob would be (1 − p1 − p2). We show you how to solve for equilibrium mixtures of
this type in the following section.]
Technically, before we begin looking for a mixed-strategy equilibrium, we
should verify that there is no pure-strategy equilibrium. This is easy to do, however, so we leave it to you and turn to mixed strategies.
We will use the logic of best responses to consider Navratilova’s optimal
choice of q. In Figure 7.9 we show Evert’s expected payoffs (success percentages)
from playing each of her pure strategies DL, CC, and Lob as the q in Navratilova’s
q-mix varies over its full range from 0 to 1. These graphs are just those of Evert’s
payoff expressions in the right-hand column of Figure 7.8. For each q, if Navratilova were to choose that q-mix in equilibrium, Evert’s best response would be
to choose the strategy that gives her (Evert) the highest payoff. We show this set
of best-response outcomes for Evert with the thicker lines in Figure 7.9; in mathematical jargon this is the upper envelope of the three payoff lines. Navratilova
wants to choose her own best possible q—the q that makes her own payoff as
large as possible (thereby making Evert’s payoff as low as possible)—from this
set of Evert’s best responses.
To be more precise about Navratilova’s optimal choice of q, we must calculate the coordinates of the kink points in the line showing her worst-case (Evert’s
best-case) outcomes. The value of q at the leftmost kink in this line makes Evert
indifferent between DL and Lob. That q must equate the two payoffs from DL
and Lob when used against the q-mix. Setting those two expressions equal gives
us 50q + 80(1 − q) = 70q + 60(1 − q), or q = 20/40 = 1/2 = 50%. Evert's expected payoff at this point is 50 × 0.5 + 80 × 0.5 = 70 × 0.5 + 60 × 0.5 = 65. At
the second (rightmost) kink, Evert is indifferent between CC and Lob. Thus, the
q value at this kink is the one that equates the CC and Lob payoff expressions.
Setting 90q + 20(1 − q) = 70q + 60(1 − q), we find q = 40/60 = 2/3 = 66.7%.
[FIGURE 7.9 Diagrammatic Solution for Navratilova’s q-Mix. The figure plots Evert’s success percentage from each of DL, CC, and Lob against Navratilova’s q-mix over its range from 0 to 1; the thick upper envelope of the three lines has kinks at q = 0.5 and q = 0.667.]
Here, Evert’s expected payoff is 90 × 0.667 + 20 × 0.333 = 70 × 0.667 + 60 × 0.333 = 66.67. Therefore, Navratilova’s best (or least bad) choice of q is at the left kink, namely q = 0.5. Evert’s expected payoff is 65, so Navratilova’s is 35.

When Navratilova chooses q = 0.5, Evert is indifferent between DL and Lob,
and either of these choices gives her a better payoff than does CC. Therefore,
Evert will not use CC at all in equilibrium. CC will be an unused strategy in her
equilibrium mix.
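If you want to check this upper-envelope reasoning numerically, the short sketch below (our illustration, not part of the text) scans over q, computes Evert’s best payoff from the three lines of Figure 7.8, and finds the q that minimizes that envelope. Only the Python standard library is assumed.

```python
# Sketch: minimize the upper envelope of Evert's three payoff lines
# from Figure 7.8. Each entry is (payoff vs. Cover DL, payoff vs. Cover CC).
lines = {"DL": (50, 80), "CC": (90, 20), "Lob": (70, 60)}

def evert_best(q):
    # Evert's best-response payoff against Navratilova's q-mix.
    return max(a * q + b * (1 - q) for a, b in lines.values())

# Scan a fine grid of q values and keep the one with the lowest envelope value.
best_q = min((i / 1000 for i in range(1001)), key=evert_best)
print(best_q, evert_best(best_q))  # ~0.5 and 65, matching the left kink
```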
Now we can proceed with the equilibrium analysis as if this were a game with
just two pure strategies for each player: DL and CC for Navratilova, and DL and
Lob for Evert. We are back in familiar territory. Therefore, we leave the calculation
to you and just tell you the result. Evert’s optimal mixture in this game entails her
using DL with probability 0.25 and Lob with probability 0.75. Evert’s expected payoff from this mixture, taken against Navratilova’s DL and CC, respectively, is 50 × 0.25 + 70 × 0.75 = 80 × 0.25 + 60 × 0.75 = 65, as of course it should be.
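The omitted two-by-two calculation can be checked with the same kind of sketch (ours, with payoffs read from Figure 7.8); it applies the opponent’s indifference property to find Evert’s mix over DL and Lob.

```python
# Navratilova's payoffs (100 minus Evert's) in the reduced 2x2 game:
#   vs. Evert's DL:  Cover DL -> 50, Cover CC -> 20
#   vs. Evert's Lob: Cover DL -> 30, Cover CC -> 40
# Indifference for Navratilova: 50p + 30(1 - p) = 20p + 40(1 - p),
# where p is the probability Evert plays DL. That reduces to 40p = 10.
p = 10 / 40
print(p)                      # 0.25, as stated in the text
print(50 * p + 70 * (1 - p))  # Evert's payoff vs. Cover DL: 65
print(80 * p + 60 * (1 - p))  # Evert's payoff vs. Cover CC: also 65
```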
We could not have started our analysis with this two-by-two game because
we did not know in advance which of her three strategies Evert would not use.
But we can be confident that in the general case, there will be one such strategy. When the three expected payoff lines take the most general positions, they
intersect pair by pair rather than all crossing at a single point. Then the upper
envelope has the shape that we see in Figure 7.9. Its lowest point is defined by
the intersection of the payoff lines associated with two of the three strategies.
The payoff from the third strategy lies below the intersection at this point, so the
player choosing among the three strategies does not use that third one.
B. Exceptional Cases
The positions and intersections of the three lines of Figure 7.9 depend on the
payoffs specified for the pure strategies. We chose the payoffs for that particular
game to show a general configuration of the lines. But if the payoffs stand in very
specific relationships to each other, we can get some exceptional configurations
with different results. We describe the possibilities here but leave it to you to redraw the diagrams for these cases.
First, if Evert’s payoffs from Lob against Navratilova’s DL and CC are equal,
then the line for Lob is horizontal, and a whole range of q-values make Navratilova’s mixture exploitation-proof. For example, if the two payoffs in the Lob
row of the table in Figure 7.8 are 70 each, then it is easy to calculate that the left kink in a revised Figure 7.9 would be at q = 1/3 and the right kink at q = 5/7. For any q in the range from 1/3 to 5/7, Evert’s best response is Lob, and we get an unusual equilibrium in which Evert plays a pure strategy and Navratilova mixes. Further, Navratilova’s equilibrium mixture probabilities are indeterminate within the range from q = 1/3 to q = 5/7.
Second, if Evert’s payoffs from Lob against Navratilova’s DL and CC are
lower than those of Figure 7.8 by just the right amounts (or those of the other
two strategies are higher by just the right amounts), all three lines can meet in
one point. For example, if the payoffs of Evert’s Lob are 66 and 56 against Navratilova’s DL and CC, respectively, instead of 70 and 60, then for q = 0.6, Evert’s expected payoff from the Lob becomes 66 × 0.6 + 56 × 0.4 = 39.6 + 22.4 = 62, the same as that from DL and CC when q = 0.6. Then Evert is indifferent among all three of her strategies when q = 0.6 and is willing to mix among all three.
In this special case, Evert’s equilibrium mixture probabilities are not fully
determinate. Rather, a whole range of mixtures, including some where all three
strategies are used, can do the job of keeping Navratilova indifferent between
her DL and CC and therefore willing to mix. However, Navratilova must use the
mixture with q = 0.6. If she does not, Evert’s best response will be to switch to
one of her pure strategies, and this will work to Navratilova’s detriment. We do
not dwell on the determination of the precise range over which Evert’s equilibrium mixtures can vary, because this case can only arise for exceptional combinations of the payoff numbers and is therefore relatively unimportant.
Note that Evert’s payoffs from using her Lob against Navratilova’s DL and CC
could be even lower than the values that make all three lines intersect at one
point (for example, if the payoffs from Lob were 75 and 30 instead of 70 and 60
as in Figure 7.8). Then Lob is never the best response for Evert even though it is
not dominated by either DL or CC. This case of Lob being dominated by a mixture of DL and CC is explained in the online appendix to this chapter.
7 MIXING WHEN BOTH PLAYERS HAVE THREE STRATEGIES
When we consider games in which both players have three pure strategies and are
considering mixing among all three, we need two variables to specify each mix.12
The row player’s p-mix would put probability p1 on his first pure strategy and
probability p2 on his second pure strategy. Then the probability of using the third
pure strategy must equal 1 minus the sum of the probabilities of the other two.
The same would be true for the column player’s q-mix. So when both players have three strategies, we cannot find a mixed-strategy equilibrium without doing two-variable algebra. In many cases, however, such algebra is still manageable.

12. More generally, if a player has N pure strategies, then her mix has (N − 1) independent variables, or “degrees of freedom of choice.”
A. Full Mixture of All Strategies
Consider a simplified representation of a penalty kick in soccer. Suppose a
right-footed kicker has just three pure strategies: kick to the left, right, or center. (Left and right refer to the goalie’s left or right. For a right-footed kicker,
the most natural motion would send the ball to the goalie’s right.) Then he
can mix among these strategies, with probabilities denoted by pL, pR, and
pC, respectively. Any two of them can be taken to be the independent variables and the third expressed in terms of them. If pL and pR are made the two
independent-choice variables, then pC = 1 − pL − pR. The goalie also has
three pure strategies—namely, move to the kicker’s left (the goalie’s own
right), move to the kicker’s right, or continue to stand in the center—and can
mix among them with probabilities qL, qR, and qC, two of which can be chosen
independently.
As in Section 6.A, a best-response diagram for this game would require more
than two dimensions. [Four, to be exact. The goalie would choose his two independent variables, say (qL, qR), as his best response to the kicker’s two, (pL, pR),
and vice versa.] Instead, we again use the principle of the opponent’s indifference to focus on the mixture probabilities for one player at a time. Each player’s
probabilities should be such that the other player is indifferent among all the
pure strategies that constitute his mixture. This gives us a set of equations that
can be solved for the mixture probabilities. In the soccer example, the kicker’s
(pL, pR) would satisfy two equations expressing the requirement that the goalie’s expected payoff from using his left should equal that from using his right
and that the goalie’s expected payoff from using his right should equal that from
using his center. (Then the equality of expected payoffs from left and center follows automatically and is not a separate equation.) With more pure strategies,
the number of probabilities to be solved for and the number of equations that they must satisfy also increase.

                     GOALIE
              Left       Center      Right
KICKER
   Left      45, 55     90, 10      90, 10
   Center    85, 15      0, 100     85, 15
   Right     95, 5      95, 5       60, 40

FIGURE 7.10 Soccer Penalty Kick Game
Figure 7.10 shows the game table for the interaction between Kicker and
Goalie, with success percentages as payoffs for each player. (Unlike the evidence
we present on European soccer later in this chapter, these are not real data but
similar rounded numbers to simplify calculations.) Because the kicker wants to
maximize the percentage probability that he successfully scores a goal and the
goalie wants to minimize the probability that he lets the goal through, this is a
zero-sum game. For example, if the kicker kicks to his left while the goalie moves
to the kicker’s left (the top-left-corner cell), we suppose that the kicker still succeeds (in scoring) 45% of the time and the goalie therefore succeeds (in saving a
goal) 55% of the time. But if the kicker kicks to his right and the goalie goes to
the kicker’s left, then the kicker has a 90% chance of scoring; we suppose a 10%
probability that he might kick wide or too high so the goalie is still “successful”
10% of the time. You can experiment with different payoff numbers that you
think might be more appropriate.
It is easy to verify that the game has no equilibrium in pure strategies. So suppose the kicker is mixing with probabilities pL, pR, and pC = 1 − pL − pR. For each
of the goalie’s pure strategies, this mixture yields the goalie the following payoffs:
Left:     55pL + 15pC + 5pR = 55pL + 15(1 − pL − pR) + 5pR
Center:   10pL + 100pC + 5pR = 10pL + 100(1 − pL − pR) + 5pR
Right:    10pL + 15pC + 40pR = 10pL + 15(1 − pL − pR) + 40pR.
The opponent’s indifference rule says that the kicker should choose pL and pR so
that all three of these expressions are equal in equilibrium.
Equating the Left and Right expressions and simplifying, we have 45pL = 35pR, or pR = (9/7)pL. Next, equate the Center and Right expressions and simplify by using the link between pL and pR just obtained. This gives

10pL + 100[1 − pL − (9pL/7)] + 5(9pL/7) = 10pL + 15[1 − pL − (9pL/7)] + 40(9pL/7),

or [85 + 120(9/7)]pL = 85, which yields pL = 0.355.
Then we get pR 5 0.355(97) 5 0.457, and finally pC 5 1 2 0.355 2 0.457 5 0.188.
The goalie’s payoff from any of his pure strategies against this mixture can then
be calculated by using any of the preceding three payoff lines; the result is 24.6.
The goalie’s mixture probabilities can be found by writing down and solving the equations for the kicker’s indifference among his three pure strategies
against the goalie’s mixture. We will do this in detail for a slight variant of the
same game in Section 7.B, so we omit the details here and just give you the answer: qL = 0.325, qR = 0.561, and qC = 0.113. The kicker’s payoff from any of his
pure strategies when played against the goalie’s equilibrium mixture is 75.4.
That answer is, of course, consistent with the goalie’s payoff of 24.6 that we calculated before.
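If you would rather hand the two-variable algebra to a computer, the following sketch (ours; it assumes numpy is available) solves both indifference systems as small linear systems, using the payoffs of Figure 7.10.

```python
import numpy as np

# Goalie's payoffs from Figure 7.10: rows = goalie's strategy (L, C, R),
# columns = kicker's strategy (L, C, R).
G = np.array([[55.0, 15.0, 5.0],
              [10.0, 100.0, 5.0],
              [10.0, 15.0, 40.0]])

# Kicker's mix p = (pL, pC, pR): equalize the goalie's three payoffs
# (two difference equations) and require the probabilities to sum to 1.
A = np.vstack([G[0] - G[1], G[1] - G[2], np.ones(3)])
b = np.array([0.0, 0.0, 1.0])
p = np.linalg.solve(A, b)
print(p)      # ~[0.355, 0.188, 0.457] = (pL, pC, pR)
print(G @ p)  # goalie's payoff ~24.6 from every pure strategy

# The goalie's mix comes from the kicker's payoff matrix (zero-sum: 100 - G.T).
K = 100.0 - G.T
B = np.vstack([K[0] - K[1], K[1] - K[2], np.ones(3)])
q = np.linalg.solve(B, b)
print(q)      # ~[0.325, 0.113, 0.561] = (qL, qC, qR)
print(K @ q)  # kicker's payoff ~75.4 from every pure strategy
```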
Now we can interpret the findings. The kicker does better with his pure Right
than his pure Left, both when the goalie guesses correctly (60 > 45) and when he guesses incorrectly (95 > 90). (Presumably the kicker is left-footed and can kick
harder to his right.) Therefore, the kicker chooses Right with greater probability
and, to counter that, the goalie chooses Right with the highest probability, too.
However, the kicker should not and does not choose his pure-strategy Right; if
he did so, the goalie would then choose his own pure-strategy Right, too, and
the kicker’s payoff would be only 60, less than the 75.4 that he gets in the mixed-strategy equilibrium.
B. Equilibrium Mixtures with Some Strategies Unused
In the preceding equilibrium, the probabilities of using Center in the mix are
quite low for each player. The (Center, Center) combination would result in
a sure save and the kicker would get a really low payoff—namely, 0. Therefore, the kicker puts a low probability on this choice. But then the goalie also
should put a low probability on it, concentrating on countering the kicker’s
more likely choices. But if the kicker gets a sufficiently high payoff from
choosing Center when the goalie chooses Left or Right, then the kicker will
choose Center with some positive probability. If the kicker’s payoffs in the
Center row were lower, he might then choose Center with zero probability;
if so, the goalie would similarly put zero probability on Center. The game
would reduce to one with just two basic pure strategies, Left and Right, for
each player.
We show such a variant of the soccer game in Figure 7.11. The only difference in payoffs between this variant and the original game of Figure 7.10
is that the kicker’s payoffs from (Center, Left) and (Center, Right) have been
lowered even further, from 85 to 70. This might be because this kicker has the
habit of kicking too high and therefore missing the goal when aiming for the
center. Let us try to calculate the equilibrium here by using the same methods as in Section 7.A. This time we do it from the goalie’s perspective: we try
6841D CH07 UG.indd 239
12/18/14 3:12 PM
2 4 0 [ C h . 7 ] s i m u lta n e o u s - m o v e g a m e s : m i x e d s t r at e g i e s
GOALIE
Left
KICKER
Left
45, 55
Center
70, 30
Right
95, 5
Center
Right
90, 10
90, 10
0, 100
95, 5
70, 30
60, 40
FIGURE 7.11 Variant of Soccer Penalty Kick Game
to find his mixture probabilities qL, qR, and qC by using the condition that the
kicker should be indifferent among all three of his pure strategies when played
against this mixture.
The kicker’s payoffs from his pure strategies are

Left:     45qL + 90qC + 90qR = 45qL + 90(1 − qL − qR) + 90qR = 45qL + 90(1 − qL)
Center:   70qL + 0qC + 70qR = 70qL + 70qR
Right:    95qL + 95qC + 60qR = 95qL + 95(1 − qL − qR) + 60qR = 95(1 − qR) + 60qR.
Equating the Left and Right expressions and simplifying, we have 90 − 45qL = 95 − 35qR, or 35qR = 5 + 45qL. Next, equate the Left and Center expressions and simplify to get 90 − 45qL = 70qL + 70qR, or 115qL + 70qR = 90. Substituting for qR from the first of these equations (after multiplying through by 2 to get 70qR = 10 + 90qL) into the second yields 205qL = 80, or qL = 0.390. Then, using this value for qL in either of the equations gives qR = 0.644. Finally, we use both of these values to obtain qC = 1 − 0.390 − 0.644 = −0.034. Because probabilities
cannot be negative, something has obviously gone wrong.
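The negative probability is easy to reproduce numerically. This sketch (ours, assuming numpy is available) sets up the naive three-strategy indifference system for the Figure 7.11 variant and solves it blindly.

```python
import numpy as np

# Kicker's payoffs from Figure 7.11: rows = kicker (L, C, R), cols = goalie (L, C, R).
K = np.array([[45.0, 90.0, 90.0],
              [70.0, 0.0, 70.0],
              [95.0, 95.0, 60.0]])

# Naively equalize the kicker's three payoffs across the goalie's mix q
# and require the three probabilities to sum to 1.
A = np.vstack([K[0] - K[1], K[1] - K[2], np.ones(3)])
b = np.array([0.0, 0.0, 1.0])
print(np.linalg.solve(A, b))  # ~[0.390, -0.034, 0.644]: qC comes out negative
```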
To understand what happens in this example, start by noting that Center
is now a poorer strategy for the kicker than it was in the original version of the
game, where his probability of choosing it was already quite low. But the logic
of the opponent’s indifference, expressed in the equations that led to the solution, means that the kicker has to be kept willing to use this poor strategy. That
can happen only if the goalie is using his best counter to the kicker’s Center—
namely, the goalie’s own Center—sufficiently infrequently. And in this example,
that logic has to be carried so far that the goalie’s probability of Center has to
become negative.
As pure algebra, the solution that we derived may be fine, but it violates the
requirement of probability theory and real-life randomization that probabilities
be nonnegative. The best that can be done in reality is to push the goalie’s probability of choosing Center as low as possible—namely, to zero. But that leaves
the kicker unwilling to use his own Center. In other words, we get a situation in
which each player is not using one of his pure strategies in his mixture—that is,
each is using it with zero probability.
Can there then be an equilibrium in which each player is mixing between
his two remaining strategies—namely, Left and Right? If we regard this reduced
two-by-two game in its own right, we can easily find its mixed-strategy equilibrium. With all the practice that you have had so far, it is safe to leave the details
to you and to state the result:
Kicker’s mixture probabilities: pL = 0.4375, pR = 0.5625
Goalie’s mixture probabilities: qL = 0.3750, qR = 0.6250
Kicker’s expected payoff (success percentage): 73.13
Goalie’s expected payoff (success percentage): 26.87.
We found this result by simply removing the two players’ Center strategies
from consideration on intuitive grounds. But we must check that it is a genuine
equilibrium of the full three-by-three game. That is, we must check that neither
player finds it desirable to bring in his third strategy, given the mixture of two
strategies chosen by the other player.
When the goalie is choosing this particular mixture, the kicker’s payoff from pure Center is 0.375 × 70 + 0.625 × 70 = 70. This payoff is less than the 73.13 that he gets from either of his pure Left or pure Right or any mixture between the two, so the kicker does not want to bring his Center strategy into play. When the kicker is choosing the two-strategy mixture with the preceding probabilities, the goalie’s payoff from pure Center is 0.4375 × 10 + 0.5625 × 5 = 7.2. This number is (well) below the 26.87 that the goalie would get using his pure Left or pure Right or any mixture of the two. Thus, the goalie does not want to bring his Center strategy into play either. The equilibrium that we found for the two-by-two game is indeed an equilibrium of the three-by-three game.
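This support-checking step is also easy to automate; the sketch below (ours, assuming numpy) plugs the proposed two-strategy mixtures into the full three-by-three game and confirms that each player’s unused Center does strictly worse.

```python
import numpy as np

# Kicker's payoffs from Figure 7.11: rows = kicker (L, C, R), cols = goalie (L, C, R).
K = np.array([[45.0, 90.0, 90.0],
              [70.0, 0.0, 70.0],
              [95.0, 95.0, 60.0]])

# Candidate equilibrium from the reduced 2x2 game (Center unused by both).
p = np.array([0.4375, 0.0, 0.5625])   # kicker: (pL, pC, pR)
q = np.array([0.3750, 0.0, 0.6250])   # goalie: (qL, qC, qR)

print(K @ q)              # ~[73.13, 70.00, 73.13]: kicker's Center does worse
print((100.0 - K).T @ p)  # ~[26.88, 7.19, 26.88]: goalie's Center does worse
```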
To allow for the possibility that some strategies may go unused in an equilibrium mixture, we must modify or extend the “opponent’s indifference” principle. Each player’s equilibrium mix should be such that the other player is indifferent among all the strategies that are actually used in his equilibrium mix. The
other player is not indifferent between these and his unused strategies; he prefers the ones used to the ones unused. In other words, against the opponent’s
equilibrium mix, all of the strategies used in your own equilibrium mix should
give you the same expected payoff, which in turn should be higher than what
you would get from any of your unused strategies.
Which strategies will go unused in equilibrium? Answering that requires trial and error, as in our calculation above, or leaving it all to a computer program; once you have understood the concept, it is safe to do the latter.
For the general theory of mixed-strategy equilibria when players can have any
number of possible strategies, see the online appendix to this chapter.
8 HOW TO USE MIXED STRATEGIES IN PRACTICE
There are several important things to remember when finding or using a mixed
strategy in a zero-sum game. First, to use a mixed strategy effectively in such
a game, a player needs to do more than calculate the equilibrium percentages
with which to use each of her actions. Indeed, in our tennis-point game, Evert
cannot simply play DL seven-tenths of the time and CC three-tenths of the time
by mechanically rotating seven shots down the line and three shots crosscourt.
Why not? Because mixing your strategies is supposed to help you benefit from
the element of surprise against your opponent. If you use a recognizable pattern
of plays, your opponent is sure to discover it and exploit it to her advantage.
The lack of a pattern means that, after any history of choices, the probability
of choosing DL or CC on the next turn is the same as it always was. If a run of
several successive DLs happens by chance, there is no sense in which CC is now
“due” on the next turn. In practice, many people mistakenly think otherwise,
and therefore they alternate their choices too much compared with what a truly
random sequence of choices would require: they produce too few runs of identical successive choices. However, detecting a pattern from observed actions is a
tricky statistical exercise that the opponents may not be able to perform while
playing the game. As we will see in Section 9, analysis of data from grand-slam
tennis finals found that servers alternated their serves too much, but receivers
were not able to detect and exploit this departure from true randomization.
The importance of avoiding predictability is clearest in ongoing interactions
of a zero-sum nature. Because of the diametrically opposed interests of the players in such games, your opponent always benefits from exploiting your choice
of action to the greatest degree possible. Thus, if you play the same game against
each other on a regular basis, she will always be on the lookout for ways to break
the code that you are using to randomize your moves. If she can do so, she has
a chance to improve her payoffs in future plays of the game. But even in single-meet (sometimes called one-shot) zero-sum games, mixing remains beneficial because of the element of tactical surprise.
Daniel Harrington, a winner of the World Series of Poker and author with
Bill Robertie of an excellent series of books on how to play Texas Hold ’em tournaments, notes the importance of randomizing your strategy in poker in order
to prevent opponents from reading what cards you’re holding and exploiting
your behavior.13 Because humans often have trouble being unpredictable, he
gives the following advice about how to implement a mixture between the pure
strategies of calling and raising:
It’s hard to remember exactly what you did the last four or five times a given
situation appeared, but fortunately you don’t have to. Just use the little random number generator that you carry around with you all day. What’s that?
You didn’t know you had one? It’s the second hand on your watch. If you
know that you want to raise 80 percent of the time with a premium pair in
early position and call the rest, just glance down at your watch and note the
position of the second hand. Since 80 percent of 60 is 48, if the second hand
is between 0 and 48, you raise, and if it’s between 48 and 60 you just call. The
nice thing about this method is that even if someone knew exactly what you
were doing, they still couldn’t read you!14

13. Poker is a game of incomplete information because each player holds private information about her cards. While we do not analyze the details of such games until Chapter 8, they may involve mixed-strategy equilibria (called semiseparating equilibria) where the random mixtures are specifically designed to prevent other players from using your actions to infer your private information.

14. Daniel Harrington and Bill Robertie, Harrington on Hold ’em: Expert Strategies for No-Limit Tournaments, Volume 1: Strategic Play (Henderson, Nev.: Two Plus Two Publishing, 2004), p. 53.

Of course, in using the second hand of a watch to implement a mixed strategy, it is important that your watch not be so accurate and synchronized that your opponent can use the same watch and figure out what you are going to do!
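In software the same trick is a single independent draw per decision; here is a minimal sketch (ours, not Harrington’s) of the 80–20 raise-or-call mixture using Python’s standard random module.

```python
import random

def raise_or_call(raise_prob=0.8):
    # One independent draw per decision: no history, hence no pattern to read.
    return "raise" if random.random() < raise_prob else "call"

# Ten sample decisions with the 80 percent raising frequency from the quote.
print([raise_or_call() for _ in range(10)])
```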
So far, we have assumed that you are interested in implementing a mixed
strategy in order to avoid possible exploitation by your opponent. But if your
opponent is not playing his equilibrium strategy, you may want to try to exploit
his mistake. A simple example is illustrated using an episode of The Simpsons
in which Bart and Lisa play a game of rock-paper-scissors with each other. (In
Exercise S10, we give a full description of this three-by-three game, and you will
derive each player’s equilibrium mixture.) Just before they choose their strategies, Bart thinks to himself, “Good ol’ Rock. Nothing beats Rock,” while Lisa
thinks to herself, “Poor Bart. He always plays Rock.” Clearly, Lisa’s best response
is the pure strategy Paper against this naive opponent; she need not use her
equilibrium mix.
We have observed a more subtle example of exploitation when pairs of students play a best-of-100 version of the tennis game in this chapter. As with professional tennis players, our students tend to switch strategies too often, apparently thinking that playing five DLs in a row doesn’t look “random” enough. To
exploit this behavior, a Navratilova player could predict that after playing three
DLs in a row, an Evert player is likely to switch to CC, and she can exploit this by
switching to CC herself. She should do this more often than if she were randomizing independently each round, but ideally not so often that the Evert player
notices and starts learning to repeat her strategy in longer runs.
Finally, players must understand and accept the fact that the use of mixed
strategies guards you against exploitation and gives the best possible expected
payoff against an opponent who is making her best choices, but that it is only a
probabilistic average. On particular occasions, you can get poor outcomes. For
example, the long pass on third down with a yard to go, intended to keep the defense honest, may fail on any specific occasion. If you use a mixed strategy in a
situation in which you are responsible to a higher authority, therefore, you may
need to plan ahead for this possibility. You may need to justify your use of such a
strategy ahead of time to your coach or your boss, for example. They need to understand why you have adopted your mixture and why you expect it to yield you
the best possible payoff on average, even though it might yield an occasional low
payoff as well. Even such advance planning may not work to protect your “reputation,” though, and you should prepare yourself for criticism in the face of a bad
outcome.
9 EVIDENCE ON MIXING
A. Zero-Sum Games
Early researchers who performed laboratory experiments were generally dismissive of mixed strategies. To quote Douglas Davis and Charles Holt, “Subjects in
experiments are rarely (if ever) observed flipping coins, and when told ex post
that the equilibrium involves randomization, subjects have expressed surprise
and skepticism.”15 When the predicted equilibrium entails mixing two or more
pure strategies, experimental results do show some subjects in the group pursuing one of the pure strategies and others pursuing another, but this does not
constitute true mixing by an individual player. When subjects play zero-sum
games repeatedly, individual players often choose different pure strategies over
time. But they seem to mistake alternation for randomization—that is, they
switch their choices more often than true randomization would require.
Later research has found somewhat better evidence for mixing in zero-sum
games. When laboratory subjects are allowed to acquire a lot of experience, they
do appear to learn mixing in zero-sum games. However, departures from equilibrium predictions remain significant. Averaged across all subjects, the empirical probabilities are usually rather close to those predicted by equilibrium, but
many individual subjects play proportions far from those predicted by equilibrium. To quote Colin Camerer, “The overall picture is that mixed equilibria do
not provide bad guesses about how people behave, on average.”16
15. Douglas D. Davis and Charles A. Holt, Experimental Economics (Princeton: Princeton University Press, 1993), p. 99.
16. For a detailed account and discussion, see Chapter 3 of Colin F. Camerer, Behavioral Game Theory (Princeton: Princeton University Press, 2003). The quote is from p. 468 of this book.
An instance of randomization in practice comes from Malaya in the late
1940s.17 The British army escorted convoys of food trucks to protect the trucks
from communist terrorist attacks. The terrorists could either launch a large-scale attack or create a smaller sniping incident intended to frighten the truck
drivers and keep them from serving again. The British escort could be either
concentrated or dispersed throughout the convoy. For the army, concentration
was better to counter a full-scale attack, and dispersal was better against sniping. For the terrorists, a full-scale attack was better if the army escort was dispersed, and sniping was better if the escort was concentrated. This zero-sum
game has only a mixed-strategy equilibrium. The escort commander, who had
never heard of game theory, made his decision as follows. Each morning, as
the convoy was forming, he took a blade of grass and concealed it in one of his
hands, holding both hands behind his back. Then he asked one of his troops to
guess which hand held the blade, and he chose the form of the convoy according to whether the man guessed correctly. Although the precise payoff numbers
are difficult to judge and therefore we cannot say whether 50–50 was the right
mixture, the officer had correctly figured out the need for true randomization
and the importance of using a fresh randomization procedure every day to avoid
falling into a pattern or making too much alternation between the choices.
The best evidence in support of mixed strategies in zero-sum games comes
from sports, especially from professional sports, in which players accumulate a
great deal of experience in such games, and their intrinsic desire to win is buttressed by large financial gains from winning.
Mark Walker and John Wooders examined the serve-and-return play of top-level players at Wimbledon.18 They model this interaction as a game with two
players, the server and the receiver, in which each player has two pure strategies.
The server can serve to the receiver’s forehand or backhand, and the receiver
can guess to which side the serve will go and move that way. Because serves are
so fast at the top levels of men’s singles, the receiver cannot react after observing
the actual direction of the serve; rather, the receiver must move in anticipation
of the serve’s direction. Thus, this game has simultaneous moves. Further, because the receiver wants to guess correctly and the server wants to wrong-foot
the receiver, this interaction has a mixed-strategy equilibrium. It is impossible
to observe the receiver’s strategy on a videotape (on which foot is he resting his
weight?), so one cannot easily reconstruct the entire matrix of payoffs to test
whether players are mixing according to the equilibrium predictions. However,
an important prediction of the theory can be tested by calculating the server’s
frequency of winning the point for each of his possible serving strategies.
17. R. S. Beresford and M. H. Peston, “A Mixed Strategy in Action,” Operations Research, vol. 6, no. 4 (December 1955), pp. 173–76.
18. Mark Walker and John Wooders, “Minimax Play at Wimbledon,” American Economic Review, vol. 91, no. 5 (December 2001), pp. 1521–38.
If the tennis players are using their equilibrium mixtures in the serve-and-return game, the server should win the point with the same probability whether
he serves to the receiver’s forehand or backhand. An actual tennis match contains a hundred or more points played by the same two players; thus there is
enough data to test whether this implication holds for each match. Walker and
Wooders tabulated the results of serves in 10 matches. Each match contains four
kinds of serve-and-return combinations: A serving to B and vice versa, combined with service from the right or the left side of the court (Deuce or Ad side).
Thus, they had data on 40 serving situations and found that in 39 of them the
server’s success rates with forehand and backhand serves were equal to within
acceptable limits of statistical error.
The top-level players must have had enough general experience playing the
game, as well as particular experience playing against the specific opponents, to
have learned the general principle of mixing and the correct proportions to mix
against the specific opponents. However, in one respect the servers’ choices departed from true mixing. To achieve the necessary unpredictability, there should
be no pattern of any kind in a sequence of serves: the choice of side for each
serve should be independent of what has gone before. As we said in reference
to the practice of mixed strategies, players can alternate too much, not realizing that alternation is a pattern just as much as repeating the same action a few
times would be a pattern. And indeed, the data show that the tennis servers alternated too much. But the data also indicate that this departure from true mixing was not enough for the opponents to pick up and exploit.
As we showed in Section 8, penalty kicks in soccer are another excellent
context in which to study mixed strategies. The advantage to analyzing penalty
kicks is that one can actually observe the strategies of both the kicker and the
goalkeeper: not only where the kicker aims but also which direction the keeper
dives. This means one can compute the actual mixing probabilities and compare them to the theoretical prediction. The disadvantage, relative to tennis, is
that no two players ever face each other more than a few times in a season. Instead of analyzing specific matchups of players, one must aggregate across all
kickers and goalies in order to get enough data. Two studies using exactly this
kind of data find firm support for predictions of the theory.
Using a large data set from professional soccer leagues in Europe, Ignacio Palacios-Huerta constructed the payoff table of the kicker’s average success probabilities shown in Figure 7.12.19 Because the data include both right- and left-footed kickers, and therefore the natural direction of kicking differs between them, they refer to any kicker’s natural side as “Right.” (Kickers usually kick with the inside of the foot. A right-footed kicker naturally kicks to the goalie’s right
and a left-footed kicker to the goalie’s left.) The choices are Left and Right for each player. When the goalie chooses Right, it means covering the kicker’s natural side.

19. See “Professionals Play Minimax,” by Ignacio Palacios-Huerta, Review of Economic Studies, vol. 70, no. 2 (2003), pp. 395–415.

                GOALIE
             Left     Right
KICKER
   Left       58       95
   Right      93       70

FIGURE 7.12 Soccer Penalty Kick Success Probabilities in European Major Leagues
Using the opponent’s indifference property, it is easy to calculate that the
kicker should choose Left 38.3% of the time and Right 61.7% of the time. This
mixture achieves a success rate of 79.6% no matter what the goalie chooses. The
goalie should choose the probabilities of covering her Left and Right to be 41.7
and 58.3, respectively; this mixture holds the kicker down to a success rate of
79.6%.
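These theoretical mixtures follow directly from the opponent’s indifference property; the sketch below (ours) derives them from the Figure 7.12 entries.

```python
# Kicker's success percentages from Figure 7.12.
kLL, kLR = 58, 95   # kicker Left vs. goalie covering Left / Right
kRL, kRR = 93, 70   # kicker Right vs. goalie covering Left / Right

# Kicker's probability of Left, p, makes the goalie indifferent:
#   58p + 93(1 - p) = 95p + 70(1 - p)
p = (kRL - kRR) / ((kRL - kRR) + (kLR - kLL))
# Goalie's probability of covering Left, q, makes the kicker indifferent:
#   58q + 95(1 - q) = 93q + 70(1 - q)
q = (kLR - kRR) / ((kLR - kRR) + (kRL - kLL))
print(p, q)                     # ~0.383 and ~0.417
print(kLL * q + kLR * (1 - q))  # kicker's success rate ~79.6 either way
```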
What actually happens? Kickers choose Left 40.0% of the time, and goalies
choose Left 41.3% of the time. These values are startlingly close to the theoretical predictions. The chosen mixtures are almost exploitation-proof. The kicker’s
mix achieves a success rate of 79.0% against the goalie’s Left and 80% against the
goalie’s Right. The goalie’s mix holds kickers down to 79.3% if they choose Left
and 79.7% if they choose Right.
In an earlier paper, Pierre-André Chiappori, Timothy Groseclose, and
Steven Levitt used similar data and found similar results.20 They also analyzed
the whole sequence of choices of each kicker and goalie and did not even find
too much alternation. One reason for this last result could be that most penalty
kicks take place as isolated incidents across many games, in contrast with the rapidly repeated points in tennis, so players may find it easier to ignore what happened on the previous kick. Nevertheless, these findings suggest that behavior in soccer penalty kicks is even closer to true mixing than behavior in the tennis serve-and-return game.

20. Pierre-André Chiappori, Timothy Groseclose, and Steven Levitt, “Testing Mixed-Strategy Equilibria When Players Are Heterogeneous: The Case of Penalty Kicks in Soccer,” American Economic Review, vol. 92, no. 4 (September 2002), pp. 1138–51.
With such strong empirical confirmation of the theory, one might ask
whether the mixed-strategy skills that players learn in soccer carry over to other
game contexts. One study indicated that the answer is yes (Spanish professional soccer players played exactly according to the equilibrium predictions in
laboratory experiments with two-by-two and four-by-four zero-sum matrix games). But a second study failed to replicate these results. That study examined American Major League Soccer players as well as participants in the World Series of Poker (who, as noted in Section 8 above, also have professional reasons to prevent exploitation by mixing), finding that the professionals’ behavior in abstract matrix games was just as far from equilibrium as that of students. Consistent with the results on professional chess players we discussed in Chapter 3, experience leads professional players to mix according to equilibrium theory in their jobs, but this experience does not automatically lead players to equilibrium in new and unfamiliar games.21
B. Non-Zero-Sum Games
Laboratory experiments on games with mixed strategies in non-zero-sum
games yield even more negative results than experiments involving mixing in
zero-sum games. This is not surprising. As we have seen, in such games the
property that each player’s equilibrium mixture keeps her opponent indifferent among her pure strategies is a logical property of the equilibrium. Unlike in
zero-sum games, in general each player in a non-zero-sum game has no positive
or purposive reason to keep the other players indifferent. Then the reasoning
underlying the mixture calculations is more difficult for players to comprehend
and learn. This shows up in their behavior.
In a group of experimental subjects playing a non-zero-sum game, we may
see some pursuing one pure strategy and others pursuing another. This type of
mixing in the population, although it does not fit the theory of mixed-strategy
equilibria, does have an interesting evolutionary interpretation, which we examine in Chapter 12.
As we saw in Section 5.B above, each player’s mixture probabilities should
not change when the player’s own payoffs change. But in fact they do: players
tend to choose an action more when their own payoff to that action increases.22
The players do change their actions from one round to the next in repeated trials with different partners, but not in accordance with equilibrium predictions.
The overall conclusion is that you should interpret and use mixed-strategy
equilibria in non-zero-sum games with, at best, considerable caution.
21. The first study referenced is Ignacio Palacios-Huerta and Oskar Volij, “Experientia Docet: Professionals Play Minimax in Laboratory Experiments,” Econometrica, vol. 76, no. 1 (January 2008), pp. 71–115. The second is Steven D. Levitt, John A. List, and David H. Reiley, “What Happens in the Field Stays in the Field: Exploring Whether Professionals Play Minimax in Laboratory Experiments,” Econometrica, vol. 78, no. 4 (July 2010), pp. 1413–34.
22. Jack Ochs, “Games with Unique Mixed-Strategy Equilibria: An Experimental Study,” Games and Economic Behavior, vol. 10, no. 1 (July 1995), pp. 202–17.
SUMMARY
Zero-sum games in which one player prefers a coincidence of actions and the
other prefers the opposite often have no Nash equilibrium in pure strategies. In
these games, each player wants to be unpredictable and thus uses a mixed strategy that specifies a probability distribution over her set of pure strategies. Each
player’s equilibrium mixture probabilities are calculated using the opponent’s
indifference property, namely that the opponent should get equal expected
payoffs from all her pure strategies when facing the first player’s equilibrium
mix. Best-response-curve diagrams can be used to show all mixed-strategy (as
well as pure-strategy) equilibria of a game.
Non-zero-sum games can also have mixed-strategy equilibria that can be
calculated from the opponent’s indifference property and illustrated using
best-response curves. But here the motivation for keeping the opponent indifferent is weaker or missing; therefore such equilibria have less appeal and
are often unstable.
Mixed strategies are a special case of continuous strategies but have additional matters that deserve separate study. Mixed-strategy equilibria can be interpreted as outcomes in which each player has correct beliefs about the probabilities with which the other player chooses from among her underlying pure actions.
And mixed-strategy equilibria may have some counterintuitive properties when
payoffs for players change.
If one player has three pure strategies and the other has only two, the player
with three available strategies will generally use only two in her equilibrium mix.
If both players have three (or more) pure strategies, equilibrium mixtures may put
positive probability on all pure strategies or only a subset. All strategies that are
actively used in the mixture yield equal expected payoff against the opponent’s
equilibrium mix; all the unused ones yield lower expected payoff. In these large
games, equilibrium mixtures may also be indeterminate in some exceptional
cases.
When using mixed strategies, players should remember that their system
of randomization should not be predictable in any way. Most important, they
should avoid excessive alternation of actions. Laboratory experiments show only
weak support for the use of mixed strategies. But mixed-strategy equilibria give
good predictions in many zero-sum situations in sports played by experienced
professionals.
KEY TERMS
expected payoff (216)
opponent’s indifference property (218)
SOLVED EXERCISES
S1. Consider the following game:

                COLIN
             Safe     Risky
ROWENA
   Safe      4, 4     4, 1
   Risky     1, 4     6, 6

(a) Which game does this most resemble: tennis, assurance, or chicken? Explain.
(b) Find all of this game’s Nash equilibria.
S2. The following table illustrates the money payoffs associated with a two-person simultaneous-move game:

                COLIN
             Left      Right
ROWENA
   Up       1, 16     4, 6
   Down     2, 20     3, 40

(a) Find the Nash equilibrium in mixed strategies for this game.
(b) What are the players’ expected payoffs in this equilibrium?
(c) Rowena and Colin jointly get the most money when Rowena plays Down. However, in the equilibrium, she does not always play Down. Why not? Can you think of ways in which a more cooperative outcome can be sustained?
S3. Recall Exercise S7 from Chapter 4, about an old lady looking for help crossing the street and two players simultaneously deciding whether to offer help. If you did that exercise, you also found all of the pure-strategy Nash equilibria of the game. Now find the mixed-strategy equilibrium of the game.
S4. Revisit the tennis game in Section 2.A of this chapter. Recall that the mixed-strategy Nash equilibrium found in that section had Evert playing DL with probability 0.7, while Navratilova played DL with probability 0.6. Now suppose that Evert injures herself later in the match, so her DL shots are much slower and easier for Navratilova to defend. The payoffs are now given by the following table:

                NAVRATILOVA
             DL        CC
EVERT
   DL      30, 70    60, 40
   CC      90, 10    20, 80

(a) Relative to the game before her injury (see Figure 7.1), the strategy DL seems much less attractive to Evert than before. Would you expect Evert to play DL more, less, or the same amount in a new mixed-strategy equilibrium? Explain.
(b) Find each player’s equilibrium mixture for this game. What is the expected value of the game to Evert?
(c) How do the equilibrium mixtures found in part (b) compare with those of the original game and with your answer to part (a)? Explain why each mixture has or has not changed.
S5. Exercise S7 in Chapter 6 introduced a simplified version of baseball, and part (e) pointed out that the simultaneous-move game has no Nash equilibrium in pure strategies. This is because pitchers and batters have conflicting goals. Pitchers want to get the ball past batters, but batters want to connect with pitched balls. The game table is as follows:

                            PITCHER
                     Throw fastball   Throw curve
BATTER
   Anticipate fastball   0.30, 0.70    0.20, 0.80
   Anticipate curve      0.15, 0.85    0.35, 0.65

(a) Find the mixed-strategy Nash equilibrium to this simplified baseball game.
(b) What is each player’s expected payoff for the game?
(c) Now suppose that the pitcher tries to improve his expected payoff in the mixed-strategy equilibrium by slowing down his fastball, thereby making it more similar to a curve ball. This changes the payoff to the hitter in the “anticipate fastball/throw fastball” cell from 0.30 to 0.25, and the pitcher’s payoff adjusts accordingly. Can this modification improve the pitcher’s expected payoff as desired? Explain your answer carefully and show your work. Also, explain why slowing the fastball can or cannot improve the pitcher’s expected payoff in the game.
S6. Undeterred by their experiences with chicken so far (see Section 4.B), James and Dean decide to increase the excitement (and the stakes) by starting their cars farther apart. This way they can keep the crowd in suspense longer, and they’ll be able to accelerate to even higher speeds before they may or may not be involved in a much more serious collision. The new game table thus has a higher penalty for collision.

                    DEAN
             Swerve     Straight
JAMES
   Swerve     0, 0      –1, 1
   Straight   1, –1    –10, –10

(a) What is the mixed-strategy Nash equilibrium for this more dangerous version of chicken? Do James and Dean play Straight more or less often than in the game shown in Figure 7.4?
(b) What is the expected payoff to each player in the mixed-strategy equilibrium found in part (a)?
(c) James and Dean decide to play the chicken game repeatedly (say, in front of different crowds of reckless youths). Moreover, because they don’t want to collide, they collude and alternate between the two pure-strategy equilibria. Assuming they play an even number of games, what is the average payoff to each of them when they collude in this way? Is this better or worse than they can expect from playing the mixed-strategy equilibrium? Why?
(d) After several weeks of not playing chicken as in part (c), James and Dean agree to play again. However, each of them has entirely forgotten which pure-strategy Nash equilibrium they played last time and neither realizes this until they’re revving their engines moments before starting the game. Instead of playing the mixed-strategy Nash equilibrium, each of them tosses a separate coin to decide which strategy to play. What is the expected payoff to James and Dean when each mixes 50–50 in this way? How does this compare with their expected payoffs when they play their equilibrium mixtures? Explain why these payoffs are the same or different from those found in part (c).
S7. Section 2.B illustrates how to graph best-response curves for the tennis-point game. Section 4.B notes that when there are multiple equilibria, they can be identified from multiple intersections of the best-response curves. For the battle-of-the-sexes game in Figure 4.12 from Chapter 4, graph the best responses of Harry and Sally on a p-q coordinate plane. Label all of the Nash equilibria.
S8. Consider the following game:

                COLIN
             Yes      No
ROWENA
   Yes      x, x     0, 1
   No       1, 0     1, 1

(a) For what values of x does this game have a unique Nash equilibrium? What is that equilibrium?
(b) For what values of x does this game have a mixed-strategy Nash equilibrium? With what probability, expressed in terms of x, does each player play Yes in this mixed-strategy equilibrium?
(c) For the values of x found in part (b), is the game an example of an assurance game, a game of chicken, or a game similar to tennis? Explain.
(d) Let x = 3. Graph the best-response curves of Rowena and Colin on a p-q coordinate plane. Label all the Nash equilibria in pure and mixed strategies.
(e) Let x = 1. Graph the best-response curves of Rowena and Colin on a p-q coordinate plane. Label all the Nash equilibria in pure and mixed strategies.
S9. Consider the following game:

                       PROFESSOR PLUM
                  Revolver    Knife    Wrench
MRS. PEACOCK
   Conservatory     1, 3      2, –2     0, 6
   Ballroom         3, 1      1, 4      5, 0

(a) Graph the expected payoffs from each of Professor Plum’s strategies as a function of Mrs. Peacock’s p-mix.
(b) Over what range of p does Revolver yield a higher expected payoff for Professor Plum than Knife?
(c) Over what range of p does Revolver yield a higher expected payoff than Wrench?
(d) Which pure strategies will Professor Plum use in his equilibrium mixture? Why?
(e) What is the mixed-strategy Nash equilibrium of this game?
S10. Many of you will be familiar with the children’s game rock-paper-scissors. In rock-paper-scissors, two people simultaneously choose either “rock,” “paper,” or “scissors,” usually by putting their hands into the shape of one of the three choices. The game is scored as follows. A player choosing Scissors beats a player choosing Paper (because scissors cut paper). A player choosing Paper beats a player choosing Rock (because paper covers rock). A player choosing Rock beats a player choosing Scissors (because rock breaks scissors). If two players choose the same object, they tie. Suppose that each individual play of the game is worth 10 points. The following matrix shows the possible outcomes in the game:

                        LISA
              Rock      Scissors    Paper
BART
   Rock       0, 0      10, –10    –10, 10
   Scissors  –10, 10     0, 0       10, –10
   Paper      10, –10   –10, 10      0, 0

(a) Derive the mixed-strategy equilibrium of the rock-paper-scissors game.
(b) Suppose that Lisa announced that she would use a mixture in which her probability of choosing Rock would be 40%, her probability of choosing Scissors would be 30%, and her probability of choosing Paper would be 30%. What is Bart’s best response to this strategy choice by Lisa? Explain why your answer makes sense, given your knowledge of mixed strategies.
S11. Recall the game between ice-cream vendors on a beach from Exercise U6 in Chapter 6. In that game, we found two asymmetric pure-strategy equilibria. There is also a symmetric mixed-strategy equilibrium to the game.
(a) Write down the five-by-five table for the game.
(b) Eliminate dominated strategies, and explain why they should not be used in the equilibrium.
(c) Use your answer to part (b) to help you find the mixed-strategy equilibrium to the game.
S12. Suppose that the soccer penalty-kick game of Section 7.A in this chapter is expanded to include a total of six distinct strategies for the kicker: to shoot high and to the left (HL), low and to the left (LL), high and in the center (HC), low and in the center (LC), high right (HR), and low right (LR). The goalkeeper continues to have three strategies: to move to the kicker’s left (L) or right (R) or to stay in the center (C). The players’ success percentages are shown in the following table:

                    GOALIE
             L             C             R
KICKER
   HL   0.50, 0.50    0.85, 0.15    0.85, 0.15
   LL   0.40, 0.60    0.95, 0.05    0.95, 0.05
   HC   0.85, 0.15    0, 0          0.85, 0.15
   LC   0.70, 0.30    0, 0          0.70, 0.30
   HR   0.85, 0.15    0.85, 0.15    0.50, 0.50
   LR   0.95, 0.05    0.95, 0.05    0.40, 0.60

In this problem, you will verify that the mixed-strategy equilibrium of this game entails the goalie using L and R each 42.2% of the time and C 15.6% of the time, while the kicker uses LL and LR each 37.8% of the time and HC 24.4% of the time.
(a) Given the goalie’s proposed mixed strategy, compute the expected payoff to the kicker for each of her six pure strategies. (Use only three significant digits to keep things simple.)
(b) Use your answer to part (a) to explain why the kicker’s proposed mixed strategy is a best response to the goalie’s proposed mixed strategy.
(c) Given the kicker’s proposed mixed strategy, compute the expected payoff to the goalie for each of her three pure strategies. (Again, use only three significant digits to keep things simple.)
(d) Use your answer to part (c) to explain why the goalie’s proposed mixed strategy is a best response to the kicker’s proposed mixed strategy.
(e) Using your previous answers, explain why the proposed strategies are indeed a Nash equilibrium.
(f) Compute the equilibrium payoff to the kicker.
S13. (Optional) In Section 5.B, we demonstrated for the assurance game that changing Sally’s payoffs does not change her equilibrium mixing proportions—only Harry’s payoffs determine her equilibrium mixture. In this exercise, you will prove this as a general result for the mixed-strategy equilibria of all two-by-two games. Consider a general two-by-two non-zero-sum game with the payoff table shown below:

                COLIN
             Left     Right
ROWENA
   Up        a, A     b, B
   Down      c, C     d, D

(a) Suppose the game has a mixed-strategy equilibrium. As a function of the payoffs in the table, solve for the probability that Rowena plays Up in equilibrium.
(b) Solve for the probability that Colin plays Left in equilibrium.
(c) Explain how your results show that each player’s equilibrium mixtures depend only on the other player’s payoffs.
(d) What conditions must be satisfied by the payoffs in order to guarantee that the game does indeed have a mixed-strategy equilibrium?
S14. (Optional) Recall Exercise S13 of Chapter 4, which was based on the bar scene from the film A Beautiful Mind. Here we consider the mixed-strategy equilibria of that game when played by n > 2 young men.
(a) Begin by considering the symmetric case in which each of the n young men independently goes after the solitary blonde with some probability P. This probability is determined by the condition that each young man should be indifferent between the pure strategies Blonde and Brunette, given that everyone else is mixing. What is the condition that guarantees the indifference of each player? What is the equilibrium value of P in this game?
(b) There are also some asymmetric mixed-strategy equilibria in this game. In these equilibria, m < n young men each go for the blonde with probability Q, and the remaining n − m young men go after the brunettes. What is the condition that guarantees that each of the m young men is indifferent, given what everyone else is doing? What condition must hold so that the remaining n − m players don’t want to switch from the pure strategy of choosing a brunette? What is the equilibrium value of Q in the asymmetric equilibrium?
UNSOLVED EXERCISES
U1. In football the offense can either run the ball or pass the ball, whereas the defense can either anticipate (and prepare for) a run or anticipate (and prepare for) a pass. Assume that the expected payoffs (in yards) for the two teams on any given down are as follows:

                        DEFENSE
              Anticipate Run   Anticipate Pass
OFFENSE
   Run            1, –1             5, –5
   Pass           9, –9            –3, 3

(a) Show that this game has no pure-strategy Nash equilibrium.
(b) Find the unique mixed-strategy Nash equilibrium to this game.
(c) Explain why the mixture used by the offense is different from the mixture used by the defense.
(d) How many yards is the offense expected to gain per down in equilibrium?
U2.
On the eve of a problem-set due date, a professor receives an e-mail from
one of her students who claims to be stuck on one of the problems after
working on it for more than an hour. The professor would rather help the
student if he has sincerely been working, but she would rather not render aid if the student is just fishing for hints. Given the timing of the request, she could simply pretend not to have read the e-mail until later.
Obviously, the student would rather receive help whether or not he has
been working on the problem. But if help isn’t coming, he would rather
be working instead of slacking, since the problem set is due the next day.
Assume the payoffs are as follows:
                                        STUDENT
                            Work and ask     Slack and fish
                            for help         for hints
PROFESSOR   Help student        3, 3             –1, 4
            Ignore e-mail      –2, 1              0, 0
(a) What is the mixed-strategy Nash equilibrium to this game?
(b) What is the expected payoff to each of the players?
U3. Exercise S12 in Chapter 4 introduced the game “Evens or Odds,” which
has no Nash equilibrium in pure strategies. It does have an equilibrium
in mixed strategies.
(a) If Anne plays 1 (that is, she puts in one finger) with probability p,
what is the expected payoff to Bruce from playing 1, in terms of p?
What is his expected payoff from playing 2?
(b) What level of p will make Bruce indifferent between playing 1 and
playing 2?
(c) If Bruce plays 1 with probability q, what level of q will make Anne
indifferent between playing 1 and playing 2?
(d) Write the mixed-strategy equilibrium of this game. What is the expected payoff of the game to each player?
U4. Return again to the tennis rivals Evert and Navratilova, discussed in
Section 2.A. Months later, they meet again in a new tournament. Evert
has healed from her injury (see Exercise S4), but during that same time
Navratilova has worked very hard on improving her defense against DL
serves. The payoffs are now as follows:
                     NAVRATILOVA
                     DL          CC
EVERT      DL      25, 75      80, 20
           CC      90, 10      20, 80
(a) Find each player’s equilibrium mixture for the game above.
(b) What happened to Evert’s p-mixture compared to the game presented in Section 2.A? Why?
(c) What is the expected value of the game to Evert? Why is it different
from the expected value of the original game in Section 2.A?
U5. Section 4.A of this chapter discussed mixing in the battle-of-the-sexes
game between Harry and Sally.
(a) What do you expect to happen to the equilibrium values of p and q
found in the chapter if Sally decides she really likes Local Latte a lot
more than Starbucks, so that the payoffs in the (Local Latte, Local
Latte) cell are now (1, 3)? Explain your reasoning.
(b) Now find the new mixed-strategy equilibrium values of p and q. How
do they compare with those of the original game?
(c) What is the expected payoff to each player in the new
mixed-strategy equilibrium?
(d) Do you think Harry and Sally might play the mixed-strategy equilibrium in this new version of the game? Explain why or why not.
U6. Consider the following variant of chicken, in which James's payoff from
being “tough” when Dean is “chicken” is 2, rather than 1:
                         DEAN
                  Swerve       Straight
JAMES   Swerve     0, 0         –1, 1
        Straight   2, –1        –2, –2
(a) Find the mixed-strategy equilibrium in this game, including the expected payoffs for the players.
(b) Compare the results with those of the original game in Section 4.B of
this chapter. Is Dean’s probability of playing Straight (being tough)
higher now than before? What about James’s probability of playing
Straight?
(c) What has happened to the two players’ expected payoffs? Are these
differences in the equilibrium outcomes paradoxical in light of the
new payoff structure? Explain how your findings can be understood
in light of the opponent’s indifference principle.
U7. For the chicken game in Figure 4.13 from Chapter 4, graph the best responses of James and Dean on a p-q coordinate plane. Label all of the
Nash equilibria.
U8. (a) Find all pure-strategy Nash equilibria of the following game:
                          COLIN
                 L       M       N       R
ROWENA   Up     1, 1    2, 2    3, 4    9, 3
         Down   2, 5    3, 3    1, 2    7, 1
(b) Now find a mixed-strategy equilibrium of the game. What are the
players’ expected payoffs in the equilibrium?
U9. Consider a revised version of the game from Exercise S9:
                                 PROFESSOR PLUM
                          Revolver    Knife    Wrench
MRS. PEACOCK  Conservatory    1, 3     2, –2     0, 6
              Ballroom        3, 2     1, 4      5, 0
(a) Graph the expected payoffs from each of Professor Plum’s strategies
as a function of Mrs. Peacock’s p-mix.
(b) Which strategies will Professor Plum use in his equilibrium mixture?
Why?
(c) What is the mixed-strategy Nash equilibrium of this game?
(d) Note that this game is only slightly different from the game in Exercise S9. How are the two games different? Explain why you intuitively think the equilibrium outcome has changed from Exercise S9.
U10. Consider a modified version of rock-paper-scissors in which Bart gets a
bonus when he wins with Rock. If Bart picks Rock while Lisa picks Scissors,
Bart wins twice as many points as when either player wins in any other way.
The new payoff matrix is:
                          LISA
                  Rock       Scissors      Paper
BART   Rock        0, 0       20, –20     –10, 10
       Scissors  –10, 10       0, 0        10, –10
       Paper      10, –10    –10, 10        0, 0
(a) What is the mixed-strategy equilibrium in this version of the game?
(b) Compare your answer here with your answer for the mixed-strategy
equilibrium in Exercise S10. How can you explain the differences in
the equilibrium strategy choices?
U11. Consider the following game:
                       MACARTHUR
                  Air       Sea       Land
PATTON   Air      0, 3      2, 0      1, 7
         Sea      2, 4      0, 6      2, 0
         Land     1, 3      2, 4      0, 3
(a) Does this game have a pure-strategy Nash equilibrium? If so, what is
it?
(b) Find a mixed-strategy equilibrium to this game.
(c) Actually, this game has two mixed-strategy equilibria. Find the one
you didn’t find in part (b). (Hint: In one of these equilibria, one of
the players plays a mixed strategy, whereas the other plays a pure
strategy.)
U12. The recalcitrant James and Dean are playing their more dangerous variant of chicken again (see Exercise S6). They've noticed that their payoff for being perceived as “tough” varies with the size of the crowd. The
larger the crowd, the more glory and praise each receives from driving
straight when his opponent swerves. Smaller crowds, of course, have the
opposite effect. Let k > 0 be the payoff for appearing “tough.” The game
may now be represented as:
                         DEAN
                  Swerve       Straight
JAMES   Swerve     0, 0         –1, k
        Straight   k, –1       –10, –10
(a) Expressed in terms of k, with what probability does each driver play
Swerve in the mixed-strategy Nash equilibrium? Do James and Dean
play Swerve more or less often as k increases?
(b) In terms of k, what is the expected value of the game to each player
in the mixed-strategy Nash equilibrium found in part (a)?
(c) At what value of k do both James and Dean mix 50–50 in the
mixed-strategy equilibrium?
(d) How large must k be for the average payoff to be positive under the
alternating scheme discussed in part (c) of Exercise S6?
U13. (Optional) Recall the game from Exercise S11 in Chapter 4, where Larry,
Moe, and Curly can choose to buy tickets toward a prize worth $30. We
found six pure-strategy Nash equilibria in that game. In this problem,
you will find a symmetric equilibrium in mixed strategies.
(a) Eliminate the weakly dominated strategy for each player. Explain
why a player would never use this weakly dominated strategy in his
equilibrium mixture.
(b) Find the equilibrium in mixed strategies.
U14. (Optional) Exercises S4 and U4 demonstrate that in zero-sum games
such as the Evert-Navratilova tennis rivalry, changes in a player’s payoffs
can sometimes lead to unexpected or unintuitive changes to her equilibrium mixture. But what happens to the expected value of the game?
Consider the following general form of a two-player zero-sum game:
                    COLIN
                 L          R
ROWENA   U     a, –a      b, –b
         D     c, –c      d, –d
Assume that there is no Nash equilibrium in pure strategies, and assume
that a, b, c, and d are all greater than or equal to 0. Can an increase in any
one of a, b, c, or d lead to a lower expected value of the game for Rowena?
If not, prove why not. If so, provide an example.
■
Appendix:
Probability and Expected Utility
To calculate the expected payoffs and mixed-strategy equilibria of games in this
chapter, we had to do some simple manipulation of probabilities. Some simple
rules govern calculations involving probabilities. Many of you may be familiar
with them, but we give a brief statement and explanation of the basics here by
way of reminder or remediation, as appropriate. We also state how to calculate
expected values of random numerical values.
THE BASIC ALGEBRA OF PROBABILITIES
The basic intuition about the probability of an event comes from thinking about
the frequency with which this event occurs by chance among a larger set of possibilities. Usually, any one element of this larger set is just as likely to occur by
chance as any other, so finding the probability of the event in which we are interested is simply a matter of counting the elements corresponding to that event
and dividing by the total number of elements in the whole large set.23
In any standard deck of 52 playing cards, for instance, there are four suits
(clubs, diamonds, hearts, and spades) and 13 cards in each suit (ace through 10
and the face cards—jack, queen, king). We can ask a variety of questions about
the likelihood that a card of a particular suit or value—or suit and value—might
be drawn from this deck of cards: How likely are we to draw a spade? How likely
are we to draw a black card? How likely are we to draw a 10? How likely are we to
draw the queen of spades? and so on. We would need to know something about
the calculation and manipulation of probabilities to answer such questions.
If we had two decks of cards, one with blue backs and one with green backs, we
23 When we say “by chance,” we simply mean that a systematic order cannot be detected in the outcome or that it cannot be determined by using available scientific methods of prediction and calculation. Actually, the motions of coins and dice are fully determined by laws of physics, and highly
skilled people can manipulate decks of cards, but for all practical purposes, coin tosses, rolls of dice,
or card shuffles are devices of chance that can be used to generate random outcomes. However, randomness can be harder to achieve than you think. For example, a perfect shuffle, where a deck of
cards is divided exactly in half and then interleaved by dropping cards one at a time alternately from
each, may seem a good way to destroy the initial order of the deck. But Cornell mathematician Persi
Diaconis has shown that, after eight of the shuffles, the original order is fully restored. For slightly
imperfect shuffles that people carry out in reality, he finds that some order persists through six, but
randomness suddenly appears on the seventh! See “How to Win at Poker, and Other Science Lessons,” The Economist, October 12, 1996. For an interesting discussion of such topics, see Deborah J.
Bennett, Randomness (Cambridge, Mass.: Harvard University Press, 1998), chs. 6–9.
could ask even more complex questions (“How likely are we to draw one card
from each deck and have them both be the jack of diamonds?”), but we would
still use the algebra of probabilities to answer them.
In general, a probability measures the likelihood of a particular event or
set of events occurring. The likelihood that you draw a spade from a deck of
cards is just the probability of the event “drawing a spade.” Here the large set
has 52 elements—the total number of equally likely possibilities—and the
event “drawing a spade” corresponds to a subset of 13 particular elements.
Thus, you have 13 chances out of the 52 to get a spade, which makes the probability of getting a spade in a single draw equal to 13/52 = 1/4 = 25%. To see
this another way, consider the fact that there are four suits of 13 cards each,
so your chance of drawing a card from any particular suit is one out of four,
or 25%. If you made a large number of such draws (each time from a complete deck), then out of 52 times you will not always draw exactly 13 spades;
by chance you may draw a few more or a few less. But the chance averages out
over different such occasions—over different sets of 52 draws. Then the probability of 25% is the average of the frequencies of spades drawn in a large number of observations.24
The algebra of probabilities simply develops such ideas in general terms
and obtains formulas that you can then apply mechanically instead of having
to do the thinking from scratch every time. We will organize our discussion
of these probability formulas around the types of questions that one might
ask when drawing cards from a standard deck (or two: blue backed and green
backed).25 This method will allow us to provide both specific and general formulas for you to use later. You can use the card-drawing analogy to help you reason
out other questions about probabilities that you encounter in other contexts.
One other point to note: In ordinary language, it is customary to write probabilities as percentages, but the algebra requires that they be written as fractions or
decimals; thus instead of 25%, the mathematics works with 13/52, or 0.25. We
will use one or the other, depending on the occasion; be aware that they mean
the same thing.
A. The Addition Rule
The first questions that we ask are: If we were to draw one card from the blue
deck, how likely are we to draw a spade? And how likely are we to draw a card
that is not a spade? We already know that the probability of drawing a spade is
25% because we determined that earlier. But what is the probability of drawing
24 Bennett, Randomness, chs. 4 and 5, offers several examples of such calculations of probabilities.
25 If you want a more detailed exposition of the following addition and multiplication rules, as well as more exercises to practice these rules, we recommend David Freedman, Robert Pisani, and Roger Purves, Statistics, 4th ed. (New York: W. W. Norton & Company, 2007), chs. 13 and 14.
a card that is not a spade? It is the same likelihood of drawing a club or a diamond or a heart instead of a spade. It should be clear that the probability in
question should be larger than any of the individual probabilities of which it is
formed; in fact, the probability is 13/52 (clubs) + 13/52 (diamonds) + 13/52
(hearts) = 0.75. The or in our verbal interpretation of the question is the clue
that the probabilities should be added together, because we want to know the
chances of drawing a card from any of those three suits.
We could more easily have found our answer to the second question by noting that not getting a spade is what happens the other 75% of the time. Thus,
the probability of drawing “not a spade” is 75% (100% − 25%) or, more formally,
1 − 0.25 = 0.75. As is often the case with probability calculations, the same result can be obtained here by two different routes, entailing different ways of
thinking about the event for which we are trying to find the probability. We will
see other examples of this later in this appendix, where it will become clear that
the different methods of calculation can sometimes require vastly different
amounts of effort. As you develop experience, you will discover and remember
the easy ways or shortcuts. In the meantime, be comforted that each of the different routes, when correctly followed, leads to the same final answer.
To generalize our preceding calculation, we note that, if you divide the set
of events, X, in which you are interested into some number of subsets, Y, Z, . . . ,
none of which overlap (in mathematical terminology, such subsets are said to be
disjoint), then the probabilities of each subset occurring must sum to the probability of the full set of events; if that full set of events includes all possible outcomes, then its probability is 1. In other words, if the occurrence of X requires
the occurrence of any one of several disjoint Y, Z, . . . , then the probability of X
is the sum of the separate probabilities of Y, Z, . . . . Using Prob(X) to denote the
probability that X occurs and remembering the caveats on X (that it requires any
one of Y, Z, . . . ) and on Y, Z, . . . (that they must be disjoint), we can write the
addition rule in mathematical notation as Prob(X) = Prob(Y) + Prob(Z) + … .
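To see the rule in action, here is a minimal Python sketch (our own illustration, not part of the text; the helper prob is hypothetical). It counts cards in a 52-card deck and verifies that the three disjoint non-spade suits sum to the probability of “not a spade”:

from fractions import Fraction

# Build a 52-card deck as (suit, rank) pairs.
suits = ["clubs", "diamonds", "hearts", "spades"]
deck = [(s, r) for s in suits for r in range(1, 14)]

def prob(event):
    # Probability = favorable cards / all cards, as an exact fraction.
    return Fraction(sum(1 for card in deck if event(card)), len(deck))

p_clubs = prob(lambda c: c[0] == "clubs")
p_diamonds = prob(lambda c: c[0] == "diamonds")
p_hearts = prob(lambda c: c[0] == "hearts")

# Addition rule over disjoint subsets: Prob(X) = Prob(Y) + Prob(Z) + ...
assert p_clubs + p_diamonds + p_hearts == prob(lambda c: c[0] != "spades")
print(p_clubs + p_diamonds + p_hearts)  # 3/4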
Exercise: Use the addition rule to find the probability of drawing two cards, one from each deck, such that the two cards have identical faces.
B. The Multiplication Rule
Now we ask: What is the likelihood that when we draw two cards, one from each
deck, both of them will be spades? This event occurs if we draw a spade from
the blue deck and a spade from the green deck. The switch from or to and in
our interpretation of what we are looking for indicates a switch in mathematical
operations from addition to multiplication. Thus, the probability of two spades,
one from each deck, is the product of the probabilities of drawing a spade from
each deck, or (1352) 3 (1352) 5 116 5 0.0625, or 6.25%. Not surprisingly, we
are much less likely to get two spades than we were in the previous section to
get one spade. (Always check to make sure that your calculations accord in this
way with your intuition regarding the outcome.)
In much the same way as the addition rule requires events to be disjoint,
the multiplication rule requires them to be independent: if we break down a set
of events, X, into some number of subsets Y, Z, . . . , those subsets are independent if the occurrence of one does not affect the probability of the other. Our
events—a spade from the blue deck and a spade from the green deck—satisfy
this condition of independence; that is, drawing a spade from the blue deck
does nothing to alter the probability of getting a spade from the green deck. If
we were drawing both cards from the same deck, however, then after we had
drawn a spade (with a probability of 13/52), the probability of drawing another
spade would no longer be 13/52 (in fact, it would be 12/51); drawing one spade
and then a second spade from the same deck are not independent events.
The formal statement of the multiplication rule tells us that, if the occurrence of X requires the simultaneous occurrence of all the several independent
Y, Z, . . . , then the probability of X is the product of the separate probabilities of
Y, Z, . . . : Prob(X) = Prob(Y) × Prob(Z) × … .
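As a companion sketch (again our own, using the same card-deck setup as above), brute-force enumeration confirms both the independent two-deck product and the failure of the simple product when both draws come from the same deck:

from fractions import Fraction

suits = ["clubs", "diamonds", "hearts", "spades"]
deck = [(s, r) for s in suits for r in range(1, 14)]

# Independent draws: one card from the blue deck, one from the green deck.
pairs = [(a, b) for a in deck for b in deck]
hits = sum(1 for a, b in pairs if a[0] == "spades" and b[0] == "spades")
print(Fraction(hits, len(pairs)))  # 1/16 = (13/52) * (13/52)

# Dependent draws: two cards from the SAME deck, without replacement.
pairs_same = [(a, b) for a in deck for b in deck if a != b]
hits_same = sum(1 for a, b in pairs_same
                if a[0] == "spades" and b[0] == "spades")
print(Fraction(hits_same, len(pairs_same)))  # (13/52) * (12/51) = 1/17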
Exercise: Use the multiplication rule to find the probability of drawing two cards, one from each deck, and getting a red card from the blue deck and a face card from the green deck.
C. Expected Values
If a numerical magnitude (such as money winnings or rainfall) is subject to
chance and can take on any one of n possible values X1, X2, . . . , Xn with respective probabilities p1, p2, . . . , pn, then the expected value is defined as the
weighted average of all its possible values using the probabilities as weights;
that is, as p1X1 + p2X2 + … + pnXn. For example, suppose you bet on the toss of
two fair coins. You win $5 if both coins come up heads, $1 if one shows heads
and the other tails, and nothing if both come up tails. Using the rules for manipulating probabilities discussed earlier in this section, you can see that the probabilities of these events are, respectively, 0.25, 0.50, and 0.25. Therefore, your expected winnings are (0.25 × $5) + (0.50 × $1) + (0.25 × $0) = $1.75.
In game theory, the numerical magnitudes that we need to average in this
way are payoffs, measured in numerical ratings, or money, or, as we will see
later in the appendix to Chapter 8, utilities. We will refer to the expected values in each context appropriately, for example, as expected payoffs or expected
utilities.
SUMMARY
The probability of an event is the likelihood of its occurrence by chance from
among a larger set of possibilities. Probabilities can be combined by using some
rules. The addition rule says that the probability of any one of a number of disjoint events occurring is the sum of the probabilities of these events. According
to the multiplication rule, the probability that all of a number of independent
events will occur is the product of the probabilities of these events. Probability-weighted averages are used to compute expected payoffs in games.
KEY TERMS
addition rule
disjoint
expected value
independent events
multiplication rule
probability
PART THREE
■
Some Broad Classes of Games and Strategies
8
■
Uncertainty and Information
In Chapter 2, we mentioned different ways in which uncertainty can arise
in a game (external and strategic) and ways in which players can have limited information about aspects of the game (imperfect and incomplete,
symmetric and asymmetric). We have already encountered and analyzed
some of these. Most notably, in simultaneous-move games, each player does not
know the actions the other is taking; this is strategic uncertainty. In Chapter 6,
we saw that strategic uncertainty gives rise to asymmetric and imperfect information, because the different actions taken by one player must be lumped into
one information set for the other player. In Chapters 4 and 7, we saw how such
strategic uncertainty is handled by having each player formulate beliefs about
the other’s action (including beliefs about the probabilities with which different actions may be taken when mixed strategies are played) and by applying the
concept of Nash equilibrium, in which such beliefs are confirmed. In this chapter we focus on some further ways in which uncertainty and informational limitations arise in games.
We begin by examining various strategies that individuals and societies can
use for coping with the imperfect information generated by external uncertainty
or risk. Recall that external uncertainty is about matters outside any player’s control but affecting the payoffs of the game; weather is a simple example. Here we
show the basic ideas behind diversification, or spreading, of risk by an individual
player and pooling of risk by multiple players. These strategies can benefit everyone, although the division of total gains among the participants can be unequal;
therefore, these situations contain a mixture of common interest and conflict.
We then consider the informational limitations that often exist in situations
with strategic interdependence. Information in a game is complete only if all of
the rules of the game—the strategies available to all players and the payoffs of
each player as functions of the strategies of all players—are fully known by all
players and, moreover, are common knowledge among them. By this exacting
standard, most games in reality have incomplete information. Moreover, the
incompleteness is usually asymmetric: each player knows his own capabilities
and payoffs much better than he knows those of other players. As we pointed
out in Chapter 2, manipulation of the information becomes an important dimension of strategy in such games. In this chapter, we will discuss when information can or cannot be communicated verbally in a credible manner. We will
also examine other strategies designed to convey or conceal one’s own information and to elicit another player’s information. We spoke briefly of some such
strategies—namely, screening and signaling—in Chapters 1 and 2; here, we
study those in more detail.
Of course, players in many games would also like to manipulate the actions
of others. Managers would like their workers to work hard and well; insurance
companies would like their policyholders to exert care to reduce the risk that
is being insured. If information were perfect, the actions would be observable.
Workers’ pay could be made contingent on the quality and quantity of their effort; payouts to insurance policyholders could be made contingent on the care
they exercised. But in reality these actions are difficult to observe; that creates
a situation of imperfect asymmetric information, commonly called moral hazard. Thus, the counterparties in these games have to devise various indirect
methods to give incentives to influence others’ actions in the right direction.
The study of the topic of information and its manipulation in games has
been very active and important in recent decades. It has shed new light on many
previously puzzling matters in economics, such as the nature of incentive contracts, the organization of companies, markets for labor and for durable goods,
government regulation of business, and myriad others.1 More recently, political
scientists have used the same concepts to explain phenomena such as the relation of tax- and expenditures-policy changes to elections, as well as the delegation of legislation to committees. These ideas have also spread to biology,
where evolutionary game theory explains features such as the peacock’s large
and ornate tail as a signal. Perhaps even more important, you will recognize the
important role that signaling and screening play in your daily interaction with
family, friends, teachers, coworkers, and so on, and you will be able to improve
your strategies in these games.
1 The pioneers of the theory of asymmetric information in economics—George Akerlof,
Michael Spence, and Joseph Stiglitz—received the 2001 Nobel Prize in economics for these
contributions.
Although the study of information clearly goes well beyond consideration of
external uncertainty and the basic concepts of signaling and screening, we focus
only on those few topics in this chapter. We will return to the analysis of information and its manipulation in Chapter 13, however. There we will use the methods
developed here to study the design of mechanisms to provide incentives to and
elicit information from other players who have some private information.
1 IMPERFECT INFORMATION: DEALING WITH RISK
Imagine that you are a farmer subject to the vagaries of weather. If the weather
is good for your crops, you will have an income of $160,000. If it is bad, your
income will be only $40,000. The two possibilities are equally likely (probability 1/2, or 0.5, or 50% each). Therefore, your average or expected income is $100,000 (= 1/2 × 160,000 + 1/2 × 40,000), but there is considerable risk around this average value.
What can you do to reduce the risk that you face? You might try a crop that
is less subject to the vagaries of weather, but suppose you have already done
all such things that are under your individual control. Then you might be able
to reduce your income risk further by getting someone else to accept some of
the risk. Of course, you must give the other person something else in exchange.
This quid pro quo usually takes one of two forms: cash payment or a mutual exchange or sharing of risks.
A. Sharing of Risk
We begin with an analysis of the possibility of risk sharing for mutual benefit.
Suppose you have a neighbor who faces a similar risk but gets good weather exactly when you get bad weather and vice versa. (Suppose you live on opposite
sides of an island, and rain clouds visit one side or the other but not both.) In
technical jargon, correlation is a measure of alignment between any two uncertain quantities—in this discussion, between one person’s risk and another’s.
Thus, we would say that your neighbor’s risk is totally negatively correlated with
yours. The combined income of you and your neighbor is $200,000, no matter
what the weather: it is totally risk free. You can enter into a contract that gets
each of you $100,000 for sure: you promise to give him $60,000 in years when
you are lucky, and he promises to give you $60,000 in years when he is lucky. You
have eliminated your risks by combining them.
Currency swaps provide a good example of negative correlation of risk in
real life. A U.S. firm exporting to Europe gets its revenues in euros, but it is interested in its dollar profits, which depend on the fluctuating euro-dollar exchange
rate. Conversely, a European firm exporting to the United States faces similar
uncertainty about its profits in euros. When the euro falls relative to the dollar, the U.S. firm’s euro revenues convert into fewer dollars, and the European
firm’s dollar revenues convert into more euros. The opposite happens when the
euro rises relative to the dollar. Thus, fluctuations in the exchange rate generate
negatively correlated risks for the two firms. Both can reduce these risks by contracting for an appropriate swap of their revenues.
Even without such perfect negative correlation, risk sharing has some benefit. Return to your role as an island farmer and suppose you and your neighbor
face risks that are independent from each other, as if the rain clouds could toss
a separate coin to decide which one of you to visit. Then there are four possible
outcomes, each with a probability of 1/4. The incomes you and your neighbor
earn in these four cases are illustrated in panel (a) of Figure 8.1. However, suppose
the two of you were to make a contract to share and share alike; then your incomes would be those shown in panel (b) of Figure 8.1. Although your average
(expected) income in each table is $100,000, without the sharing contract, you
each would have $160,000 or $40,000 with probabilities of 1/2 each. With the
contract, you each would have $160,000 with probability 1/4, $100,000 with
probability 1/2, and $40,000 with probability 1/4. Thus, for each of you, the
contract has reduced the probabilities of the two extreme outcomes from 1/2 to
1/4 and increased the probability of the middle outcome from 0 to 1/2. In other
words, the contract has reduced the risk for each of you.
In fact, as long as your incomes are not totally positively correlated—that
is, as long as your luck does not move in perfect tandem—you can both reduce
your risks by sharing them. And if there are more than two of you with some
degree of independence in your risks, then the law of large numbers makes
possible even greater reduction in the risk of each. That is exactly what insurance companies do: by combining the similar but independent risks of many
people, an insurance company is able to compensate any one of them when he
suffers a large loss. It is also the basis of portfolio diversification: by dividing your
(a) Without sharing

                         NEIGHBOR
                    Lucky                Not
YOU   Lucky    160,000, 160,000    160,000, 40,000
      Not       40,000, 160,000     40,000, 40,000

(b) With sharing

                         NEIGHBOR
                    Lucky                Not
YOU   Lucky    160,000, 160,000    100,000, 100,000
      Not      100,000, 100,000     40,000, 40,000

FIGURE 8.1 Sharing Income Risk
wealth among many different assets with different kinds and degrees of risk, you
can reduce your total exposure to risk.
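The distributions in Figure 8.1 are easy to verify computationally. The following sketch (our own illustration, not from the text) enumerates the four equally likely weather outcomes and tabulates your income distribution with and without the equal-sharing contract:

from itertools import product

incomes = [160_000, 40_000]  # lucky / not lucky, probability 1/2 each

without_sharing = {}  # your income -> probability
with_sharing = {}     # your income under equal split -> probability

for yours, neighbors in product(incomes, repeat=2):
    p = 0.25  # four equally likely (independent) outcomes
    without_sharing[yours] = without_sharing.get(yours, 0) + p
    split = (yours + neighbors) // 2
    with_sharing[split] = with_sharing.get(split, 0) + p

print(without_sharing)  # {160000: 0.5, 40000: 0.5}
print(with_sharing)     # {160000: 0.25, 100000: 0.5, 40000: 0.25}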
However, such arrangements for risk sharing depend on public observability of outcomes and enforcement of contracts. Otherwise, each farmer has
the temptation to pretend to have suffered bad luck or simply to renege on the
deal and refuse to share when he has good luck. An insurance company may
similarly falsely deny claims, but its desire to maintain its reputation in ongoing
business may check such reneging.
Here we consider another issue. In the discussion above, we simply assumed
that sharing meant equal shares. That seems natural, because you and your
farmer-neighbor are in identical situations. But you may have different strategic
skills and opportunities, and one may be able to do better than the other in bargaining or contracting.
To understand this, we must recognize the basic reason that farmers want
to make such sharing arrangements, namely, that they are averse to risk. As we
explain in the appendix to this chapter, attitudes toward risk can be captured
by using nonlinear scales to convert money incomes into “utility” numbers. The
square root function is a simple example of such a scale that reflects risk aversion, and we apply it here.
When you bear the full risk of getting $160,000 or $40,000 with probabilities
1/2 each, your expected (probability-weighted average) utility is

1/2 × √160,000 + 1/2 × √40,000 = 1/2 × 400 + 1/2 × 200 = 300.
The riskless income that will give you the same utility is the number whose
square root is 300, that is, $90,000. This is less than the average money income
you have, namely $100,000. The difference, $10,000, is the maximum money
sum you would be willing to pay as a price for eliminating the risk in your income
entirely. Your neighbor faces a risk of equal magnitude, so if he has the same
utility scale, he is also willing to pay the same maximum amount to eliminate all
of his risk.
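Here is the same calculation as a short sketch (our own, assuming the square-root utility scale just introduced):

import math

# Risky income: 160,000 or 40,000, each with probability 1/2.
expected_utility = 0.5 * math.sqrt(160_000) + 0.5 * math.sqrt(40_000)
print(expected_utility)  # 300.0

# Certainty equivalent: the sure income with the same utility.
certainty_equivalent = expected_utility ** 2
print(certainty_equivalent)  # 90000.0

# Risk premium: expected income minus the certainty equivalent.
expected_income = 0.5 * 160_000 + 0.5 * 40_000
print(expected_income - certainty_equivalent)  # 10000.0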
Consider the situation where your risks are perfectly negatively correlated,
so that the sum of your two incomes is $200,000 no matter what. You make your
neighbor the following offer: I will pay you $90,001 − $40,000 = $50,001 when
your luck is bad, if you pay me $160,000 − $90,001 = $69,999 when your luck is
good. That leaves your neighbor with $90,001 whether his luck is good or bad
($160,000 − $69,999 in the former situation and $40,000 + $50,001 in the latter
situation). He prefers this situation to facing the risk. When his luck is good,
yours is bad; you have $40,000 of your own but receive $69,999 from him for a
total of $109,999. When his luck is bad, yours is good; you have $160,000 of your
own but pay him $50,001, leaving you with $109,999. You have also eliminated
your own risk. Both of you are made better off by this deal, but you have collared
almost all the gain.
Of course, your neighbor could have made you the opposite offer. And a
whole range of intermediate offers, involving more equitable sharing of the gains
from risk sharing, is also conceivable. Which of these will prevail? That depends
on the parties’ bargaining power, as we will see in more detail in Chapter 17; the
full range of mutually beneficial risk-sharing outcomes will correspond to the efficient frontier of negotiation in the bargaining game between the players.
B. Paying to Reduce Risk
Now we consider the possibility of trading of risks for cash. Suppose you are the
farmer facing the same risk as before. But now your neighbor has a sure income
of $100,000. You face a lot of risk, and he faces none. He may be willing to take
a little of your risk for a price that is agreeable to both of you. We just saw that
$10,000 is the maximum “insurance premium” you would be willing to pay to
get rid of your risk completely. Would your neighbor accept this as payment
for eliminating your risk? In effect, he is taking over control of his riskless income plus your risky income, that is, $100,000 + $160,000 = $260,000 if your
luck is good and $100,000 + $40,000 = $140,000 if your luck is bad. He gives you
$90,000 in either eventuality, thus leaving him with $170,000 or $50,000 with
equal probabilities. His expected utility is then

1/2 × √170,000 + 1/2 × √50,000 = 1/2 × 412.31 + 1/2 × 223.61 = 317.96.
His utility if he did not trade with you would be √100,000 = 316.23, so the trade
makes him just slightly better off. The range of mutually beneficial deals in this
case is very narrow, so the outcome is almost determinate, but there is not much
scope for mutual benefit if you aim to trade all of your risk away.
What about a partial trade? Suppose you pay him x if your luck is good, and
he pays you y if your luck is bad. For this to raise expected utilities for both of
you, we need both of the following inequalities to hold:
12 3 
160,000 2 x 1 12 
40,000 1 y . 300,
12 3 
100,000 1 x 1 12 3 
100,000 2 y . 100,000.
As an example, suppose y 5 10,000. Then the second inequality yields x .
10,526.67, and the first yields x , 18,328.16. The first value for x is the minimum
payment he requires from you to be willing to make the trade, and the second
value for x is the maximum you are willing to pay to him to have him assume
your risk. Thus, there is a substantial range for mutually beneficial trade and
bargaining.
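These bounds are easy to reproduce numerically. The sketch below (our own; the function payment_bounds is hypothetical and simply inverts the two square-root inequalities for a given y) computes the admissible range of payments x:

import math

def payment_bounds(y):
    # Neighbor gains: 1/2*sqrt(100000 + x) + 1/2*sqrt(100000 - y) > sqrt(100000)
    x_min = (2 * math.sqrt(100_000) - math.sqrt(100_000 - y)) ** 2 - 100_000
    # You gain: 1/2*sqrt(160000 - x) + 1/2*sqrt(40000 + y) > 300
    x_max = 160_000 - (600 - math.sqrt(40_000 + y)) ** 2
    return x_min, x_max

print(payment_bounds(10_000))  # approximately (10526.7, 18328.2)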
What if your neighbor is risk neutral, that is, concerned solely with expected
monetary magnitudes? Then the deal must satisfy
12 3 (100,000 1 x) 1 12 3 (100,000 2 y) . 100,000,
or simply x > y, to be acceptable to him. Almost-full insurance, where you
pay him $60,001 if your luck is good and he pays you $59,999 if your luck is
bad, is possible. This is the situation where you reap all the gain from the trade
in risks.
If your “neighbor” is actually an insurance company, the company can be
close to risk neutral because it is combining numerous such risks and is owned
by well-diversified investors for each of whom this business is only a small part
of their total risk. Then the fiction of a friendly, risk-neutral, good neighbor can
become a reality. And if insurance companies compete for your business, the
insurance market can offer you almost-complete insurance at a price that leaves
almost all of the gain with you.
Common to all such arrangements is the idea that mutually beneficial deals
can be struck whereby, for a suitable price, someone facing less risk takes some
of the risk off the shoulders of someone else who faces more. In fact, the idea
that a price and a market for risk exist is the basis for almost all of the financial
arrangements in a modern economy. Stocks and bonds, as well as all of the complex financial instruments, such as derivatives, are just ways of spreading risk to
those who are willing to bear it for the lowest asking price. Many people think
these markets are purely forms of gambling. In a sense, they are. But those who
start out with the least risk take the gambles, perhaps because they have already
diversified in the way that we saw earlier. And the risk is sold or shed by those
who are initially most exposed to it. This enables the latter to be more adventurous in their enterprises than they would be if they had to bear all of the risk
themselves. Thus, financial markets promote entrepreneurship by facilitating
risk trading.
Here we have only considered sharing of a given total risk. In practice, people may be able to take actions to reduce that total risk: a farmer can guard crops
against frosts, and a car owner can drive more carefully to reduce the risk of an
accident. If such actions are not publicly observable, the game will be one of imperfect information, raising the problem of moral hazard that we mentioned in
the introduction: people who are well insured will lack the incentive to reduce
the risk they face. We will look at such problems, and the design of mechanisms
to cope with them, in Chapter 13.
C. Manipulating Risk in Contests
The farmers above faced risk due to the weather rather than from any actions of
their own or of other farmers. If the players in a game can affect the risk they or
others face, then they can use such manipulation of risk strategically. A prime
example is contests such as research and development races between companies to develop and market new information technology or biotech products;
many sports contests have similar features.
The outcome of sports and related contests is determined by a mixture of
skill and chance. You win if
Your skill + your luck > rival's skill + rival's luck

or

Your luck − rival's luck > rival's skill − your skill.
Denote the left-hand side by the symbol L; it measures your “luck surplus.” L is an
uncertain magnitude; suppose its probability distribution is a normal, or bell,
curve, as illustrated by the black curve in Figure 8.2. At any point on the horizontal axis, the height of the curve represents the probability that L takes on that
value. Thus, the area under this curve between any two points on the horizontal
axis equals the probability that L lies between those points. Suppose your rival
has more skill, so you are an underdog. Your “skill deficit,” which equals the difference between your rival’s skill and your skill, is therefore positive, as shown by
the point S. You win if your luck surplus, L, exceeds your skill deficit, S. Therefore,
the area under the curve to the right of the point S, which is shaded in gray in
Figure 8.2, represents your probability of winning. If you make the situation
chancier, the bell curve will be flatter, like the blue curve in Figure 8.2, because
the probability of relatively high and low values of L increases while the probability of moderate values decreases. Then the area under the curve to the right
of S also increases. In Figure 8.2, the area under the original bell curve is shown
by gray shading, and the larger area under the flatter bell curve by the blue
hatching. As the underdog, you should therefore adopt a strategy that flattens
the curve. Conversely, if you are the favorite, you should try to reduce the element of chance in the contest.
[Figure 8.2 plots probability density against the luck surplus L, showing the original bell curve and a flatter, riskier one; the skill deficit S is marked on the horizontal axis.]

FIGURE 8.2 The Effect of Greater Risk on the Chances of Winning
Thus, we should see underdogs or those who have fallen behind in a long
race try unusual or risky strategies: it is their only chance to get level or ahead.
In contrast, favorites or those who have stolen a lead will play it safe. A practical
piece of advice based on this principle: if you want to challenge someone who is
a better player than you to a game of tennis, choose a windy day.
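The effect is easy to quantify. This sketch (our own construction; it assumes the luck surplus L is normally distributed with mean zero and standard deviation sigma) computes the underdog's winning probability P(L > S) and shows how a riskier strategy raises it:

import math

def win_probability(skill_deficit, sigma):
    # P(L > S) for L ~ Normal(0, sigma), via the standard normal CDF.
    z = skill_deficit / sigma
    return 0.5 * (1.0 - math.erf(z / math.sqrt(2.0)))

S = 1.0  # your skill deficit, in the same units as luck
print(win_probability(S, sigma=1.0))  # about 0.159
print(win_probability(S, sigma=2.0))  # about 0.309: a flatter curve helps the underdog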
You may stand to benefit by manipulating not just the amount of risk in your
strategy, but also the correlation between the risks. The player who is ahead will
try to choose a correlation as high and as positive as possible: then, whether his
own luck is good or bad, the luck of his opponent will be the same and his lead
protected. Conversely, the player who is behind will try to find a risk as uncorrelated with that of his opponent as possible. It is well known that in a two-sailboat
race, the boat that is behind should try to steer differently from the boat ahead,
and the boat ahead should try to imitate all the tacks of the one behind.2
2 ASYMMETRIC INFORMATION: BASIC IDEAS
In many games, one or some of the players may have an advantage of knowing with greater certainty what has happened or what will happen. Such advantages, or asymmetries of information, are common in actual strategic situations.
At the most basic level, each player may know his own preferences or payoffs—
for example, risk tolerance in a game of brinkmanship, patience in bargaining, or peaceful or warlike intentions in international relations—quite well but
those of the other players much more vaguely. The same is true for a player’s
knowledge of his own innate characteristics (such as the skill of an employee
or the riskiness of an applicant for auto or health insurance). And sometimes
the actions available to one player—for example, the weaponry and readiness of
a country—are not fully known to other players. Finally, some actual outcomes
(such as the actual dollar value of loss to an insured homeowner in a flood or an
earthquake) may be observed by one player but not by others.
By manipulating what the other players know about your abilities and preferences, you can affect the equilibrium outcome of a game. Therefore, such
manipulation of asymmetric information itself becomes a game of strategy. You
may think that each player will always want to conceal his own information and
elicit information from the others, but that is not so. Here is a list of various possibilities, with examples. The better-informed player may want to do one of the
following:
2 Avinash Dixit and Barry Nalebuff, Thinking Strategically (New York: W. W. Norton & Company,
1991), give a famous example of the use of this strategy in sailboat racing. For a more general theoretical discussion, see Luis Cabral, “R&D Competition When the Firms Choose Variance,” Journal of
Economics and Management Strategy, vol. 12, no. 1 (Spring 2003), pp. 139–50.
1. Conceal information or reveal misleading information. When mixing
moves in a zero-sum game, you don’t want the other player to see what
you have done; you bluff in poker to mislead others about your cards.
2. Reveal selected information truthfully. When you make a strategic move,
you want others to see what you have done so that they will respond in
the way you desire. For example, if you are in a tense situation but your
intentions are not hostile, you want others to know this credibly, so that
there will be no unnecessary fight.
Similarly, the less-informed player may want to do one of the following:
1. Elicit information or filter truth from falsehood. An employer wants to
find out the skill of a prospective employee and the effort of a current
employee. An insurance company wants to know an applicant’s risk class,
the amount of a claimant’s loss, and any contributory negligence by the
claimant that would reduce its liability.
2. Remain ignorant. Being unable to know your opponent’s strategic move
can immunize you against his commitments and threats. Top-level politicians or managers often benefit from having such “credible deniability.”
In most cases, we will find that words alone do not suffice to convey credible
information; rather, actions speak louder than words. Even actions may not convey information credibly if they are too easily performed by any random player.
In general, however, the less-informed players should pay attention to what a
better-informed player does, not to what he says. And knowing that the others
will interpret actions in this way, the better-informed player should in turn try to
manipulate his actions for their information content.
When you are playing a strategic game, you may find that you have information that other players do not. You may have information that is “good”
(for yourself ) in the sense that, if the other players knew this information,
they would alter their actions in a way that would increase your payoff. You
know that you are a nonsmoker, for example, and should qualify for lower
life-insurance premiums. Or you may have “bad” information whose disclosure would cause others to act in a way that would hurt you. You cheated your
way through college, for example, and don’t deserve to be admitted to a prestigious law school. You know that others will infer your information from your
actions. Therefore, you try to think of, and take, actions that will induce them to
believe your information is good. Such actions are called signals, and the strategy of using them is called signaling. Conversely, if others are likely to conclude
that your information is bad, you may be able to stop them from making this
inference by confusing them. This strategy, called signal jamming, is typically
a mixed strategy, because the randomness of mixed strategies makes inferences
imprecise.
If other players know more than you do or take actions that you cannot directly observe, you can use strategies that reduce your informational
disadvantage. The strategy of making another player act so as to reveal his information is called screening, and specific methods used for this purpose are
called screening devices.3
Because a player’s private information often consists of knowledge of his
own abilities or preferences, it is useful to think of players who come to a game
possessing different private information as different types. When credible signaling works, in the equilibrium of the game the less-informed players will be
able to infer the information of the more-informed ones correctly from the actions; the law school, for example, will admit only the truly qualified applicants.
Another way to describe the outcome is to say that in equilibrium, the different types are correctly revealed or separated. Therefore, we call this a separating
equilibrium. In some cases, however, one or more types may successfully mimic
the actions of other types, so that the uninformed players cannot infer types
from actions and cannot identify the different types; insurance companies, for
example, may offer only one kind of life insurance policy. Then, in equilibrium
we say the types are pooled together, and we call this a pooling equilibrium.
When studying games of incomplete information, we will see that identifying
the kind of equilibrium that occurs is of primary importance.
3 DIRECT COMMUNICATION, OR “CHEAP TALK”
The simplest way to convey information to others would seem to be to tell them;
likewise, the simplest way to elicit information would seem to be to ask. But in a
game of strategy, players should be aware that others may not tell the truth and,
likewise, that their own assertions may not be believed by others. That is, the
credibility of mere words may be questionable. It is a common saying that talk is
cheap; indeed, direct communication has zero or negligible direct cost. However,
it can indirectly affect the outcome and payoffs of a game by changing one player's beliefs about another player's actions or by influencing the selection of one
equilibrium out of multiple equilibria. Direct communication that has no direct
cost has come to be called cheap talk by game theorists, and the equilibrium
achieved by using direct communication is termed a cheap talk equilibrium.
3 A word of warning: Don't confuse screening with signal jamming. In ordinary language, the word
screening can have different meanings. The one used in game theory is that of testing or scrutinizing.
Thus, a less-informed player uses screening to find out what a better-informed player knows. For
the alternative sense of screening—that is, concealing—the game-theoretic term is signal jamming.
Thus, a better-informed player uses a signal-jamming action to prevent the less-informed player
from correctly inferring the truth from the action (that is, from screening the better-informed player).
A. Perfectly Aligned Interests
Direct communication of information works well if the players’ interests are well
aligned. The assurance game first introduced in Chapter 4 provides the most extreme example of this. We reproduce its payoff table (Figure 4.11) as Figure 8.3.
The interests of Harry and Sally are perfectly aligned in this game; they both
want to meet, and both prefer meeting in Local Latte. The problem is that the
game is played noncooperatively; they are making their choices independently,
without knowledge of what the other is choosing. But suppose that Harry is
given an opportunity to send a message to Sally (or Sally is given an opportunity to ask a question and Harry replies) before their choices are made. If Harry’s
message (or reply; we will not keep repeating this) is “I am going to Local Latte,”
Sally has no reason to think he is lying.4 If she believes him, she should choose
Local Latte, and if he believes she will believe him, it is equally optimal for him
to choose Local Latte, making his message truthful. Thus, direct communication very easily achieves the mutually preferable outcome. This is indeed the
reason that, when we considered this game in Chapter 4, we had to construct an
elaborate scenario in which such communication was infeasible; recall that the
two were in separate classes until the last minute before their meeting and did
not have their cell phones.
Let us examine the outcome of allowing direct communication in the assurance game more precisely in game-theoretic terms. We have created a two-stage
game. In the first stage, only Harry acts, and his action is his message to Sally.
In the second stage, the original simultaneous-move game is played. In the full
two-stage game, we have a rollback equilibrium where the strategies (complete
plans of action) are as follows. The second-stage action plans for both players
are: “If Harry’s first-stage message was ‘I am going to Starbucks,’ then choose
Starbucks; if Harry’s first-stage message was ‘I am going to Local Latte,’ then
                           SALLY
                   Starbucks    Local Latte
HARRY  Starbucks      1, 1         0, 0
       Local Latte    0, 0         2, 2

FIGURE 8.3 Assurance
4 This reasoning assumes that Harry's payoffs are as stated, and that this fact is common knowledge
between the two. If Sally suspects that Harry wants her to go to Local Latte so he can go to Starbucks
to meet another girlfriend, her strategy will be different! Analysis of games of asymmetric information
thus depends on how many different possible “types” of players are actually conceivable.
choose Local Latte.” (Remember that players in sequential games must specify
complete plans of action.) The first-stage action for Harry is to send the message
“I am going to Local Latte.” Verification that this is indeed a rollback equilibrium
of the two-stage game is easy, and we leave it to you.
However, this equilibrium where cheap talk “works” is not the only rollback
equilibrium of this game. Consider the following strategies: the second-stage
action plan for each player is to go to Starbucks regardless of Harry’s first-stage
message; and Harry’s first-stage message can be anything. We can verify that
this also is indeed a rollback equilibrium. Regardless of Harry’s first-stage message, if one player is going to Starbucks, then it is optimal for the other player to
go there also. Thus, in each of the second-stage subgames that could arise—one
after each of the two messages that Harry could send—both choosing Starbucks
is a Nash equilibrium of the subgame. Then, in the first stage, Harry, knowing
his message is going to be disregarded, is indifferent about which message he
sends.
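These subgame claims can be verified mechanically. Here is a minimal sketch (our own, not part of the text) that enumerates the pure-strategy Nash equilibria of the stage game in Figure 8.3, confirming that both (Starbucks, Starbucks) and (Local Latte, Local Latte) qualify:

# Payoffs (Harry, Sally) for the assurance game of Figure 8.3.
payoffs = {
    ("Starbucks", "Starbucks"): (1, 1),
    ("Starbucks", "Local Latte"): (0, 0),
    ("Local Latte", "Starbucks"): (0, 0),
    ("Local Latte", "Local Latte"): (2, 2),
}
actions = ["Starbucks", "Local Latte"]

def is_nash(h, s):
    # Neither player can gain from a unilateral deviation.
    u_h, u_s = payoffs[(h, s)]
    return (all(payoffs[(h2, s)][0] <= u_h for h2 in actions)
            and all(payoffs[(h, s2)][1] <= u_s for s2 in actions))

print([cell for cell in payoffs if is_nash(*cell)])
# [('Starbucks', 'Starbucks'), ('Local Latte', 'Local Latte')]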
The cheap talk equilibrium—where Harry’s message is not disregarded—
yields higher payoffs, and we might normally think that it would be the one selected as a focal point. However, there may be reasons of history or culture that
favor the other equilibrium. For example, for some reasons quite extraneous to
this particular game, Harry may have a reputation for being totally unreliable.
He might be a compulsive practical joker or just absent-minded. Then people
might generally disregard his statements and, knowing this to be the usual state
of affairs, Sally might not believe this particular one.
Such problems exist in all communication games. They always have alternative equilibria where the communication is disregarded and therefore irrelevant. Game theorists call these babbling equilibria. Having noted that they
exist, however, we will focus on the cheap talk equilibria, where communication
does have some effect.
B. Totally Conflicting Interests
The credibility of direct communication depends on the degree of alignment of
players’ interests. As a dramatic contrast with the assurance game example, consider a game where the players’ interests are totally in conflict—namely, a zero-sum
game. A good example is the tennis point in Figure 4.14 from Chapter 4; we reproduce its payoff matrix as Figure 8.4. Remember that the payoffs are Evert’s success
percentages. Remember also that this game has only a mixed-strategy Nash equilibrium (derived in Chapter 7); Evert’s expected payoff in this equilibrium is 62.
Now suppose that we construct a two-stage game. In the first stage, Evert
is given an opportunity to send a message to Navratilova. In the second stage,
the simultaneous-move game of Figure 8.4 is played. What will be the rollback
equilibrium?
                     NAVRATILOVA
                    DL         CC
EVERT    DL       50, 50     80, 20
         CC       90, 10     20, 80

FIGURE 8.4 Tennis Point
It should be clear that Navratilova will not believe any message she receives
from Evert. For example, if Evert’s message is “I am going to play DL,” and Navratilova believes her, then Navratilova should choose to cover DL. But if Evert thinks
that Navratilova will cover DL, then Evert’s best choice is CC. At the next level of
thinking, Navratilova should see through this and not believe the assertion of DL.
But there is more. Navratilova should not believe that Evert would do exactly
the opposite of what she says either. Suppose Evert’s message is “I am going to
play DL,” and Navratilova thinks “She is just trying to trick me, and so I will take
it that she will play CC.” This will lead Navratilova to choose to cover CC. But if
Evert thinks that Navratilova will disbelieve her in this simple way, then Evert
should choose DL after all. And Navratilova should see through this, too.
Thus, Navratilova’s disbelief runs so deep that she should simply disregard Evert’s message altogether. Then the full two-stage game has only the babbling equilibrium. The two players’ actions in the second stage will be simply those of the
original equilibrium, and Evert’s first-stage message can be anything. This is
true of all zero-sum games.
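As a quick check of the claim that Evert's equilibrium payoff is 62, the following sketch (ours, with invented variable names) solves the indifference condition from Chapter 7 for Evert's equilibrium mix and evaluates her expected success percentage against either of Navratilova's covers.

```python
# Evert's success percentages from Figure 8.4: rows are her passing shots
# (DL, CC); columns are Navratilova's covering choices (DL, CC).
E = {("DL", "DL"): 50, ("DL", "CC"): 80,
     ("CC", "DL"): 90, ("CC", "CC"): 20}

# Evert plays DL with probability p chosen so that Navratilova is
# indifferent between covering DL and covering CC:
#   p*50 + (1-p)*90 = p*80 + (1-p)*20
p = (E[("CC", "DL")] - E[("CC", "CC")]) / (
    (E[("CC", "DL")] - E[("CC", "CC")]) + (E[("DL", "CC")] - E[("DL", "DL")]))
value_if_dl_covered = p * E[("DL", "DL")] + (1 - p) * E[("CC", "DL")]
value_if_cc_covered = p * E[("DL", "CC")] + (1 - p) * E[("CC", "CC")]
print(p, value_if_dl_covered, value_if_cc_covered)  # 0.7, 62.0, 62.0
```

The two evaluated values coincide at 62, confirming that in the babbling equilibrium the second-stage play, and hence Evert's expected payoff, is unchanged by her message.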
C. Partially Aligned Interests
But what about more general games in which there is a mixture of conflict and
common interest? Whether direct communication is credible in such games depends on how the two aspects of conflict and cooperation mix when players’
interests are only partially aligned. Thus, we should expect to see both cheap
talk and babbling equilibria in games of this type. More generally, the greater
the alignment of interests, the more information should be communicable. We
illustrate this intuition with an example.
Consider a situation that you may have already experienced or, if not, soon
will when you start to earn and invest. When your financial adviser recommends
an investment, he may be doing so as part of developing a long-run relationship with you for the steady commissions that your business will bring him, or
he may be a fly-by-night operator who touts a loser, collects the up-front fee,
and disappears. The credibility of his recommendation depends on what type of
relationship you establish with him.
Suppose you want to invest $100,000 in the asset recommended by your adviser and that you anticipate three possible outcomes. The asset could be a bad
investment (B), leading to a 50% loss, or a payoff of −50 measured in thousands
of dollars. The asset could be a mediocre investment (M), yielding a 1% return,
or a payoff of 1. Finally, it could be a good investment (G), yielding a 55% return,
or a payoff of 55. If you choose to invest, you pay the adviser a 2% fee up front
regardless of the performance of the asset; this fee gives your adviser a payoff of
2 and simultaneously lowers your payoff by 2. Your adviser will also earn 20% of
any gain you make, leaving you with a payoff of 80% of the gain, but he will not
have to share in any loss.
With no specialized knowledge related to the particular asset that has been
recommended to you, you cannot judge which of the three outcomes might be
more likely. Therefore, you simply assume that all three possibilities—B, M, and
G—are equally likely: there is a one-third chance of each outcome occurring.
In this situation, in the absence of any further information, you calculate your
expected payoff from investing in the recommended asset as [(1/3 × (−50)) + (1/3 × 0.8 × 1) + (1/3 × 0.8 × 55)] − 2 = [1/3 × (−50 + 0.8 + 44)] − 2 = [1/3 × (−5.2)] − 2 = −1.73 − 2 = −3.73. This calculation indicates an expected loss of
$3,730. Therefore, you would not make the investment, and your adviser would
not get any fee. Similar calculations show that you would also choose not to invest, due to a negative expected payoff, if you believed the asset was definitely
the B type, definitely the M type, or definitely any probability-weighted combination of the B and M types alone.
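The same expected-payoff arithmetic can be packaged in a small function. The sketch below is our own illustration (the function name and belief encoding are invented); it reproduces the three calculations just described: the uniform prior, a certain M, and a certain G.

```python
# Investor's expected payoff (thousands of dollars) from investing.
# Asset returns: B = -50, M = 1, G = 55. You keep 80% of any gain,
# bear the whole of any loss, and pay the adviser a fee of 2 up front.
RETURNS = {"B": -50, "M": 1, "G": 55}

def investor_payoff(beliefs):
    """beliefs maps each asset type to its probability."""
    expected_return = sum(beliefs[t] * (0.8 * r if r > 0 else r)
                          for t, r in RETURNS.items())
    return expected_return - 2

print(investor_payoff({"B": 1/3, "M": 1/3, "G": 1/3}))  # about -3.73; do not invest
print(investor_payoff({"B": 0, "M": 1, "G": 0}))        # -1.2; do not invest
print(investor_payoff({"B": 0, "M": 0, "G": 1}))        # 42.0; invest
```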
Your adviser is in a different situation. He has researched the investment
and knows which of the three possibilities—B, M, or G—is the truth. We want
to determine what he will do with his information, specifically whether he will
truthfully reveal to you what he knows about the asset. We consider the various
possibilities below, assuming that you update your belief about the asset’s type
based on the information you receive from your adviser. For this example, we
assume that you simply believe what you are told: you assign probability 1 to the
asset being the type stated by your adviser.5
I. SHORT-TERM RELATIONSHIP If your adviser tells you that the recommended asset is
type B, you will choose not to invest. Why? Because your expected payoff from
that asset is −50 and investing would cost you an additional 2 (in fees to the
adviser) for a final payoff of −52. Similarly, if he tells you the asset is M, you will
also not invest. In that case, your expected payoff is 80% of the return of 1 minus
5 In the language of probability theory, the probability you assign to a particular event after having
observed, or heard, information or evidence about that event is known as the posterior probability
of the event. You thus assign posterior probability 1 to the stated quality of the asset. Bayes’ theorem,
which we explain in detail in the appendix to this chapter, provides a formal quantification of the
relationship between prior and posterior probabilities.
the 2 in fees for a total of −1.2. Only if the adviser tells you that the asset is G will
you choose to invest. In this situation, your expected payoff is 80% of the 55 return less the 2 in fees, or 42.
What will your adviser do with his knowledge then? If the truth is G, your adviser will want to tell you the truth in order to induce you to invest. But if he anticipates no long-term relationship with you, he will be tempted to tell you that
the truth is G, even when he knows the asset is either M or B. If you decide to
invest based on his statement, he simply pockets his 2% fee and flees; he has no
further need to stay in touch. Knowing that there is a possibility of getting bad
advice, or false information, from an adviser with whom you will interact only
once, you should ignore the adviser’s recommendation altogether. Therefore, in
this short-term relationship game with asymmetric information, credible communication is not possible. The only equilibrium is the babbling one in which you
ignore your adviser; there is no cheap talk equilibrium in this case.
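A one-line payoff comparison makes the adviser's temptation concrete. In this hypothetical sketch (ours), the adviser bears no reputation cost, and you are assumed to invest exactly when the report is G; reporting G then pays at least as much as the truth for every asset type, so the report is uninformative.

```python
# Adviser's one-shot payoff (no reputation cost): a fee of 2 if you
# invest, plus 20% of any gain you make; nothing if you do not invest.
# Assume you would invest exactly when the report is G.
GAINS = {"B": 0, "M": 1, "G": 55}   # he shares only in gains, never losses

def adviser_payoff(truth, report):
    invest = (report == "G")
    return 2 + 0.2 * GAINS[truth] if invest else 0

for truth in ("B", "M", "G"):
    print(truth, "| report truth:", adviser_payoff(truth, truth),
          "| report G:", adviser_payoff(truth, "G"))
# Reporting G never does worse and strictly gains when the truth is B or M,
# so the report carries no information: only the babbling equilibrium remains.
```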
II. LONG-TERM RELATIONSHIP: FULL REVELATION Now suppose your adviser works for a firm
that you have invested with for years: losing your future business may cost him
his job. If you invest in the asset he recommends, you can compare its actual
performance to your adviser’s forecast. That forecast could prove to have been
wrong in a small way (the forecast was M and the truth is B, or the forecast was
G and the truth is M) or in a large way (the forecast was G and the truth is B). If
you discover such misrepresentations, your adviser and his firm lose your future
business. They may also lose business from others if you bad-mouth them to
friends and acquaintances. If the adviser attaches a cost to his loss of reputation,
he is implicitly concerned about your possible losses, and therefore his interests
are partially aligned with yours. Suppose the payoff cost to his reputation of a
small misrepresentation is 2 (the monetary equivalent of a $2,000 loss) and that
of a large misrepresentation is 4 (a $4,000 loss). We can now determine whether
the partial alignment of your interests with those of your adviser is sufficient to
induce him to be truthful.
As we discussed earlier, your adviser will tell you the truth if the asset is G to
induce you to invest. We need to consider his incentives when the truth is not G,
when the asset is actually B or M. Suppose first that the asset is B. If your adviser
truthfully reveals the asset’s type, you will not invest, he will not collect any fee,
but he will also suffer no reputational cost: his payoff from reporting B when the
truth is B is 0. If he tells you the asset is M (even though it is B), you still will not
buy because your expected payoff is −1.2 as we calculated earlier. Then the adviser will still get 0, so he has no incentive to lie and tell you that a B-type asset is
really M.6 But what if he reports G? If you believe him and invest, he will get the
6 We are assuming that if you do not invest in the recommended asset, you do not find out its actual return, so the adviser can suffer no reputation cost in that case. This assumption fits nicely with
the general interpretation of “cheap talk.” Any message has no direct payoff consequences to the
sender; those arise only if the receiver acts upon the information received in the message.
up-front fee of 2, but he will also suffer the reputational cost of the large error,
4.7 His payoff from reporting G (when the truth is B) is negative: your adviser
would do better to reveal B truthfully. Thus, in situations when the truth about
the asset is G or B, the adviser’s incentives are to reveal the type truthfully.
But what if the truth is M? Truthful revelation does not induce you to invest:
the adviser’s payoff is 0 from reporting M. If he reports G and you believe him,
you invest. The adviser gets his fee of 2, 20% of the 1 that is your return under
M, and he also suffers the reputation cost of the small misrepresentation, 2. His
payoff is 2 + (0.2 × 1) − 2 = 0.2 > 0. Thus, your adviser does stand to benefit by
falsely reporting G when the truth is M. Knowing this, you would not believe any
report of G.
Because your adviser has an incentive to lie when the asset he is recommending is M, full information cannot be credibly revealed in this situation.
The babbling equilibrium, where any report from the adviser is ignored, is still a
possible equilibrium. But is it the only equilibrium here or is some partial communication possible? The failure to achieve full revelation occurs because the
adviser will misreport M as G, so suppose we lump those two possibilities together into one event and label it “not-B.” Thus, the adviser asks himself what
he should report: “B or not-B?”8 Now we can consider whether your adviser will
choose to report truthfully in this case of partial communication.
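To see the full-revelation breakdown numerically, the following sketch (our own encoding; the step-counting of misrepresentation sizes is an assumption we impose to match the text's S and L) computes the adviser's payoff from exaggerating when S = 2 and L = 4, with reputation costs borne only if you invest, per footnote 6.

```python
# Adviser's payoff with reputation costs: S for a small misrepresentation
# (one step up, e.g. reporting G when the truth is M) and L for a large
# one (reporting G when the truth is B). Costs are borne only if you
# invest, since otherwise you never learn the asset's true return.
RANK = {"B": 0, "M": 1, "G": 2}
GAINS = {"B": 0, "M": 1, "G": 55}   # he shares only in gains

def adviser_payoff(truth, report, invest, S, L):
    if not invest:
        return 0
    steps = RANK[report] - RANK[truth]      # size of the exaggeration
    cost = {0: 0, 1: S, 2: L}[steps]
    return 2 + 0.2 * GAINS[truth] - cost

S, L = 2, 4   # the reputation costs assumed in the text
print(adviser_payoff("M", "G", invest=True, S=S, L=L))  # 0.2 > 0: lying pays
print(adviser_payoff("B", "G", invest=True, S=S, L=L))  # -2.0 < 0: truth is better
```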
III. LONG-TERM RELATIONSHIP: PARTIAL REVELATION To determine your adviser’s incentives
in the “B or not-B” situation, we need to figure out what inference you will draw
(that is, what posterior probability you will calculate) from the report “not-B,”
assuming you believe it. Your prior (original) belief was that B, M, and G were
equally likely, with probabilities 1/3 each. If you are told “not-B,” you are left
with the two possibilities of M and G. You regarded the two as equally likely originally, and there is no reason to change that assumption, so you now give each
a probability of 12. These are your new, posterior, probabilities, conditioned on
the information you receive from the adviser’s report. With these probabilities,
your expected payoff if you invest when the report is “not-B” is: [1/2 × (0.8 × 1)] + [1/2 × (0.8 × 55)] − 2 = 0.4 + 22 − 2 = 20.4 > 0. This positive expected payoff
is sufficient to induce you to invest when given a report of “not-B.”
Knowing that you will invest if you are told “not-B,” we can determine
whether your adviser will have any incentive to lie. Will he want to tell you
“not-B” even if the truth is B? When the asset is actually type B and the adviser
tells the truth (reports B), his payoff is 0 as we calculated earlier. If he reports
“not-B” instead, and you believe him, he gets 2 in fees.9 He also incurs the
7 The adviser’s payoff calculation does not include a 20% share of your return here. The adviser
knows the truth to be B and so knows you will make a loss, in which he will not share.
8 Our apologies to William Shakespeare.
9 Again, the adviser’s calculation includes no portion of your gain because you will make a loss: the
truth is B and the adviser knows the truth.
reputation cost associated with misrepresentation. Because you assume that
M or G is equally likely in the “not-B” report, the expected value of the reputation cost in this case will be 1/2 times the cost of 2 for small misrepresentation plus 1/2 times the cost of 4 for large misrepresentation: the expected reputation cost is then (1/2 × 2) + (1/2 × 4) = 3. Your adviser’s net payoff from saying “not-B” when the truth is B is 2 − 3 = −1. Therefore, he does not gain by making a false report to you. Because telling the truth is your adviser’s best strategy
here, a cheap talk equilibrium with credible partial revelation of information is
possible.
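The two calculations that support the partial-revelation equilibrium fit in a few lines. This sketch (ours, with invented variable names) reproduces them: your posterior expected payoff of 20.4 after a “not-B” report, and the adviser's expected payoff of −1 from falsely reporting “not-B” when the truth is B.

```python
# Partial revelation ("B or not-B"), with S = 2 and L = 4 as in the text.
# Posterior after "not-B": M and G each with probability 1/2.
exp_investor = 0.5 * (0.8 * 1) + 0.5 * (0.8 * 55) - 2
print(exp_investor)          # 20.4 > 0, so you invest on a "not-B" report

# Adviser's temptation: say "not-B" when the truth is B. He collects the
# fee of 2, shares in no gain, and expects a reputation cost of
# (1/2)*2 + (1/2)*4 = 3, since you treat M and G as equally likely.
exp_adviser_lie = 2 + 0 - (0.5 * 2 + 0.5 * 4)
print(exp_adviser_lie)       # -1 < 0: truthful reporting is better
```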
The concept of the partial-revelation cheap talk equilibrium can be made
more precise using the concept of a partition. Recall that you anticipate three
possible cases or events—B, M, and G. This set of events can be divided, or partitioned, into distinct subsets, and your adviser then reports to you the subset
containing the truth. (Of course, the verity of his report remains to be examined as part of the analysis.) Here we have a situation with a partition into two
subsets, one consisting of the singleton B, and the other consisting of the pair
of events {M, G}. In the partial-revelation equilibrium, these two subsets can be
distinguished based on the adviser’s report, but the finer distinction between M
and G, leading to the finest possible partition into three subsets each consisting
only of a singleton, cannot be made. That finer distinction would be possible
only in a case in which a full-revelation equilibrium exists.
We advisedly said earlier that a cheap talk equilibrium with credible partial
revelation of information is possible. This game is one with multiple equilibria because the babbling equilibrium also remains possible. The configuration
of strategies and beliefs where you ignore the adviser’s report, and the adviser
sends the same report (or even a random report) regardless of the truth, is still
an equilibrium. Given each player’s strategies, the other has no reason to change
his actions or beliefs. In the terminology of partitions, we can think of this babbling equilibrium as having the coarsest possible, and trivial, partition with just
one (sub)set {B, M, G} containing all three possibilities. In general, whenever
you find a non-babbling equilibrium in a cheap talk game, there will also be at
least one other equilibrium with a coarser or cruder partition of outcomes.
IV. MULTIPLE EQUILIBRIA As an example of a situation in which coarser partitions are
associated with additional equilibria, consider the case in which your adviser’s
cost of reputation is higher than assumed above. Let the reputation cost be 4
(instead of 2) for a small misrepresentation of the truth and 8 (instead of 4) for
a large misrepresentation. Our analysis above showed that your adviser will report G if the truth is G, and that he will report B if the truth is B. These results
continue to hold. Your adviser wants you to invest when the truth is G, and he
still gets the same payoff from reporting B when the truth is B as he does from
reporting M in that situation. The higher reputation cost gives him even less
incentive to falsely report G when the truth is B. So if the asset is either B or G,
the adviser can be expected to report truthfully.
The problem for full revelation in our earlier example arose because of the
adviser’s incentive to lie when the asset is M. With our earlier numbers, his payoff from reporting G when the truth is M was higher than that from reporting
truthfully. Will that still be true with the higher reputation costs?
Suppose the truth is M and the adviser reports G. If you believe him and invest in the asset, his expected payoff is 2 (his fee) + 0.2 × 1 (his share in the actual return from an M-type asset) − 4 (his reputation cost) = −1.8 < 0. The
truth would get him 0. He no longer has the temptation to exaggerate the quality of the stock. The outcome where he always reports the truth, and you believe
him and act upon his report, is now a cheap talk equilibrium with full revelation. This has the finest possible partition consisting of three singleton subsets,
{B}, {M}, and {G}.
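Rerunning the critical deviation with the higher costs confirms the claim. A minimal check (ours), assuming as before that you invest only on a G report:

```python
# Higher reputation costs, S = 4 and L = 8: check the only problematic
# deviation, reporting G when the truth is M.
S, L = 4, 8
payoff_lie = 2 + 0.2 * 1 - S     # fee + share of M's gain - small-misrepresentation cost
payoff_truth = 0                 # a truthful M report leads you not to invest
print(payoff_lie, "<", payoff_truth)  # -1.8 < 0: full revelation survives
```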
There are also three other equilibria in this case, each with a coarser partition than the full-revelation equilibrium. Both two-subset situations—one with
{B, M} and {G} and the other with {B} and {M, G}—and the babbling situation
with {B, M, G} are all alternative possible equilibria. We leave it to you to verify this. Which one prevails can depend on all the considerations addressed in
Chapter 4 in our discussion of games with multiple equilibria.
The biggest practical difficulty associated with attaining a non-babbling
equilibrium with credible information communication lies in the players’
knowledge about the extent to which their interests are aligned. The extent of
alignment of interest between the two players must be common knowledge between them. In the investment example, it is critical that you know from past
interactions or other credible sources (for example, a contract) that the adviser
has a large reputational concern in your investment outcome. If you did not
know to what extent his interests were aligned with yours, you would be justified in suspecting that he was exaggerating to induce you to invest for the sake
of the fee he would earn immediately.
What happens when even richer messages are possible? For example, suppose that your adviser could report a number g, representing his estimate of the
rate of growth of the stock price, and that g could range over a continuum of
values. In this situation, as long as the adviser gets some extra benefit if you buy
a bad stock that he recommends, he has some incentive to exaggerate g. Therefore, fully accurate truthful communication is no longer possible. But a partial-revelation cheap talk equilibrium may be possible. The continuous range of
growth rates may split into intervals—say, from 0% to 1%, from 1% to 2%, and
so on—such that the adviser finds it optimal to tell you truthfully into which of
these intervals the actual growth rate falls, and you find it optimal to accept this
advice and take your optimal action on its basis. The higher the adviser’s valuation of his reputation, the finer the possible partition will be—for example,
half-percentage points instead of whole or quarter-percentage points instead of
half. However, we must leave further explanation of this idea to more advanced
treatments of the subject.10
D. Formal Analysis of Cheap Talk Games
Our analysis of cheap talk games so far has been heuristic and verbal. This approach often suffices for understanding and predicting behavior, but the formal
techniques for setting up and solving games—trees and matrices—are available
and can be deployed if needed. To show how this is done and to connect the
games in this chapter with the theory of previous chapters, we now consider
the game between you and your financial adviser in this framework. For this
analysis, we assume that the “language” of communication from your adviser
distinguishes all three possibilities B, M, and G—that is, we consider the finest
possible partition of information. After reading this section, you should be able
to complete a similar analysis for the case where the adviser’s report has to be
the coarser choice between B or “not-B.”
We start by constructing the tree for this game, illustrated in Figure 8.5. The
fictitious player Nature, introduced in Chapter 3, makes the first move, producing one of three scenarios for the return on your investment, namely B, M, or G,
with equal probabilities of 1/3 each. Your adviser observes Nature’s move and
chooses his action, namely the report to you, which can again be B, M, or G. We
simplify the tree a little right away by noting that the adviser never has any incentive to understate the return on the investment; he will never report B when
the truth is M or G, nor will he report M when the truth is G. (You could leave
those possible actions in the tree, but they make it unnecessarily complex. Our
application of one step of rollback shows that none of them is ever optimal for
the adviser, so none could ever be part of an equilibrium.)
Finally, you are the third mover and you must choose whether to invest (I) or
not invest (N). You do not observe Nature’s move directly, however—you know
only the adviser’s report. Therefore, for you, both nodes where the adviser reports M are gathered in one information set while all three nodes where the adviser reports G are gathered in another information set: both information sets
are indicated by dotted ovals around the relevant nodes in Figure 8.5. The presence of the information sets indicates that your actions are constrained. In the
information set where the adviser has reported M, you must make the same investment choice at both nodes in the set. You must choose either I at both nodes
or N at both nodes: you cannot distinguish between the two nodes inside the
10 The seminal paper by Vincent Crawford and Joel Sobel, “Strategic Information Transmission,”
Econometrica, vol. 50, no. 6 (November 1982), pp. 1431–52, developed this theory of partial communication. An elementary exposition and survey of further work is in Joseph Farrell and Matthew
Rabin, “Cheap Talk,” Journal of Economic Perspectives, vol. 10, no. 3 (Summer 1996), pp. 103–118.
information set to choose I at one and N at the other. Likewise, you must choose either I at all three nodes or N at all three nodes of the “report G” information set.

[FIGURE 8.5 Cheap Talk Game Tree: Financial Adviser and Investor. Nature moves first, choosing the asset type B, M, or G, each with probability 1/3. The adviser observes the type and sends his report; you observe only the report, with dotted ovals (information sets) linking the two nodes that follow a report of M and the three nodes that follow a report of G, and then choose I or N. At each of the terminal nodes a through l, the adviser’s payoff is listed first and yours second; for example, node e (truth B, report G, choice I) gives payoffs 2 − L, −52, and node k (truth G, report G, choice I) gives 13, 42.]
At each terminal node, the adviser’s payoff is shown first, and your payoff is
shown second. The payoff numbers, measured in thousands of dollars, reflect
the same numeric values used in our heuristic analysis earlier. You pay your adviser a 2% fee on your $100,000 investment, and your return is −50 if you invest
in B, 1 if you invest in M, and 55 if you invest in G. Your adviser retains 20% of
any gain you earn from his recommendation. We make one change to our former model by not specifying the exact value of the adviser’s reputation cost of
misrepresentation. Instead, we use S to denote the reputation cost of a small
misrepresentation and L for that of a large misrepresentation; to be consistent
with our analysis above, we assume that both are positive and that S < L. This
approach allows us to consider both levels of reputational consideration discussed earlier.
As a sample of how each pair of payoffs is calculated, consider the node at
which Nature has produced an asset of type M, the adviser has reported G, and
you have chosen I; this node is labeled i in Figure 8.5. With these choices, your
payoff includes the up-front fee of 2 paid to your adviser along with 80% of the
investment’s return of 1 for a total of 0.8 − 2 = −1.2. The adviser gets his fee of
2 and his 20% share of the asset’s return (0.2) but suffers the reputation cost of
S, so his total payoff is 2.2 − S. We leave it to you to confirm that all of the other
payoffs have been computed correctly.
With the help of the tree in Figure 8.5, we can now construct a payoff matrix
for this game. Technically, that matrix should include all of the strategies available to both you and your adviser. But, as in our construction of the tree, we can
eliminate some possible strategies from consideration before even putting them
into the table: any obviously poor strategies, for example, can be removed. This
process allows us to build a much smaller table, and therefore one that is much
more manageable, than would be produced if we were to include all possible
strategies.
What strategies can we leave out of consideration as equilibrium strategies? The answer is twofold. First, we can ignore strategies that are obviously
not going to be deployed. We already eliminated some such choices for your adviser (for example, “report B if truth is G”) in building the tree. We can now see
that you also have some choices that can be removed. For example, the strategy
“choose I if report is B” at terminal node a is dominated by “choose N if report
is B,” so we can ignore it. Similarly, inside the “report M” information set, your
action “choose I if report is M” is dominated by “choose N if report is M”; it is
the worst choice at both terminal nodes (c and g) and can therefore also be ignored. Second, we can remove strategies that make no difference to the search
for cheap talk equilibria. For the adviser, for example, “report B” and “report M”
both lead to your choosing N, so we remove them as well. In addition to the terminal nodes we have already eliminated in Figure 8.5 (a, c, and g), we can now
eliminate b, d, and h as well.
This simplification process leaves us only six terminal nodes to consider
as possible equilibrium outcomes of the games (e, f, i, j, k, and l ). Those nodes
arise from strategies that include the adviser reporting that the asset is G and
your choice in response to a report of G. Specifically, we are left with three interesting strategies for the adviser [“report G always (regardless of whether the
truth is B, M, or G),” “report G only when the truth is M or G,” and “report G if
and only if the truth is G”] and two for you (“choose I if report is G” and “choose
N even if report is G”). These five strategies yield the three-by-two payoff matrix
illustrated in Figure 8.6.
The payoffs for each strategy combination in Figure 8.6 are expected payoffs
calculated using the values shown at the terminal nodes of the tree that can be
reached under that strategy combination, weighted by the appropriate probabilities. As an example, consider the top-left cell of the table, where the adviser
reports G regardless of the true type of the asset, and you invest because the
report is G. This strategy combination leads to terminal nodes e, i, and k, each
with probability 13. Thus, the adviser’s expected payoff in that cell is {[13 3
(2 2 L)] 1 [13 3 (2.2 2 S)] 1 (13 3 13)} 5 13 3 (17.2 2 L 2 S ). Similarly, your
6841D CH08 UG.indd 292
12/18/14 3:12 PM
D i r e c t C o m m u n i c at i o n , o r “ C h e a p ta l k ” 2 9 3
YOU
Always G
ADVISER
I if G
N if G
(17.2 – L – S)3, –11.23
0, 0
G only if M or G
G if and only if G
(15.2 – S)3, 40.83
133, 423
0, 0
0, 0
FIGURE 8.6 Payoff Matrix for Cheap Talk Game
expected payoff in the same cell is [(13 3 252) 1 (13 3 21.2) 1 (13 3 42)] 5
13 3 (211.2). Again we leave it to you to confirm that the remaining expected
payoffs have been computed correctly.
Now that we have a complete payoff matrix, we can use the techniques developed in Chapter 4 to identify equilibria, with the caveat that the values of L
and S will play a role in our analysis. Simple best-response analysis shows that
your best response to “Always G” is “N if G,” but your best response to the adviser’s other two strategies is “I if G.” Similarly, the adviser’s best response to your
“N if G” can be any of his three choices. Thus, we have our first result: the top-right cell is always a Nash equilibrium. If the adviser reports G regardless of the
truth (or for that matter sends any report that is the same in all three scenarios),
then you do better by choosing N, and given that you are choosing N, the adviser has no incentive to deviate from his choice. This equilibrium is the babbling equilibrium with no information communication that we saw earlier.
Next consider the adviser’s best response to your choice of “I if G.” The only
possible equilibria occur when he chooses “G only if M or G” or “G if and only
if G.” But whether he will pick one or the other of these, or indeed neither, depends on the specific values of L and S. For the strategy pair {“G only if M or G,”
“I if G”} to be a Nash equilibrium, it must be true that 15.2 − S > 17.2 − L − S and that 15.2 − S > 13. The first expression holds if L > 2; the second if S < 2.2.
So if the values of L and S meet these requirements, the middle-left cell will be a
cheap talk (Nash) equilibrium. In this equilibrium, the report G does not allow
you to infer whether the true scenario is M or G, but you know that the truth is
definitely not B. Knowing this much, you can be sure that your expected payoff
will be positive, and you choose to invest. In this situation, G really means “not-B,” and the equilibrium outcome is formally equivalent to the partial-revelation
equilibrium we discussed earlier.11
11 Incidentally, this highlights a certain arbitrariness in language. It does not matter whether the report is G or “not-B,” as long as its significance is clearly understood by the parties. One can even
have upside-down conventions where “bad” means “good” and vice versa, if the translation from the
terms to meaning is common knowledge to all parties involved in the communication.
We can also check for the conditions under which the strategy pair {“G if and
only if G,” “I if G”} is a Nash equilibrium. That outcome requires both 13 > 17.2 − L − S and 13 > 15.2 − S. These are less easily handled than the pair of expressions above. Note however that the latter expression requires S > 2.2 and that we have assumed that L > S; so L > 2.2 must hold when S > 2.2 holds. You can now use these requirements to check whether the first expression will hold. Use the minimum value of L and S, 2.2, and plug these into 13 > 17.2 − L − S to find 13 > 12.8, which is always true. These calculations indicate that the bottom-left cell is a cheap talk equilibrium when S > 2.2, as long as L > S. This equilibrium
is the one with full revelation that we identified at the end of our earlier analysis.
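Best-response analysis of Figure 8.6 can also be automated. The sketch below is our own (the strategy labels abbreviate those in the figure, and the sample values of S and L are illustrative); it enumerates the cells and reports the Nash equilibria for the three parameter regions discussed in the text.

```python
# Best-response check of the 3-by-2 matrix in Figure 8.6 for given S, L.
# Each cell holds (adviser payoff, your payoff), already divided by 3.
def matrix(S, L):
    return {("Always G", "I if G"): ((17.2 - L - S) / 3, -11.2 / 3),
            ("Always G", "N if G"): (0.0, 0.0),
            ("G only if M or G", "I if G"): ((15.2 - S) / 3, 40.8 / 3),
            ("G only if M or G", "N if G"): (0.0, 0.0),
            ("G iff G", "I if G"): (13 / 3, 42 / 3),
            ("G iff G", "N if G"): (0.0, 0.0)}

def nash_equilibria(S, L):
    m = matrix(S, L)
    rows = ["Always G", "G only if M or G", "G iff G"]
    cols = ["I if G", "N if G"]
    found = []
    for r in rows:
        for c in cols:
            best_row = all(m[(r2, c)][0] <= m[(r, c)][0] for r2 in rows)
            best_col = all(m[(r, c2)][1] <= m[(r, c)][1] for c2 in cols)
            if best_row and best_col:
                found.append((r, c))
    return found

print(nash_equilibria(S=2, L=4))    # babbling + partial revelation
print(nash_equilibria(S=4, L=8))    # babbling + full revelation
print(nash_equilibria(S=1, L=1.5))  # babbling only (L < 2)
```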
In each case described here, the babbling equilibrium exists along with either the {“G if and only if G,” “I if G”} or the {“G only if M or G,” “I if G”} equilibrium. Note that we get only the babbling equilibrium when the reputational
cost to your adviser is small (L < 2, and S < L), which is consistent with the intuition we presented earlier. Finally, if we restrict the language of messages to the
coarser partition between B and “not-B,” then an extension of the analysis here
shows that the strategy set {“ ‘not-B’ if M or G,” “I if ‘not-B’ ”} is also a Nash equilibrium of that game.
In each instance, our formal analysis confirms the verbal arguments we
made in Section 3.C. Some of you may find the verbal approach sufficient for
most, if not all, of your needs. Others may prefer the more formal model presented in this section. Be aware, however, that game trees and matrices can only
go so far: once your model becomes sufficiently complex, with a continuum
of report choices, for example, you will need to rely almost entirely on mathematics to identify equilibria. Being able to solve models of asymmetric information in a variety of forms—verbally, with trees and tables, or with algebra or
calculus—is an important skill. Later in this chapter, we present additional examples of such games: we solve one using a combination of intuition and algebra and the other with a game tree and payoff table. In each case, the one
solution method does not preclude the other, so you may attempt alternative
solutions on your own.
4 ADVERSE SELECTION, SIGNALING, AND SCREENING
A. Adverse Selection and Market Failure
In many games, one of the players knows something pertinent to the outcomes
that the other players don’t know. An employer knows much less about the skills
of a potential employee than does the employee himself; vaguer but important
matters such as work attitude and collegiality are even harder to observe. An insurance company knows much less about the health or driving skills of someone applying for medical or auto insurance than does the applicant. The seller
of a used car knows a lot about the car from long experience; a potential buyer
can at best get a little information by inspection.
In such situations, direct communication will not credibly signal information. Unskilled workers will claim to have skills to get higher-paid jobs; people
who are bad risks will claim good health or driving habits to get lower insurance
premiums; owners of bad cars will assert that their cars run fine and have given
them no trouble in all the years they have owned them. The other parties to the
transactions will be aware of the incentives to lie and will not trust the information conveyed by the words. There is no possibility of a cheap talk equilibrium of
the type described in Section 3.
What if the less-informed parties in these transactions have no way of obtaining the pertinent information at all? In other words, to use the terminology
introduced in Section 2 above, suppose that no credible screening devices or
signals are available. If an insurance company offers a policy that costs 5 cents
for each dollar of coverage, then the policy will be especially attractive to people
who know that their own risk (of illness or a car crash) exceeds 5%. Of course,
some people who know their risk to be lower than 5% will still buy the insurance
because they are risk averse. But the pool of applicants for this insurance policy will have a larger proportion of the poorer risks than the proportion of these
risks in the population as a whole. The insurance company will selectively attract
an unfavorable, or adverse, group of customers. This phenomenon is very common in transactions involving asymmetric information and is known as adverse
selection. (This term in fact originated within the insurance industry.)
Potential consequences of adverse selection for market transactions were
dramatically illustrated by George Akerlof in a paper that became the starting
point of economic analysis of asymmetric information situations and won him
a Nobel Prize in 2001.12 We use his example to introduce you to the effects that
adverse selection may have.
B. The Market for “Lemons”
Think of the market in 2014 for a specific kind of used car, say a 2011 Citrus.
Suppose that in use these cars have proved either to be largely trouble free and reliable or to have had many things go wrong. The usual slang name for the latter
type is “lemon,” so for contrast let us call the former type “orange.”
12 George Akerlof, “The Market for ‘Lemons’: Quality Uncertainty and the Market Mechanism,”
Quarterly Journal of Economics, vol. 84, no. 3 (August 1970), pp. 488–500.
Suppose that each owner of an orange Citrus values it at $12,500; he is willing to part with it for a price higher than this but not for a lower price. Similarly,
each owner of a lemon Citrus values it at $3,000. Suppose that potential buyers are willing to pay more than these values for each type. If a buyer could be
confident that the car he was buying was an orange, he would be willing to pay
$16,000 for it; if the car was a known lemon, he would be willing to pay $6,000.
Since the buyers value each type of car more than do the original owners, it benefits everyone if all the cars are traded. The price for an orange can be anywhere
between $12,500 and $16,000; that for a lemon anywhere between $3,000 and
$6,000. For definiteness, we will suppose that there is a limited stock of such
cars and a larger number of potential buyers. Then the buyers, competing with
each other, will drive the price up to their full willingness to pay. The prices will
be $16,000 for an orange and $6,000 for a lemon—if each type could be identified with certainty.
But information about the quality of any specific car is not symmetric between the two parties to the transaction. The owner of a Citrus knows perfectly
well whether it is an orange or a lemon. Potential buyers don’t, and the owner
of a lemon has no incentive to disclose the truth. For now, we confine our analysis to the private used-car market in which laws requiring truthful disclosure
are either nonexistent or hard to enforce. We also assume away any possibility
that the potential buyer can observe something that tells him whether the car
is an orange or a lemon; similarly, the car owner has no way to indicate the type
of car he owns. Thus, for this example, we consider the effects of the information asymmetry alone without allowing either side of the transaction to signal or
screen.
When buyers cannot distinguish between oranges and lemons, there cannot
be distinct prices for the two types in the market. There can be just one price,
p, for a Citrus; the two types—oranges and lemons—must be pooled. Whether
efficient trade is possible under such circumstances will depend on the proportions of oranges and lemons in the population. We suppose that oranges are a
fraction f of used Citruses and lemons the remaining fraction (1 − f ).
Even though buyers cannot verify the quality of an individual car, they can
know the proportion of good cars in the population as a whole, for example,
from newspaper reports, and we assume this to be the case. If all cars are being
traded, a potential buyer will expect to get a random selection, with probabilities f and (1 − f ) of getting an orange and a lemon, respectively. The expected value of the car purchased is 16,000 × f + 6,000 × (1 − f ) = 6,000 + 10,000 × f. He will buy such a car if its expected value exceeds the price he is asked to pay, that is, if 6,000 + 10,000 × f > p.
Now consider the point of view of the seller. The owners know whether their
cars are oranges or lemons. The owner of a lemon is willing to sell it as long
as the price exceeds its value to him, that is, if p > 3,000. But the owner of an
orange requires p > 12,500. If this condition for an orange owner to sell is satisfied, so is the sell condition for a lemon owner.
To meet the requirements for all buyers and sellers to want to make the
trade, therefore, we need 6,000 + 10,000 × f > p > 12,500. If the fraction of oranges in the population satisfies 6,000 + 10,000 × f > 12,500, or f > 0.65, a price can be found that does the job; otherwise there cannot be efficient trade. If 6,000 + 10,000 × f < 12,500 (leaving out the exceptional and unlikely case
where the two are just equal), owners of oranges are unwilling to sell at the maximum price the potential buyers are willing to pay. We then have adverse selection in the set of used cars put up for sale; no oranges will appear in the market
at all. The potential buyers will recognize this, will expect to get a lemon for sure,
and will pay at most $6,000. The owners of lemons will be happy with this outcome, so lemons will trade. But the market for oranges will collapse completely
due to the asymmetric information. The outcome will be a kind of Gresham’s
law, where bad cars drive out the good.
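The whole argument reduces to comparing 6,000 + 10,000 × f with 12,500. A minimal sketch (ours; the function name is invented) of the pooled-price logic:

```python
# Market for Citruses: can a single pooled price support efficient trade?
# Buyers value an orange at 16,000 and a lemon at 6,000; owners value
# them at 12,500 and 3,000. f is the fraction of oranges.
def market_outcome(f):
    buyer_max = 6000 + 10000 * f          # expected value of a random car
    if buyer_max > 12500:                 # orange owners will sell: f > 0.65
        return f"all cars trade at a price between 12,500 and {buyer_max:,.0f}"
    return "oranges withdraw; only lemons trade, at a price up to 6,000"

print(market_outcome(0.8))   # f > 0.65: efficient trade is possible
print(market_outcome(0.5))   # f < 0.65: adverse selection; oranges vanish
```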
Because the lack of information makes it impossible to get a reasonable
price for an orange, the owners of oranges will want a way to convince the buyers that their cars are the good type. They will want to signal their type. The
trouble is that the owners of lemons would also like to pretend that their cars
are oranges, and to this end they can imitate most of the signals that owners
of oranges might attempt to use. Michael Spence, who developed the concept
of signaling and shared the 2001 Nobel Prize for information economics with
Akerlof and Stiglitz, summarizes the problems facing our orange owners in his
pathbreaking book on signaling: “Verbal declarations are costless and therefore
useless. Anyone can lie about why he is selling the car. One can offer to let the
buyer have the car checked. The lemon owner can make the same offer. It’s a
bluff. If called, nothing is lost. Besides, such checks are costly. Reliability reports
from the owner’s mechanic are untrustworthy. The clever nonlemon owner
might pay for the checkup but let the purchaser choose the inspector. The problem for the owner, then, is to keep the inspection cost down. Guarantees do not
work. The seller may move to Cleveland, leaving no forwarding address.”13
In reality, the situation is not so hopeless as Spence implies. People and
firms that regularly sell used cars as a business can establish a reputation for
honesty and profit from this reputation by charging a markup. (Of course, some
used car dealers are unscrupulous.) Some buyers are knowledgeable about cars;
some buy from personal acquaintances and can therefore verify the history of
the car they are buying. Or dealers may offer warranties, a topic we discuss in
13 A. Michael Spence, Market Signaling: Information Transfer in Hiring and Related Screening Processes (Cambridge, Mass.: Harvard University Press, 1974), pp. 93–94. The present authors apologize
on behalf of Spence to any residents of Cleveland who may be offended by any unwarranted suggestion that that’s where shady sellers of used cars go!
more detail later. And in other markets it is harder for bad types to mimic the
actions of good types, so credible signaling will be viable. For a specific example
of such a situation, consider the possibility that education can signal skill. Then
it may be hard for the unskilled to acquire enough education to be mistaken for
highly skilled people. The key requirement for education to separate the types
is that education should be sufficiently more costly for the truly unskilled to acquire than for the truly skilled. To show how and when signaling can successfully separate types, therefore, we turn to the labor market.
C. Signaling and Screening: Sample Situations
The basic idea of signaling or screening to convey or elicit information is very
simple: players of different “types” (that is, possessing different information
about their own characteristics or about the game and its payoffs more generally) should find it optimal to take different actions so that their actions truthfully reveal their types. Situations of such information asymmetry, and signaling
and screening strategies to cope with them, are ubiquitous. Here are some additional situations to which the methods of analysis developed throughout this
chapter can be applied.
I. INSURANCE The prospective buyers of an insurance policy vary in their risk categories, or their levels of riskiness to the insurer. For example, among the numerous applicants for an automobile collision insurance policy will be some drivers
who are naturally cautious and others who are simply less careful. Each potential
customer has a better knowledge of his or her own risk class than does the insurance company. Given the terms of any particular policy, the company will make
less profit (or a greater loss) on the more risky customers. However, the more
risky customers will be the ones who find the specified policy more attractive.
Thus, the company attracts the less favorable group of customers, and we have
a situation of adverse selection.14 Clearly, the insurance company would like to
distinguish between the risk classes. They can do so using a screening device.
Suppose as an example that there are just two risk classes. The company can
then offer two policies from which any individual customer chooses one. The
first has a lower premium (in units of so many cents per dollar of coverage), but
covers a lower percentage of any loss incurred by the customer; the second has
a higher premium, but covers a higher percentage, perhaps even 100%, of the
loss. (In the case of collision insurance, this loss represents the cost of having
14 Here we are not talking about the possibility that a well-insured driver will deliberately exercise
less care. That is moral hazard, and it can be mitigated using co-insurance schemes similar to those
discussed here. But for now our concern is purely adverse selection, where some drivers are just by
their nature careful, and others are equally uncontrollably spaced out and careless when they drive.
an auto body shop complete the needed repairs to one’s car.) A higher-risk customer is more likely to suffer the uncovered loss and therefore is more willing to
pay the higher premium to get more coverage. The company can then adjust the
premiums and coverage ratios so that customers of the higher-risk type choose
the high-premium, high-coverage policy and customers of the less-risky type
choose the lower-premium, lower-coverage policy. If there are more risk types,
there have to be correspondingly more policies in the menu offered to prospective customers: with a continuous spectrum of risks, there may be a corresponding continuum of policies.
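A small numerical illustration may help; the numbers below are entirely our own invention (the text specifies none), using square-root utility to capture risk aversion. The check confirms that each risk type picks the policy intended for it from a two-policy menu.

```python
import math

# Illustrative screening menu (our own numbers, not from the text).
# Wealth 100, possible loss 96; high-risk drivers suffer the loss with
# probability 0.5, low-risk with probability 0.1; sqrt utility.
WEALTH, LOSS = 100.0, 96.0

def expected_utility(p_loss, premium, coverage):
    """Expected sqrt-utility from a policy paying `coverage` after a loss."""
    u_no_loss = math.sqrt(WEALTH - premium)
    u_loss = math.sqrt(WEALTH - premium - LOSS + coverage)
    return (1 - p_loss) * u_no_loss + p_loss * u_loss

# Menu: full coverage at a high premium; partial coverage at a low premium.
full = (48.0, 96.0)      # (premium, coverage)
partial = (1.2, 10.0)

for name, p in (("high-risk", 0.5), ("low-risk", 0.1)):
    eu_full = expected_utility(p, *full)
    eu_partial = expected_utility(p, *partial)
    print(name, "chooses", "full" if eu_full > eu_partial else "partial")
# high-risk chooses full, low-risk chooses partial: the menu separates types.
```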
Of course, this insurance company has to compete with other insurance
companies for the business of each customer. That competition affects the
packages of premiums and levels of coverage it can offer. Sometimes the competition may even preclude the attainment of an equilibrium as each offering
can be defeated by another.15 But the general idea behind differential premium
policies for differential risk-class customers is valid and important.
II. WARRANTIES Many types of durable goods—cars, computers, washing
machines—vary in their quality. Any company that has produced such a good
will have a pretty good idea of its quality. But a prospective buyer will be much
less informed. Can a company that knows its product to be of high quality signal
this fact credibly to its potential customers?
The most obvious, and most commonly used, signal is a good warranty. The
cost of providing a warranty is lower for a genuinely high-quality product; the
high-quality producer is less likely to be called on to provide repairs or replacement than the company with a shoddier product. Therefore, warranties can
serve as signals of quality, and consumers are intuitively quite aware of this fact
when they make their purchase decisions.
Typically in such situations, the signal has to be carried to excess in order to
make it sufficiently costly to mimic. Thus, the producer of a high-quality car has
to offer a sufficiently long or strong warranty to signal the quality credibly. This
requirement is especially relevant for any company that is a relative newcomer
or one that does not have a previous reputation for offering high-quality products. Hyundai, for example, began selling cars in the United States in 1986 and
for its first decade had a low-quality reputation. In the mid-1990s, it invested
heavily in better technology, design, and manufacturing. To revamp its image,
it offered the then-revolutionary 10-year, 100,000-mile warranty. Now it ranks
with consumer groups as one of the better-quality automobile manufacturers.
15 See Michael Rothschild and Joseph Stiglitz, “Equilibrium in Competitive Insurance Markets: An
Essay on the Economics of Imperfect Information,” Quarterly Journal of Economics, vol. 90, no. 4
(November 1976), pp. 629–49.
III. PRICE DISCRIMINATION The buyers of most products are heterogeneous in terms
of their willingness to pay, their willingness to devote time to searching for a
better price, and so on. Companies would like to identify those potential customers with a higher willingness to pay and charge them one, presumably fairly
high, price while offering selective good deals to those who are not willing to
pay so much (as long as that willingness to pay still exceeds the cost of supplying the product). The companies can successfully charge different prices to different groups of customers by using screening devices to separate the types. We
will discuss such strategies, known as price discrimination in the economics literature, in more detail in Chapter 13. Here we provide just a brief overview.
The example of discriminatory prices best known to most people comes
from the airline industry. Business travelers are willing to pay more for their
airline tickets than are tourists, often because at least some of the cost of the
ticket is borne by the business traveler’s employer. It would be illegal for airlines blatantly to identify each traveler’s type and then to charge them different prices. But the airlines take advantage of the fact that tourists are also more
willing to commit to an itinerary well in advance, while business travelers need
to retain flexibility in their plans. Therefore, airlines charge different prices for
nonrefundable versus refundable fares and leave it to the travelers to choose
the fare type. This pricing strategy is an example of screening by self-selection.16
Other devices—advance purchase or Saturday night stay requirements, different
classes of onboard service (first versus business versus coach)—serve the same
screening purpose.
Price discrimination is not specific to high-priced products like airline tickets. Other discriminatory pricing schemes can be observed in many markets
where product prices are considerably lower than those for air travel. Coffee and
sandwich shops, for example, commonly offer “frequent buyer” discount cards.
These cards effectively lower the price of coffee or a sandwich to the shop’s regular customers. The idea is that regular customers are more willing to search for
the best deal in the neighborhood, while visitors or occasional users would go to
the first coffee or sandwich shop they see without spending the time necessary
to determine whether any lower prices might be available. The higher regular
price and “free 11th item” discount represent the menu of options from which
the two types of customer select, thereby separating them by type.
Books are another example. They are typically first published in a higher-price hardcover version; a cheaper paperback comes out several months to a
year or more later. The difference in the costs of producing the two versions is
negligible. But the versions serve to separate the buyers who want to read the
16 We investigate the idea of self-selection more formally in Section 5 of this chapter.
book immediately and are willing to pay more for the ability to do so from those
who wish to pay less and are willing to wait longer.
IV. PRODUCT DESIGN AND ADVERTISING Can an attractive, well-designed product exterior
serve the purpose of signaling high quality? The key requirement is that the cost
of the signal be sufficiently higher for a company trying to pretend high quality than for one that has a truly high-quality product. Typically, the cost of the
product exterior is the same regardless of the innate quality that resides within.
Therefore, the mimic would face no cost differential, and the signal would not
be credible.
But such signals may have some partial validity. Exterior design is a fixed
cost that is spread over the whole product run. Buyers do learn about quality
from their own experience, from friends, and from reviews and comments in
the media. These considerations indicate that a high-quality good can expect to
have a longer market life and higher total sales. Therefore, the cost of an expensive exterior is spread over a larger volume and adds less to the cost of each unit
of the product, if that product is of higher innate quality. The firm is in effect
making a statement: “We have a good product that will sell a lot. That is why
we can afford to spend so much on its design. A fly-by-night firm would find
this prohibitive for the few units it expects to sell before people find out its poor
quality and don’t buy any more from it.” Even expensive, seemingly useless and
uninformative product launch and advertising campaigns can have a similar
signaling effect.17
Similarly, when you walk into a bank and see solid, expensive marble counters and plush furnishings, you may be reassured about its stability. However,
for this particular signal to work, it is important that the building, furnishings,
and décor be specific to the bank. If everything could easily be sold to other
types of establishments and the space converted into a restaurant, say, then a
fly-by-night operator could mimic a truly solid bank at no higher cost. In that
situation, the signal would not be credible.
V. TAXIS The examples above are drawn primarily from economics, but here is
one from the field of sociology about taxi service. The overwhelming majority
of people who hail a taxi simply want to go to their destination, pay the fare,
and depart. But a few are out to rob the driver or hijack the cab, perhaps with
some physical violence involved. How can taxi drivers screen their prospective
customers and accept only the good ones? Sociologists Diego Gambetta and
Heather Hamill researched this question using extensive interviews with taxi
17 Kyle Bagwell and Gary Ramey, “Coordination Economies, Advertising, and Search Behavior in Retail Markets,” American Economic Review, vol. 84, no. 3 (June 1994), pp. 498–517.
drivers in New York (where robbery is the main problem) and Northern Ireland
(where sectarian violence was a serious problem at the time of their study).18
The drivers need an appropriate screening device, knowing that the bad
types of potential customers are trying to mimic the actions of the good type.
The usual differential cost condition applies. A New York customer wearing a
suit is not guaranteed to be harmless, because a robber can buy and wear a suit
for the same cost as a good customer; race and gender cannot be used to screen
customers either. In Northern Ireland, the warring factions were also not easily
distinguishable by external characteristics.
Gambetta and Hamill found that some screens were more useful to the taxi
drivers than others. For example, ordering a cab by phone was a better signal
of a customer’s trustworthiness than hailing on the street: when you revealed
a pickup location, the taxi company literally “knew where you lived.”19 More
important, some signaling devices worked better for customers (and were
therefore better screens for the drivers) when used in combination rather than
individually. Wearing a suit was no good as a credible screen all by itself, but a
customer coming out of an office building wearing a suit was deemed safer than
a random suit-wearing customer standing on a street corner. Most office buildings have some security in the lobby these days, and such a customer could be
deemed to have already passed one level of security testing.
Perhaps most important were the involuntary signals that people give off—
microexpressions, gestures, and so forth—that experienced drivers learn to read
and interpret. Precisely because these signals are involuntary, they carry an effectively infinite cost of mimicry and are therefore the most effective screening devices for separating types.20
VI. Political business cycles And now we provide two examples from the field of
political economy. Incumbent governments often increase spending to get the
economy to boom just before an election, thereby hoping to attract more votes
and win the election. But shouldn’t rational voters see through this stratagem
and recognize that, as soon as the election is over, the government will be forced
to retrench, perhaps leading to a recession? For pre-election spending to be an
effective signal of type, there has to be some uncertainty in the voters’ minds
about the “competence-type” of the government. The future recession will create a political cost for the government. This cost will be smaller if the government is more competent in its handling of the economy. If the cost differential
18. Diego Gambetta and Heather Hamill, Streetwise: How Taxi Drivers Establish Their Customers’ Trustworthiness (New York: Russell Sage Foundation, 2005).
19. Even if the location was a restaurant or office, not a home, you leave more evidence about yourself when you call for a pickup than when you hail a cab on the street.
20. Paul Ekman, Telling Lies: Clues to Deceit in the Marketplace, Politics, and Marriage (New York: W. W. Norton & Company, 2009), reports on how such inadvertent signals can be read and interpreted.
between competent and incompetent government types is large enough, a sufficiently high expenditure spike can credibly signal competence.21
Another similar example relates to inflation controls. Many countries at
many times have suffered high inflation, and governments have piously declared their intentions to reduce this level. Can a government that truly cares
about price stability credibly signal its type? Yes. Governments can issue bonds
protected against inflation: the interest rate on such bonds is automatically
ratcheted up by the rate of inflation or the capital value of the bond rises in proportion to the increase in the price level. Issuing government debt in this form
is more costly to a government that likes policies that lead to higher inflation,
because it has to make good on the contract of paying more interest or increasing the value of its debt. Therefore, a government with genuinely anti-inflation
preferences can issue inflation-protected bonds as a credible signal, separating
itself from the inflation-loving type of government.
VII. evolutionary biology Finally, an example from the natural sciences. In many
species of birds, the males have very elaborate and heavy plumage that females
find attractive. One should expect the females to seek genetically superior males
so that their offspring will be better equipped to survive to adulthood and to attract mates in their turn. But why does elaborate plumage indicate such desirable genetic qualities? One would think that such plumage might be a handicap,
making the male bird more visible to predators (including human hunters) and
less mobile, therefore less able to evade these predators. Why do females choose
these seemingly handicapped males? The answer comes from the conditions for
credible signaling. Although heavy plumage is indeed a handicap, it is less of a
handicap to a male who is sufficiently genetically superior in qualities such as
strength and speed. The weaker the male, the harder it will be for him to produce and maintain plumage of a given quality. Thus, it is precisely the heaviness
of the plumage that makes it a credible signal of the male’s quality.22
D. Experimental Evidence
The characterization of and solution for equilibria in games of signaling and
screening entail some quite subtle concepts and computations. Thus, in each
case above, formal models must be carefully described in order to formulate
reasonable and accurate predictions for player choices. In all such games, players must revise or update their probabilities about other players’ type(s) based
21. These ideas and the supporting evidence are reviewed by Alan Drazen in “The Political Business Cycle after 25 Years,” in NBER Macroeconomics Annual 2000, ed. Ben S. Bernanke and Kenneth S. Rogoff (Cambridge, Mass.: MIT Press, 2001), pp. 75–117.
22. Matt Ridley, The Red Queen: Sex and the Evolution of Human Behavior (New York: Penguin, 1995), p. 148.
on observation of those other players’ actions. This updating requires an application of Bayes’ theorem, which is explained in the appendix to this chapter. We
also carefully analyze an example of a game with this kind of updating in Section 6 below.
You can imagine, without going into any of the details of the appendixes,
that these probability-updating calculations are quite complex. Should we expect players to perform them correctly? There is ample evidence that people are
very bad at performing calculations that include probabilities and are especially
bad at conditioning probabilities on new information.23 Therefore, we should be
justifiably suspicious of equilibria that depend on the players’ doing so.
Relative to this expectation, the findings of economists who have conducted
laboratory experiments of signaling games are encouraging. Some surprisingly
subtle refinements of Bayesian-Nash and perfect Bayesian equilibria are successfully observed, even though these refinements require not only updating
of information by observing actions along the equilibrium path but also deciding how one would infer information from off-equilibrium actions that should
never have been taken in the first place. However, the verdict of the experiments
is not unanimous: much seems to depend on the precise details of the laboratory design of the experiment.24
5 Signaling in the Labor Market
Many of you expect that when you graduate, you will work for an elite firm in finance or computing. These firms have two kinds of jobs. One kind requires high
quantitative and analytical skills and capacity for hard work and offers high pay
in return. The other kind of job is semiclerical, with lower skill requirements and lower pay. Of
course, you want the job with higher pay. You know your own qualities and skills
far better than your prospective employer does. If you are highly skilled, you
want your employer to know this about you, and he also wants to know. He can
test and interview you, but what he can find out by these methods is limited by
the available time and resources. You can tell him how skilled you are, but mere
assertions about your qualifications are not credible. More objective evidence is
needed, both for you to offer and for your employer to seek out.
23. Deborah J. Bennett, Randomness (Cambridge, Mass.: Harvard University Press, 1998), pp. 2–3 and ch. 10. See also Paul Hoffman, The Man Who Loved Only Numbers (New York: Hyperion, 1998), pp. 233–40, for an entertaining account of how several probability theorists, as well as the brilliant and prolific mathematician Paul Erdös, got a very simple probability problem wrong and even failed to understand their error when it was explained to them.
24. Douglas D. Davis and Charles A. Holt, Experimental Economics (Princeton: Princeton University Press, 1995), review and discuss these experiments in their chapter 7.
What items of evidence can the employer seek, and what can you offer?
Recall from Section 2 of this chapter that your prospective employer will use
screening devices to identify your qualities and skills. You will use signals to convey your information about those same qualities and skills. Sometimes similar
or even identical devices can be used for either signaling or screening.
In this instance, if you have selected (and passed) particularly tough and
quantitative courses in college, your course choices can be credible evidence of
your capacity for hard work in general and of your quantitative and logical skills
in particular. Let us consider the role of course choice as a screening device.
A. Screening to Separate Types
To keep things simple, we approach this screening game using intuition and
some algebra. Suppose college students are of just two types when it comes to
the qualities most desired by employers: A (able) and C (challenged). Potential
employers in finance or computing are willing to pay $160,000 a year to a type
A and $60,000 to a type C. Other employment opportunities yield the A types a
salary of $125,000 and the C types a salary of $30,000. These are just the numbers in the Citrus car example in Section 4.B above, but multiplied by a factor of 10 to better suit the reality of the job-market example. And just as in the used-car example, where we supposed there was a fixed supply and numerous potential
buyers, we suppose here that there are many potential employers who have to
compete with each other for a limited number of job candidates, so they have
to pay the maximum amount that they are willing to pay. Because employers
cannot directly observe any particular job applicant’s type, they have to look for
other credible means to distinguish among them.25
Suppose the types differ in their tolerance for taking a tough course rather
than an easy one in college. Each type must sacrifice some party time or other
activities to take a tougher course, but this sacrifice is smaller or easier to bear
for the A types than it is for the C types. Suppose the A types regard the cost of
each such course as equivalent to $3,000 a year of salary, while the C types regard it as $15,000 a year of salary. Can an employer use this differential to screen
his applicants and tell the A types from the C types?
Consider the following hiring policy: anyone who has taken a certain number, n, or more of the tough courses will be regarded as an A and paid $160,000,
and anyone who has taken fewer than n will be regarded as a C and paid $60,000.
The aim of this policy is to create natural incentives whereby only the A types
will take the tough courses, and the C types will not. Neither wants to take more
25. You may wonder whether the fact that the two types have different outside opportunities can be used to distinguish between them. For example, an employer may say, “Show me an offer of a job at $125,000, and I will accept you as type A and pay you $160,000.” However, such a competing offer can be forged or obtained in cahoots with someone else, so it is not reliable.
of the tough courses than he has to, so the choice is between taking n to qualify
as an A or giving up and settling for being regarded as a C, in which case he may
as well not take any of the tough courses and just coast through college.
To succeed, such a policy must satisfy two kinds of conditions. The first set
of conditions requires that the policy gives each type of job applicant the incentive to make the choice that the firm wants him to make. In other words, the
policy should be compatible with the incentives of the workers; therefore, the
relevant conditions are called incentive-compatibility conditions. The second kind of condition ensures that, with such an incentive-compatible choice, the
workers get a better (at least, no worse) payoff from these jobs than they would
get in their alternative opportunities. In other words, the workers should be willing to participate in this firm’s offer; therefore, the relevant conditions are called
the participation conditions. We will develop these conditions in the labor market context now. Similar conditions will appear in other examples later in this
chapter and again in Chapter 13, where we develop the general theory of mechanism design.
i. Incentive Compatibility The criterion that employers devise to distinguish an A
from a C—namely, the number of tough courses taken—should be sufficiently
strict that the C types do not bother to meet it but not so strict as to discourage
even the A types from attempting it. The correct value of n must be such that
the true C types prefer to settle for being revealed as such and getting $60,000,
rather than incurring the extra cost of imitating the A type’s behavior. That is, we
need the policy to be incentive compatible for the C types, so26
60,000 ≥ 160,000 − 15,000n,
or 15n ≥ 100,
or n ≥ 6.67.
Similarly, the condition that the true A types prefer to prove their type by taking
n tough courses is
160,000 − 3,000n ≥ 60,000,
or 3n ≤ 100,
or n ≤ 33.33.
These incentive-compatibility conditions or, equivalently, incentive-compatibility constraints, align the job applicant’s incentives with the employer’s desires; that is, they make it optimal for the applicant to reveal the truth about his skill
through his action. The n satisfying both constraints, because it is required to
be an integer, must be at least 7 and at most 33.27 The latter is not realistically
26. We require merely that the payoff from choosing the option intended for one’s type be at least as high as that from choosing a different option, not that it be strictly greater. However, it is possible to approach the outcome of this analysis as closely as one wants while maintaining a strict inequality, so nothing substantial hinges on this assumption.
27. If in some other context the corresponding choice variable is not required to be an integer—for example, if it is a sum of money or an amount of time—then a whole continuous range will satisfy both incentive-compatibility constraints.
relevant in this example, as an entire college program is typically 32 courses, but
in other examples it might matter.
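As a quick check on this arithmetic, here is a minimal Python sketch (our own illustration, not from the text) that scans integer requirements n and reports which ones satisfy both incentive-compatibility constraints:

```python
# Salaries and per-course signaling costs from the example (dollars per year).
WAGE_A, WAGE_C = 160_000, 60_000   # salaries paid to perceived A and C types
COST_A, COST_C = 3_000, 15_000     # cost of one tough course, by true type

def incentive_compatible(n):
    """True if a requirement of n tough courses separates the two types."""
    c_wont_mimic = WAGE_C >= WAGE_A - COST_C * n    # C prefers to be honest
    a_will_signal = WAGE_A - COST_A * n >= WAGE_C   # A prefers to take n courses
    return c_wont_mimic and a_will_signal

print([n for n in range(40) if incentive_compatible(n)])
# Prints 7 through 33, matching the conditions n >= 6.67 and n <= 33.33.
```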
What makes it possible to meet both conditions is the difference in the costs
of taking tough courses between the two types: the cost is sufficiently lower for
the “good” type that the employers wish to identify. When the constraints are
met, the employer can use a policy to which the two types will respond differently, thereby revealing their types. This is called separation of types based on
self-selection.
We did not assume here that the tough courses actually imparted any additional skills or work habits that might convert C types into A types. In our
scenario, the tough courses serve only the purpose of identifying the persons
who already possess these attributes. In other words, they have a pure screening
function.
In reality, education does increase productivity. But it also has the additional
screening or signaling function of the kind described here. In our example, we
found that education might be undertaken solely for the latter function; in reality, the corresponding outcome is that education is carried further than is
justified by the extra productivity alone. This extra education carries an extra
cost—the cost of the information asymmetry.
ii. Participation When the incentive-compatibility conditions for the two types of
jobs in this firm are satisfied, the A types take n tough courses and get a payoff
of 160,000 2 3,000n, and the C types take no tough courses and get a payoff of
60,000. For the types to be willing to make these choices instead of taking their
alternative opportunities, the participation conditions must be satisfied as well.
So we need
160,000 − 3,000n ≥ 125,000,  and  60,000 ≥ 30,000.
The C types’ participation condition is trivially satisfied in this example (although that may not be the case in other examples); the A types’ participation condition requires n ≤ 11.67 or, since n must be an integer, n ≤ 11. Here, any n that satisfies the A types’ participation constraint of n ≤ 11 also satisfies their incentive-compatibility constraint of n ≤ 33, so the latter becomes logically redundant, quite apart from its lack of realism.
The full set of conditions that are required to achieve separation of types
in this labor market is then 7 ≤ n ≤ 11. This restriction on possible values of n
combines the incentive-compatibility condition for the C types and the participation condition for the A types. The participation condition for the C types and
the incentive-compatibility condition for the A types in this example are automatically satisfied when the other conditions hold.
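Adding the participation constraints to the earlier sketch (again our own illustration, using the outside-option salaries of $125,000 and $30,000 from the example) confirms the full range 7 ≤ n ≤ 11:

```python
WAGE_A, WAGE_C = 160_000, 60_000        # salaries paid to perceived types
COST_A, COST_C = 3_000, 15_000          # per-course signaling costs
OUTSIDE_A, OUTSIDE_C = 125_000, 30_000  # alternative opportunities

def separates(n):
    """True if requirement n satisfies all four separation conditions."""
    return (WAGE_C >= WAGE_A - COST_C * n          # IC for C types
            and WAGE_A - COST_A * n >= WAGE_C      # IC for A types
            and WAGE_A - COST_A * n >= OUTSIDE_A   # participation for A types
            and WAGE_C >= OUTSIDE_C)               # participation for C types

print([n for n in range(40) if separates(n)])      # [7, 8, 9, 10, 11]
```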
When the requirement of taking enough tough courses is used for screening,
the A types bear the cost. Assuming that only the minimum needed to achieve
separation is used—namely, n = 7—the cost to each A type has the monetary equivalent of 7 × $3,000 = $21,000. This is the cost, in this context, of the information asymmetry. It would not exist if a person’s type could be directly and
objectively identified. Nor would it exist if the population consisted solely of A
types. The A types have to bear this cost because there are some C types in the
population from whom they (or their prospective employers) seek to distinguish
themselves.28
B. Pooling of Types
Rather than having the A types bear the cost of the information asymmetry,
might it be better not to bother with the separation of types at all? With the separation, A types get a salary of $160,000 but suffer a cost, the monetary equivalent of $21,000, in taking the tough courses; thus, their net money-equivalent
payoff is $139,000. And C types get the salary of $60,000. What happens to the
two types if they are not separated?
If employers do not use screening devices, they have to treat every applicant
as a random draw from the population and pay all the same salary. This is called
pooling of types, or simply pooling when the sense is clear.29 In a competitive
job market, the common salary under pooling will be the population average
of what the types are worth to an employer, and this average will depend on the
proportions of the types in the population. For example, if 60% of the population is type A and 40% is type C, then the common salary with pooling will be
0.6 × $160,000 + 0.4 × $60,000 = $120,000.
The A types will then prefer the situation with separation because it yields
$139,000 instead of the $120,000 with pooling. But if the proportions are 80%
A and 20% C, then the common salary with pooling will be $140,000, and the A
types will be worse off under separation than they would be under pooling. The
C types are always better off under pooling. The existence of the A types in the
population means that the common salary with pooling will always exceed the
C types’ separation salary of $60,000.
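The proportion of A types at which pooling and separation give the A types the same payoff is easy to locate numerically; the following sketch (our own) does so:

```python
# A type's net payoff under separation: $160,000 minus 7 courses at $3,000.
SEPARATION_PAYOFF_A = 160_000 - 7 * 3_000    # = 139,000

def pooling_salary(frac_a):
    """Common salary when employers pay the population-average worth."""
    return frac_a * 160_000 + (1 - frac_a) * 60_000

for frac_a in (0.6, 0.79, 0.8):
    print(frac_a, pooling_salary(frac_a),
          pooling_salary(frac_a) > SEPARATION_PAYOFF_A)
# A types strictly prefer pooling only when their share exceeds 0.79,
# since 0.79 * 160,000 + 0.21 * 60,000 = 139,000 exactly.
```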
However, even if both types prefer the pooling outcome, it cannot be an
equilibrium when many employers or workers compete with each other in the
screening or signaling process. Suppose the population proportions are 80–20
and there is an initial situation with pooling where both types are paid $140,000.
An employer can announce that he will pay $144,000 for someone who takes
just one tough course. Relative to the initial situation, the A types will find it
28. In the terminology of economics, the C types in this example inflict a negative external effect on the A types. We will develop this concept in Chapter 11.
29. It is the opposite of separation of types, described above, where players differing in their characteristics get different outcomes, so the outcome reveals the type perfectly.
worthwhile because their cost of taking the course is only $3,000 and it raises
their salary by $4,000, whereas C types will not find it worthwhile because their
cost, $15,000, exceeds the benefit, $4,000. Because this particular employer selectively attracts the A types, each of whom is worth $160,000 to him but is paid
only $144,000, he makes a profit by deviating from the pooling salary package.
But his deviation starts a process of adjustment by competing employers,
and that causes the old pooling situation to collapse. As A types flock to work for
him, the pool available to the other employers is of lower average quality, and
eventually they cannot afford to pay $140,000 anymore. As the salary in the pool
is lowered, the differential between that salary and the $144,000 offered by the
deviating employer widens to the point where the C types also find it desirable
to take that one tough course. But then the deviating employer must raise his requirement to two courses and must increase the salary differential to the point
where two courses become too much of a burden for the C types, but the A types
find it acceptable. Other employers who would like to hire some A types must
use similar policies to attract them. This process continues until the job market
reaches the separating equilibrium described earlier.
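The arithmetic behind the deviating employer’s offer can be verified in a few lines (our own illustrative check):

```python
OFFER, POOLING_SALARY = 144_000, 140_000
raise_for_one_course = OFFER - POOLING_SALARY       # 4,000

print("A types take the course:", raise_for_one_course > 3_000)    # True
print("C types take the course:", raise_for_one_course > 15_000)   # False
print("Employer's profit per A hired:", 160_000 - OFFER)           # 16,000
```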
Even if the employers did not take the initiative to attract As rather than Cs,
a type A earning $140,000 in a pooling situation might take a tough course, take
his transcript to a prospective employer, and say, “I have a tough course on my
transcript, and I am asking for a salary of $144,000. This should be convincing
evidence that I am type A; no type C would make you such a proposition.” Given
the facts of the situation, the argument is valid, and the employer should find
it very profitable to agree: the employee, being type A, will generate $160,000
for the employer but get only $144,000 in salary. Other A types can do the same.
This starts the same kind of cascade that leads to the separating equilibrium.
The only difference is in who takes the initiative. Now the type A workers choose
to get the extra education as credible proof of their type; it becomes a case of
signaling rather than screening.
The general point is that, even though the pooling outcome may be better
for all, the players are not choosing between the two outcomes in a cooperative, binding process. They are pursuing their own individual interests, which lead them to the
separating equilibrium. This is like a prisoners’ dilemma game with many players, and therefore there is something unavoidable about the cost of the information asymmetry.
C. Many Types
We have considered an example with only two types, but the analysis generalizes immediately. Suppose there are several types: A, B, C, . . . , ranked in an
order that is at the same time decreasing in their worth to the employer and increasing in the costs of extra education. Then it is possible to set up a sequence
of requirements of successively higher and higher levels of education, such that
the very worst type needs none, the next-worst type needs the lowest level, the
type third from the bottom needs the next higher level, and so on, and the types
will self-select the level that identifies them.
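One way to construct such a ladder of requirements is to raise each type’s requirement just enough that the next-worse type does not want to mimic it. Here is a sketch (our own; the worths and per-course costs for three types are invented for illustration):

```python
import math

# Types ordered worst to best: worth to the employer rises, per-course
# signaling cost falls. These numbers are invented for illustration.
worths = [60_000, 100_000, 160_000]
costs = [15_000, 8_000, 3_000]

levels = [0]          # the very worst type needs no tough courses
for k in range(1, len(worths)):
    # Type k-1 must not gain by mimicking type k:
    #   worths[k-1] - costs[k-1]*levels[k-1] >= worths[k] - costs[k-1]*levels[k]
    gap = (worths[k] - worths[k - 1]) / costs[k - 1]
    levels.append(levels[k - 1] + math.ceil(gap))

print(levels)   # [0, 3, 11]: each type self-selects its own requirement
# Because costs fall with quality, blocking the adjacent worse type from
# mimicking automatically blocks all still-worse types as well.
```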
We close this discussion with one further point, or perhaps a word of warning, regarding signaling. Suppose you are the informed party and have available an
action that would credibly signal good information (information whose credible
transmission would work to your advantage). If you fail to send that signal, you
will be assumed to have bad information. In this respect, signaling is like playing chicken: if you refuse to play, you have already played and lost.
You should keep this in mind when you have the choice between taking
a course for a letter grade or on a pass/fail basis. The whole population in the
course spans the whole spectrum of grades; suppose the average is B. A student
is likely to have a good idea of his own abilities. Those reasonably confident of
getting an A+ have a strong incentive to take the course for a letter grade. When they have done so, the average of the rest is less than B, say, B−, because the top
end has been removed from the distribution. Now, among the rest, those expecting an A have a strong incentive to choose the letter-grade option. That in turn
lowers the average of the rest. And so on. Finally, the pass/fail option is chosen
by only those anticipating Cs and Ds. A strategically smart reader of a transcript
(a prospective employer or the admissions officer for a professional graduate
school) will be aware that the pass/fail option will be selected mainly by students in the lower portion of the grade distribution; such a reader will therefore
interpret a Pass as a C or a D, not as the class-wide average B.
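This unraveling can be simulated directly. In the sketch below (our own, with an invented grade distribution and an assumed small cost, half a grade point, of opting for the letter grade), each student defects from pass/fail whenever his expected grade sufficiently beats the average of those who remain:

```python
# Grade points: A = 4, B = 3, C = 2, D = 1. The distribution is invented.
pool = [4, 4, 3, 3, 3, 2, 2, 1]   # privately known expected grades
SWITCH_COST = 0.5                 # assumed cost of taking the letter grade

while True:
    avg = sum(pool) / len(pool)   # how a Pass is currently interpreted
    stay = [g for g in pool if g - SWITCH_COST <= avg]
    if stay == pool:
        break                     # no one else wants to defect
    pool = stay

print(pool)   # [2, 2, 1]: only students expecting Cs and Ds take pass/fail
```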
6 Equilibria in Two-Player Signaling Games
Our analysis so far in this chapter has covered the general concept of incomplete information as well as the specific strategies of screening and signaling; we
have also seen the possible outcomes of separation and pooling that can arise
when these strategies are being used. We saw how adverse selection could arise
in a market where many car owners and buyers came together and how signals
and screening devices would operate in an environment where many employers
and employees meet each other. However, we have not specified and solved a
game in which just two players with differential information confront one another. Here we develop an example to show how that can be done using a game
tree and payoff table as our tools of analysis. We will see that either separating
or pooling can be an equilibrium and that a new type of partially revealing or
semiseparating equilibrium can emerge.
A. Basic Model and Payoff Structure
In this section, we analyze a game of market entry with asymmetric information; the players are two auto manufacturers, Tudor and Fordor. Tudor Auto
Corporation currently enjoys a monopoly in the market for a particular kind of
automobile, say a nonpolluting, fuel-efficient compact car. An innovator, Fordor, has a competing concept and is deciding whether to enter the market. But
Fordor does not know how tough a competitor Tudor will prove to be. Specifically, Tudor’s production cost, unknown to Fordor, may be high or low. If it is
high, Fordor can enter and compete profitably; if it is low, Fordor’s entry and
development costs cannot be recouped by subsequent operating profits, and it
will make a net loss if it enters.
The two firms interact in a sequential game. In the first stage of the game
(period 1), Tudor sets a price (high or low, for simplicity) knowing that it is the
only manufacturer in the market. In the next stage, Fordor makes its entry decision. Payoffs, or profits, are determined based on the market price of the
automobile relative to each firm’s production costs and, for Fordor, entry and
development costs as well.
Tudor would of course prefer that Fordor not enter the market. It might
therefore try to use its price in the first stage of the game as a signal of its cost.
A low-cost firm would charge a lower price than would a high-cost firm. Tudor
might therefore hope that if it keeps its period-1 price low, Fordor will interpret this as evidence that Tudor’s cost is low and will stay out. (Once Fordor has
given up and is out of the picture, in later periods Tudor can jack its price back
up.) Just as a poker player might bet on a poor hand, hoping that the bluff will
succeed and the opponent will fold, Tudor might try to bluff Fordor into staying out. Of course, Fordor is a strategic player and is aware of this possibility.
The question is whether Tudor can bluff successfully in an equilibrium of their
game. The answer depends on the probability that Tudor is genuinely low cost
and on Tudor’s cost of bluffing. We consider different cases below and show the
resulting different equilibria.
In all the cases, the per-unit costs and prices are expressed in thousands of
dollars, and the numbers of cars sold are expressed in hundreds of thousands,
so the profits are measured in hundreds of millions. This will help us write the
payoffs and tables in a relatively compact form that is easy to read. We calculate those payoffs using the same type of analysis that we used for the restaurant
pricing game of Chapter 5, assuming that the underlying relationship between
the price charged (P) and the quantity demanded (Q) is given by P = 25 − Q.30
30. We do not supply the full calculations necessary to generate the profit-maximizing prices and the resulting firm profits in each case. You may do so on your own for extra practice, using the methods learned in Chapter 5.
To enter the market, Fordor must incur an up-front cost of 40 (this payment is in
the same units as profits, or hundreds of millions, so the actual figure is $4 billion) to build its plant, launch an ad campaign, and so on. If it enters the market,
its cost for producing and delivering each of its cars to the market will always be
10 (thousand dollars).
Tudor could be either a lumbering, old firm with a high unit production cost
of 15 (thousand dollars) or a nimble, efficient producer with a lower unit cost. To
start, we suppose that the lower cost is 5; this cost is less than what Fordor can
achieve. Later in Sections 6.C and 6.D, we will investigate the effect of other cost
levels. For now, suppose further that Tudor can achieve the lower unit cost with
probability 0.4, or 40% of the time; therefore it has high unit cost with probability 0.6, or 60% of the time.31
Fordor’s choices in the entry game will depend on how much information
it has about Tudor’s costs. We assume that Fordor knows the two possible levels
of cost and therefore can calculate the profits associated with each case (as we
do below). In addition, Fordor will form some belief about the probability that
Tudor is the low-cost type. We are assuming that the structure of the game is
common knowledge to both players. Therefore, although Fordor does not know
the type of the specific Tudor it is facing, Fordor’s prior belief exactly matches
the probability with which Tudor has the lower unit cost; that is, Fordor’s belief
is that the probability of facing a low-cost Tudor is 40%.
If Tudor’s cost is high, 15 (thousand), then under conditions of unthreatened
monopoly it will maximize its profit by pricing its car at 20 (thousand). At that
price it will sell 5 (hundred thousand) units and make a profit of 25 [= 5 × (20 − 15) hundred million, or $2.5 billion]. If Fordor enters and the two compete, then
the Nash equilibrium of their duopoly game will yield operating profits of 3 to
Tudor and 45 to Fordor. The operating profit exceeds Fordor’s up-front cost of
entry (40), so Fordor would choose to enter and earn a net profit of 5 if it knew
Tudor to be high cost.
If Tudor’s cost is low, 5, then in unthreatened monopoly it will price its car at
15, selling 10 and making a profit of 100. In the second-stage equilibrium following the entry of Fordor, the operating profits will be 69 for Tudor and 11 for Fordor. The 11 is less than Fordor’s cost of entry of 40. Therefore, it would not enter
and avoid incurring a loss of 29 if it knew Tudor to be low cost.
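All of these monopoly and duopoly numbers can be reproduced from the demand curve P = 25 − Q. The sketch below is our own; it assumes quantity-setting (Cournot) competition in the duopoly stage, as in the Chapter 5 analysis, and rounds to the figures quoted in the text:

```python
# Demand: P = 25 - Q. Costs/prices in thousands; profits in hundreds of millions.
def monopoly(c):
    """Profit-maximizing price and profit for a monopolist with unit cost c."""
    p = (25 + c) / 2              # maximizes (p - c) * (25 - p)
    return p, (p - c) * (25 - p)

def duopoly(c1, c2):
    """Cournot equilibrium operating profits for unit costs c1 and c2."""
    q1 = (25 - 2 * c1 + c2) / 3
    q2 = (25 - 2 * c2 + c1) / 3
    return q1 * q1, q2 * q2       # with linear demand, profit = quantity squared

print(monopoly(15))     # (20.0, 25.0): high-cost Tudor prices at 20, earns 25
print(monopoly(5))      # (15.0, 100.0): low-cost Tudor prices at 15, earns 100
print(duopoly(15, 10))  # (2.78, 44.4): rounds to 3 for Tudor, 45 for Fordor
print(duopoly(5, 10))   # (69.4, 11.1): rounds to 69 for Tudor, 11 for Fordor
```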
B. Separating Equilibrium
If Tudor is actually high cost, but wants Fordor to think that it is low cost, Tudor
must mimic the action of the low-cost type; that is, it has to price at 15. But that
31. Tudor’s probability of having low unit cost could be denoted with an algebraic parameter, z. The equilibrium will be the same regardless of the value of z, as you will be asked to show in Exercise S5 at the end of this chapter.
[Figure 8.7 shows the game tree. Nature first chooses Tudor’s cost: low with probability 0.4 or high with probability 0.6. Tudor then prices low or high; Fordor chooses In or Out, with its two decision nodes following a low price joined in a single information set. Two-period payoffs (Tudor, Fordor): low cost, price low: (100 + 69, 11 − 40) after In, (100 + 100, 0) after Out; high cost, price low: (0 + 3, 45 − 40) after In, (0 + 25, 0) after Out; high cost, price high: (25 + 3, 45 − 40) after In, (25 + 25, 0) after Out.]
FIGURE 8.7 Extensive Form of Entry Game: Tudor’s Low Cost Is 5
price equals its cost in this case; it will make zero profit. Will this sacrifice of initial profit give Tudor the benefit of scaring Fordor off and enjoying the benefits
of being a monopoly in subsequent periods?
We show the full game in extensive form in Figure 8.7. Note that we use the
fictitious player called Nature, as in Section 3, to choose Tudor’s cost type at
the start of the game. Then Tudor makes its pricing decision. We assume that
if Tudor has low cost, it will not choose a high price.32 But if Tudor has high cost,
it may choose either the high price or the low price if it wants to bluff. Fordor
cannot tell apart the two situations in which Tudor prices low; therefore its entry
choices at these two nodes are enclosed in one information set. Fordor must
choose either In at both or Out at both.
At each terminal node, the first payoff entry (in blue) is Tudor’s profit, and
the second entry (in black) is Fordor’s profit. Tudor’s profit is added over two periods, the first period when it is the sole producer, and the second period when
32. This seems obvious: Why choose a price different from the profit-maximizing price? Charging the high price when you have low cost not only sacrifices some profit in period 1 (if the low-cost Tudor charges 20, its sales will drop by so much that it will make a profit of only 75 instead of the 100 it gets by charging 15), but also increases the risk of entry and so lowers period-2 profits as well (competing with Fordor, the low-cost Tudor would have a profit of only 69 instead of the 100 it gets under monopoly). However, game theorists have found strange equilibria where a high period-1 price for Tudor is perversely interpreted as evidence of low cost, and they have applied great ingenuity in ruling out these equilibria. We leave out these complications, as we did in our analysis of cheap talk equilibria earlier, but refer interested readers to In-Koo Cho and David Kreps, “Signaling Games and Stable Equilibria,” Quarterly Journal of Economics, vol. 102, no. 2 (May 1987), pp. 179–222.
                                 FORDOR
                    Regardless (II)                  Conditional (OI)
TUDOR  Bluff (LL)   169 × 0.4 + 3 × 0.6 = 69.4,      200 × 0.4 + 25 × 0.6 = 95,
                    −29 × 0.4 + 5 × 0.6 = −8.6       0
       Honest (LH)  169 × 0.4 + 28 × 0.6 = 84.4,     200 × 0.4 + 28 × 0.6 = 96.8,
                    −29 × 0.4 + 5 × 0.6 = −8.6       5 × 0.6 = 3
FIGURE 8.8 Strategic Form of Entry Game: Tudor’s Low Cost Is 5
it may be a monopolist or a duopolist, depending on whether Fordor enters.
Fordor’s profit covers only the second period and is non-zero only when it has
chosen to enter.
Using one step of rollback analysis, we see that Fordor will choose In at the
bottom node where Tudor has chosen the high price, because 45 − 40 = 5 > 0.
Therefore, we can prune the Out branch at that node. Then each player has just
two strategies (complete plans of action). For Tudor the strategies are Bluff, or
choose the low price in period 1 regardless of cost (LL in the shorthand notation
of Chapter 3), and Honest, or choose the low price in period 1 if cost is low and
the high price if cost is high (LH). For Fordor, the two strategies are Regardless,
or enter irrespective of Tudor’s period-1 price (II, for In-In), and Conditional, or
enter only if Tudor’s period-1 price is high (OI).
We can now show the game in strategic (normal) form. Figure 8.8 shows each
player with two possible strategies; payoffs in each cell are the expected profits
to each firm, given the probability (40%) that Tudor’s cost is low. The calculations
are similar to those we performed to fill in the table in Figure 8.6. As in that example, you may find the calculations easier if you label the terminal nodes in the
tree and determine which ones are relevant for each cell of the table.
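For readers who want to verify the cells, here is a sketch (our own) that builds the expected payoffs in Figure 8.8 from the terminal-node payoffs of Figure 8.7 and the prior probability 0.4; feeding it the Figure 8.9 payoffs reproduces Figures 8.10 and 8.11 as well:

```python
def expected_payoffs(z, low, high_bluff, high_honest):
    """Expected (Tudor, Fordor) payoffs for each strategy combination.

    Each payoff argument maps Fordor's move ("in"/"out") to a two-period
    (Tudor, Fordor) payoff pair; z is the probability that Tudor is low cost.
    """
    def mix(a, b):
        return (z * a[0] + (1 - z) * b[0], z * a[1] + (1 - z) * b[1])
    return {
        ("Bluff", "Regardless"): mix(low["in"], high_bluff["in"]),
        ("Bluff", "Conditional"): mix(low["out"], high_bluff["out"]),
        ("Honest", "Regardless"): mix(low["in"], high_honest["in"]),
        ("Honest", "Conditional"): mix(low["out"], high_honest["in"]),
    }

# Terminal payoffs from Figure 8.7 (Tudor's low cost is 5).
cells = expected_payoffs(
    z=0.4,
    low={"in": (100 + 69, 11 - 40), "out": (100 + 100, 0)},
    high_bluff={"in": (0 + 3, 45 - 40), "out": (0 + 25, 0)},
    high_honest={"in": (25 + 3, 45 - 40)},  # Fordor always enters on a high price
)
for cell, payoff in cells.items():
    print(cell, payoff)   # (69.4, -8.6), (95.0, 0.0), (84.4, -8.6), (96.8, 3.0)
```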
This is a simple dominance-solvable game. For Tudor, Honest dominates
Bluff. And Fordor’s best response to Tudor’s dominant strategy of Honest is Conditional. Thus (Honest, Conditional) is the only (subgame-perfect) Nash equilibrium of the game.
The equilibrium found in Figure 8.8 is separating. The two cost types of
Tudor charge different prices in period 1. This action reveals Tudor’s type to Fordor, which then makes its entry decision appropriately.
The key to understanding why Honest is the dominant strategy for Tudor
can be found in the comparison of its payoffs against Fordor’s Conditional strategy. These are the outcomes when Tudor’s bluff “works’’: Fordor enters if Tudor
charges the high price in period 1 and stays out if Tudor charges the low price in
period 1. If Tudor is truly low cost, then its payoffs against Fordor playing Conditional are the same whether it chooses Bluff or Honest. But when Tudor is actually high cost, the results are different.
If Fordor’s strategy is Conditional and Tudor is high cost, Tudor can use
Bluff successfully. However, even the successful bluff will be too costly. If Tudor
charged its best monopoly (Honest) price in period 1, it would make a profit
of 25; the bluffing low price reduces this period-1 profit drastically, in this instance all the way to 0. The higher monopoly price in period 1 would encourage
Fordor’s entry and reduce period-2 profit for Tudor, from the monopoly level of
25 to the duopoly level of 3. But Tudor’s period-2 benefit from charging the low
(Bluff) price and keeping Fordor out (25 − 3 = 22) is less than the period-1 cost imposed by bluffing and giving up its monopoly profits (25 − 0 = 25). As long as
there is some positive probability that Tudor is high cost, then the benefits from
choosing Honest will outweigh those from choosing Bluff, even when Fordor’s
choice is Conditional.
If the low price were not so low, then a truly high-cost Tudor would sacrifice less by mimicking the low-cost type. In such a case, Bluff might be a more
profitable strategy for a high-cost Tudor. We consider exactly this possibility in
the analysis below.
C. Pooling Equilibrium
Let us now suppose that the lower of the production costs for Tudor is 10 per car
instead of 5. With this cost change, the high-cost Tudor still makes profit of 25
under monopoly if it charges its profit-maximizing price of 20. But the low-cost
Tudor now charges 17.5 as a monopolist (instead of 15) and makes a profit of 56.
If the high-cost type mimics the low-cost type and also charges 17.5, its profit
is now 19 (rather than the 0 it earned in this case before); the loss of profit from
bluffing is now much smaller: 25 − 19 = 6, rather than 25. If Fordor enters, then
the two firms’ profits in their duopoly game are 3 for Tudor and 45 for Fordor if
Tudor has high costs (as in the previous section). Duopoly profits are now 25 for
each firm if Tudor has low costs; in this situation, Fordor and the low-cost Tudor
have identical unit costs of 10.
Suppose again that the probability of Tudor being the low-cost type is 40%
(0.4) and Fordor’s belief about the low-cost probability is correct. The new game
tree is shown in Figure 8.9. Because Fordor will still choose In when Tudor prices
High, the game again collapses to one in which each player has exactly two
complete strategies; those strategies are the same ones we described in Section
6.B above. The payoff table for the normal form of this game is then the one illustrated in Figure 8.10.
This is another dominance-solvable game, but here it is Fordor that has the dominant strategy: it will always choose Conditional. And given the dominance of Conditional, Tudor will choose Bluff. Thus, (Bluff, Conditional) is the
unique (subgame-perfect) Nash equilibrium of this game. In all other cells
of the table, one firm gains by deviating to its other action. We leave it to you
[Figure 8.9 shows the game tree, with the same structure as Figure 8.7. The two-period payoffs (Tudor, Fordor) become: low cost, price low: (56 + 25, 25 − 40) after In, (56 + 56, 0) after Out; high cost, price low: (19 + 3, 45 − 40) after In, (19 + 25, 0) after Out; high cost, price high: (25 + 3, 45 − 40) after In, (25 + 25, 0) after Out.]
FIGURE 8.9 Extensive Form of Entry Game: Tudor’s Low Cost Is 10
to think about the intuitive explanations of why each of these deviations is
profitable.
The equilibrium found using Figure 8.10 involves pooling. Both cost types of
Tudor charge the same (low) price and, seeing this, Fordor stays out. When both
types of Tudor charge the same price, observation of that price does not convey
any information to Fordor. Its estimate of the probability of Tudor’s cost being
low stays at 0.4, and it calculates its expected profit from entry to be −15 × 0.4 + 5 × 0.6 = −3 < 0,
so it does not enter. Even though Fordor knows full well that Tudor is bluffing in
equilibrium, the risk of calling the bluff is too great because the probability of
Tudor’s cost actually being low is sufficiently great.
What if this probability were smaller—say, 0.1—and Fordor were aware of this fact? If all the other numbers remain unchanged, then Fordor’s expected profit from its Regardless strategy is −15 × 0.1 + 5 × 0.9 = 4.5 − 1.5 = 3 > 0.
                                 FORDOR
                    Regardless (II)                  Conditional (OI)
TUDOR  Bluff (LL)   81 × 0.4 + 22 × 0.6 = 45.6,      112 × 0.4 + 44 × 0.6 = 71.2,
                    −15 × 0.4 + 5 × 0.6 = −3         0
       Honest (LH)  81 × 0.4 + 28 × 0.6 = 49.2,      112 × 0.4 + 28 × 0.6 = 61.6,
                    −15 × 0.4 + 5 × 0.6 = −3         5 × 0.6 = 3
FIGURE 8.10 Strategic Form of Entry Game: Tudor’s Low Cost Is 10
Then Fordor will enter no matter what price Tudor charges, and Tudor’s bluff
will not work. Such a situation results in a new kind of equilibrium; we consider
its features below.
D. Semiseparating Equilibrium
Here we consider the outcomes in the entry game when Tudor’s probability of
achieving the low production cost of 10 is small, only 10% (0.1). All of the cost
and profit numbers are the same as in the previous section; only the probabilities have changed. Therefore, we do not show the game tree (Figure 8.9) again.
We show only the payoff table as Figure 8.11.
In this new situation, the game illustrated in Figure 8.11 has no equilibrium
in pure strategies. From (Bluff, Regardless), Tudor gains by deviating to Honest;
from (Honest, Regardless), Fordor gains by deviating to Conditional; from (Honest, Conditional), Tudor gains by deviating to Bluff; and from (Bluff, Conditional),
Fordor gains by deviating to Regardless. Once again, we leave it to you to think
about the intuitive explanations of why each of these deviations is profitable.
So now we need to look for an equilibrium in mixed strategies. We suppose
Tudor mixes Bluff and Honest with probabilities p and (1 − p), respectively. Similarly, Fordor mixes Regardless and Conditional with probabilities q and (1 − q),
respectively. Tudor’s p-mix must keep Fordor indifferent between its two pure
strategies of Regardless and Conditional; therefore we need
3p + 3(1 − p) = 0 × p + 4.5(1 − p),
or 3 = 4.5(1 − p),
or 1 − p = 2/3,
or p = 1/3.
And Fordor’s q-mix must keep Tudor indifferent between its two pure strategies
of Bluff and Honest; therefore we need
27.9q + 50.8(1 − q) = 33.3q + 36.4(1 − q),
or 14.4(1 − q) = 5.4q,
or q = 14.4/19.8 = 16/22 ≈ 0.727.
                                 FORDOR
                    Regardless (II)                  Conditional (OI)
TUDOR  Bluff (LL)   81 × 0.1 + 22 × 0.9 = 27.9,      112 × 0.1 + 44 × 0.9 = 50.8,
                    −15 × 0.1 + 5 × 0.9 = 3          0
       Honest (LH)  81 × 0.1 + 28 × 0.9 = 33.3,      112 × 0.1 + 28 × 0.9 = 36.4,
                    −15 × 0.1 + 5 × 0.9 = 3          5 × 0.9 = 4.5
FIGURE 8.11 Strategic Form of Entry Game: Tudor’s Low Cost Is 10 with Probability 0.1

The mixed-strategy equilibrium of the game then entails Tudor playing Bluff one-third of the time and Honest two-thirds of the time, while Fordor plays Regardless sixteen twenty-seconds of the time and Conditional six twenty-seconds of the time.
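A quick numerical check (our own sketch) confirms that these mixtures leave each player exactly indifferent between his two pure strategies, using the expected payoffs from Figure 8.11:

```python
from fractions import Fraction

p = Fraction(1, 3)    # probability that Tudor plays Bluff
q = Fraction(16, 22)  # probability that Fordor plays Regardless

# Fordor's expected payoffs against Tudor's p-mix (4.5 written as 9/2).
regardless = 3 * p + 3 * (1 - p)
conditional = 0 * p + Fraction(9, 2) * (1 - p)
print(regardless == conditional)   # True: Fordor is indifferent

# Tudor's expected payoffs against Fordor's q-mix (27.9 = 279/10, etc.).
bluff = Fraction(279, 10) * q + Fraction(508, 10) * (1 - q)
honest = Fraction(333, 10) * q + Fraction(364, 10) * (1 - q)
print(bluff == honest)             # True: Tudor is indifferent
```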
In this equilibrium, the Tudor types are only partially separated. The
low-cost-type Tudor always prices Low in period 1, but the high-cost-type
Tudor mixes and will also charge the low price one-third of the time. If Fordor observes a high price in period 1, it can be sure that Tudor has high cost; in
that case, it will always enter. But if Fordor observes a low price, it does not know
whether it faces a truly low-cost Tudor or a bluffing, high-cost Tudor. Then Fordor
also plays a mixed strategy, entering 72.7% of the time. Thus, a high price conveys
full information, but a low price conveys only partial information about Tudor’s
type. Therefore, this kind of equilibrium is labeled semiseparating.
To understand better the mixed strategies of each firm and the semiseparating equilibrium, consider how Fordor can use the partial information conveyed
by Tudor’s low price. If Fordor sees the low price in period 1, it will use this observation to update its belief about the probability that Tudor is low cost; it does
this updating using Bayes’ theorem.33 The table of calculations is shown as Figure 8.12; this table is similar to Figure 8A.3 in the appendix.
The table shows the possible types of Tudor in the rows and the prices Fordor observes in the columns. The values in the cells represent the overall probability that a Tudor of the type shown in the corresponding row chooses the price
shown in the corresponding column (incorporating Tudor’s equilibrium mixture
probability); the final row and column show the total probabilities of each type
and of observing each price, respectively.

                      TUDOR’S PRICE
                 Low                  High                 Sum of row
TUDOR’S  Low     0.1                  0                    0.1
COST     High    0.9 × 1/3 = 0.3      0.9 × 2/3 = 0.6      0.9
Sum of column    0.4                  0.6
FIGURE 8.12 Applying Bayes’ Theorem to the Entry Game
Using Bayes’ rule, when Fordor observes Tudor charging a low period-1
price, it will revise its belief about the probability of Tudor being low cost by taking the probability that a low-cost Tudor is charging the low price (the 0.1 in the
top-left cell) and dividing that by the total probability of the two types of Tudor
choosing the low price (0.4, the column sum in the left column). This calculation
yields Fordor’s updated belief about the probability that Tudor has low costs to
33. We provide a thorough explanation of Bayes’ theorem in the appendix to this chapter. Here, we simply apply the analysis found there to our entry game.
be 0.1 / 0.4 = 0.25. Then Fordor also updates its expected profit from entry to be −15 × 0.25 + 5 × 0.75 = 0. Thus, Tudor’s equilibrium mixture is exactly right for
making Fordor indifferent between entering and not entering when it sees the
low period-1 price. This outcome is exactly what is needed to keep Tudor willing
to mix in the equilibrium.
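The same posterior falls out of a direct application of Bayes’ theorem; here is a minimal sketch (our own) using the entries of Figure 8.12:

```python
prior_low = 0.1      # prior probability that Tudor is low cost
bluff_prob = 1 / 3   # a high-cost Tudor prices low with this probability

# Total probability of observing the low period-1 price.
p_low_price = prior_low * 1.0 + (1 - prior_low) * bluff_prob   # = 0.4

# Bayes' theorem: P(low cost | low price).
posterior = prior_low * 1.0 / p_low_price
print(posterior)                                   # 0.25

# Fordor's expected profit from entry at the updated belief.
print(-15 * posterior + 5 * (1 - posterior))       # 0.0: exactly indifferent
```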
The original probability 0.1 of Tudor being low cost was too low to deter Fordor from entering. Fordor’s revised probability of 0.25, after observing the low
price in period 1, is higher. Why? Precisely because the high-cost-type Tudor is
not always bluffing. If it were, then the low price would convey no information
at all. Fordor’s revised probability would equal 0.1 in that case, whereupon it
would enter. But when the high-cost-type Tudor bluffs only sometimes, a low
price is more likely to be indicative of low cost.
We developed the equilibria in this entry game in an intuitive way, but we
now look back and think systematically about the nature of those equilibria. In
each case, we first ensured that each player’s (and each type’s) strategy was optimal, given the strategies of everyone else; we applied the Nash concept of equilibrium. Second, we ensured that players drew the correct inference from their
observations; this required a probability calculation using Bayes’ theorem, most
explicitly in the semiseparating equilibrium. The combination of concepts necessary to identify equilibria in such asymmetric information games justifies giving them the label Bayesian Nash equilibria. Finally, although this was a rather
trivial part of this example, we did a little bit of rollback, or subgame perfectness,
reasoning. The use of rollback justifies calling it the perfect Bayesian equilibrium (PBE) as well. Our example was a simple instance of all of these equilibrium
concepts: you will meet some of them again in slightly more sophisticated forms
in later chapters and in much fuller contexts in further studies of game theory.
Summary
When facing imperfect or incomplete information, game players with different
attitudes toward risk or different amounts of information can engage in strategic
behavior to control and manipulate the risk and information in a game. Players
can reduce their risk with payment schemes or by sharing the risk with others,
although the latter is complicated by moral hazard and adverse selection. Risk
can sometimes be manipulated to a player’s benefit, depending on the circumstances within the game.
Players with private information may want to conceal or reveal that information, while those without the information try to elicit it or avoid it. Actions
speak louder than words in the presence of asymmetric information. To reveal
information, a credible signal is required. In some cases, mere words may be
sufficient to convey information credibly, and then a cheap talk equilibrium can
arise. The extent to which player interests are aligned plays an important role in
achieving such equilibria. When the information content of a player’s words is
ignored, the game has a babbling equilibrium.
More generally, specific actions taken by players convey information. Signaling works only if the signal action entails different costs to players with different information. To obtain information, when questioning is not sufficient
to elicit truthful information, a screening scheme that looks for a specific action
may be required. Screening works only if the screening device induces others to
reveal their types truthfully; there must be incentive compatibility to get separation. At times, credible signaling or screening may not be possible; then the
equilibrium can entail pooling or there can be a complete collapse of the market or transaction for one of the types. Many examples of signaling and screening games can be found in ordinary situations such as the labor market or in
the provision of insurance. The evidence on players’ abilities to achieve perfect
Bayesian equilibria seems to suggest that, despite the difficult probability calculations necessary, such equilibria are often observed. Different experimental
results appear to depend largely on the design of the experiment.
In the equilibrium of a game with asymmetric information, players must not
only use their best actions given their information, but must also draw correct
inferences (update their information) by observing the actions of others. This
type of equilibrium is known as a Bayesian Nash equilibrium. When the further
requirement of optimality at all nodes (as in rollback analysis) must be imposed,
the equilibrium becomes a perfect Bayesian equilibrium. The outcome of such
a game may entail pooling, separation, or partial separation, depending on the
specifics of the payoff structure and the specified updating processes used by
players. In some parameter ranges, such games may have multiple types of
perfect Bayesian equilibria.
Key Terms

adverse selection (295)
babbling equilibrium (283)
Bayesian Nash equilibrium (319)
cheap talk equilibrium (281)
incentive-compatibility condition (constraint) (306)
moral hazard (272)
negatively correlated (273)
partially revealing equilibrium (310)
participation condition (constraint) (306)
perfect Bayesian equilibrium (PBE) (319)
pooling (308)
pooling of types (308)
pooling equilibrium (281)
positively correlated (274)
screening (281)
screening device (281)
self-selection (307)
semiseparating equilibrium (310)
separating equilibrium (281)
separation of types (307)
signal (280)
signaling (280)
signal jamming (280)
type (281)
Solved Exercises
S1.
In the risk-trading example in Section 1, you had a risky income that
was $160,000 with good luck (probability 0.5) and $40,000 with bad luck
(probability 0.5). When your neighbor had a sure income of $100,000, we
derived a scheme in which you could eliminate all of your risk while raising his expected utility slightly. Assume that the utility of each of you is
still the square root of the respective income. Now, however, let the probability of good luck be 0.6. Consider a contract that leaves you with exactly $100,000 when you have bad luck. Let x be the payment that you
make to your neighbor when you have good luck.
(a) What is the minimum value of x (to the nearest penny) such that
your neighbor slightly prefers to enter into this kind of contract
rather than no contract at all?
(b) What is the maximum value of x (to the nearest penny) for which
this kind of contract gives you a slightly higher expected utility than
no contract at all?
S2.
A local charity has been given a grant to serve free meals to the homeless
in its community, but it is worried that its program might be exploited
by nearby college students, who are always on the lookout for a free
meal. Both a homeless person and a college student receive a payoff of
10 for a free meal. The cost of standing in line for the meal is t²/320 for a homeless person and t²/160 for a college student, where t is the amount
of time in line measured in minutes. Assume that the staff of the charity
cannot observe the true type of those coming for free meals.
(a) What is the minimum wait time t that will achieve separation of types?
(b) After a while, the charity finds that it can successfully identify and
turn away college students half of the time. College students who
are turned away receive no free meal and, further, incur a cost of 5
for their time and embarrassment. Will the partial identification of
college students reduce or increase the answer in part (a)? Explain.
S3.
Consider the used-car market for the 2011 Citrus described in Section
4.B. There is now a surge in demand for used Citruses; buyers would now
be willing to pay up to $18,000 for an orange and $8,000 for a lemon. All
else remains identical to the example in Section 4.B.
(a) What price would buyers be willing to pay for a 2011 Citrus of unknown type if the fraction of oranges in the population, f, were 0.6?
(b) Will there be a market for oranges if f = 0.6? Explain.
(c) What price would buyers be willing to pay if f were 0.2?
(d) Will there be a market for oranges if f = 0.2? Explain.
(e) What is the minimum value of f such that the market for oranges
does not collapse?
(f) Explain why the increase in the buyers’ willingness to pay changes
the threshold value of f at which the market for oranges collapses.
S4.
Suppose electricians come in two types: competent and incompetent.
Both types of electricians can get certified, but for the incompetent types
certification takes extra time and effort. Competent ones have to spend
C months preparing for the certification exam; incompetent ones take
twice as long. Certified electricians can earn 100 (thousand dollars) each
year working on building sites for licensed contractors. Uncertified electricians can earn only 25 (thousand dollars) each year in freelance work;
licensed contractors won’t hire them. Each type of electrician gets a payoff equal to √S − M, where S is the salary measured in thousands of dollars and M is the number of months spent getting certified. What is the
range of values of C for which a competent electrician will choose to signal with this device but an incompetent one will not?
S5.
Return to the Tudor-Fordor example in Section 6.A, when Tudor’s low
per-unit cost is 5. Let z be the probability that Tudor actually has a low
per-unit cost.
(a) Rewrite the table in Figure 8.8 in terms of z.
(b) How many pure-strategy equilibria are there when z = 0? Explain.
(c) How many pure-strategy equilibria are there when z = 1? Explain.
(d) Show that the Nash equilibrium of this game is always a separating
equilibrium for any value of z between 0 and 1 (inclusive).
S6.
Looking at Tudor and Fordor again, assume that the old, established
company Tudor is risk averse, whereas the would-be entrant Fordor
(which is planning to finance its project through venture capital) is risk
neutral. That is, Tudor’s utility is always the square root of its total profit
over both periods. Fordor’s utility is simply the amount of its profit—if
any—during the second period. Assume that Tudor’s low per-unit cost is
5, as in Section 6.A.
(a) Redraw the extensive-form game shown in Figure 8.7, giving the
proper payoffs for a risk-averse Tudor.
(b) Let the probability that Tudor is low cost, z, be 0.4. Will the equilibrium be separating, pooling, or semiseparating? (Hint: Use a table
equivalent to Figure 8.8.)
(c) Repeat part (b) with z = 0.1.
S7.
Return to a risk-neutral Tudor, but with a low per-unit cost of 6 (instead
of 5 or 10 as in Section 6). If Tudor’s cost is low, 6, then it will earn 90
in a profit-maximizing monopoly. If Fordor enters, Tudor will earn 59 in
the resulting duopoly while Fordor earns 13. If Tudor is actually high cost
(that is, its per-unit cost is 15) and prices as if it were low cost (that is,
with a per-unit cost of 6), then it earns 5 in a monopoly situation.
(a) Draw a game tree for this game equivalent to Figure 8.7 or 8.9 in the
text, changing the appropriate payoffs.
(b) Write the normal form of this game, assuming that the probability
that Tudor is low price is 0.4.
(c) What is the equilibrium of the game? Is it separating, pooling, or
semiseparating? Explain why.
S8.
Felix and Oscar are playing a simplified version of poker. Each makes an
initial bet of 8 dollars. Then each separately draws a card, which may be
High or Low with equal probabilities. Each sees his own card but not that
of the other.
Then Felix decides whether to Pass or to Raise (bet an additional 4
dollars). If he chooses to pass, the two cards are revealed and compared.
If the outcomes are different, the one who has the High card collects the
whole pot. The pot has 16 dollars, of which the winner himself contributed 8, so his winnings are 8 dollars. The loser’s payoff is −8 dollars. If the
outcomes are the same, the pot is split equally and each gets his 8 dollars back
(payoff 0).
If Felix chooses Raise, then Oscar has to decide whether to Fold (concede) or See (match with his own additional 4 dollars). If Oscar chooses
Fold, then Felix collects the pot irrespective of the cards. If Oscar chooses
See, then the cards are revealed and compared. The procedure is the
same as that in the preceding paragraph, but the pot is now bigger.
(a) Show the game in extensive form. (Be careful about information
sets.)
If the game is instead written in the normal form, Felix has four strategies: (1) Pass always (PP for short), (2) Raise always (RR), (3) Raise if his
own card is High and Pass if it is Low (RP), and (4) the other way round
(PR). Similarly, Oscar has four strategies: (1) Fold always (FF), (2) See always (SS), (3) See if his own card is High and Fold if it is Low (SF), and (4)
the other way round (FS).
(b) Show that the table of payoffs to Felix is as follows:
                    OSCAR
              FF    SS    SF    FS
        PP     0     0     0     0
FELIX   RR     8     0     1     7
        RP     2     1     0     3
        PR     6    −1     1     4
(In each case, you will have to take an expected value by averaging over
the consequences for each of the four possible combinations of the card
draws.)
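For readers who want to verify these numbers, a minimal Python sketch (ours, not part of the exercise) enumerates the four equally likely card deals and averages Felix’s payoff for every strategy pair; the dictionary encoding of strategies is our own convention, while the dollar amounts come straight from the rules above.

```python
from itertools import product

# Strategies map a player's card ("H" or "L") to an action.
# Felix: "P"ass or "R"aise; Oscar (used only after a Raise): "F"old or "S"ee.
felix_strats = {"PP": {"H": "P", "L": "P"}, "RR": {"H": "R", "L": "R"},
                "RP": {"H": "R", "L": "P"}, "PR": {"H": "P", "L": "R"}}
oscar_strats = {"FF": {"H": "F", "L": "F"}, "SS": {"H": "S", "L": "S"},
                "SF": {"H": "S", "L": "F"}, "FS": {"H": "F", "L": "S"}}

def felix_payoff(f_card, o_card, f_act, o_act):
    if f_act == "P":   # showdown for the 16-dollar pot: +8, -8, or 0
        return 0 if f_card == o_card else (8 if f_card == "H" else -8)
    if o_act == "F":   # Oscar concedes; Felix wins Oscar's 8 dollars
        return 8
    # Oscar sees: 24-dollar pot, 12 from each player, so +12, -12, or 0
    return 0 if f_card == o_card else (12 if f_card == "H" else -12)

for f_name, o_name in product(felix_strats, oscar_strats):
    ev = sum(felix_payoff(fc, oc, felix_strats[f_name][fc],
                          oscar_strats[o_name][oc])
             for fc, oc in product("HL", repeat=2)) / 4
    print(f_name, o_name, ev)   # reproduces the table entries above
```

Running the sketch prints, for example, RR FF 8.0 and PR SS -1.0, matching the table.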
(c) Eliminate dominated strategies as far as possible. Find the mixed-strategy equilibrium in the remaining table and the expected payoff
to Felix in the equilibrium.
(d) Use your knowledge of the theory of signaling and screening to explain intuitively why the equilibrium has mixed strategies.
S9.
Felix and Oscar are playing another simplified version of poker called
Stripped-Down Poker. Both make an initial bet of one dollar. Felix (and
only Felix) draws one card, which is either a King or a Queen with equal
probability (there are four Kings and four Queens). Felix then chooses
whether to Fold or to Bet. If Felix chooses to Fold, the game ends, and
Oscar receives Felix’s dollar in addition to his own. If Felix chooses to Bet,
he puts in an additional dollar, and Oscar chooses whether to Fold or to
Call.
If Oscar Folds, Felix wins the pot (consisting of Oscar’s initial bet of
one dollar and two dollars from Felix). If Oscar Calls, he puts in another
dollar to match Felix’s bet, and Felix’s card is revealed. If the card is a
King, Felix wins the pot (two dollars from each of the roommates). If it is
a Queen, Oscar wins the pot.
(a) Show the game in extensive form. (Be careful about information
sets.)
(b) How many strategies does each player have?
(c) Show the game in strategic form, where the payoffs in each cell reflect the expected payoffs given each player’s respective strategy.
(d) Eliminate dominated strategies, if any. Find the equilibrium in
mixed strategies. What is the expected payoff to Felix in equilibrium?
S10.
Wanda works as a waitress and consequently has the opportunity to earn
cash tips that are not reported by her employer to the Internal Revenue
Service. Her tip income is rather variable. In a good year (G), she earns a
high income, so her tax liability to the IRS is $5,000. In a bad year (B), she
earns a low income, and her tax liability to the IRS is $0. The IRS knows
that the probability of her having a good year is 0.6, and the probability
of her having a bad year is 0.4, but it doesn’t know for sure which outcome has resulted for her this tax year.
In this game, first Wanda decides how much income to report to
the IRS. If she reports high income (H), she pays the IRS $5,000. If she
reports low income (L), she pays the IRS $0. Then the IRS has to decide
whether to audit Wanda. If she reports high income, they do not audit,
because they automatically know they’re already receiving the tax payment Wanda owes. If she reports low income, then the IRS can either
audit (A) or not audit (N). When the IRS audits, it costs the IRS $1,000
in administrative costs, and also costs Wanda $1,000 in the opportunity
cost of the time spent gathering bank records and meeting with the auditor. If the IRS audits Wanda in a bad year (B), then she owes nothing to
the IRS, although she and the IRS have each incurred the $1,000 auditing
cost. If the IRS audits Wanda in a good year (G), then she has to pay the
$5,000 she owes to the IRS, in addition to her and the IRS each incurring
the cost of auditing.
(a) Suppose that Wanda has a good year (G), but she reports low income (L). Suppose the IRS then audits her (A). What is the total payoff to Wanda, and what is the total payoff to the IRS?
(b) Which of the two players has an incentive to bluff (that is, to give a
false signal) in this game? What would bluffing consist of?
(c) Show this game in extensive form. (Be careful about information
sets.)
(d) How many pure strategies does each player have in this game? Explain your reasoning.
(e) Write down the strategic-form game matrix for this game. Find all of
the Nash equilibria to this game. Identify whether the equilibria you
find are separating, pooling, or semiseparating.
(f) Let x equal the probability that Wanda has a good year. In the original version of this problem, we had x = 0.6. Find a value of x such
that Wanda always reports low income in equilibrium.
(g) What is the full range of values of x for which Wanda always reports
low income in equilibrium?
S11.
The design of a health-care system concerns matters of information and
strategy at several points. The users—potential and actual patients—
have better information about their own state of health, lifestyle, and so forth than the insurance companies can find out. The providers—
doctors, hospitals, and so forth—know more about what the patients
need than do either the patients themselves or the insurance companies.
Doctors also know more about their own skills and efforts, and hospitals
about their own facilities. Insurance companies may have some statistical
information about outcomes of treatments or surgical procedures from
their past records. But outcomes are affected by many unobservable and
random factors, so the underlying skills, efforts, or facilities cannot be inferred perfectly from observation of the outcomes. The pharmaceutical
companies know more about the efficacy of drugs than do the others. As
usual, the parties do not have natural incentives to share their information fully or accurately with others. The design of the overall scheme must
try to face these matters and find the best feasible solutions.
Consider the relative merits of various payment schemes—fee for
service versus capitation fees to doctors, comprehensive premiums per
year versus payment for each visit for patients, and so forth—from this
strategic perspective. Which are likely to be most beneficial to those
seeking health care? To those providing health care? Think also about the
relative merits of private insurance and coverage of costs from general
tax revenues.
S12.
In a television commercial for a well-known brand of instant cappuccino,
a gentleman is entertaining a lady friend at his apartment. He wants to
impress her and offers her cappuccino with dessert. When she accepts,
he goes into the kitchen to make the instant cappuccino—simultaneously tossing take-out boxes into the trash and faking the noises made by
a high-class (and expensive) espresso machine. As he is doing so, a voice
comes from the other room: “I want to see the machine . . . .”
Use your knowledge of games of asymmetric information to comment on the actions of these two people. Pay attention to their attempts
to use signaling and screening, and point out specific instances of each
strategy. Offer an opinion about which player is the better strategist.
S13.
(Optional, requires appendix) In the genetic test example, suppose the
test comes out negative (Y is observed). What is the probability that the
person does not have the defect (B exists)? Calculate this probability by
applying Bayes’ rule, and then check your answer by doing an enumeration of the 10,000 members of the population.
S14.
(Optional, requires appendix) Return to the example of the 2011
Citrus in Section 4.B. The two types of Citrus—the reliable orange and
the hapless lemon—are outwardly indistinguishable to a buyer. In the
example, if the fraction f of oranges in the Citrus population is less than
0.65, the seller of an orange will not be willing to part with the car for
the maximum price buyers are willing to pay, so the market for oranges
collapses.
But what if a seller has a costly way to signal her car’s type? Although
oranges and lemons are in nearly every respect identical, the defining difference between the two is that lemons break down much more frequently.
Knowing this, owners of oranges make the following proposal. On a buyer’s request, the seller will in one day take a 500-mile round-trip drive in
the car. (Assume this trip will be verifiable via odometer readings and a
time-stamped receipt from a gas station 250 miles away.) For the sellers
of both types of Citrus, the cost of the trip in fuel and time is $0.50 per
mile (that is, $250 for the 500-mile trip). However, with probability q a
lemon attempting the journey will break down. If a car breaks down, the
cost is $2 per mile of the total length of the attempted road trip (that is,
$1,000). Additionally, breaking down will be a sure sign that the car is a
lemon, so a Citrus that does so will sell for only $6,000.
Assume that the fraction of oranges in the Citrus population, f, is 0.6.
Also, assume that the probability of a lemon breaking down, q, is 0.5 and
that owners of lemons are risk neutral.
(a) Use Bayes’ rule to determine f_updated, the fraction of Citruses that have successfully completed a 500-mile road trip that are oranges. Assume that all Citrus owners attempt the trip. Is f_updated greater than or less than f? Explain why.
(b) Use f_updated to determine the price, p_updated, that buyers are willing to
pay for a Citrus that has successfully completed the 500-mile road
trip.
(c) Will an owner of an orange be willing to make the road trip and sell
her car for p_updated? Why or why not?
(d) What is the expected payoff of attempting the road trip to the seller
of a lemon?
(e) Would you describe the outcome of this market as pooling, separating, or semiseparating? Explain.
UNSOLVED EXERCISES
U1.
Jack is a talented investor, but his earnings vary considerably from year
to year. In the coming year he expects to earn either $250,000 with good
luck or $90,000 with bad luck. Somewhat oddly, given his chosen profession, Jack is risk averse, so that his utility is equal to the square root of his
income. The probability of Jack’s having good luck is 0.5.
(a) What is Jack’s expected utility for the coming year?
(b) What amount of certain income would yield the same level of utility
for Jack as the expected utility in part (a)?
Jack meets Janet, whose situation is identical in every respect. She’s
an investor who will earn $250,000 in the next year with good luck and
$90,000 with bad, she’s risk averse with square-root utility, and her probability of having good luck is 0.5. Crucially, it turns out that Jack and Janet
invest in such a way that their luck is completely independent. They
agree to the following deal. Regardless of their respective luck, they will
always pool their earnings and then split them equally.
(c) What are the four possible luck-outcome pairs, and what is the
probability of reaching each one?
(d) What is the expected utility for Jack or Janet under this arrangement?
(e) What amount of certain income would yield the same level of utility
for Jack and Janet as in part (d)?
Incredibly, Jack and Janet then meet Chrissy, who is also identical
to Jack and Janet with respect to her earnings, utility, and luck. Chrissy’s
probability of good luck is independent from either Jack’s or Janet’s. After
some discussion, they decide that Chrissy should join the agreement of
Jack and Janet. All three of them will pool their earnings and then split
them equally three ways.
(f) What are the eight possible luck-outcome triplets, and what is the
probability of reaching each of them?
(g) What is the expected utility for each of the investors under this expanded arrangement?
(h) What amount of certain income would yield the same level of utility
as in part (g) for these risk-averse investors?
U2.
Consider again the case of the 2011 Citrus. Almost all cars depreciate over
time, and so it is with the Citrus. Every month that passes, all sellers of
Citruses—regardless of type—are willing to accept $100 less than they
were the month before. Also, with every passing month, buyers are maximally willing to pay $400 less for an orange than they were the previous
month and $200 less for a lemon. Assume that the example in the text
takes place in month 0. Eighty percent of the Citruses are oranges, and
this proportion never changes.
(a) Fill out three versions of the following table for month 1, month 2,
and month 3:
             Willingness to         Willingness to
             accept of sellers      pay of buyers
   Orange
   Lemon
(b) Graph the willingness to accept of the sellers of oranges over the
next 12 months. On the same figure, graph the price that buyers
are willing to pay for a Citrus of unknown type (given that the proportion of oranges is 0.8). (Hint: Make the vertical axis range from
10,000 to 14,000.)
(c) Is there a market for oranges in month 3? Why or why not?
(d) In what month does the market for oranges collapse?
(e) If owners of lemons experienced no depreciation (that is, they were
never willing to accept anything less than $3,000), would this affect
the timing of the collapse of the market for oranges? Why or why not?
In what month does the market for oranges collapse in this case?
(f) If buyers experienced no depreciation for a lemon (that is, they were
always willing to pay up to $6,000 for a lemon), would this affect the
timing of the collapse of the market for oranges? Why or why not? In
what month does the market for oranges collapse in this case?
U3.
An economy has two types of jobs, Good and Bad, and two types of workers, Qualified and Unqualified. The population consists of 60% Qualified and 40% Unqualified. In a Bad job, either type of worker produces
10 units of output. In a Good job, a Qualified worker produces 100 units,
and an Unqualified worker produces 0. There is enough demand for
workers that for each type of job, companies must pay what they expect
the appointee to produce.
Companies must hire each worker without observing his type and
pay him before knowing his actual output. But Qualified workers can
signal their qualification by getting educated. For a Qualified worker, the cost of getting educated to level n is n²/2, whereas for an Unqualified worker, it is n². These costs are measured in the same units as output, and n must be an integer.
(a) What is the minimum level of n that will achieve separation?
(b) Now suppose the signal is made unavailable. Which kind of jobs will
be filled by which kinds of workers and at what wages? Who will gain
and who will lose from this change?
U4.
You are the Dean of the Faculty at St. Anford University. You hire Assistant
Professors for a probationary period of 7 years, after which they come up
for tenure and are either promoted and gain a job for life or turned down,
in which case they must find another job elsewhere.
Your Assistant Professors come in two types, Good and Brilliant.
Any types worse than Good have already been weeded out in the hiring
process, but you cannot directly distinguish between Good and Brilliant
types. Each individual Assistant Professor knows whether he or she is
Brilliant or merely Good. You would like to tenure only the Brilliant types.
The payoff from a tenured career at St. Anford is $2 million; think
of this as the expected discounted present value of salaries, consulting
fees, and book royalties, plus the monetary equivalent of the pride and
joy that the faculty member and his or her family would get from being
tenured at St. Anford. Anyone denied tenure at St. Anford will get a faculty position at Boondocks College, and the present value of that career
is $0.5 million.
Your faculty can do research and publish the findings. But each
publication requires effort and time and causes strain on the family;
all these are costly to the faculty member. The monetary equivalent of
this cost is $30,000 per publication for a Brilliant Assistant Professor and
$60,000 per publication for a Good one. You can set a minimum number,
N, of publications that an Assistant Professor must produce in order to
achieve tenure.
(a) Without doing any math, describe, as completely as you can, what
would happen in a separating equilibrium to this game.
(b) There are two potential types of pooling outcomes to this game.
Without doing any math, describe what they would look like, as
completely as you can.
(c) Now please go ahead and do some math. What is the set of possible
N that will accomplish your goal of screening the Brilliant professors out from the merely Good ones?
U5.
Return to the Tudor-Fordor problem from Section 6.C, when Tudor’s low
per-unit cost is 10. Let z be the probability that Tudor actually has a low
per-unit cost.
(a) Rewrite the table in Figure 8.10 in terms of z.
(b) How many pure-strategy equilibria are there when z = 0? What type of equilibrium (separating, pooling, or semiseparating) occurs when z = 0? Explain.
(c) How many pure-strategy equilibria are there when z = 1? What type of equilibrium (separating, pooling, or semiseparating) occurs when z = 1? Explain.
(d) What is the lowest value of z such that there is a pooling equilibrium?
(e) Explain intuitively why the pooling equilibrium cannot occur when
the value of z is too low.
U6.
Assume that Tudor is risk averse, with square-root utility over its total
profit (see Exercise S6), and that Fordor is risk neutral. Also, assume that
Tudor’s low per-unit cost is 10, as in Section 6.C.
(a) Redraw the extensive-form game shown in Figure 8.9, giving the
proper payoffs for a risk-averse Tudor.
(b) Let the probability that Tudor is low cost, z, be 0.4. Will the equilibrium be separating, pooling, or semiseparating? (Hint: Use a table
equivalent to Figure 8.10.)
(c) Repeat part (b) with z = 0.1.
(d) (Optional) Will Tudor’s risk aversion change the answer to part (d)
of Exercise U5? Explain why or why not.
U7.
Return to the situation in Exercise S7, where Tudor’s low per-unit cost
is 6.
(a) Write the normal form of this game in terms of z, the probability
that Tudor is low price.
(b) What is the equilibrium when z = 0.1? Is it separating, pooling, or
semiseparating?
(c) Repeat part (b) for z = 0.2.
(d) Repeat part (b) for z = 0.3.
(e) Compare your answers in parts (b), (c), and (d) of this problem with
part (d) of Exercise U5. When Tudor’s low cost is 6 instead of 10, can
pooling equilibria be achieved at lower values of z ? Or are higher
values of z required for pooling equilibria to occur? Explain intuitively why this is the case.
U8.
Corporate lawsuits may sometimes be signaling games. Here is one example. In 2003, AT&T filed suit against eBay, alleging that its Billpoint
and PayPal electronic-payment systems infringed on AT&T’s 1994 patent
on “mediation of transactions by a communications system.”
Let’s consider this situation from the point in time when the suit was
filed. In response to this suit, as in most patent-infringement suits, eBay
can offer to settle with AT&T without going to court. If AT&T accepts
eBay’s settlement offer, there will be no trial. If AT&T rejects eBay’s settlement offer, the outcome will be determined by the court.
The amount of damages claimed by AT&T is not publicly available.
Let’s assume that AT&T is suing for $300 million. In addition, let’s assume
that if the case goes to trial, the two parties will incur court costs (paying
lawyers and consultants) of $10 million each.
Because eBay is actually in the business of processing electronic
payments, we might think that eBay knows more than AT&T does about
its probability of winning the trial. For simplicity, let’s assume that eBay
knows for sure whether it will be found innocent (i) or guilty (g) of patent
infringement. From AT&T’s point of view, there is a 25% chance that eBay
is guilty (g) and a 75% chance that eBay is innocent (i).
Let’s also suppose that eBay has two possible actions: a generous
settlement offer (G) of $200 million or a stingy settlement offer (S) of $20
million. If eBay offers a generous settlement, assume that AT&T will accept, thus avoiding a costly trial. If eBay offers a stingy settlement, then
AT&T must decide whether to accept (A) and avoid a trial or reject and
take the case to court (C). In the trial, if eBay is guilty, it must pay AT&T
$300 million in addition to paying all the court costs. If eBay is found innocent, it will pay AT&T nothing, and AT&T will pay all the court costs.
(a) Show the game in extensive form. (Be careful to label information
sets correctly.)
(b) Which of the two players has an incentive to bluff (that is, to give a
false signal) in this game? What would bluffing consist of? Explain
your reasoning.
(c) Write the strategic-form game matrix for this game. Find all of the
Nash equilibria to this game. What are the expected payoffs to each
player in equilibrium?
U9.
For the Stripped-Down Poker game that Felix and Oscar play in Exercise
S9, what does the mix of Kings and Queens have to be for the game to be
fair? That is, what fraction of Kings will make the expected payoff of the
game zero for both players?
U10.
Bored with Stripped-Down Poker, Felix and Oscar now make the game
more interesting by adding a third card type: Jack. Four Jacks are added
to the deck of four Kings and four Queens. All rules remain the same as
before, except for what happens when Felix Bets and Oscar Calls. When
Felix Bets and Oscar Calls, Felix wins the pot if he has a King, they “tie”
and each gets his money back if Felix is holding a Queen, and Oscar wins
the pot if the card is a Jack.
(a) Show the game in extensive form. (Be careful to label information
sets correctly.)
(b) How many pure strategies does Felix have in this game? Explain
your reasoning.
(c) How many pure strategies does Oscar have in this game? Explain
your reasoning.
(d) Represent this game in strategic form. This should be a matrix of
expected payoffs for each player, given a pair of strategies.
(e) Find the unique pure-strategy Nash equilibrium of this game.
(f) Would you call this a pooling equilibrium, a separating equilibrium,
or a semiseparating equilibrium?
(g) In equilibrium, what is the expected payoff to Felix of playing this
game? Is it a fair game?
U11.
Consider Spence’s job-market signaling model with the following specifications. There are two types of workers, 1 and 2. The productivities of the
two types, as functions of the level of education E, are
W1(E) = E and W2(E) = 1.5E.
The costs of education for the two types, as functions of the level of education, are
C1(E) = E²/2 and C2(E) = E²/3.
Each worker’s utility equals his or her income minus the cost of education. Companies that seek to hire these workers are perfectly competitive
in the labor market.
(a) If types are public information (observable and verifiable), find expressions for the levels of education, incomes, and utilities of the
two types of workers.
Now suppose each worker’s type is his or her private information.
(b) Verify that if the contracts of part (a) are attempted in this situation
of information asymmetry, then type 2 does not want to take up the
contract intended for type 1, but type 1 does want to take up the
contract intended for type 2, so “natural” separation cannot prevail.
(c) If we leave the contract for type 1 as in part (a), what is the range
of contracts (education-wage pairs) for type 2 that can achieve
separation?
(d) Of the possible separating contracts, which one do you expect to
prevail? Give a verbal but not a formal explanation for your answer.
(e) Who gains or loses from the information asymmetry? How much?
U12.
“Mr. Robinson pretty much concludes that business schools are a sifting device—M.B.A. degrees are union cards for yuppies. But perhaps
the most important fact about the Stanford business school is that all
meaningful sifting occurs before the first class begins. No messy weeding is done within the walls. ‘They don’t want you to flunk. They want you
to become a rich alum who’ll give a lot of money to the school.’ But one
wonders: If corporations are abdicating to the Stanford admissions office
the responsibility for selecting young managers, why don’t they simply replace their personnel departments with Stanford admissions officers, and
eliminate the spurious education? Does the very act of throwing away a
lot of money and two years of one’s life demonstrate a commitment to
business that employers find appealing?” (From the review by Michael
Lewis of Peter Robinson’s Snapshots from Hell: The Making of an MBA, in
the New York Times, May 8, 1994, Book Review section.) What answer to
Lewis’s question can you give, based on our analysis of strategies in situations of asymmetric information?
U13.
(Optional, requires appendix) An auditor for the IRS is reviewing Wanda’s latest tax return (see Exercise S10), on which she reports having had
a bad year. Assume that Wanda is playing according to her equilibrium
strategy and that the auditor knows this.
(a) Using Bayes’ rule, find the probability that Wanda had a good year
given that she reports having had a bad year.
(b) Explain why the answer in part (a) is more or less than the baseline
probability of having a good year, 0.6.
U14.
(Optional, requires appendix) Return to Exercise S14. Assume, reasonably, that the probability of a lemon’s breaking down increases with the length of the road trip. Specifically, let q = m/(m + 500), where m is the number of miles in the round trip.
(a) Find the minimum integer number of miles, m, necessary to avoid
the collapse of the market for oranges. That is, what is the smallest
m such that the seller of an orange is willing to sell her car at the
market price for a Citrus that has successfully completed the road
trip? (Hint: Remember to calculate f_updated and p_updated.)
(b) What is the minimum integer number of miles, m, necessary to
achieve complete separation between functioning markets for oranges and lemons? That is, what is the smallest m such that the
owner of a lemon will never decide to attempt the road trip?
■
Appendix:
Risk Attitudes and Bayes’ Theorem
1 ATTITUDES TOWARD RISK AND EXPECTED UTILITY
In Chapter 2, we pointed out a difficulty about using probabilities to calculate
the average or expected payoff for players in a game. Consider a game where
players gain or lose money, and suppose we measure payoffs simply in money
amounts. If a player has a 75% chance of getting nothing and a 25% chance of
getting $100, then the expected payoff is calculated as a probability-weighted
average; the expected payoff is the average of the different payoffs with the
probabilities of each as weights. In this case, we have $0 with a probability of
75%, which yields 0.75 × 0 = 0 on average, added to $100 with a probability of 25%, which yields 0.25 × 100 = 25 on average. That is the same payoff as the
player would get from a simple nonrandom outcome that guaranteed him $25
every time he played. People who are indifferent between two alternatives with
the same average monetary value but different amounts of risk are said to be
risk-neutral. In our example, one prospect is riskless ($25 for sure), while the
other is risky, yielding either $0 with a probability of 0.75 or $100 with a probability of 0.25, for the same average of $25. In contrast are risk-averse people—
those who, given a pair of alternatives each with the same average monetary
value, would prefer the less risky option. In our example, they would rather get
$25 for sure than face the risky $100-or-nothing prospect and, given the choice,
would pick the safe prospect. Such risk-averse behavior is quite common; we
should therefore have a theory of decision making under uncertainty that takes
it into account.
We also said in Chapter 2 that a very simple modification of our payoff calculation can get us around this difficulty. We said that we could measure payoffs
not in money sums but by using a nonlinear rescaling of the dollar amounts.
Here we show explicitly how that rescaling can be done and why it solves our
problem for us.
Suppose that, when a person gets D dollars, we define the payoff to be something other than just D, perhaps √D. Then the payoff number associated
with $0 is 0, and that for $100 is 10. This transformation does not change the way
in which the person rates the two payoffs of $0 and $100; it simply rescales the
payoff numbers in a particular way.
FIGURE 8A.1 Concave Scale: Risk Aversion [figure: the square-root payoff scale plotted against dollars; the expected payoff of 2.5 corresponds to a sure $6.25, compared with the gamble’s $25 money average]
Now consider the risky prospect of getting $100 with probability 0.25 and
nothing otherwise. After our rescaling, the expected payoff (which is the average
of the two payoffs with the probabilities as weights) is (0.75 × 0) + (0.25 × 10) = 2.5. This expected payoff is equivalent to the person’s getting the dollar amount whose square root is 2.5; because 2.5 = √6.25, a person getting $6.25 for
sure would also receive a payoff of 2.5. In other words, the person with our
square-root payoff scale would be just as happy getting $6.25 for sure as he would
getting a 25% chance at $100. This indifference between a guaranteed $6.25 and a
1 in 4 chance of $100 indicates quite a strong aversion to risk; this person is willing to give up the difference between $25 and $6.25 to avoid facing the risk.
Figure 8A.1 shows this nonlinear scale (the square root), the expected payoff,
and the person’s indifference between the sure prospect and the gamble.
What if the nonlinear scale that we use to rescale dollar payoffs is the cube
root instead of the square root? Then the payoff from $100 is 4.64, and the
expected payoff from the gamble is (0.75 × 0) + (0.25 × 4.64) = 1.16, which is
the cube root of 1.56. Therefore, a person with this payoff scale would accept
only $1.56 for sure instead of a gamble that has a money value of $25 on average;
such a person is extremely risk-averse indeed. (Compare a graph of the cube
root of x with a graph of the square root of x to see why this should be so.)
And what if the rescaling of payoffs from x dollars is done by using the function x²? Then the expected payoff from the gamble is (0.75 × 0) + (0.25 × 10,000) = 2,500, which is the square of 50. Therefore, a person with this payoff
scale would be indifferent between getting $50 for sure and the gamble with an
expected money value of only $25. This person must be a risk lover because he is not willing to give up any money to get a reduction in risk; on the contrary, he must be given an extra $25 in compensation for the loss of risk. Figure 8A.2 shows the nonlinear scale associated with a function such as x².

FIGURE 8A.2 Convex Scale: Risk Loving [figure: the x² payoff scale plotted against dollars; the expected payoff of 2,500 corresponds to a sure $50, above the gamble’s $25 money average]
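The three certainty equivalents just computed ($6.25, $1.56, and $50) can be checked in a few lines of Python. This is only an illustration of the arithmetic; the lambda-based encoding of the scales is our own.

```python
# Gamble: $0 with probability 0.75, $100 with probability 0.25.
# The certainty equivalent is u_inverse(expected utility) for each scale u.
scales = {
    "square root (risk averse)":    (lambda d: d ** 0.5,     lambda u: u ** 2),
    "cube root (very risk averse)": (lambda d: d ** (1 / 3), lambda u: u ** 3),
    "square (risk loving)":         (lambda d: d ** 2,       lambda u: u ** 0.5),
}
for name, (u, u_inverse) in scales.items():
    eu = 0.75 * u(0) + 0.25 * u(100)       # probability-weighted average
    print(name, round(u_inverse(eu), 2))   # 6.25, 1.56, 50.0
```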
So by using different nonlinear scales instead of pure money payoffs, we can
capture different degrees of risk-averse or risk-loving behavior. A concave scale
like that of Figure 8A.1 corresponds to risk aversion, and a convex scale like that
of Figure 8A.2 corresponds to risk-loving behavior. You can experiment with
different simple nonlinear scales—for example, logarithms, exponentials, and
other roots and powers—to see what they imply about attitudes toward risk.34
This method of evaluating risky prospects has a long tradition in decision
theory; it is called the expected utility approach. The nonlinear scale that gives
payoffs as functions of money values is called the utility function; the square
root, cube root, and square functions referred to earlier are simple examples.
Then the mathematical expectation, or probability-weighted average, of the
utility values of the different money sums in a random prospect is called the
expected utility of that prospect. And different random prospects are compared
with one another in terms of their expected utilities; prospects with higher expected utility are judged to be better than those with lower expected utility.
34. Additional information on the use of expected utility and risk attitudes of players can be found in many intermediate microeconomic texts; for example, Hal Varian, Intermediate Microeconomics, 7th ed. (New York: W. W. Norton & Company, 2006), ch. 12; Walter Nicholson and Christopher Snyder, Microeconomic Theory, 10th ed. (New York: Dryden Press, 2008), ch. 7.
Almost all of game theory is based on the expected utility approach, and it is
indeed very useful, although it is not without flaws. We will adopt it in this book,
leaving more detailed discussions to advanced treatises.35
2 INFERRING PROBABILITIES FROM OBSERVING CONSEQUENCES
When players have different amounts of information in a game, they will try to
use some device to ascertain their opponents’ private information. As we saw
in Section 3 of this chapter, it is sometimes possible for direct communication
to yield a cheap talk equilibrium. But more often, players will need to determine one another’s information by observing one another’s actions. They then
must estimate the probabilities of the underlying information by using those actions or their observed consequences. This estimation requires some relatively
sophisticated manipulation of the rules of probability, and we examine this process in detail here.
The rules given in the appendix to Chapter 7 for manipulating and calculating the probability of events, particularly the combination rule, prove useful in
our calculations of payoffs when individual players are differently informed. In
games of asymmetric information, players try to find out the other’s information
by observing their actions. Then they must draw inferences about the likelihood
of—estimate the probabilities of—the underlying information by exploiting the
actions or consequences that are observed.
The best way to understand this is by example. Suppose 1% of the population has a genetic defect that can cause a disease. A test that can identify this
genetic defect has a 99% accuracy: when the defect is present, the test will fail
to detect it 1% of the time, and the test will also falsely find a defect when none
is present 1% of the time. We are interested in determining the probability that a
person with a positive test result really has the defect. That is, we cannot directly
observe the person’s genetic defect (underlying condition), but we can observe
the results of the test for that defect (consequences)—except that the test is not
a perfect indicator of the defect. How certain can we be, given our observations,
that the underlying condition does in fact exist?
We can do a simple numerical calculation to answer the question for our
particular example. Consider a population of 10,000 persons in which 100 (1%)
have the defect and 9,900 do not. Suppose they all take the test. Of the 100
persons with the defect, the test will be (correctly) positive for 99. Of the 9,900
35. See R. Duncan Luce and Howard Raiffa, Games and Decisions (New York: John Wiley & Sons, 1957), ch. 2 and app. 1, for an exposition; and Mark Machina, “Choice Under Uncertainty: Problems Solved and Unsolved,” Journal of Economic Perspectives, vol. 1, no. 1 (Summer 1987), pp. 121–54, for a critique and alternatives. Although decision theory based on these alternatives has made considerable progress, it has not yet influenced game theory to any significant extent.
without the defect, it will be (wrongly) positive for 99. That is 198 positive test results of which one-half are right and one-half are wrong. If a random person receives a positive test result, it is just as likely to be because the test is indeed right
as because the test is wrong, so the risk that the defect is truly present for a person with a positive result is only 50%. (That is why tests for rare conditions must
be designed to have especially low error rates of generating “false positives.”)
For general questions of this type, we use an algebraic formula called Bayes’
theorem to help us set up the problem and do the calculations. To do so, we
generalize our example, allowing for two alternative underlying conditions,
A and B (genetic defect or not, for example), and two observable consequences,
X and Y (positive or negative test result, for example). Suppose that, in the absence of any information (over the whole population), the probability that A exists is p, so the probability that B exists is (1 − p). When A exists, the chance of observing X is a, so the chance of observing Y is (1 − a). (To use the language that we developed in the appendix to Chapter 7, a is the probability of X conditional on A, and (1 − a) is the probability of Y conditional on A.) Similarly, when B exists, the chance of observing X is b, so the chance of observing Y is (1 − b).
This description shows us that four alternative combinations of events
could arise: (1) A exists and X is observed, (2) A exists and Y is observed, (3) B
exists and X is observed, and (4) B exists and Y is observed. Using the modified
multiplication rule, we find the probabilities of the four combinations to be, respectively, pa, p(1 − a), (1 − p)b, and (1 − p)(1 − b).
Now suppose that X is observed: a person has the test for the genetic defect
and gets a positive result. Then we restrict our attention to a subset of the four
preceding possibilities—namely, the first and third, both of which include the
observation of X. These two possibilities have a total probability of pa + (1 − p)b; this is the probability that X is observed. Within this subset of outcomes in
which X is observed, the probability that A also exists is just pa, as we have
already seen. So we know how likely we are to observe X alone and how likely it
is that both X and A exist.
But we are more interested in determining how likely it is that A exists, given
that we have observed X—that is, the probability that a person has the genetic
defect, given that the test is positive. This calculation is the trickiest one. Using
the modified multiplication rule, we know that the probability of both A and X
happening equals the product of the probability that X alone happens times the
probability of A conditional on X; it is this last probability that we are after. Using
the formulas for “A and X ” and for “X alone,” which we just calculated, we get:
Prob(A and X) = Prob(X alone) × Prob(A conditional on X)
pa = [pa + (1 − p)b] × Prob(A conditional on X)
Prob(A conditional on X) = pa/[pa + (1 − p)b].
This formula gives us an assessment of the probability that A has occurred,
given that we have observed X (and have therefore conditioned everything on
this fact). The outcome is known as Bayes’ theorem (or rule or formula).
In our example of testing for the genetic defect, we had Prob(A) = p = 0.01, Prob(X conditional on A) = a = 0.99, and Prob(X conditional on B) = b = 0.01. We can substitute these values into Bayes’ formula to get
Probability defect exists given that test is positive = Prob(A conditional on X)
= (0.01)(0.99)/[(0.01)(0.99) + (1 − 0.01)(0.01)]
= 0.0099/(0.0099 + 0.0099)
= 0.5.
The probability algebra employing Bayes’ rule confirms the arithmetical calculation that we used earlier, which was based on an enumeration of all of the possible cases. The advantage of the formula is that, once we have it, we can apply it
mechanically; this saves us the lengthy and error-susceptible task of enumerating every possibility and determining each of the necessary probabilities.
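Both routes to the answer, the formula and the enumeration, are easy to mechanize. A minimal Python sketch follows (an illustration; the function name is our own):

```python
def prob_A_given_X(p, a, b):
    """Bayes' rule: Prob(A | X) = pa / (pa + (1 - p)b)."""
    return p * a / (p * a + (1 - p) * b)

# Genetic-test example: p = Prob(defect), a = Prob(positive | defect),
# b = Prob(positive | no defect).
print(prob_A_given_X(p=0.01, a=0.99, b=0.01))   # 0.5

# Enumeration check over 10,000 people: 99 true and 99 false positives.
true_pos, false_pos = 100 * 0.99, 9900 * 0.01
print(true_pos / (true_pos + false_pos))        # 0.5
```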
We show Bayes’ rule in Figure 8A.3 in tabular form, which may be easier to
remember and to use than the preceding formula. The rows of the table show
the alternative true conditions that might exist, for example, “genetic defect”
and “no genetic defect.” Here, we have just two, A and B, but the method generalizes immediately to any number of possibilities. The columns show the observed events—for example, “test positive” and “test negative.”
Each cell in the table shows the overall probability of that combination of
the true condition and the observation; these are just the probabilities for the
four alternative combinations listed above. The last column on the right shows
the sum across the first two columns for each of the top two rows. This sum is
                                  OBSERVATION
                         X                          Y                            Sum of row
TRUE         A           pa                         p(1 − a)                     p
CONDITION    B           (1 − p)b                   (1 − p)(1 − b)               1 − p
Sum of column            pa + (1 − p)b              p(1 − a) + (1 − p)(1 − b)

FIGURE 8A.3 Bayes’ Rule
the total probability of each true condition (so, for instance, A’s probability is
p, as we have seen). The last row shows the sum of the first two rows in each
column. This sum gives the probability that each observation occurs. For example, the entry in the last row of the X column is the total probability that X is
observed, either when A is the true condition (a true positive in our genetic test
example) or when B is the true condition (a false positive).
To find the probability of a particular condition, given a particular observation, then, Bayes’ rule says that we should take the entry in the cell corresponding to the combination of that condition and that observation and divide
it by the column sum in the last row for that observation. As an example, Prob(B given X) = (1 − p)b/[pa + (1 − p)b].
SUMMARY
Judging consequences by taking expected monetary payoffs assumes risk-neutral behavior. Risk aversion can be allowed for by using the expected utility approach, which requires the use of a utility function, a concave rescaling of monetary payoffs, and taking its probability-weighted average as the measure of expected payoff.
If players have asymmetric information in a game, they may try to infer
probabilities of hidden underlying conditions from observing actions or the
consequences of those actions. Bayes’ theorem provides a formula for inferring
such probabilities.
KEY TERMS
Bayes’ theorem (339)
expected utility (337)
risk-averse (335)
risk-neutral (335)
utility function (337)
9
■
Strategic Moves
A game is specified by the choices or moves available to the players, the order, if any, in which they make those moves, and the payoffs that result from all logically possible combinations of all the players’ choices. In Chapter 6, we saw how changing the order of moves from sequential to simultaneous or vice versa can alter the game’s outcomes. Adding or removing moves available to a player or changing the payoffs at some terminal nodes or in some cells of the game table also can change outcomes. Unless the rules of a game are fixed by an outside authority, each player has the incentive to manipulate them to produce an outcome that is more to his own advantage. Devices to manipulate a game in this way are called strategic moves, which are the subject of this chapter.
A strategic move changes the rules of the original game to create a new two-stage game. In this sense, strategic moves are similar to the direct communications of information that we examined in Chapter 8. With strategic moves, though, the second stage is the original game, often with some alteration of the order of moves and the payoffs; there was no such alteration in our games with direct communication. The first stage in a game with strategic moves specifies how you will act in the second stage. Different first-stage actions correspond to different strategic moves, and we classify them into three types: commitments, threats, and promises. The aim of all three is to alter the outcome of the second-stage game to your own advantage. Which, if any, suits your purpose depends on the context. But most important, any of the three works only if the other player believes that at the second stage you will indeed do what you declared at the first
stage. In other words, the credibility of the strategic move is open to question.
Only a credible strategic move will have the desired effect and, as was often the
case in Chapter 8, mere declarations are not enough. At the first stage, you must
take some ancillary actions that lend credibility to your declared second-stage
actions. We will study both the kinds of second-stage actions that work to your
benefit and the first-stage ancillary moves that make them credible.
You are probably more familiar with the use and credibility of strategic moves than you might think. Parents, for instance, constantly attempt to influence the behavior of their children by using threats (“no dessert unless you finish your vegetables”) and promises (“you will get the new racing bike at the end of the term if you maintain at least a B average in school”). And children know very well that many of these threats and promises are not credible; much bad behavior can escape the threatened punishment if the child sweetly promises not to do that again, even though the promise itself may not be credible. Furthermore, when the children get older and become concerned with their own appearance, they find themselves making commitments to themselves to exercise and diet; many of these commitments also turn out to lack credibility. All of these devices—commitments, threats, and promises—are examples of strategic moves. Their purpose is to alter the actions of another player, perhaps even your own future self, at a later stage in a game. But they will not achieve this purpose unless they are credible. In this chapter, we will use game theory to study systematically how to use such strategies and how to make them credible.
Be warned, however, that credibility is a difficult and subtle matter. We can
offer you some general principles and an overall understanding of how strategic
moves can work—a science of strategy. But actually making them work depends
on your specific understanding of the context, and your opponent may get the
better of you by having a better understanding of the concepts or the context
or both. Therefore, the use of strategic moves in practice retains a substantial
component of art. It also entails risk, particularly when using the strategy of
brinkmanship, which can sometimes lead to disasters. You can have success as
well as fun trying to put these ideas into practice, but note our disclaimer and
warning: use such strategies at your own risk.
1 A CLASSIFICATION OF STRATEGIC MOVES
Because the use of strategic moves depends so critically on the order of moves,
to study them we need to know what it means to “move first.” Thus far, we have
taken this concept to be self-evident, but now we need to make it more precise.
It has two components. First, your action must be observable to the other
player; second, it must be irreversible.
Consider a strategic interaction between two players, A and B, in which A’s
move is made first. If A’s choice is not observable to B, then B cannot respond
to it, and the mere chronology of action is irrelevant. For example, suppose A
and B are two companies bidding in an auction. A’s committee meets in secret
on Monday to determine its bid; B’s committee meets on Tuesday; the bids are
mailed separately to the auctioneer and opened on Friday. When B makes its decision, it does not know what A has done; therefore the moves are strategically
the same as if they were simultaneous.
If A’s move is not irreversible, then A might pretend to do one thing, lure
B into responding, and then change its own action to its own advantage. B
should anticipate this ruse and not be lured; then it will not be responding to A’s
choice. Once again, in the true strategic sense A does not have the first move.
Considerations of observability and irreversibility affect the nature and
types of strategic moves as well as their credibility. We begin with a taxonomy of
strategic moves available to players.
A. Unconditional Strategic Moves
Let us suppose that player A is the one making an observable and irreversible strategic move in the first stage of the game. He can declare: “In the game to follow, I will make a particular move, X.” This declaration says that A’s future move is unconditional; A will do X irrespective of what B does. Such a statement, if credible, is tantamount to changing the order of the game at stage 2 so that A moves first and B second, and A’s first move is X. This strategic move is called a commitment.
If the previous rules of the game at the second stage already have A moving
first, then such a declaration would be irrelevant. But if the game at the second
stage has simultaneous moves or if A is to move second there, then such a declaration, if credible, can change the outcome because it changes B’s beliefs about
the consequences of his actions. Thus, a commitment is a simple seizing of the
first-mover advantage when it exists.
In the street-garden game of Chapter 3, three women play a sequential-move game in which each must decide whether to contribute toward the creation of a public flower garden on their street; two or more contributors are necessary for the creation of a pleasant garden. The rollback equilibrium entails the first player (Emily) choosing not to contribute while the other players (Nina and Talia) do contribute. By making a credible commitment not to contribute, however, Talia (or Nina) could alter the outcome of the game. Even though she does not get her turn to announce her decision until after Emily and Nina have made theirs public, Talia could let it be known that she has sunk all of her savings (and energy) into a large house-renovation project, and so she will have absolutely nothing left to contribute to the street garden. Then Talia essentially commits herself not to contribute regardless of Emily’s and Nina’s decisions, before Emily and Nina make those decisions. In other words, Talia changes the game to one in which she is in effect the first mover. You can easily check that the new rollback equilibrium entails Emily and Nina both contributing to the garden and the equilibrium payoffs are 3 to each of them but 4 to Talia—the equilibrium outcome associated with the game when Talia moves first. Several more detailed examples of commitments are given in the following sections.
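To see concretely how Talia’s commitment changes the rollback outcome, here is a small Python sketch of backward induction in this game. The ordinal payoffs are our reconstruction of the Chapter 3 convention (4 = garden without contributing, 3 = garden while contributing, 2 = no garden and no contribution, 1 = contributed but no garden); only their ranking matters.

```python
# Assumed ordinal payoffs (our reconstruction; only the ranking matters):
# 4 = garden, didn't contribute; 3 = garden, contributed;
# 2 = no garden, didn't contribute; 1 = no garden, contributed.
def payoff(contributed, total_contributors):
    if total_contributors >= 2:     # two or more contributors build the garden
        return 3 if contributed else 4
    return 1 if contributed else 2

def rollback(order, committed=None):
    """Backward induction; `committed` fixes a player's move in advance."""
    committed = committed or {}
    def solve(i, choices):
        if i == len(order):         # terminal node: compute everyone's payoff
            n = sum(choices.values())
            return {q: payoff(choices[q], n) for q in order}
        p = order[i]
        options = [committed[p]] if p in committed else [True, False]
        # the mover picks the continuation that maximizes her own payoff
        return max((solve(i + 1, {**choices, p: c}) for c in options),
                   key=lambda result: result[p])
    return solve(0, {})

print(rollback(["Emily", "Nina", "Talia"]))
# {'Emily': 4, 'Nina': 3, 'Talia': 3} -- Emily free rides
print(rollback(["Emily", "Nina", "Talia"], committed={"Talia": False}))
# {'Emily': 3, 'Nina': 3, 'Talia': 4} -- the commitment pays off for Talia
```

With Talia free to choose, rollback gives Emily the free ride; with Talia committed not to contribute, Emily and Nina contribute and Talia collects the highest payoff, exactly as described above.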
B. Conditional Strategic Moves
Another possibility for A is to declare at the first stage: “In the game to follow, I will respond to your choices in the following way. If you choose Y1, I will do Z1; if you do Y2, I will do Z2, . . .” In other words, A can use a move that is conditional on B’s behavior; we call this type of move a response rule or reaction function. A’s statement means that, in the game to be played at the second stage, A will move second, but how he will respond to B’s choices at that point is already predetermined by A’s declaration at stage 1. For such declarations to be meaningful, A must be physically able to wait to make his move at the second stage until after he has observed what B has irreversibly done. In other words, at the second stage, B should have the true first move in the double sense just explained.
Conditional strategic moves take different forms, depending on what they
are trying to achieve and how they set about achieving it. When A wants to
stop B from doing something, we say that A is trying to deter B, or to achieve
deterrence; when A wants to induce B to do something, we say that A is trying
to compel B, or to achieve compellence. We return to this distinction later. Of
more immediate interest is the method used in pursuit of either of these aims.
If A declares, “Unless your action (or inaction, as the case may be) conforms to
my stated wish, I will respond in a way that will hurt you,” that is, a threat. If
A declares, “If your action (or inaction, as the case may be) conforms to my
stated wish, I will respond in a way that will reward you,” that is, a promise.
“Hurt” and “reward” are measured in terms of the payoffs in the game itself.
When A hurts B, A does something that lowers B’s payoff; when A rewards B, A
does something that leads to a higher payoff for B. Threats and promises are the
two conditional strategic moves on which we focus our analysis.
To understand the nature of these strategies, consider the dinner game mentioned earlier. In the natural chronological order of moves, first the child decides whether to eat his vegetables, and then the parent decides whether to give the child dessert. Rollback analysis tells us the outcome: the child refuses to eat the vegetables, knowing that the parent, unwilling to see the child hungry and unhappy, will give him the dessert. The parent can foresee this outcome, however, and can try to alter it by making an initial move—namely, by stating a conditional response rule of the form “no dessert unless you finish your vegetables.” This declaration constitutes a threat. It is a first move in a pregame, which fixes
how you will make your second move in the actual game to follow. If the child
believes the threat, that alters the child’s rollback calculation. The child “prunes”
that branch of the game tree in which the parent serves dessert even if the child
has not finished his vegetables. This may alter the child’s behavior; the parent
hopes that it will make the child act as the parent wants him to. Similarly, in the
“study game,” the promise of the bike may induce a child to study harder.
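The pruning logic can be made concrete with a few lines of code. The Python sketch below is ours, not the book’s: the dinner game is described only verbally in the text, so the payoff numbers are illustrative assumptions, chosen so that the parent prefers serving dessert whatever the child does.

    # Rollback in the dinner game, with and without the parent's threat.
    # Payoff numbers are illustrative assumptions, not from the text:
    # (child, parent), indexed as PAYOFFS[child_move][parent_move].
    PAYOFFS = {
        "eat":    {"dessert": (3, 4), "no dessert": (2, 2)},
        "refuse": {"dessert": (4, 3), "no dessert": (1, 1)},
    }

    def rollback(parent_rule=None):
        """parent_rule maps the child's move to the parent's move;
        if None, the parent freely best-responds at her node."""
        def parent_move(child):
            if parent_rule:
                return parent_rule[child]
            return max(PAYOFFS[child], key=lambda p: PAYOFFS[child][p][1])
        # The child anticipates the parent's response and maximizes his payoff.
        child = max(PAYOFFS, key=lambda c: PAYOFFS[c][parent_move(c)][0])
        return child, parent_move(child), PAYOFFS[child][parent_move(child)]

    print(rollback())                    # ('refuse', 'dessert', (4, 3))
    threat = {"eat": "dessert", "refuse": "no dessert"}  # dessert iff vegetables
    print(rollback(parent_rule=threat))  # ('eat', 'dessert', (3, 4))

Without the rule, the child refuses and still gets dessert; once the rule is believed, the (refuse, dessert) branch is pruned, the child eats the vegetables, and the parent’s payoff rises from 3 to 4.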
2 CREDIBILITY OF STRATEGIC MOVES
We have already seen that payoffs to the other player can be altered by one play‑
er’s strategic move, but what about the payoffs for the player making that move?
Player A gets a higher payoff when B acts in conformity with A’s wishes. But A’s
payoff also may be affected by his own response. In regard to a threat, A’s threat‑
ened response if B does not act as A would wish may have consequences for A’s
own payoffs: the parent may be made unhappy by the sight of the unhappy child
who has been denied dessert. Similarly, in regard to a promise, rewarding B if he
does act as A would wish can affect A’s own payoff: the parent who rewards the
child for studying hard has to incur the monetary cost of the gift but is happy to
see the child’s happiness on receiving the gift and even happier about the aca‑
demic performance of the child.
This effect on A’s payoffs has an important implication for the efficacy of A’s
strategic moves. Consider the threat. If A’s payoff is actually increased by car‑
rying out the threatened action, then B reasons that A will carry out this action
even if B fulfills A’s demands. Therefore, B has no incentive to comply with A’s
wishes, and the threat is ineffective. For example, if the parent is a sadist who
enjoys seeing the child go without dessert, then the child thinks, “I am not going
to get dessert anyway, so why eat the vegetables?”
Therefore, an essential aspect of a threat is that it should be costly for the
threatener to carry out the threatened action. In the dinner game, the parent
must prefer to give the child dessert. Threats in the true strategic sense have the
innate property of imposing some cost on the threatener, too; they are threats of
mutual harm.
In technical terms, a threat fixes your strategy (response rule) in the subse‑
quent game. A strategy must specify what you will do in each eventuality along
the game tree. Thus, “no dessert if you don’t finish your vegetables” is an incom‑
plete specification of the strategy; it should be supplemented by “and dessert if
you do.” Threats generally don’t specify this latter part. Why not? Because the
second part of the strategy is automatically understood; it is implicit. And for
the threat to work, this second part of the strategy—the implied promise in this
case—has to be automatically credible, too.
Thus, the threat “no dessert if you don’t finish your vegetables” carries with
it an implicit promise of “dessert if you do finish your vegetables.” This promise
also should be credible if the threat is to have the desired effect. In our exam‑
ple, the credibility of the implicit promise is automatic when the parent prefers
to see the child get and enjoy his dessert. In other words, the implicit promise
is automatically credible precisely when the threatened action is costly for the
parent to carry out.
To put it yet another way, a threat carries with it the stipulation that you will
do something if your wishes are not met that, if those circumstances actually
arise, you will regret having to do. Then why make this stipulation at the first
stage? Why tie your own hands in this way when it might seem that leaving one’s
options open would always be preferable? Because in the realm of game theory,
having more options is not always preferable. In regard to a threat, your lack of
freedom in the second stage of the game has strategic value. It changes other
players’ expectations about your future responses, and you can use this change
in expectations to your advantage.
A similar effect arises with a promise. If the child knows that the parent en‑
joys giving him gifts, he may expect to get the racing bike anyway on some occa‑
sion in the near future—for example, an upcoming birthday. Then the promise
of the bike has little effect on the child’s incentive to study hard. To have the
intended strategic effect, the promised reward must be so costly to provide that
the other player would not expect you to hand over that reward anyway. (This
is a useful lesson in strategy that you can point out to your parents: the rewards
that they promise must be larger and more costly than what they would give you
just for the pleasure of seeing you happy.)
The same is true of unconditional strategic moves (commitments), too. In
bargaining, for example, others know that, when you have the freedom to act,
you also have the freedom to capitulate; so a “no concessions” commitment can
secure you a better deal. If you hold out for 60% of the pie and the other party
offers you 55%, you may be tempted to take it. But if you can credibly assert in
advance that you will not take less than 60%, then this temptation does not arise
and you can do better than you otherwise would.
Thus, it is in the very nature of strategic moves that after the fact—that is,
when the stage 2 game actually requires it—you do not want to carry out the ac‑
tion that you had stipulated you would take. This is true for all types of strategic
moves, and it is what makes credibility so problematic. You have to do something
at the first stage to create credibility—something that convincingly tells the other
player that you will not give in to the temptation to deviate from the stipulated ac‑
tion when the time comes—in order for your strategic move to work. That is why
giving up your own freedom to act can be strategically beneficial. Alternatively,
credibility can be achieved by changing your own payoffs in the second-stage
game in such a way that it becomes truly optimal for you to act as you declare.
Thus, there are two general ways of making your strategic moves credible:
(1) remove from your own set of future choices the other moves that may tempt
you or (2) reduce your own payoffs from those temptation moves so that the
stipulated move becomes the actual best one. In the sections that follow, we
first elucidate the mechanics of strategic moves, assuming them to be credible.
We make some comments about credibility as we go along but postpone our
general analysis of credibility until the last section of the chapter.
3 COMMITMENTS
We studied the game of chicken in Chapter 4 and found two pure-strategy Nash
equilibria. Each player prefers the equilibrium in which he goes straight and the
other person swerves.1 We saw in Chapter 6 that, if the game were to have se‑
quential rather than simultaneous moves, the first mover would choose Straight,
leaving the second to make the best of the situation by settling for Swerve rather
than causing a crash. Now we can consider the same matter from another per‑
spective. Even if the game itself has simultaneous moves, if one player can make
a strategic move—create a first stage in which he makes a credible declaration
about his action in the chicken game itself, which is to be played at the second
stage—then he can get the same advantage afforded a first mover by making a
commitment to act tough (choose Straight).
Although the point is simple, we outline the formal analysis to develop
your understanding and skill, which will be useful for later, more complex ex‑
amples. Remember our two players, James and Dean. Suppose James is the one
who has the opportunity to make a strategic move. Figure 9.1 shows the tree for
the two‑stage game. At the first stage, James has to decide whether to make a
commitment. Along the upper branch emerging from the first node, he does
not make the commitment. Then at the second stage the simultaneous-move
game is played, and its payoff table is the familiar one shown in Figure 4.13 and
Figure 6.6. This second-stage game has multiple equilibria, and James gets his
best payoff in only one of them. Along the lower branch, James makes the com‑
mitment. Here, we interpret this commitment to mean giving up his freedom to
act in such a way that Straight is the only action available to James at this stage.
Therefore, the second-stage game table has only one row for James, correspond‑
ing to his declared choice of Straight. In this table, Dean’s best action is Swerve;
so the equilibrium outcome gives James his best payoff. Therefore, at the first
stage, James finds it optimal to make the commitment; this strategic move en‑
sures his best payoff, while not committing leaves the matter uncertain.
1. We saw in Chapter 7 and will see again in Chapter 12 that the game has a third equilibrium, in mixed strategies, in which both players do quite poorly.
JAMES’s first-stage choice:

Uncommitted (stage 2 is the simultaneous chicken game):

                           DEAN
                     Swerve     Straight
  JAMES   Swerve      0, 0       –1, 1
          Straight    1, –1      –2, –2

Committed (Straight is James’s only remaining row):

                           DEAN
                     Swerve     Straight
  JAMES   Straight    1, –1      –2, –2

FIGURE 9.1 Chicken: Commitment by Restricting Freedom to Act
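For readers who want to verify the claim mechanically, here is a minimal Python sketch (the encoding is ours): it enumerates the pure-strategy Nash equilibria of the uncommitted stage 2 game, then computes Dean’s best response once James’s Swerve row has been removed.

    # Chicken payoffs from Figure 9.1: (James, Dean), CHICKEN[james][dean].
    CHICKEN = {
        "swerve":   {"swerve": (0, 0),  "straight": (-1, 1)},
        "straight": {"swerve": (1, -1), "straight": (-2, -2)},
    }

    def pure_nash(g):
        """Pure-strategy Nash equilibria of a 2x2 game in nested dicts."""
        return [(r, c, g[r][c]) for r in g for c in g[r]
                if all(g[r][c][0] >= g[rr][c][0] for rr in g)       # row player
                and all(g[r][c][1] >= g[r][cc][1] for cc in g[r])]  # column player

    print(pure_nash(CHICKEN))
    # Two equilibria: ('swerve', 'straight', (-1, 1)) and
    # ('straight', 'swerve', (1, -1)); uncommitted, James may end up with -1.

    committed_row = CHICKEN["straight"]   # commitment leaves only Straight
    dean = max(committed_row, key=lambda c: committed_row[c][1])
    print(dean, committed_row[dean])      # swerve (1, -1): James's best payoff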
How can James make this commitment credibly? Like any first move, the com‑
mitment move must be (1) irreversible and (2) visible to the other player. People
have suggested some extreme and amusing ideas. James can disconnect the steer‑
ing wheel of the car and throw it out of the window so that Dean can see that James
can no longer Swerve. (James could just tie the wheel so that it could no longer
be turned, but it would be more difficult to demonstrate to Dean that the wheel
was truly tied and that the knot was not a trick one that could be undone quickly.)
These devices simply remove the Swerve option from the set of choices available
to James in the stage 2 game, leaving Straight as the only thing he can do.
More plausibly, if such games are played every weekend, James can acquire
a general reputation for toughness that acts as a guarantee of his action on any
one day. In other words, James can alter his own payoff from swerving by sub‑
tracting an amount that represents the loss of reputation. If this amount is large
enough—say, 3—then the second-stage game when James has made the com‑
mitment has a different payoff table. The complete tree for this version of the
game is shown in Figure 9.2.
Now, in the second stage with commitment, Straight has become truly op‑
timal for James; in fact, it is his dominant strategy in that stage. Dean’s optimal
strategy is then Swerve. Looking ahead to this outcome at stage 1, James sees
that he gets 1 by making the commitment (changing his own stage 2 payoffs),
while without the commitment he cannot be sure of 1 and may do much worse.
Thus, a rollback analysis shows that James should make the commitment.
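The reputation variant checks out the same way. This short, self-contained snippet (our encoding, with the reputation loss of 3 taken from the text) confirms that Straight is now dominant for the committed James and that Dean’s best response is Swerve.

    # Committed stage 2 game of Figure 9.2: James's Swerve payoffs cut by 3.
    COMMITTED = {
        "swerve":   {"swerve": (-3, 0), "straight": (-4, 1)},
        "straight": {"swerve": (1, -1), "straight": (-2, -2)},
    }
    for dean in ("swerve", "straight"):   # Straight now dominates for James
        assert COMMITTED["straight"][dean][0] > COMMITTED["swerve"][dean][0]
    best = max(COMMITTED["straight"], key=lambda d: COMMITTED["straight"][d][1])
    print(best, COMMITTED["straight"][best])   # swerve (1, -1)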
Both (or all) can play the game of commitment, so success may depend
both on the speed with which you can seize the first move and on the credibility
with which you can make that move. If there are lags in observation, the two
may even make incompatible simultaneous commitments: each disconnects his
steering wheel and tosses it out of the window just as he sees the other’s wheel
come flying out, and then the crash is unavoidable.
JAMES’s first-stage choice:

Uncommitted (stage 2 is the simultaneous chicken game):

                           DEAN
                     Swerve     Straight
  JAMES   Swerve      0, 0       –1, 1
          Straight    1, –1      –2, –2

Committed (James’s payoffs from Swerve reduced by 3):

                           DEAN
                     Swerve     Straight
  JAMES   Swerve     –3, 0       –4, 1
          Straight    1, –1      –2, –2

FIGURE 9.2 Chicken: Commitment by Changing Payoffs
Even if one of the players has the advantage in making a commitment, the
other player can defeat the first player’s attempt to do so. The second player
could demonstrably remove his ability to “see” the other’s commitment, for ex‑
ample, by cutting off communication.
Games of chicken may be a 1950s anachronism, but our second example is
perennial and familiar. In a class, the teacher’s deadline enforcement policy can
be Weak or Tough, and the students’ work can be Punctual or Late. Figure 9.3
shows this game in the strategic form. The teacher does not like being tough; for
him the best outcome (a payoff of 4) is when students are punctual even when
he is weak; the worst (1) is when he is tough but students are still late. Of the
two intermediate strategies, he recognizes the importance of punctuality and
rates (Tough, Punctual) better than (Weak, Late). The students most prefer the
outcome (Weak, Late), where they can party all weekend without suffering any
penalty for the late assignment. (Tough, Late) is the worst for them, just as it is
for the teacher. Between the intermediate ones, they prefer (Weak, Punctual) to
(Tough, Punctual) because they have higher self-esteem if they can think that
                        STUDENT
                   Punctual     Late
  TEACHER  Weak      4, 3       2, 4
           Tough     3, 2       1, 1

FIGURE 9.3 Payoff Table for Class Deadline Game
they acted punctually of their own volition rather than because of the threat of a
penalty.2
If this game is played as a simultaneous-move game or if the teacher
moves second, Weak is dominant for the teacher, and then the student chooses
Late. The equilibrium outcome is (Weak, Late), and the payoffs are (2, 4). But
the teacher can achieve a better outcome by committing at the outset to the
policy of Tough. We do not draw a tree as we did in Figures 9.1 and 9.2. The
tree would be very similar to that for the preceding chicken case, and so we
leave it for you to draw. Without the commitment, the second-stage game is as
before, and the teacher gets a 2. When the teacher is committed to Tough, the
students find it better to respond with Punctual at the second stage, and the
teacher gets a 3.
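Both calculations are easy to confirm in code; the following Python sketch (our encoding of Figure 9.3) reproduces them.

    # Class deadline game, Figure 9.3: (teacher, student), GAME[teacher][student].
    GAME = {
        "weak":  {"punctual": (4, 3), "late": (2, 4)},
        "tough": {"punctual": (3, 2), "late": (1, 1)},
    }

    # In simultaneous play, Weak dominates Tough for the teacher ...
    assert all(GAME["weak"][s][0] > GAME["tough"][s][0] for s in ("punctual", "late"))
    # ... and the students' best response to Weak is Late: payoffs (2, 4).
    s = max(GAME["weak"], key=lambda s: GAME["weak"][s][1])
    print("no commitment:", s, GAME["weak"][s])

    # With a commitment, the students best-respond to whichever row is fixed.
    for t in GAME:
        s = max(GAME[t], key=lambda s: GAME[t][s][1])
        print("commit to", t, "->", s, GAME[t][s])
    # Committing to the dominated row Tough earns the teacher 3 rather than 2.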
The teacher commits to a move different from what he would do in simul‑
taneous play or, indeed, his best second move if the students moved first. This
is where strategic thinking enters. The teacher has nothing to gain by declaring
that he will have a Weak enforcement regime; the students expect that anyway
in the absence of any declaration. To gain advantage by making a strategic move,
he must commit not to follow what would be his equilibrium strategy in the
simultaneous-move game. This strategic move changes the students’ expecta‑
tions and therefore their action. Once they believe the teacher is really commit‑
ted to tough discipline, they will choose to turn in their assignments punctually.
If they tested this out by being late, the teacher would like to forgive them,
maybe with an excuse to himself, such as “just this once.” The existence of this
temptation to shift away from your commitment is what makes its credibility
problematic.
Even more dramatic, in this instance the teacher benefits by making a stra‑
tegic move that commits him to a dominated strategy. He commits to choosing
Tough, which is dominated by Weak. If you think it paradoxical that one can gain
by choosing a dominated strategy, you are extending the concept of dominance
beyond the proper scope of its validity. Dominance entails either of two calcula‑
tions: (1) After the other player does something, how do I respond, and is some
choice best (or worst), given all possibilities? (2) If the other player is simultane‑
ously doing action X, what is best (or worst) for me, and is this the same for all
the X actions that the other could be choosing? Neither is relevant when you are
moving first. Instead, you must look ahead to how the other will respond. There‑
fore, the teacher does not compare his payoffs in vertically adjacent cells of the
table (taking the possible actions of the students one at a time). Instead, he
2. You may not regard these specific rankings of outcomes as applicable either to you or to your own teachers. We ask you to accept them for this example, whose main purpose is to convey some general ideas about commitment in a simple way. The same disclaimer applies to all the examples that follow.
calculates how the students will react to each of his moves. If he is committed to
Tough, they will be Punctual, but if he is committed to Weak (or uncommitted),
they will be Late, so the only pertinent comparison is that of the top-right cell
with the bottom left, of which the teacher prefers the latter.
To be credible, the teacher’s commitment must be everything a first
move has to be. First, it must be made before the other side makes its move.
The teacher must establish the ground rules of deadline enforcement before
the assignment is due. Next, it must be observable—the students must know the
rules by which they must abide. Finally, and perhaps most important, it must be
irreversible—the students must know that the teacher cannot, or at any rate
will not, change his mind and forgive them. A teacher who leaves loopholes and
provisions for incompletely specified emergencies is merely inviting imagina‑
tive excuses accompanied by fulsome apologies and assertions that “it won’t
happen again.”
The teacher might achieve credibility by hiding behind general university
regulations; this simply removes the Weak option from his set of available
choices at stage 2. Or, as is true in the chicken game, he might establish a repu‑
tation for toughness, changing his own payoffs from Weak by creating a suffi‑
ciently high cost of loss of reputation.
4 THREATS AND PROMISES
We emphasize that threats and promises are response rules: your actual future
action is conditioned on what the other players do in the meantime, but your
freedom of future action is constrained to following the stated rule. Once again,
the aim is to alter the other players’ expectations and therefore their actions in
a way favorable to you. Tying yourself to a rule that you would not want to fol‑
low if you were completely free to act at the later time is an essential part of this
process. Thus, the initial declaration of intention must be credible. Once again,
we will elucidate some principles for achieving credibility of these moves, but
we remind you that their actual implementation remains largely an art.
Remember the taxonomy given in Section 1. A threat is a response rule
that leads to a bad outcome for the other players if they act contrary to your
interests. A promise is a response rule by which you offer to create a good out‑
come for the other players if they act in a way that promotes your own interests.
Each of these responses may aim either to stop the other players from doing
something that they would otherwise do (deterrence) or to induce them to do
something that they would otherwise not do (compellence). We consider these
features in turn.
A. Example of a Threat: U.S.–Japan Trade Relations
Our example comes from a hardy perennial of U.S. international economic policy—namely, trade friction with Japan. Each country has the choice of keep‑
ing its own markets open or closed to the other’s goods. They have somewhat
different preferences regarding the outcomes.
Figure 9.4 shows the payoff table for the trade game. For the United States,
the best outcome (a payoff of 4) comes when both markets are open; this is
partly because of its overall commitment to the market system and free trade
and partly because of the benefit of trade with Japan itself—U.S. consumers get
high-quality cars and consumer electronics products, and U.S. producers can
export their agricultural and high-tech products. Similarly, its worst outcome
(payoff 1) occurs when both markets are closed. Of the two outcomes when only
one market is open, the United States would prefer its own market to be open,
because the Japanese market is smaller, and loss of access to it is less important
than the loss of access to Hondas and video games.
As for Japan, for the purpose of this example we accept the protectionist,
producer-oriented picture of Japan, Inc. Its best outcome is when the U.S. mar‑
ket is open and its own is closed; its worst is when matters are the other way
around. Of the other two outcomes, it prefers that both markets be open, be‑
cause its producers then have access to the much larger U.S. market.3
Both sides have dominant strategies. No matter how the game is played—
simultaneously or sequentially with either move order—the equilibrium
outcome is (Open, Closed), and the payoffs are (3, 4). This outcome also fits well
the common American impression of how the actual trade policies of the two
countries work.
Japan is already getting its best payoff in this equilibrium and so has no
need to try any strategic moves. The United States, however, can try to get a
4 instead of a 3. But in this case, an ordinary unconditional commitment will
                            JAPAN
                       Open      Closed
  UNITED    Open       4, 3       3, 4
  STATES    Closed     2, 1       1, 2

FIGURE 9.4 Payoff Table for U.S.–Japan Trade Game
3. Again, we ask you to accept this payoff structure as a vehicle for conveying the ideas. You can experiment with the payoff tables to see what difference that would make to the role and effectiveness of the strategic moves.
UNITED STATES’s first-stage choice:

No threat (stage 2 is the simultaneous trade game):

                            JAPAN
                       Open      Closed
  UNITED    Open       4, 3       3, 4
  STATES    Closed     2, 1       1, 2

Threat deployed (only Japan chooses; payoffs are (US, J)):

  JAPAN   Open    (4, 3)
          Closed  (1, 2)

FIGURE 9.5 Tree for the U.S.–Japan Trade Game with Threat
not work. Japan’s best response, no matter what commitment the United States
makes, is to keep its market closed. Then the United States does better for itself
by committing to keep its own market open, which is the equilibrium without
any strategic moves anyway.
But suppose the United States can choose the following conditional re‑
sponse rule: “We will close our market if you close yours.” The situation then be‑
comes the two-stage game shown in Figure 9.5. If the United States does not use
the threat, the second stage is as before and leads to the equilibrium in which
the U.S. market is open and it gets a 3, whereas the Japanese market is closed
and it gets a 4. If the United States does use the threat, then at the second stage
only Japan has freedom of choice; given what Japan does, the United States then
merely does what its response rule dictates. Therefore, along this branch of the
tree, we show only Japan as an active player and write down the payoffs to the
two parties: If Japan keeps its market closed, the United States closes its own,
and the United States gets a 1 and Japan gets a 2. If Japan keeps its market open,
then the United States threat has worked, it is happy to keep its own market
open, and it gets a 4, while Japan gets a 3. Of these two possibilities, the second
is better for Japan.
Now we can use the familiar rollback reasoning. Knowing how the second
stage will work in all eventualities, it is better for the United States to deploy its
threat at the first stage. This threat will result in an open market in Japan, and
the United States will get its best outcome.
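The rollback is equally easy to verify mechanically. In this Python sketch (our encoding), the U.S. threat is represented as a response rule mapping Japan’s move to the U.S. move, exactly as response rules were defined earlier in the chapter.

    # Trade game of Figure 9.4: (US, Japan), TRADE[us][japan].
    TRADE = {
        "open":   {"open": (4, 3), "closed": (3, 4)},
        "closed": {"open": (2, 1), "closed": (1, 2)},
    }

    # No threat: each side plays its dominant strategy, giving (Open, Closed).
    print("no threat:", TRADE["open"]["closed"])        # (3, 4)

    # Threat deployed: "we will close ours if you close yours", together with
    # the implicit promise "and keep ours open if you keep yours open".
    rule = {"open": "open", "closed": "closed"}         # Japan's move -> US's move
    japan = max(rule, key=lambda j: TRADE[rule[j]][j][1])
    print("with threat:", japan, TRADE[rule[japan]][japan])   # open (4, 3)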
Having described the mechanics of the threat, we now point out some of its
important features:
1. When the United States deploys its threat credibly, Japan doesn’t follow
its dominant strategy Closed. Again, the idea of dominance is relevant
only in the context of simultaneous moves or when Japan moves second.
Here, Japan knows that the United States will take actions that depart
from its dominant strategy. In the payoff table, Japan is looking at a
choice between just two cells, the top left and the bottom right, and of
those two, it prefers the latter.
2. Credibility of the threat is problematic because, if Japan puts it to the
test by keeping its market closed, the United States faces the temptation
to refrain from carrying out the threat. In fact, if the threatened action
were the best U.S. response after the fact, then there would be no need to
make the threat in advance (but the United States might issue a warning
just to make sure that the Japanese understand the situation). The stra‑
tegic move has a special role exactly because it locks a player into doing
something other than what it would have wanted to do after the fact. As
explained earlier, a threat in the true strategic sense is necessarily costly
for the threatener to carry out; the threatened action would inflict mutual harm.
3. The conditional rule “We will close our market if you close yours” does
not completely specify the U.S. strategy. To be complete, it needs an ad‑
ditional clause indicating what the United States will do in response to
an open Japanese market: “and we will keep our market open if you keep
yours open.” This additional clause, the implicit promise, is really part of
the threat, but it does not need to be stated explicitly, because it is automat‑
ically credible. Given the payoffs of the second-stage game, it is in the best
interests of the United States to keep its market open if Japan keeps its
market open. If that were not the case, if the United States would respond
by keeping its market closed even when Japan kept its own market open,
then the implicit promise would have to be made explicit and somehow
made credible. Otherwise, the U.S. threat would become tantamount to
the unconditional commitment “We will keep our market closed,” and
that would not elicit the desired response from Japan.
4. The threat, when credibly deployed, results in a change in Japan’s ac‑
tion. We can regard this as deterrence or compellence, depending on the
status quo. If the Japanese market is initially open, and the Japanese are
considering a switch to protectionism, then the threat deters them from
that action. But if the Japanese market is initially closed, then the threat
compels them to open it. Thus, whether a strategic move is deterrent or
compellent depends on the status quo. The distinction may seem to be a
matter of semantics, but in practice the credibility of a move and the way
that it works are importantly affected by this distinction. We return to this
matter later in the chapter.
5. Here are a few ways in which the United States can make its threat credi‑
ble. First, it can enact a law that mandates the threatened action under the
right circumstances. This removes the temptation action from the set of
available choices at stage 2. Some reciprocity provisions in the World Trade
Organization agreements have this effect, but the procedures are very
slow and uncertain. Second, it can delegate fulfillment to an agency such
as the U.S. Commerce Department that is captured by U.S. producers who
would like to keep our markets closed and so reduce the competitive pres‑
sure on themselves. This changes the U.S. payoffs at stage 2—replacing
the true U.S. payoffs by those of the Commerce Department—with the
result that the threatened action becomes truly optimal. (The danger is
that the Commerce Department will then retain a protectionist stance
even if Japan opens its market; gaining credibility for the threat may lose
credibility for the implied promise.)
6. If a threat works, it doesn’t have to be carried out. So its cost to you is im‑
material. In practice, the danger that you may have miscalculated or the
risk that the threatened action will take place by error even if the other
player complies is a strong reason to refrain from using threats more se‑
vere than necessary. To make the point starkly, the United States could
threaten to pull out of defensive alliances with Japan if it didn’t buy our
rice and semiconductors, but that threat is “too big” and too risky for
the United States ever to carry out; therefore it is not credible. If the
only available threat appears “too big,” then a player can reduce its size
by making its fulfillment a matter of chance. Instead of saying, “If you
don’t open your markets, we will refuse to defend you in the future,” the
United States can say to Japan, “If you don’t open your markets, the rela‑
tions between our countries will deteriorate to the point where Congress
may refuse to allow us to come to your assistance if you are ever attacked,
even though we do have an alliance.” In fact, the United States can de‑
liberately foster sentiments that raise the probability that Congress will
do just that, so the Japanese will feel the danger more vividly. A threat of
this kind, which creates a risk but not a certainty of the bad outcome, is
called brinkmanship. It is an extremely delicate and even dangerous variant of the strategic move; a small numerical sketch follows this list. We will study brinkmanship in greater detail in Chapter 14.
7. Japan gets a worse outcome when the United States deploys its threat
than it would without this threat, so it would like to take strategic actions
that defeat or disable U.S. attempts to use the threat. For example, sup‑
pose its market is currently closed, and the United States is attempting
compellence. The Japanese can accede in principle but stall in practice,
pleading unavoidable delays for assembling the necessary political con‑
sensus to legislate the market opening, then delays for writing the neces‑
sary administrative regulations to implement the legislation, and so on.
Because the United States does not want to go ahead with its threatened
action, at each point it has the temptation to accept the delay. Or Japan
can claim that its domestic politics makes it difficult to open all markets
fully; will the United States accept the outcome if Japan keeps just a few
of its industries protected? It gradually expands this list, and at any point
the extra small step is not enough cause for the United States to unleash
a trade war. This device of defeating a compellent threat by small steps,
or “slice by slice,” is called salami tactics.
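Here is the numerical sketch promised in point 6. Every number is hypothetical, chosen only to illustrate how making fulfillment a matter of chance scales a threat down: suppose complying (opening its market) costs Japan 1 payoff unit, while the “too big” breakdown of the alliance would cost it 10.

    # Hypothetical numbers for a probabilistic ("brinkmanship") threat.
    comply_cost = 1.0      # Japan's payoff loss from opening its market (4 -> 3)
    disaster_cost = 10.0   # assumed loss to Japan if the alliance collapses

    # A perceived risk p of disaster deters whenever p * disaster_cost > comply_cost.
    threshold = comply_cost / disaster_cost
    print(f"deterrence needs a perceived risk above {threshold:.0%}")   # 10%

A threat far too costly ever to carry out deliberately can therefore still deter, provided the threatener can credibly create even a modest risk of it; Chapter 14 analyzes this trade-off in detail.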
B. Example of a Promise: The Restaurant Pricing Game
We now illustrate a promise by using the restaurant pricing game of Chapter 5.
We saw in Chapter 5 that the game is a prisoners’ dilemma, and we simplify it
here by supposing that only two choices of price are available: the jointly best
price of $26 or the Nash equilibrium price of $20. The profits for each restaurant
in this version of the game can be calculated by using the functions in Section
1 of Chapter 5; the results are shown in Figure 9.6. Without any strategic moves,
the game has the usual equilibrium in dominant strategies in which both stores
charge the low price of 20, and both get lower profits than they would if they
both charged the high price of 26.
If either side can make the credible promise “I will charge a high price if you
do,” the cooperative outcome is achieved. For example, if Xavier’s makes the
promise, then Yvonne’s knows that its choice of 26 will be reciprocated, leading
to the payoff shown in the lower-right cell of the table, and that its choice of 20
will bring forth Xavier’s usual action—namely, 20—leading to the upper-left cell.
Between the two, Yvonne’s prefers the first and therefore chooses the high price.
The analysis can be done more properly by drawing a tree for the two-stage
game in which Xavier’s has the choice of making or not making the promise at
the first stage. We omit the tree, partly so that you can improve your understand‑
ing of the process by constructing it yourself and partly to show how such de‑
tailed analysis becomes unnecessary as one becomes familiar with the ideas.
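As a partial substitute for the omitted tree, here is a Python sketch of the two-stage game (the encoding of Figure 9.6 is ours), with Yvonne’s moving first at stage 2 and Xavier’s bound by its promised response rule.

    # Restaurant dilemma, Figure 9.6 ($100s per month): (Xavier's, Yvonne's).
    PRICES = {
        20: {20: (288, 288), 26: (360, 216)},
        26: {20: (216, 360), 26: (324, 324)},
    }

    # Without a promise, 20 is dominant (shown here for Xavier's; the game is
    # symmetric), so the equilibrium is (20, 20) with profits (288, 288).
    assert all(PRICES[20][y][0] > PRICES[26][y][0] for y in (20, 26))

    # Xavier's credible promise: reciprocate 26, otherwise play the usual 20.
    rule = {20: 20, 26: 26}                      # Yvonne's price -> Xavier's price
    yvonne = max(rule, key=lambda y: PRICES[rule[y]][y][1])
    print(yvonne, PRICES[rule[yvonne]][yvonne])  # 26 (324, 324)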
The credibility of Xavier’s promise is open to doubt. To respond to what
Yvonne’s does, Xavier’s must arrange to move second in the second stage of the
game; correspondingly, Yvonne’s must move first in stage 2. Remember that a
                               YVONNE’S BISTRO
                           20 (low)      26 (high)
  XAVIER’S    20 (low)     288, 288      360, 216
  TAPAS       26 (high)    216, 360      324, 324

FIGURE 9.6 Payoff Table for Restaurant Prisoners’ Dilemma ($100s per month)
first move is an irreversible and observable action. Therefore, if Yvonne’s moves
first and prices high, it leaves itself vulnerable to Xavier’s cheating, and Xavier’s
is very tempted to renege on its promise to price high when it sees Yvonne’s in
this vulnerable position. Xavier’s must somehow convince Yvonne’s that it will
not give in to the temptation to charge a low price when Yvonne’s charges a high
price.
How can it do so? Perhaps Xavier’s owner can leave the pricing deci‑
sion in the hands of a local manager, with clear written instructions to recip‑
rocate with the high price if Yvonne’s charges the high price. Xavier’s owner
can invite Yvonne’s to inspect these instructions, after which he leaves on a
solo round‑the‑world sailing trip so that he cannot rescind them. (Even then,
Yvonne’s management may be doubtful—Xavier might secretly carry a tele‑
phone or a laptop computer onboard.) This scenario is tantamount to removing
the cheating action from the choices available to Xavier’s at stage 2.
Or Xavier’s restaurant can develop a reputation for keeping its promises,
in business and in the community more generally. In a repeated relationship,
the promise may work because reneging on the promise once may cause future
cooperation to collapse. In essence, an ongoing relationship means splitting the
game into smaller segments, in each of which the benefit from reneging is too
small to justify the costs. In each such game, then, the payoff from cheating is
altered by the cost of collapse of future cooperation.4
We saw earlier that every threat has an implicit attached promise. Similarly,
every promise has an implicit attached threat. In this case, the threat is “I will
charge the low price if you do.” It does not have to be stated explicitly, because
it is automatically credible—it describes Xavier’s best response to Yvonne’s low
price.
There is also an important difference between a threat and a promise. If a
threat is successful, it doesn’t have to be carried out and is then costless to the
threatener. Therefore, a threat can be bigger than what is needed to make it ef‑
fective (although making it too big may be too risky, even to the point of losing
its credibility as suggested earlier). If a promise is successful in altering the oth‑
er’s action in the desired direction, then the promisor has to deliver what he had
promised, and so it is costly. In the preceding example, the cost is simply giving
up the opportunity to cheat and get the highest payoff; in other instances where
the promiser offers an actual gift or an inducement to the other, the cost may
be more tangible. In either case, the player making the promise has a natural
incentive to keep its size small—just big enough to be effective.
4. In Chapter 10, we will investigate in great detail the importance of repeated or ongoing relationships in attempts to reach the cooperative outcome in a prisoners’ dilemma.
C. Example Combining Threat and Promise: Joint U.S.–China Political Action
When we considered threats and promises one at a time, the explicit statement
of a threat included an implicit clause of a promise that was automatically cred‑
ible, and vice versa. There can, however, be situations in which the credibility
of both aspects is open to question; then the strategic move has to make both
aspects explicit and make them both credible.
Our example of an explicit-threat-and-promise combination comes from
a context in which multiple nations must work together toward some common
goal in dealing with a dangerous situation in a neighboring country. Specifically,
we consider an example of the United States and China contemplating whether
to take action to compel North Korea to give up its nuclear weapons programs.
We show in Figure 9.7 the payoff table for the United States and China when
each must choose between action and inaction.
Each country would like the other to take on the whole burden of taking ac‑
tion against the North Koreans; so the top-right cell has the best payoff for China
(4), and the bottom-left cell is best for the United States. The worst situation
for the United States is where no action is taken, because it finds the increased
threat of nuclear war in that case to be unacceptable. For China, however, the
worst outcome arises when it takes on the whole burden of action, because the
costs of action are so high. Both regard a joint involvement as the second best (a
payoff of 3). The United States assigns a payoff of 2 to the situation in which it is
the only one to act. And for China, a payoff of 2 is assigned to the case in which
no action is taken.
Without any strategic moves, the intervention game is dominance solvable.
Inaction is the dominant strategy for China, and then Action is the best choice
for the United States. The equilibrium outcome is the top-right cell, with payoffs
of 2 for the United States and 4 for China. Because China gets its best outcome,
it has no reason to try any strategic moves. But the United States can try to do
better than a 2.
What strategic move will work to improve the equilibrium payoff for the
United States? An unconditional move (commitment) will not work, because
China will respond with “Inaction” to either first move by the United States.
                              CHINA
                       Action      Inaction
  UNITED    Action      3, 3         2, 4
  STATES    Inaction    4, 1         1, 2

FIGURE 9.7 Payoff Table for U.S.–China Political Action Game
A threat alone (“We won’t take action unless you do”) does not work, because
the implied promise (“We will if you do”) is not credible—if China does act, the
United States would prefer to back off and leave everything to China, getting a
payoff of 4 instead of the 3 that would come from fulfilling the promise. A prom‑
ise alone won’t work: because China knows that the United States will intervene
if China does not, an American promise of “We will intervene if you do” becomes
tantamount to a simple commitment to intervene; then China can stay out and
get its best payoff of 4.
In this game, an explicit promise from the United States must carry the im‑
plied threat “We won’t take action if you don’t,” but that threat is not automati‑
cally credible. Similarly, America’s explicit threat must carry the implied promise
“We will act if you do,” but that is not automatically credible, either. Therefore,
the United States has to make both the threat and the promise explicit. It must
issue the combined threat-cum-promise “We will act if, and only if, you do.” It
needs to make both clauses credible. Usually such credibility has to be achieved
by means of a treaty that covers the whole relationship, not just with agreements
negotiated separately when each incident arises.
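The logic of the preceding paragraph can be checked mechanically. In this Python sketch (our encoding of Figure 9.7), China best-responds to whatever response rule it believes the United States will follow; which rules are actually credible is, of course, exactly what the text says must be established by treaty.

    # U.S.-China game, Figure 9.7: (US, China), GAME[us][china].
    GAME = {
        "action":   {"action": (3, 3), "inaction": (2, 4)},
        "inaction": {"action": (4, 1), "inaction": (1, 2)},
    }

    def outcome(rule):
        """China best-responds to a believed US rule (China's move -> US's move)."""
        china = max(rule, key=lambda c: GAME[rule[c]][c][1])
        return china, GAME[rule[china]][china]

    moves = {
        "commit to Action":   {"action": "action",   "inaction": "action"},
        "commit to Inaction": {"action": "inaction", "inaction": "inaction"},
        "act iff you act":    {"action": "action",   "inaction": "inaction"},
    }
    for name, rule in moves.items():
        print(name, "->", outcome(rule))
    # commit to Action   -> ('inaction', (2, 4))
    # commit to Inaction -> ('inaction', (1, 2))
    # act iff you act    -> ('action', (3, 3)): the US improves from 2 to 3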
5 SOME ADDITIONAL TOPICS
A. When Do Strategic Moves Help?
We have seen several examples in which a strategic move brings a better outcome
to one player or another, compared with the original game without such moves.
What can be said in general about the desirability of such moves?
An unconditional move—a commitment—need not always be advanta‑
geous to the player making it. In fact, if the original game gives the advantage
to the second mover, then it is a mistake to commit oneself to move in advance,
thereby effectively becoming the first mover.
The availability of a conditional move—threat or promise—can never be
an actual disadvantage. At the very worst, one can commit to a response rule
that would have been optimal after the fact. However, if such moves bring one
an actual gain, it must be because one is choosing a response rule that in some
eventualities specifies an action different from what one would find optimal at
that later time. Thus, whenever threats and promises bring a positive gain, they
do so precisely when (one might say precisely because) their credibility is inher‑
ently questionable and must be achieved by some specific credibility “device.”
We have mentioned some such devices in connection with each earlier example
and will later discuss the topic of achieving credibility in greater generality.
What about the desirability of being on the receiving end of a strategic move?
It is never desirable to let the other player threaten you. If a threat seems likely,
you can gain by looking for a different kind of advance action—one that makes
the threat less effective or less credible. We will consider some such actions
shortly. However, it is often desirable to let the other player make promises to
you. In fact, both players may benefit when one can make a credible promise, as
in the prisoners’ dilemma example of restaurant pricing earlier in this chapter, in
which a promise achieved the cooperative outcome. Thus, it may be in the play‑
ers’ mutual interest to facilitate the making of promises by one or both of them.
B. Deterrence versus Compellence
In principle, either a threat or a promise can achieve either deterrence or com‑
pellence. For example, a parent who wants a child to study hard (compellence)
can promise a reward (a new racing bike) for good performance in school or can
threaten a punishment (a strict curfew the following term) if the performance is
not sufficiently good. Similarly, a parent who wants the child to keep away from
bad company (deterrence) can try either a reward (promise) or a punishment
(threat). In practice, the two types of strategic moves work somewhat differently,
and that will affect the ultimate decision regarding which to use. Generally, de‑
terrence is better achieved by a threat and compellence by a promise. The rea‑
son is an underlying difference of timing and initiative.
A deterrent threat can be passive—you don’t need to do anything so long as
the other player doesn’t do what you are trying to deter. And it can be static—you
don’t have to impose any time limit. Thus, you can set a trip wire and then leave
things up to the other player. So the parent who wants the child to keep away
from bad company can say, “If I ever catch you with X again, I will impose a 7 p.m.
curfew on you for a whole year.” Then the parent can sit back to wait and watch;
only if the child acts contrary to the parent’s wishes does the parent have to act
on her threat. Trying to achieve the same deterrence by a promise would require
more complex monitoring and continual action: “At the end of each month in
which I know that you did not associate with X, I will give you $25.”
Compellence must have a deadline or it is pointless—the other side can
defeat your purpose by procrastinating or by eroding your threat in small steps
(salami tactics). This makes a compellent threat harder to implement than a
compellent promise. The parent who wants the child to study hard can sim‑
ply say, “Each term that you get an average of B or better, I will give you CDs or
games worth $500.” The child will then take the initiative in showing the parent
each time he has fulfilled the conditions. Trying to achieve the same thing by a
threat—“Each term that your average falls below B, I will take away one of your
computer games”—will require the parent to be much more vigilant and active.
The child will postpone bringing the grade report or will try to hide the games.
The concepts of reward and punishment are relative to those of some sta‑
tus quo. If the child has a perpetual right to the games, then taking one away
is a punishment; if the games are temporarily assigned to the child on a
term‑by‑term basis, then renewing the assignment for another term is a reward.
Therefore, you can change a threat into a promise or vice versa by changing
the status quo. You can use this change to your own advantage when making a
strategic move. If you want to achieve compellence, try to choose a status quo
such that what you do when the other player acts to comply with your demand
becomes a reward, and so you are using a compellent promise. To give a rather
dramatic example, a mugger can convert the threat “If you don’t give me your
wallet, I will take out my knife and cut your throat” into the promise “Here is a
knife at your throat; as soon as you give me your wallet I will take it away.” But if
you want to achieve deterrence, try to choose a status quo such that, if the other
player acts contrary to your wishes, what you do is a punishment, and so you are
using a deterrent threat.
6 ACQUIRING CREDIBILITY
We have emphasized the importance of credibility of strategic moves throughout,
and we accompanied each example with some brief remarks about how credibil‑
ity could be achieved in that particular context. Devices for achieving credibility
are indeed often context specific, and there is a lot of art to discovering or devel‑
oping such devices. Some general principles can help you organize your search.
We pointed out two broad approaches to credibility: (1) reducing your own
future freedom of action in such a way that you have no choice but to carry out
the action stipulated by your strategic move and (2) changing your own future
payoffs in such a way that it becomes optimal for you to do what you stipulate in
your strategic move. We now elaborate some practical methods for implement‑
ing each of these approaches.
A. Reducing Your Freedom of Action
i. automatic fulfillment Suppose at stage 1 you relinquish your choice at stage
2 and hand it over to a mechanical device or similar procedure or mechanism
that is programmed to carry out your committed, threatened, or promised action
under the appropriate circumstances. You demonstrate to the other player that
you have done so. Then he will be convinced that you have no freedom to change
your mind, and your strategic move will be credible. The doomsday device, a
nuclear explosive device that would detonate and contaminate the whole world’s
atmosphere if the enemy launched a nuclear attack, is the best-known example,
popularized by the early 1960s movies Fail Safe and Dr. Strangelove. Luckily, it
remained in the realm of fiction. But automatic procedures that retaliate with
import tariffs if another country tries to subsidize its exports to your country
(countervailing duties) are quite common in the arena of trade policy.
ii. delegation A fulfillment device does not even have to be mechanical. You could
delegate the power to act to another person or to an organization that is required
to follow certain preset rules or procedures. In fact, that is how the countervailing
duties work. They are set by two agencies of the U.S. government—the Commerce
Department and the International Trade Commission—whose operating proce‑
dures are laid down in the general trade laws of the country.
An agent should not have his own objectives that defeat the purpose of his
strategic move. For example, if one player delegates to an agent the task of in‑
flicting threatened punishment and the agent is a sadist who enjoys inflicting
punishment, then he may act even when there is no reason to act—that is, even
when the second player has complied. If the second player suspects this, then
the threat loses its effectiveness, because the punishment becomes a case of
“damned if you do and damned if you don’t.”
Delegation devices are not complete guarantees of credibility. Even the
doomsday device may fail to be credible if the other side suspects that you con‑
trol an override button to prevent the risk of a catastrophe. And delegation and
mandates can always be altered; in fact, the U.S. government has often set aside
the stipulated countervailing duties and reached other forms of agreements
with other countries so as to prevent costly trade wars.
iii. burning bridges Many invaders, from Xenophon in ancient Greece to William
the Conqueror in England to Cortés in Mexico, are supposed to have deliberately
cut off their own army’s avenue of retreat to ensure that it will fight hard. Some
of them literally burned bridges behind them, while others burned ships, but
the device has become a cliché. Its most recent users in military contexts may
have been the Japanese kamikaze pilots in World War II, who took only enough
fuel to reach the U.S. naval ships into which they were to ram their airplanes.
The principle even appears in the earliest known treatise on war, in a commen‑
tary attributed to Prince Fu Ch’ai: “Wild beasts, when they are at bay, fight des‑
perately. How much more is this true of men! If they know there is no alternative
they will fight to the death.”5
Related devices are used in other high-stakes games. Although the Euro‑
pean Monetary Union could have retained separate currencies and merely fixed
the exchange rates among them, a common currency was adopted precisely to
make the process irreversible and thereby give the member countries a much
5. Sun Tzu, The Art of War, trans. Samuel B. Griffith (Oxford: Oxford University Press, 1963), p. 110.
greater incentive to make the union a success. (In fact, it is the extent of the nec‑
essary commitment that has kept some nations, Great Britain in particular, from
agreeing to be part of the European Monetary Union.) It is not totally impos‑
sible to abandon a common currency and go back to separate national ones; it
is just inordinately costly. If things get really bad inside the Union, one or more
countries may yet choose to get out. As with automatic devices, the credibility of
burning bridges is not an all-or-nothing matter, but one of degree.
iv. cutting off communication If you send the other player a message demonstrating
your commitment and at the same time cut off any means for him to communi‑
cate with you, then he cannot argue or bargain with you to reverse your action.
The danger in cutting off communication is that, if both players do so simul‑
taneously, then they may make mutually incompatible commitments that can
cause great mutual harm. Additionally, cutting off communication is harder to
do with a threat, because you have to remain open to the one message that tells
you whether the other player has complied and therefore whether you need
to carry out your threat. In this age, it is also quite difficult for a person to cut
himself off from all contact.
But players who are large teams or organizations can try variants of this
device. Consider a labor union that makes its decisions at mass meetings
of members. To convene such a meeting takes a lot of planning—reserving a
hall, communicating with members, and so forth—and several weeks of time.
A meeting is convened to decide on a wage demand. If management does not
meet the demand in full, the union leadership is authorized to call a strike and
then it must call a new mass meeting to consider any counteroffer. This process
puts management under a lot of time pressure in the bargaining; it knows that
the union will not be open to communication for several weeks at a time. Here,
we see that cutting off communication for extended periods can establish some
degree of credibility, but not absolute credibility. The union’s device does not
make communication totally impossible; it only creates several weeks of delay.
B. Changing Your Payoffs
i. reputation You can acquire a reputation for carrying out threats and deliver‑
ing on promises. Such a reputation is most useful in a repeated game against
the same player. It is also useful when playing different games against different
players, if each of them can observe your actions in the games that you play
with others. The circumstances favorable to the emergence of such a reputa‑
tion are the same as those for achieving cooperation in the prisoners’ dilemma,
and for the same reasons. The greater the likelihood that the interaction will
continue and the greater the concern for the future relative to the present, the
more likely the players will be to sacrifice current temptations for the sake of
future gains. The players will therefore be more willing to acquire and maintain
reputations.
In technical terms, this device links different games, and the payoffs of ac‑
tions in one game are altered by the prospects of repercussions in other games.
If you fail to carry out your threat or promise in one game, your reputation suf‑
fers and you get a lower payoff in other games. Therefore, when you consider
any one of these games, you should adjust your payoffs in it to take into consid‑
eration such repercussions on your payoffs in the linked games.
The benefit of reputation in ongoing relationships explains why your regular
car mechanic is less likely to cheat you by doing an unnecessary or excessively
costly or shoddy repair than is a random garage that you go to in an emergency.
But what does your regular mechanic actually stand to gain from acquiring this
reputation if competition forces him to charge a price so low that he makes no
profit on any deal? His integrity in repairing your car must come at a price—you
have to be willing to let him charge you a little bit more than the rates that the
cheapest garage in the area might advertise.
The same reasoning also explains why, when you are away from home, you
might settle for the known quality of a restaurant chain instead of taking the risk
of going to an unknown local restaurant. And a department store that expands
into a new line of merchandise can use the reputation that it has acquired in its
existing lines to promise its customers the same high quality in the new line.
In games where credible promises by one or both parties can bring mutual
benefit, the players can agree and even cooperate in fostering the development
of reputation mechanisms. But if the interaction ends at a known finite time,
there is always the problem of the endgame.
In the Middle East peace process that started in 1993 with the Oslo Accord,
the early steps, in which Israel transferred some control over Gaza and small iso‑
lated areas of the West Bank to the Palestinian Authority and in which the latter
accepted the existence of Israel and reduced its anti-Israel rhetoric and violence,
continued well for a while. But as the final stages of the process approached,
mutual credibility of the next steps became problematic, and by 1998 the pro‑
cess stalled. Sufficiently attractive rewards could have come from the outside;
for example, the United States or Europe could have given both parties contin‑
gent offers of economic aid or prospects of expanded commerce to keep the
process going. The United States offered Egypt and Israel large amounts of aid in
this way to achieve the Camp David Accords in 1978. But such rewards were not
offered in the more recent situation and, at the date of this writing, prospects for
progress do not look bright.
ii. dividing the game into small steps Sometimes a single game can be divided into
a sequence of smaller games, thereby allowing the reputation mechanism
to come into effect. In home-construction projects, it is customary to pay by
installments as the work progresses. In the Middle East peace process, Israel
would never have agreed to a complete transfer of the West Bank to the Palestin‑
ian Authority in one fell swoop in return for a single promise to recognize Israel
and cease the terrorism. Proceeding in steps has enabled the process to go at
least part of the way. But this again illustrates the difficulty of sustaining mo‑
mentum as the endgame approaches.
iii. teamwork Teamwork is yet another way to embed one game into a larger game
to enhance the credibility of strategic moves. It requires a group of players to
monitor each other. If one fails to carry out a threat or a promise, others are
required to inflict punishment on him; failure to do so makes them in turn vul‑
nerable to similar punishment by others, and so on. Thus, a player’s payoffs in
the larger game are altered in a way that makes adhering to the team’s creed
credible.
Many universities have academic honor codes that act as credibility devices
for students. Examinations are not proctored by the faculty; instead, students
are required to report to a student committee if they see any cheating. Then the
committee holds a hearing and hands out punishment, as severe as suspension
for a year or outright expulsion, if it finds the accused student guilty of cheating.
Students are very reluctant to place their fellow students in such jeopardy. To
stiffen their resolve, such codes include the added twist that failure to report an
observed infraction is itself an offense against the code. Even then, the general
belief is that the system works only imperfectly. A poll conducted at Princeton
University last year found that only a third of students said that they would report
an observed infraction, and fewer still would do so if they knew the guilty person.
iv. irrationality Your threat may lack credibility because the other player knows
that you are rational and that it is too costly for you to follow through with your
threatened action. Therefore, others believe you will not carry out the threat‑
ened action if you are put to the test. You can counter this problem by claiming
to be irrational so that others will believe that your payoffs are different from
what they originally perceived. Apparent irrationality can then turn into strate‑
gic rationality when the credibility of a threat is in question. Similarly, appar‑
ently irrational motives such as honor or saving face may make it credible that
you will deliver on a promise even when tempted to renege.
The other player may see through such rational irrationality. Therefore, if
you attempt to make your threat credible by claiming irrationality, he will not
readily believe you. You will have to acquire a reputation for irrationality, for
example, by acting irrationally in some related game. You could also use one of
the strategies discussed in Chapter 8 and do something that is a credible signal
of irrationality to achieve an equilibrium in which you can separate from the
falsely irrational.
v. contracts You can make it costly to yourself to fail to carry out a threat or to
deliver on a promise by signing a contract under which you have to pay a
sufficiently large sum in that eventuality. If such a contract is written with suf‑
ficient clarity that it can be enforced by a court or some outside authority, the
change in payoffs makes it optimal to carry out the stipulated action, and the
threat or the promise becomes credible.
In regard to a promise, the other player can be the other party to the con‑
tract. It is in his interest that you deliver on the promise, so he will hold you to
the contract if you fail to fulfill the promise. A contract to enforce a threat is
more problematic. The other player does not want you to carry out the threat‑
ened action and will not enforce the contract unless he gets some longer-term
benefit in associated games from being subject to a credible threat in this one.
Therefore in regard to a threat, the contract has to be with a third party. But
when you bring in a third party and a contract merely to ensure that you will
carry out your threat if put to the test, the third party does not actually benefit
from your failure to act as stipulated. The contract thus becomes vulnerable to
any renegotiation that would provide the third-party enforcer with some posi‑
tive benefits. If the other player puts you to the test, you can say to the third
party, “Look, I don’t want to carry out the threat. But I am being forced to do
so by the prospect of the penalty in the contract, and you are not getting any‑
thing out of all this. Here is a real dollar in exchange for releasing me from the
contract.” Thus, the contract itself is not credible; therefore neither is the threat.
The third party must have its own longer-term reasons for holding you to the
contract, such as wanting to maintain its reputation, if the contract is to be
renegotiation-proof and therefore credible.
Written contracts are usually more binding than verbal ones, but even ver‑
bal ones may constitute commitments. When George H. W. Bush said “Read my
lips: no new taxes” in the presidential campaign of 1988, the American public
took this promise to be a binding contract; when Bush reneged on it in 1990, the
public held that against him in the election of 1992.
vi. brinkmanship In the U.S.–Japan trade-policy game, we found that a threat
might be too “large” to be credible. If a smaller but effective threat cannot be
found in a natural way, the size of the large threat can be reduced to a credi‑
ble level by making its fulfillment a matter of chance. The United States cannot
credibly say to Japan, “If you don’t keep your markets open to U.S. goods, we will
not defend you if the Russians or the Chinese attack you.” But it can credibly
say, “If you don’t keep your markets open to U.S. goods, the relations between
our countries will deteriorate, which will create the risk that, if you are faced
with an invasion, Congress at that time will not sanction U.S. military involve‑
ment in your aid.” As mentioned earlier, such deliberate creation of risk is called
brinkmanship. This is a subtle idea, difficult to put into practice. Brinkmanship
is best understood by seeing it in operation, and the detailed case study of the
Cuban missile crisis in Chapter 14 serves just that purpose.
We have described several devices for making one’s strategic moves cred‑
ible and examined how well they work. In conclusion, we want to emphasize
a feature common to the entire discussion. Credibility in practice is not an
all‑or‑nothing matter but one of degree. Even though the theory is stark—
rollback analysis shows either that a threat works or that it does not—practical
application must recognize that between these polar extremes lies a whole spec‑
trum of possibility and probability.
7 COUNTERING YOUR OPPONENT’S STRATEGIC MOVES
If your opponent can make a commitment or a threat that works to your disad‑
vantage, then, before he actually does so, you may be able to make a strategic
countermove of your own. You can do so by making his future strategic move less
effective, for example, by removing its irreversibility or undermining its credibil‑
ity. In this section, we examine some devices that can help achieve this purpose.
Some are similar to devices that the other side can use for its own needs.
A. Irrationality
Irrationality can work for the would-be receiver of a commitment or a threat
just as well as it does for the other player. If you are known to be so irrational
that you will not give in to any threat and will suffer the damage that befalls you
when your opponent carries out that threat, then he may as well not make the
threat in the first place, because having to carry it out will only end up hurting
him, too. Everything that we said earlier about the difficulties of credibly con‑
vincing the other side of your irrationality holds true here as well.
B. Cutting Off Communication
If you make it impossible for the other side to convey to you the message that
it has made a certain commitment or a threat, then your opponent will see no
point in doing so. Thomas Schelling illustrates this possibility with the story of
a child who is crying too loudly to hear his parent’s threats.6 Thus, it is pointless
for the parent to make any strategic moves; communication has effectively
been cut off.
6 Thomas C. Schelling, The Strategy of Conflict (Oxford: Oxford University Press, 1960), p. 146.
C. Leaving Escape Routes Open
If the other side can benefit by burning bridges to prevent its retreat, you can
benefit by dousing those fires or perhaps even by constructing new bridges or
roads by which your opponent can retreat. This device was also known to the
ancients. Sun Tzu said, “To a surrounded enemy, you must leave a way of es‑
cape.” The intent is not actually to allow the enemy to escape. Rather, “show him
there is a road to safety, and so create in his mind the idea that there is an alter‑
native to death. Then strike.”7
D. Undermining Your Opponent’s Motive to Uphold His Reputation
If the person threatening you says, “Look, I don’t want to carry out this threat,
but I must because I want to maintain my reputation with others,” you can
respond, “It is not in my interest to publicize the fact that you did not punish
me. I am only interested in doing well in this game. I will keep quiet; both of
us will avoid the mutually damaging outcome; and your reputation with others
will stay intact.” Similarly, if you are a buyer bargaining with a seller and he re‑
fuses to lower his price on the grounds that “if I do this for you, I would have to
do it for everyone else,” you can point out that you are not going to tell anyone
else. This may not work; the other player may suspect that you would tell a few
friends who would tell a few others, and so on.
E. Salami Tactics
Salami tactics are devices used to whittle down the other player’s threat in the
way that a salami is cut—one slice at a time. You fail to comply with the other’s
wishes (whether for deterrence or compellence) to a very small degree so that it
is not worth the other’s while to carry out the comparatively more drastic and
mutually harmful threatened action just to counter that small transgression. If
that works, you transgress a little more, and a little more again, and so on.
You know this perfectly well from your own childhood. Schelling8 gives a
wonderful description of the process:
Salami tactics, we can be sure, were invented by a child. . . . Tell a child not
to go in the water and he’ll sit on the bank and submerge his bare feet; he is
not yet “in” the water. Acquiesce, and he’ll stand up; no more of him is in the
water than before. Think it over, and he’ll start wading, not going any deeper.
Take a moment to decide whether this is different and he’ll go a little deeper,
arguing that since he goes back and forth it all averages out. Pretty soon we
are calling to him not to swim out of sight, wondering whatever happened
to all our discipline.

7 Sun Tzu, The Art of War, pp. 109–110.
8 Thomas C. Schelling, Arms and Influence (New Haven: Yale University Press, 1966), pp. 66–67.
Salami tactics work particularly well against compellence, because they can
take advantage of the time dimension. When your mother tells you to clean up
your room “or else,” you can put off the task for an extra hour by claiming that
you have to finish your homework, then for a half day because you have to go to
football practice, then for an evening because you can’t possibly miss The Simpsons on TV, and so on.
To counter the countermove of salami tactics, you must make a correspond‑
ingly graduated threat. There should be a scale of punishments that fits the scale
of noncompliance or procrastination. This can also be achieved by gradually
raising the risk of disaster, another application of brinkmanship.
Summary
Actions taken by players to fix the rules of later play are known as strategic moves.
These first moves must be observable and irreversible to be true first moves, and
they must be credible if they are to have their desired effect of altering the equi‑
librium outcome of the game. Commitment is an unconditional first move used
to seize a first-mover advantage when one exists. Such a move usually entails
committing to a strategy that would not have been one’s equilibrium strategy in
the original version of the game.
Conditional first moves such as threats and promises are response rules de‑
signed either to deter rivals’ actions and preserve the status quo or to compel
rivals’ actions and alter the status quo. Threats carry the possibility of mutual
harm but cost nothing if they work; threats that create only the risk of a bad
outcome fall under the classification of brinkmanship. Promises are costly only
to the maker and only if they are successful. Threats can be arbitrarily large, al‑
though excessive size compromises credibility, but promises are usually kept just
large enough to be effective. If the implicit promise (or threat) that accompanies
a threat (or promise) is not credible, players must make a move that combines
both a promise and a threat and see to it that both components are credible.
Credibility must be established for any strategic move. There are a number
of general principles to consider in making moves credible and a number of spe‑
cific devices that can be used to acquire credibility. They generally work either
by reducing your own future freedom to choose or by altering your own payoffs
from future actions. Specific devices of this kind include establishing a reputation, using teamwork, demonstrating apparent irrationality, burning bridges, and
making contracts, although the acquisition of credibility is often context specific.
Similar devices exist for countering strategic moves made by rival players.
Key Terms
brinkmanship (343)
commitment (344)
compellence (345)
contract (367)
deterrence (345)
doomsday device (362)
irreversible action (343)
observable action (343)
promise (345)
rational irrationality (366)
reputation (364)
response rule (345)
salami tactics (357)
strategic moves (342)
threat (345)
Solved Exercises
S1.
“One could argue that the size of a promise is naturally bounded, while
in principle a threat can be arbitrarily severe so long as it is credible (and
error free).” First, briefly explain why the statement is true. Despite the
truth of the statement, players might find that an arbitrarily severe threat
might not be to their advantage. Explain why the latter statement is also
true.
S2.
For each of the following three games, answer these questions:
(a) What is the equilibrium if neither player can use any strategic
moves?
(b) Can one player improve his payoff by using a strategic move (com‑
mitment, threat, or promise) or a combination of such moves? If so,
which player makes what strategic move(s)?
(i)
                      COLUMN
                  Left        Right
ROW     Up        0, 0        2, 1
        Down      1, 2        0, 0
(ii)
                      COLUMN
                  Left        Right
ROW     Up        4, 3        3, 4
        Down      2, 1        1, 2
(iii)
                      COLUMN
                  Left        Right
ROW     Up        4, 1        2, 2
        Down      3, 3        1, 4
S3.
In the classic film Mary Poppins, the Banks children are players in a
strategic game with a number of different nannies. In their view of the
world, nannies are inherently harsh, and playing tricks on nannies is
great fun. That is, they view themselves as playing a game in which the
nanny moves first, showing herself to be either Harsh or Nice, and the
children move second, choosing to be either Good or Mischievous. The
nanny prefers to have Good children to take care of but is also inherently
harsh, and so she gets her highest payoff of 4 from (Harsh, Good) and her
lowest payoff of 1 from (Nice, Mischievous), with (Nice, Good) yielding 3
and (Harsh, Mischievous) yielding 2. The children similarly most prefer
to have a Nice nanny and then to be Mischievous; they get their highest
two payoffs when the nanny is Nice (4 if Mischievous, 3 if Good) and their
lowest two payoffs when the nanny is Harsh (2 if Mischievous, 1 if Good).
(a) Draw the game tree for this game and find the subgame-perfect
equilibrium in the absence of any strategic moves.
(b) In the film, before the arrival of Mary Poppins, the children write
their own ad for a new nanny in which they state: “If you won’t scold
and dominate us, we will never give you cause to hate us; we won’t
hide your spectacles so you can’t see, put toads in your bed, or pep‑
per in your tea.” Use the tree from part (a) to argue that this state‑
ment constitutes a promise. What would the outcome of the game
be if the children keep their promise?
(c) What is the implied threat that goes with the promise in part (b)? Is
that implied threat automatically credible? Explain your answer.
(d) How could the children make the promise in part (b) credible?
(e) Is the promise in part (b) compellent or deterrent? Explain your
answer by referring to the status quo in the game—namely, what
would happen in the absence of the strategic move.
S4.
The following is an interpretation of the rivalry between the United
States and the Soviet Union for geopolitical influence during the 1970s
and 1980s.9 Each side has the choice of two strategies: Aggressive and
9 We thank political science professor Thomas Schwartz at UCLA for the idea for this exercise.
Restrained. The Soviet Union wants to achieve world domination, so
being Aggressive is its dominant strategy. The United States wants to pre‑
vent the Soviet Union from achieving world domination; it will match
Soviet aggressiveness with aggressiveness, and restraint with restraint.
Specifically, the payoff table is:
                          SOVIET UNION
                     Restrained   Aggressive
UNITED   Restrained     4, 3         1, 4
STATES   Aggressive     3, 1         2, 2
For each player, 4 is best and 1 is worst.
(a) Consider this game when the two countries move simultaneously.
Find the Nash equilibrium.
(b) Next consider three different and alternative ways in which the
game could be played with sequential moves: (i) The United States
moves first, and the Soviet Union moves second. (ii) The Soviet
Union moves first, and the United States moves second. (iii) The
Soviet Union moves first, and the United States moves second,
but the Soviet Union has a further move in which it can change
its first move. For each case, draw the game tree and find the
subgame-perfect equilibrium.
(c) What are the key strategic matters (commitment, credibility, and so
on) for the two countries?
S5.
Consider the following games. In each case, (i) identify which player
can benefit from making a strategic move, (ii) identify the nature of the
strategic move appropriate for this purpose, (iii) discuss the concep‑
tual and practical difficulties that will arise in the process of making this
move credible, and (iv) discuss whether and how the difficulties can be
overcome.
(a) The other countries of the European Monetary Union (France, Ger‑
many, and so on) would like Britain to join the common currency
and the common central bank.
(b) The United States would like North Korea to stop exporting mis‑
siles and missile technology to countries such as Iran and would like
China to join the United States in working toward this aim.
(c) The United Auto Workers would like U.S. auto manufacturers not
to build plants in Mexico and would like the U.S. government to re‑
strict imports of autos made abroad.
Unsolved Exercises
U1.
In a scene from the movie Manhattan Murder Mystery, Woody Allen and
Diane Keaton are at a hockey game in Madison Square Garden. She is ob‑
viously not enjoying herself, but he tells her: “Remember our deal. You
stay here with me for the entire hockey game, and next week I will come
to the opera with you and stay until the end.” Later, we see them com‑
ing out of the Met into the deserted Lincoln Center Plaza while inside
the music is still playing. Keaton is visibly upset: “What about our deal? I
stayed to the end of the hockey game, and so you were supposed to stay
till the end of the opera.” Allen answers: “You know I can’t listen to too
much Wagner. At the end of the first act, I already felt the urge to invade
Poland.” Comment on the strategic choices made here by using your
knowledge of the theory of strategic moves and credibility.
U2.
Consider a game between a parent and a child. The child can choose to
be good (G) or bad (B); the parent can punish the child (P) or not (N).
The child gets enjoyment worth a 1 from bad behavior, but hurt worth
a −2 from punishment. Thus, a child who behaves well and is not punished gets a 0; one who behaves badly and is punished gets 1 − 2 = −1;
and so on. The parent gets −2 from the child’s bad behavior and −1
from inflicting punishment.
(a) Set up this game as a simultaneous-move game, and find the
equilibrium.
(b) Next, suppose that the child chooses G or B first and that the parent
chooses its P or N after having observed the child’s action. Draw the
game tree and find the subgame-perfect equilibrium.
(c) Now suppose that before the child acts, the parent can commit to
a strategy. For example, the threat “P if B” (“If you behave badly, I
will punish you”). How many such strategies does the parent have?
Write the table for this game. Find all pure-strategy Nash equilibria.
(d) How do your answers to parts (b) and (c) differ? Explain the reason
for the difference.
U3.
The general strategic game in Thucydides’ history of the Peloponnesian
War has been expressed in game-theoretic terms by Professor William
Charron of St. Louis University.10 Athens had acquired a large empire of
coastal cities around the Aegean as part of its leadership role in defend‑
ing the Greek world from Persian invasions. Sparta, fearing Athenian
10 William C. Charron, “Greeks and Games: Forerunners of Modern Game Theory,” Forum for Social Economics, vol. 29, no. 2 (Spring 2000), pp. 1–32.
power, was contemplating war against Athens. If Sparta decided against
war, Athens would have to decide whether to retain or relinquish its em‑
pire. But Athens in turn feared that if it gave independence to the cit‑
ies, they could choose to join Sparta in a greatly strengthened alliance
against Athens and receive very favorable terms from Sparta for doing so.
Thus there are three players, Sparta, Athens, and Small cities, who move
in this order. There are four outcomes, and the payoffs are as follows (4
being best):
Outcome                          Sparta   Athens   Small cities
War                                 2        2          2
Athens retains empire               1        4          1
Small cities join Sparta            4        1          4
Small cities stay independent       3        3          3
(a) Draw the game tree and find the rollback equilibrium. Is there an‑
other outcome that is better for all players?
(b) What strategic move or moves could attain the better outcome? Dis‑
cuss the credibility of such moves.
U4.
It is possible to reconfigure the payoffs in the game in Exercise S3 so that
the children’s statement in their ad is a threat, rather than a promise.
(a) Redraw the tree from part (a) of Exercise S3 and fill in payoffs for
both players so that the children’s statement becomes a threat in the
full technical sense.
(b) Define the status quo in your game, and determine whether the
threat is deterrent or compellent.
(c) Explain why the threatened action is not automatically credible,
given your payoff structure.
(d) Explain why the implied promise is automatically credible.
(e) Explain why the children would want to make a threat in the first
place, and suggest a way in which they might make their threatened
action credible.
U5.
Answer the questions in Exercise S5 for the following situations:
(a) The students at your university or college want to prevent the ad‑
ministration from raising tuition.
(b) Most participants, as well as outsiders, want to achieve a durable
peace in Afghanistan, Iraq, Israel, and Palestine.
(c) Nearly all nations of the world want Iran to shut down its nuclear
program.
U6.
Write a brief description of a game in which you have participated, en‑
tailing strategic moves such as a commitment, threat, or promise and
paying special attention to the essential aspect of credibility. Provide an
illustration of the game if possible, and explain why the game that you
describe ended as it did. Did the players use sound strategic thinking in
making their choices?
10
■
The Prisoners’ Dilemma and Repeated Games
In this chapter, we continue our study of broad classes of games with an
analysis of the prisoners’ dilemma game. It is probably the classic example
of the theory of strategy and its implications for predicting the behavior of
game players, and most people who learn only a little bit of game theory
learn about it. Even people who know no game theory may know the basic story
behind this game or they may have at least heard that it exists. The prisoners’ dilemma is a game in which each player has a dominant strategy, but the equilibrium that arises when all players use their dominant strategies provides a worse
outcome for every player than would arise if they all used their dominated strategies instead. The paradoxical nature of this equilibrium outcome leads to several more complex questions about the nature of the interactions that only a
more thorough analysis can hope to answer. The purpose of this chapter is to
provide that additional thoroughness.
We already considered the prisoners’ dilemma in Section 3 of Chapter 4.
There we took note of the curious nature of the equilibrium that is actually a
“bad” outcome for the players. The “prisoners” can find another outcome that
both prefer to the equilibrium outcome, but they find it difficult to bring about.
The focus of this chapter is the potential for achieving that better outcome.
That is, we consider whether and how the players in a prisoners’ dilemma can
attain and sustain their mutually beneficial cooperative outcome, overcoming
their separate incentives to defect for individual gain. We first review the standard prisoners’ dilemma game and then develop three categories of solutions.
The first and most important method of solution consists of repetition of the
standard one-shot game. The general theory of repeated games was the contribution for which Robert Aumann was awarded the 2005 Nobel Prize in economics (jointly with Thomas Schelling). As usual at this introductory level, we
look at a few simple examples of this general theory. We then consider two other
potential solutions that rely on penalty (or reward) schemes and on the role of
leadership. A fourth potential solution to the dilemma incorporates asymmetric information into a finitely repeated dilemma game. We allude briefly to that
case but present the (quite) technical details only in our collection of online
appendixes.
This chapter concludes with a discussion of some of the experimental evidence regarding the prisoners’ dilemma as well as several examples of actual
dilemmas in action. Experiments generally put live players in a variety of prisoners’ dilemma–type games and show some perplexing as well as some more predictable behavior; experiments conducted with the use of computer simulations
yield additional interesting outcomes. Our examples of real-world dilemmas
that end the chapter are provided to give a sense of the diversity of situations in
which prisoners’ dilemmas arise and to show how, in at least one case, players
may be able to create their own solution to the dilemma.
1 THE BASIC GAME (REVIEW)
Before we consider methods for avoiding the “bad” outcome in the prisoners’
dilemma, we briefly review the basics of the game. Recall our example from
Chapter 4 of the husband and wife suspected of murder. Each is interrogated
separately and can choose to confess to the crime or to deny any involvement.
The payoff matrix that they face was originally presented as Figure 4.4 and
is reproduced here as Figure 10.1. The numbers shown indicate years in jail;
therefore low numbers are better for both players.
Both players here have a dominant strategy. Each does better to confess, regardless of what the other player does. The equilibrium outcome entails both
players deciding to confess and each getting 10 years in jail. If they both had
                                    WIFE
                        Confess (Defect)   Deny (Cooperate)
HUSBAND  Confess (Defect)   10 yr, 10 yr       1 yr, 25 yr
         Deny (Cooperate)   25 yr, 1 yr        3 yr, 3 yr

FIGURE 10.1 Payoffs for the Standard Prisoners’ Dilemma
chosen to deny any involvement, however, they would have been better off, with
only 3 years of jail time to serve.
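The dominance check here is mechanical enough to verify by brute force. Below is a minimal Python sketch (our own illustration, not from the text); the payoffs are entered as negatives of the jail years in Figure 10.1 so that a higher number is better, and the function and variable names are invented for the example.

```python
# Check for dominant strategies in the game of Figure 10.1.
# Payoffs are negatives of jail years, so higher is better for each player.
# Strategies: 0 = Confess (Defect), 1 = Deny (Cooperate).
payoffs = {
    (0, 0): (-10, -10),
    (0, 1): (-1, -25),
    (1, 0): (-25, -1),
    (1, 1): (-3, -3),
}
names = ["Confess", "Deny"]

def payoff(player, own, rival):
    # Player 0 (Husband) picks the row; player 1 (Wife) picks the column.
    cell = (own, rival) if player == 0 else (rival, own)
    return payoffs[cell][player]

for player, who in enumerate(["Husband", "Wife"]):
    for s in (0, 1):
        # s is dominant if it strictly beats the alternative against every rival choice.
        if all(payoff(player, s, r) > payoff(player, 1 - s, r) for r in (0, 1)):
            print(f"{who}: dominant strategy is {names[s]}")
```

Running it prints Confess as the dominant strategy for both players, reproducing the equilibrium described above.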
In any prisoners’ dilemma game, there is always a cooperative strategy and
a cheating or defecting strategy. In Figure 10.1, Deny is the cooperative strategy;
both players using that strategy yields the best outcome for the players. Confess
is the cheating or defecting strategy; when the players do not cooperate with one
another, they choose to Confess in the hope of attaining individual gain at the rival’s expense. Thus, players in a prisoners’ dilemma can always be labeled, according to their choice of strategy, as either defectors or cooperators. We will use this
labeling system throughout the discussion of potential solutions to the dilemma.
We want to emphasize that, although we speak of a cooperative strategy, the prisoners’ dilemma game is noncooperative in the sense explained in
Chapter 2—namely, the players make their decisions and implement their
choices individually. If the two players could discuss, choose, and play their
strategies jointly—as if, for example, the prisoners were in the same room and
could give a joint answer to the question of whether they were both going to
confess—there would be no difficulty about their achieving the outcome that
both prefer. The essence of the questions of whether, when, and how a prisoners’ dilemma can be resolved is the difficulty of achieving a cooperative (jointly
preferred) outcome through noncooperative (individual) actions.
2 SOLUTIONS I: REPETITION
Of all the mechanisms that can sustain cooperation in the prisoners’ dilemma,
the best known and the most natural is repeated play of the game. Repeated
or ongoing relationships between players imply special characteristics for the
games that they play against one another. In the prisoners’ dilemma, this result
manifests itself in the fact that each player fears that one instance of defecting
will lead to a collapse of cooperation in the future. If the value of future cooperation is large and exceeds what can be gained in the short term by defecting, then
the long-term individual interests of the players can automatically and tacitly
keep them from defecting, without the need for any additional punishments or
enforcement by third parties.
We consider the meal-pricing dilemma faced by the two restaurants, Xavier’s
Tapas and Yvonne’s Bistro, introduced in Chapter 5. For our purposes here, we
have chosen to simplify that game by supposing that only two choices of price
are available: the jointly best (collusive) price of $26 or the Nash equilibrium
price of $20. The payoffs (profits measured in hundreds of dollars per month)
for each restaurant can be calculated by using the quantity (demand) functions
                             YVONNE’S BISTRO
                        20 (Defect)   26 (Cooperate)
XAVIER’S  20 (Defect)     288, 288       360, 216
TAPAS     26 (Cooperate)  216, 360       324, 324

FIGURE 10.2 Prisoners’ Dilemma of Pricing ($100s per month)
in Section 1.A of Chapter 5; these payoffs are shown in Figure 10.2. As in any
prisoners’ dilemma, each store has a dominant strategy to defect and price its
meals at $20, although both stores would prefer the outcome in which each
cooperates and charges the higher price of $26 per meal.
Let us start our analysis by supposing that the two restaurants are initially in
the cooperative mode, each charging the higher price of $26. If one restaurant—
say, Xavier’s—deviates from this pricing strategy, it can increase its profit from
324 to 360 (from $32,400 to $36,000) for one month. But then cooperation has
dissolved and Xavier’s rival, Yvonne’s, will see no reason to cooperate from then
on. Once cooperation has broken down, presumably permanently, the profit for
Xavier’s is 288 ($28,800) each month instead of the 324 ($32,400) it would have
been if Xavier’s had never defected in the first place. By gaining 36 ($3,600) in
one month of defecting, Xavier’s gives up 36 ($3,600) each month thereafter by
destroying cooperation. Even if the relationship lasts as little as three months, it
seems that defecting is not in Xavier’s best interest. A similar argument can be
made for Yvonne’s. Thus, if the two restaurants competed on a regular basis for
at least three months, it seems that we might see cooperative behavior and high
prices rather than the defecting behavior and low prices predicted by theory for
the one-shot game.
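The three-month arithmetic can be checked in a couple of lines (a sketch of our own; it ignores discounting, as the informal argument above does, and the horizon length is just a parameter):

```python
# Xavier's cumulative profit (in $100s) over a fixed horizon, ignoring discounting.
months = 3
cooperate_total = 324 * months            # earn 324 every month by cooperating
defect_total = 360 + 288 * (months - 1)   # 360 once, then 288 after cooperation collapses

print(cooperate_total, defect_total)      # 972 versus 936: defecting already loses
```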
A. Finite Repetition
But the solution of the dilemma is not actually that simple. What if the relationship did last exactly three months? Then strategic restaurants would
want to analyze the full three-month game and choose their optimal pricing
strategies. Each would use rollback to determine what price to charge each
month. Starting their analyses with the third month, they would realize that, at
that point, there was no future relationship to consider. Each restaurant would
find that it had a dominant strategy to defect. Given that, there is effectively no
future to consider in the second month either. Each player knows that there
will be mutual defecting in the third month, and therefore both will defect
in the second month; defecting is the dominant strategy in month 2 also.
Then the same argument applies to the first month as well. Knowing that both
will defect in months 2 and 3 anyway, there is no future value of cooperation
in the first month. Both players defect right from the start, and the dilemma
is alive and well.
This result is very general. As long as the relationship between the two players in a prisoners’ dilemma game lasts a fixed and known length of time, the
dominant-strategy equilibrium with defecting should prevail in the last period
of play. When the players arrive at the end of the game, there is never any value
to continued cooperation, and so they defect. Then rollback predicts mutual
defecting all the way back to the very first play. However, in practice, players in
finitely repeated prisoners’ dilemma games show a lot of cooperation; more on
this to come.
B. Infinite Repetition
Analysis of the finitely repeated prisoners’ dilemma shows that even repetition
of the game cannot guarantee the players a solution to their dilemma. But what
would happen if the relationship did not have a predetermined length? What if
the two restaurants expected to continue competing with one another indefinitely? Then our analysis must change to incorporate this new aspect of their
interaction, and we will see that the incentives of the players change also.
In repeated games of any kind, the sequential nature of the relationship
means that players can adopt strategies that depend on behavior in preceding
plays of the games. Such strategies are known as contingent strategies, and several specific examples are used frequently in the theory of repeated games. Most
contingent strategies are trigger strategies. A player using a trigger strategy
plays cooperatively as long as her rival(s) do so, but any defection on their part
“triggers” a period of punishment, of specified length, in which she plays noncooperatively in response. Two of the best-known trigger strategies are the grim
strategy and tit-for-tat. The grim strategy entails cooperating with your rival
until such time as she defects from cooperation; once a defection has occurred,
you punish your rival (by choosing the Defect strategy) on every play for the rest
of the game.1 Tit-for-tat (TFT) is not so harshly unforgiving as the grim strategy and is famous (or infamous) for its ability to solve the prisoners’ dilemma
without requiring permanent punishment. Playing TFT involves cooperating
on the first play and then choosing, in each future period, the action chosen by
your rival in the preceding period of play. Thus, when playing TFT, you cooperate with your rival if she cooperated during the most recent play of the game
and defect (as punishment) if your rival defected. The punishment phase lasts
only as long as your rival continues to defect; you will return to cooperation one
period after she chooses to do so.
1 Defecting as retaliation under the requirements of a trigger strategy is often termed punishing to distinguish it from the original decision to deviate from cooperation.
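To see these contingent strategies in action, here is a small Python sketch of our own (the strategy and function names are invented for the illustration) that plays tit-for-tat and the grim strategy in the restaurant pricing game of Figure 10.2:

```python
# Simulate repeated play of the pricing dilemma (payoffs in $100s per month).
# Actions: "C" = cooperate (price $26), "D" = defect (price $20).
PAYOFF = {("C", "C"): (324, 324), ("C", "D"): (216, 360),
          ("D", "C"): (360, 216), ("D", "D"): (288, 288)}

def tit_for_tat(my_history, rival_history):
    # Cooperate first; thereafter copy the rival's most recent action.
    return "C" if not rival_history else rival_history[-1]

def grim(my_history, rival_history):
    # Cooperate until the rival's first defection, then defect forever.
    return "D" if "D" in rival_history else "C"

def play(strategy1, strategy2, rounds):
    h1, h2, total1, total2 = [], [], 0, 0
    for _ in range(rounds):
        a1, a2 = strategy1(h1, h2), strategy2(h2, h1)
        p1, p2 = PAYOFF[(a1, a2)]
        h1.append(a1); h2.append(a2)
        total1 += p1; total2 += p2
    return total1, total2

print(play(tit_for_tat, grim, 12))             # (3888, 3888): cooperation all the way
print(play(tit_for_tat, lambda m, r: "D", 3))  # TFT is exploited once, then punishes
```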
Let us consider how play might proceed in the repeated restaurant pricing
game if one of the players uses the contingent strategy tit-for-tat. We have already
seen that if Xavier’s Tapas defects one month, it could add 36 to its profits (360
instead of 324). But if Xavier’s rival is playing TFT, then such defecting would
induce Yvonne’s Bistro to punish Xavier’s the next month in retaliation. At that
point, Xavier’s has two choices. One option is to continue to defect by pricing
at $20 and to endure Yvonne’s continued punishment according to TFT; in this
case, Xavier’s loses 36 (288 rather than 324) for every month thereafter in the
foreseeable future. This option appears quite costly. But Xavier’s could get back to
cooperation, too, if it so desired. By reverting to the cooperative price of $26 after
one month’s defection, Xavier’s would incur only one month’s punishment from
Yvonne’s. During that month, Xavier’s would suffer a loss in profit of 108 (216
rather than the 324 that would have been earned without any defection). In the
second month after Xavier’s defection, both restaurants could be back at the cooperative price earning 324 each month. This one-time defection yields an extra
36 in profit but costs an additional 108 during the punishment, also apparently
quite costly to Xavier’s.
It is important to realize here, however, that Xavier’s extra $36 from defecting is gained in the first month. Its losses are ceded in the future. Therefore, the
relative importance of the two depends on the relative importance of the present versus the future. Here, because payoffs are calculated in dollar terms, an
objective comparison can be made. Generally, money (or profit) that is earned
today is better than money that is earned later because, even if you do not need
(or want) the money until later, you can invest it now and earn a return on it
until you need it. So Xavier’s should be able to calculate whether it is worthwhile
to defect on the basis of the total rate of return on its investment (including
capital gains and/or dividends and/or interest, depending on the type of investment). We use the symbol r to denote this rate of return. Thus, one dollar
invested generates r dollars of interest and/or dividends and/or capital gains,
or 100 dollars generates 100r; therefore, the rate of return is sometimes also said
to be 100r%.
Note that we can calculate whether it is in Xavier’s interest to defect because
the firms’ payoffs are given in dollar terms, rather than as simple ratings of outcomes, as in some of the games in earlier chapters (the street-garden game in
Chapters 3 and 6, for example). This means that payoff values in different cells
are directly comparable; a payoff of 4 (dollars) is twice as good as a payoff of 2
(dollars) here, whereas a payoff of 4 is not necessarily exactly twice as good as
a payoff of 2 in any two-by-two game in which the four possible outcomes are
ranked from 1 (worst) to 4 (best). As long as the payoffs to the players are given
in measurable units, we can calculate whether defecting in a prisoners’ dilemma
game is worthwhile.
i. is it worthwhile to defect only once against a rival playing tft? One of Xavier’s options
when playing repeatedly against a rival using TFT is to defect just once from a
cooperative outcome and then to return to cooperating. This particular strategy
gains the restaurant 36 in the first month (the month during which it defects)
but loses it 108 in the second month. By the third month, cooperation is restored. Is defecting for only one month worth it?
We cannot directly compare the 36 gained in the first month with the 108
lost in the second month because the additional money value of time must be
incorporated into the calculation. That is, we need a way to determine how
much the 108 lost in the second month is worth during the first month. Then we
can compare that number with 36 to see whether defecting once is worthwhile.
What we are looking for is the present value (PV) of 108, or how much in profit
earned this month (in the present) is equivalent to (has the same value as) the
108 earned next month. We need to determine the number of dollars earned this
month that, with interest, would give us 108 next month; we call that number
PV, the present value of 108.
Given that the (monthly) total rate of return is r, getting PV this month and
investing it until next month yields a total next month of PV + rPV, where the
first term is the principal being paid back and the second term is the return
(interest or dividend or capital gain). When the total is exactly 108, then PV
equals the present value of 108. Setting PV + rPV = 108 yields a solution for PV:

PV = 108/(1 + r).
For any value of r, we can now determine the exact number of dollars that,
earned this month, would be worth 108 next month.
From the perspective of Xavier’s Tapas, the question remains whether the
gain of 36 this month is offset by the loss of 108 next month. The answer depends on the value of PV. Xavier’s must compare the gain of 36 with the PV of the
loss of 108. To defect once (and then return to cooperation) is worthwhile only if
36 > 108/(1 + r). This is the same as saying that defecting once is beneficial only
if 36(1 + r) > 108, which reduces to r > 2. Thus, Xavier’s should choose to defect
once against a rival playing TFT only if the monthly total rate of return exceeds
200%. This outcome is very unlikely; for example, prime lending rates rarely exceed 12% per year. This translates into a monthly interest rate of no more than
1% (compounded annually, not monthly), well below the 200% just calculated.
Here, it is better for Xavier’s to continue cooperating than to try a single instance
of defecting when Yvonne’s is playing TFT.
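The one-time-defection test reduces to a single comparison (a sketch under the assumptions just stated: gain 36 now, lose 108 one month later; the function name is our own):

```python
# Is a single defection against a TFT-playing rival worthwhile at monthly rate r?
def defect_once_pays(r, gain=36, later_loss=108):
    # The gain arrives now; the punishment loss arrives a month later, so discount it.
    return gain > later_loss / (1 + r)

print(defect_once_pays(0.01))  # False at a realistic 1% monthly rate
print(defect_once_pays(2.5))   # True only once r exceeds 2, i.e., 200%
```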
ii. is it worthwhile to defect forever against a rival playing tft? What about the possibil-
ity of defecting once and then continuing to defect forever? This second option of
Xavier’s gains the restaurant 36 in the first month but loses it 36 in every month
thereafter into the future if the rival restaurant plays TFT. Whether
such a strategy is in Xavier’s best interest again depends on the present value of
the losses incurred. But this time the losses are incurred over an infinite horizon of future months of competition.
Xavier’s option of defecting forever against a rival playing TFT yields a payoff
(profit) stream equivalent to what Xavier’s would get if it were to defect against a
rival using the grim trigger strategy. Recall that the grim strategy requires players
to punish any defection with retaliatory defection in all future periods. In that
case, it is not worthwhile for Xavier’s to attempt any return to cooperation after
its initial defection because the rival firm will be choosing to defect, as punishment, forever. Any defection on Xavier’s part against a grim-playing rival would
then lead to a gain of 36 in the first month and a loss of 36 in all future months,
exactly the same outcome as if it defected forever against a rival playing TFT.
The analysis below is therefore also the analysis one would complete to assess
whether it is worthwhile to defect at all against a rival playing the grim strategy.
To determine whether a defection of this type is worthwhile, we need to
figure out the present value of all of the 36s that are lost in future months, add
them all up, and compare them with the 36 gained during the month of defecting. The PV of the 36 lost during the first month of punishment and continued defecting on Xavier’s part is just 36/(1 + r); the calculation is identical
with that used in Section 2.B.i to find that the PV of 108 was 108/(1 + r). For
the next month, the PV must be the dollar amount needed this month that,
with two months of compound interest, would yield 36 in two months. If the
PV is invested now, then in one month the investor would have that principal
amount plus a return of rPV, for a total of PV + rPV, as before; leaving this total
amount invested for the second month means that at the end of two months,
the investor has the amount invested at the beginning of the second month
(PV + rPV) plus the return on that amount, which would be r(PV + rPV). The
PV of the 36 lost two months from now must then solve the equation: PV + rPV
+ r(PV + rPV) = 36. Working out the value of PV here yields PV(1 + r)^2 = 36, or
PV = 36/(1 + r)^2. You should see a pattern developing. The PV of the 36 lost in
the third month of continued defecting is 36/(1 + r)^3, and the PV of the 36 lost
in the fourth month is 36/(1 + r)^4. In fact, the PV of the 36 lost in the nth month
of continued defecting is just 36/(1 + r)^n. Xavier’s loses an infinite sum of 36s,
and the PV of each of them gets smaller each month.
More precisely, Xavier’s loses the sum, from n = 1 to n = ∞ (where n labels
the months of continued defecting after the initial month, which is month 0),
of 36/(1 + r)^n. Mathematically, it is written as the sum of an infinite number of
terms2:

36/(1 + r) + 36/(1 + r)^2 + 36/(1 + r)^3 + 36/(1 + r)^4 + . . . .
Because r is a rate of return and presumably a positive number, the ratio
1/(1 + r) will be less than 1; this ratio is generally called the discount factor and is
referred to by the Greek letter δ. With δ = 1/(1 + r) < 1, the mathematical rule for
infinite sums tells us that this sum converges to a specific value, in this case 36/r.
It is now possible to determine whether Xavier’s Tapas will choose to defect
forever. The restaurant compares its gain of 36 with the PV of all the lost 36s, or
36/r. Then it defects forever only if 36 > 36/r, or r > 1; defecting forever is beneficial in this particular game only if the monthly rate of return exceeds 100%,
another unlikely event. Thus, we would not expect Xavier’s to defect against a
cooperative rival when both are playing tit-for-tat. (Nor would we expect defection against a cooperative rival when both are playing grim.) When both
Yvonne’s Bistro and Xavier’s Tapas play TFT, the cooperative outcome in which
both price high is a Nash equilibrium of the game. Both playing TFT is a Nash
equilibrium, and use of this contingent strategy solves the prisoners’ dilemma
for the two restaurants.
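The closed form 36/r can be confirmed numerically (a sketch of our own; the long but finite horizon below approximates the infinite sum):

```python
# Present value of losing 36 in every future month, versus the closed form 36/r.
def pv_of_perpetual_loss(r, loss=36, horizon=10_000):
    return sum(loss / (1 + r) ** n for n in range(1, horizon + 1))

r = 0.05
print(pv_of_perpetual_loss(r))  # about 720.0, matching 36/r = 720
print(36 > 36 / r)              # False: defecting forever does not pay at r = 0.05
```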
Remember that tit-for-tat is only one of many trigger strategies that players
could use in repeated prisoners’ dilemmas. And it is one of the “nicer” ones. Thus,
if TFT can be used to solve the dilemma for the two restaurants, other, harsher
trigger strategies should be able to do the same. As noted, the grim strategy can
also be used to sustain cooperation in this infinitely repeated game and in others as well.
C. Games of Unknown Length
In addition to considering games of finite or infinite length, we can incorporate
a more sophisticated tool to deal with games of unknown length. It is possible
that, in some repeated games, players might not know for certain exactly how
long their interaction will continue. They may, however, have some idea of the
probability that the game will continue for another period. For example, our
restaurants might believe that their repeated competition will continue only as
long as their customers find prix fixe menus to be the dining-out experience of
choice; if there were some probability each month that à la carte dinners would
take over that role, then the nature of the game is altered.
2 The appendix to this chapter contains a detailed discussion of the solution of infinite sums.
Recall that the present value of a loss next month is already worth only
δ = 1/(1 + r) times the amount earned. If in addition there is only a probability p
(less than 1) that the relationship will actually continue to the next month, then
next month’s loss is worth only p times δ times the amount lost. For Xavier’s Tapas,
this means that the PV of the 36 lost with continued defecting is worth 36 × δ [the
same as 36/(1 + r)] when the game is assumed to be continuing with certainty
but is worth only 36 × p × δ when the game is assumed to be continuing with
probability p. Incorporating the probability that the game may end next period
means that the present value of the lost 36 is smaller, because p < 1, than it is
when the game is definitely expected to continue (when p is assumed to equal 1).
The effect of incorporating p is that we now effectively discount future
payoffs by the factor p × δ instead of simply by δ. We call this effective rate of
return R, where 1/(1 + R) = p × δ, and R depends on p and δ as shown3:

1/(1 + R) = p × δ
1 = p × δ × (1 + R)
R = (1 − p × δ)/(p × δ).

With a 5% actual rate of return on investments (r = 0.05, and so δ = 1/1.05 ≈
0.95) and a 50% chance that the game continues for an additional month (p =
0.5), then R = [1 − (0.5)(0.95)]/[(0.5)(0.95)] ≈ 1.1, or 110%.
Now the high rates of return required to destroy cooperation (encourage
defection) in these examples seem more realistic if we interpret them as effective
rather than actual rates of return. It becomes conceivable that defecting forever,
or even once, might actually be to one’s benefit if there is a large enough probability that the game will end in the near future. Consider Xavier’s decision whether
to defect forever against a TFT-playing rival. Our earlier calculations showed that
permanent defecting is beneficial only when r exceeds 1, or 100%. If Xavier’s faces
the 5% actual rate of return and the 50% chance that the game will continue for an
additional month, as we assumed in the preceding paragraph, then the effective
rate of return of 110% will exceed the critical value needed for it to continue defecting. Thus, the cooperative behavior sustained by the TFT strategy can break down
if there is a sufficiently large chance that the repeated game might be over by the
end of the next period of play—that is, by a sufficiently small value of p.
3 We could also express R in terms of r and p, in which case R = (1 + r)/p − 1.
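The conversion from (r, p) to the effective rate R is a one-line computation (a sketch using the formulas above; the function name is our own):

```python
# Effective rate of return when the game continues with probability p each period.
def effective_rate(r, p):
    delta = 1 / (1 + r)                  # the ordinary discount factor
    return (1 - p * delta) / (p * delta)

R = effective_rate(0.05, 0.5)
print(round(R, 2))   # about 1.1, i.e., 110%, as in the example above
print(R > 1)         # True: defecting forever against TFT now pays
```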
D. General Theory
We can easily generalize the ideas about when it is worthwhile to defect against
TFT-playing rivals so that you can apply them to any prisoners’ dilemma game that
you encounter. To do so, we use a table with general payoffs (delineated in appropriately measurable units) that satisfy the standard structure of payoffs in the dilemma
as in Figure 10.3. The payoffs in the table must satisfy the relation H > C > D > L for
the game to be a prisoners’ dilemma, where C is the cooperative outcome, D is the
payoff when both players defect from cooperation, H is the high payoff that goes to
the defector when one player defects while the other cooperates, and L is the low
payoff that goes to the loser (the cooperator) in the same situation.
In this general version of the prisoners’ dilemma, a player’s one-time gain
from defecting is (H − C). The single-period loss for being punished while you
return to cooperation is (C − L), and the per-period loss for perpetual defecting is (C − D). To be as general as possible, we will allow for situations in which
there is a probability p < 1 that the game continues beyond the next period and
so we will discount payoffs using an effective rate of return of R per period. If
p = 1, as would be the case when the game is guaranteed to continue, then
R = r, the simple interest rate used in our preceding calculations. Replacing r
with R, we find that the results attained earlier generalize almost immediately.
We found earlier that a player defects exactly once against a rival playing
TFT if the one-time gain from defecting (H − C) exceeds the present value of the
single-period loss from being punished (the PV of C − L). In this general game,
that means that a player defects once against a TFT-playing opponent only if
(H − C) > (C − L)/(1 + R), or (1 + R)(H − C) > C − L, or

R > (C − L)/(H − C) − 1.
Similarly, we found that a player defects forever against a rival playing TFT only
if the one-time gain from defecting exceeds the present value of the infinite sum
of the per-period losses from perpetual defecting (where the per-period loss is
C − D). For the general game, then, a player defects forever against a TFT-playing
opponent, or defects at all against a grim-playing opponent, only if
(H − C) > (C − D)/R, or

R > (C − D)/(H − C).

                        COLUMN
                  Defect      Cooperate
ROW   Defect      D, D        H, L
      Cooperate   L, H        C, C

FIGURE 10.3 General Version of the Prisoners’ Dilemma
The three critical elements in a player’s decision to defect, as seen in these
two expressions, are the immediate gain from defection (H − C), the future
losses from punishment (C − L or C − D per period of punishment), and the
value of the effective rate of return (R, which measures the importance of the
present relative to the future). Under what conditions on these various values do
players find it attractive to defect from cooperation?
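Both critical values are simple functions of the four payoffs, so they can be packaged for any dilemma (a sketch of our own; plugging in the restaurant numbers from Figure 10.2 reproduces the thresholds r > 2 and r > 1 found earlier):

```python
# Critical effective rates of return for defecting against a TFT-playing rival
# in a general prisoners' dilemma with payoffs H > C > D > L.
def defect_once_threshold(H, C, L):
    return (C - L) / (H - C) - 1    # defect once only if R exceeds this

def defect_forever_threshold(H, C, D):
    return (C - D) / (H - C)        # defect forever only if R exceeds this

# Restaurant game: H = 360, C = 324, D = 288, L = 216.
print(defect_once_threshold(360, 324, 216))     # 2.0, i.e., R > 200%
print(defect_forever_threshold(360, 324, 288))  # 1.0, i.e., R > 100%
```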
First, assume that the values of the gains and losses from defecting are fixed.
Then changes in R determine whether a player defects, and defection is more
likely when R is large. Large values of R are associated with small values of p
and small values of δ (and large values of r), so defection is more likely when the
probability of continuation is low or the discount factor is low (or the interest
rate is high). Another way to think about it is that defection is more likely when
the future is less important than the present or when there is little future to consider; that is, defection is more likely when players are impatient or when they
expect the game to end quickly.
Second, consider the case in which the effective rate of return is fixed, as
is the one-period gain from defecting. Then changes in the per-period losses
associated with punishment determine whether defecting is worthwhile. Here it
is smaller values of C − L or C − D that encourage defection. In this case, defection is more likely when punishment is not very severe.4
Finally, assume that the effective rate of return and the per-period losses associated with punishment are held constant. Now players are more likely to defect when the gains, H − C, are high. This situation is more likely when defecting
garners a player large and immediate benefits.
This discussion also highlights the importance of the detection of defecting.
Decisions about whether to continue along a cooperative path depend on how
long defecting might be able to go on before it is detected, on how accurately
it is detected, and on how long any punishment can be made to last before an
attempt is made to revert back to cooperation. Although our model does not incorporate these considerations explicitly, if defecting can be detected accurately
and quickly, its benefit will not last long, and the subsequent cost will have to
be paid more surely. Therefore, the success of any trigger strategy in resolving a
repeated prisoners’ dilemma depends on how well (both in speed and accuracy)
4 The costs associated with defection may also be smaller if information transmission is not perfect, as might be the case if there are many players, and so difficulties might arise in identifying the defector and in coordinating a punishment scheme. Similarly, gains from defection may be larger if rivals cannot identify a defection immediately.
players can detect defecting. This is one reason that the TFT strategy is often
considered dangerous; slight errors in the execution of actions or in the perception of those actions can send players into continuous rounds of punishment
from which they may not be able to escape for a long time, until a slight error of
the opposite kind occurs.
You can use all of these ideas to guide you in when to expect more cooperative behavior between rivals and when to expect more defecting and cutthroat
actions. If times are bad and an entire industry is on the verge of collapse, for
example, so that businesses feel that there is no future, competition may
become fiercer (less cooperative behavior may be observed) than in normal
times. Even if times are temporarily good but are not expected to last, firms
may want to make a quick profit while they can, so cooperative behavior
might again break down. Similarly, in an industry that emerges temporarily because of a quirk of fashion and is expected to collapse when fashion
changes, we should expect less cooperation. Thus, a particular beach resort
might become the place to go, but all the hotels there will know that such a
situation cannot last, and so they cannot afford to collude on pricing. If, in
contrast, the shifts in fashion are among products made by an unchanging
group of companies in long-term relationships with each other, cooperation
might persist. For example, even if all the children want cuddly bears one
year and Transformers Rescue Bots the next, collusion in pricing may occur
if the same small group of manufacturers makes both items.
In Chapter 11, we will look in more detail at prisoners’ dilemmas that arise
in games with many players. We examine when and how players can overcome
such dilemmas and achieve outcomes better for them all.
3 SOLUTIONS II: PENALTIES AND REWARDS
Although repetition is the major vehicle for resolving the prisoners' dilemma, several other mechanisms can achieve the same purpose.
One of the simplest ways to avert the prisoners’ dilemma in the one-shot version of the game is to inflict some direct penalty on the players when they defect. When the payoffs have been altered to incorporate the cost of the penalty,
players may find that the dilemma has been resolved.5

5 Note that we get the same type of outcome in the repeated-game case considered in Section 2.
Consider the husband-wife dilemma from Section 1. If only one player defects, the game’s outcome entails 1 year in jail for the defector and 25 years for
the cooperator. The defector, though, getting out of jail early, might find the cooperator’s friends waiting outside the jail. The physical harm caused by those
friends might be equivalent to an additional 20 years in jail. If so, and if the
players account for the possibility of this harm, then the payoff structure of the
original game has changed.
The “new” game, with the physical penalty included in the payoffs, is illustrated in Figure 10.4. With the additional 20 years in jail added to each player’s sentence when one player confesses while the other denies, the game is completely different.

                           WIFE
                  Confess          Deny
HUSBAND  Confess  10 yr, 10 yr     21 yr, 25 yr
         Deny     25 yr, 21 yr     3 yr, 3 yr

FIGURE 10.4 Prisoners’ Dilemma with Penalty for the Lone Defector
A search for dominant strategies in Figure 10.4 shows that there are none. A
cell-by-cell check then shows that there are now two pure-strategy Nash equilibria. One of them is the (Confess, Confess) outcome; the other is the (Deny,
Deny) outcome. Now each player finds that it is in his or her best interest to
cooperate if the other is going to do so. The game has changed from being a
prisoners’ dilemma to an assurance game, which we studied in Chapter 4. Solving the new game requires selecting an equilibrium from the two that exist.
One of them—the cooperative outcome—is clearly better than the other from
the perspective of both players. Therefore, it may be easy to sustain it as a focal
point if some convergence of expectations can be achieved.
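The cell-by-cell check is easy to mechanize. The following sketch, our own construction rather than anything in the text, encodes the Figure 10.4 sentences as costs and flags every cell in which neither player can shorten his or her own sentence by deviating unilaterally.

```python
# Cell-by-cell Nash check for the Figure 10.4 game (jail years; lower is better).
strategies = ["Confess", "Deny"]
years = {  # (husband, wife) -> (husband's years, wife's years)
    ("Confess", "Confess"): (10, 10),
    ("Confess", "Deny"):    (21, 25),    # lone defector penalized: 1 + 20
    ("Deny",    "Confess"): (25, 21),
    ("Deny",    "Deny"):    (3, 3),
}

def is_nash(h, w):
    """True if neither player can cut his or her own jail time by a
    unilateral switch of strategy."""
    h_cost, w_cost = years[(h, w)]
    h_ok = all(years[(h2, w)][0] >= h_cost for h2 in strategies)
    w_ok = all(years[(h, w2)][1] >= w_cost for w2 in strategies)
    return h_ok and w_ok

print([cell for cell in years if is_nash(*cell)])
# [('Confess', 'Confess'), ('Deny', 'Deny')] -- the two equilibria in the text
```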
Notice that the penalty in this scenario is inflicted on a defector only when
his or her rival does not defect. However, stricter penalties can be incorporated
into the prisoners’ dilemma, such as penalties for any confession. Such discipline typically must be imposed by a third party with some power over the two
players, rather than by the other player’s friends, because the friends would have
little authority to penalize the first player when their associate also defects. If
both prisoners are members of a special organization (such as a gang or a crime
mafia) and the organization has a standing rule of never confessing to the police
under penalty of extreme physical harm, the game changes again to the one illustrated in Figure 10.5.
Now the equivalent of an additional 20 years in jail is added to all payoffs associated with the Confess strategy. (Compare Figures 10.5 and 10.1.) In the new game, each player has a dominant strategy, as in the original game. The difference is that the change in the payoffs makes Deny the dominant strategy for each player, and (Deny, Deny) becomes the unique pure-strategy Nash equilibrium. The stricter penalty scheme achieved with third-party enforcement makes defecting so unattractive to players that the cooperative outcome becomes the new equilibrium of the game.

                           WIFE
                  Confess          Deny
HUSBAND  Confess  30 yr, 30 yr     21 yr, 25 yr
         Deny     25 yr, 21 yr     3 yr, 3 yr

FIGURE 10.5 Prisoners’ Dilemma with Penalty for Any Defecting
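The dominance claim can be verified the same way. Here is a small sketch, again our own construction, that checks whether Deny gives each player a strictly shorter sentence than Confess against every move of the other in the Figure 10.5 game.

```python
# Dominance check for the Figure 10.5 game (jail years; lower is better).
strategies = ["Confess", "Deny"]
years = {
    ("Confess", "Confess"): (30, 30),   # 10 + 20 each: penalty for any confession
    ("Confess", "Deny"):    (21, 25),
    ("Deny",    "Confess"): (25, 21),
    ("Deny",    "Deny"):    (3, 3),
}

def strictly_dominates(mine, other, player):
    """True if strategy `mine` yields strictly fewer years than `other`
    for `player` (0 = husband/rows, 1 = wife/columns) against every
    opposing move."""
    if player == 0:
        return all(years[(mine, w)][0] < years[(other, w)][0] for w in strategies)
    return all(years[(h, mine)][1] < years[(h, other)][1] for h in strategies)

print(strictly_dominates("Deny", "Confess", 0))   # True for the husband
print(strictly_dominates("Deny", "Confess", 1))   # True for the wife
```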
In larger prisoners’ dilemma games, difficulties arise with the use of penalties. In particular, if there are many players and some uncertainty exists, penalty schemes may be more difficult to maintain. It becomes harder to decide
whether actual defection is taking place or whether bad luck or a mistaken move is to blame.
In addition, if there really is defecting, it is often difficult to determine the identity of the defector from among the larger group. And if the game is one-shot,
there is no opportunity in the future to correct a penalty that is too severe or to
inflict a penalty once a defector has been identified. Thus, penalties may be less
successful in large one-shot games than in the two-person game we consider
here. We study prisoners’ dilemmas with a large number of players in greater detail in Chapter 11.
A further interesting possibility arises when a prisoners’ dilemma that has
been solved with a penalty scheme is considered in the context of the larger
society in which the game is played. It might be the case that, although the
dilemma equilibrium outcome is bad for the players, it is actually good for
the rest of society or for some subset of persons within the rest of society. If
so, social or political pressures might arise to try to minimize the ability of
players to break out of the dilemma. When third-party penalties are the solution to a prisoners’ dilemma, as is the case with crime mafias that enforce a
no-confession rule, for instance, society can come up with its own strategy to
reduce the effectiveness of the penalty mechanism. The Federal Witness Protection Program is an example of a system that has been set up for just this
purpose. The U.S. government removes the threat of penalty in return for confessions and testimony in court.
Similar situations can be seen in other prisoners’ dilemmas, such as the
pricing game between our two restaurants. The equilibrium there entailed
both firms charging the low price of $20 even though they enjoy higher profits
when charging the higher price of $26. Although the restaurants want to break
out of this “bad” equilibrium—and we have already seen how the use of trigger
strategies can help them do so—their customers are happier with the low price
offered in the Nash equilibrium of the one-shot game. The customers then have
an incentive to try to destroy the efficacy of any enforcement mechanism or
solution process the restaurants might use. For example, because some firms
facing prisoners’ dilemma pricing games attempt to solve the dilemma through
the use of a “meet the competition” or “price matching” campaign, customers
might want to press for legislation banning such policies. We analyze the effects
of such price-matching strategies in Section 6.B.
Just as a prisoners’ dilemma can be resolved by penalizing defectors, it can
also be resolved by rewarding cooperators. Because this solution is more difficult to implement in practice, we mention it only briefly.
The most important question is who is to pay the rewards. If it is a third party,
that person or group must have sufficient interest of its own in the cooperation
achieved by the prisoners to make it worth its while to pay out the rewards. A
rare example of this occurred when the United States brokered the Camp David
Accords between Israel and Egypt by offering large promises of aid to both.
If the rewards are to be paid by the players themselves to each other, the
trick is to make the rewards contingent (paid out only if the other player cooperates) and credible (guaranteed to be paid if the other player cooperates).
Meeting these criteria requires an unusual arrangement; for example, the
player making the promise should deposit the sum in advance in an escrow
account held by an honorable and neutral third party, who will hand the sum
over to the other player if she cooperates or return it to the promiser if the
other defects. An end-of-chapter exercise shows how this type of arrangement
can work.
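To see how such an escrow arrangement changes the incentives, consider the sketch below. It is a hypothetical construction: the payoffs H > C > D > L and the deposit W are invented numbers, and the escrow rule follows the description above, with each player's deposit handed to the opponent only if the opponent cooperates and returned otherwise.

```python
# Hypothetical escrowed-reward scheme grafted onto a generic dilemma.
H, C, D, L = 360, 324, 288, 216   # invented payoffs with H > C > D > L
W = 50                            # escrowed deposit; here W >= H - C

def payoff(me, other):
    """My payoff under the escrow rule: I collect the opponent's deposit
    if I cooperate, and my own deposit goes to a cooperating opponent."""
    base = {("C", "C"): C, ("C", "D"): L, ("D", "C"): H, ("D", "D"): D}
    p = base[(me, other)]
    if me == "C":
        p += W    # the opponent's escrowed deposit is paid to me
    if other == "C":
        p -= W    # my deposit is handed to the cooperating opponent
    return p

print(payoff("C", "C"), payoff("D", "C"))   # 324 vs 310: cooperation wins
```

Once W is at least H − C, cooperating against a cooperator beats defecting against one, so mutual cooperation becomes a Nash equilibrium; as in the penalty game of Figure 10.4, mutual defection typically remains an equilibrium as well.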
4 SOLUTIONS III: LEADERSHIP
The third method of solution for the prisoners’ dilemma pertains to situations in
which one player takes on the role of leader in the interaction. In most examples
of the prisoners’ dilemma, the game is assumed to be symmetric. That is, all the
players stand to lose (and gain) the same amount from defection (and cooperation). However, in actual strategic situations, one player may be relatively “large”
(a leader) and the other “small.” If the size of the payoffs is unequal enough,
so much of the harm from defecting may fall on the larger player that she acts
cooperatively, even while knowing that the other will defect. Saudi Arabia,
for example, played such a role as the “swing producer” in OPEC (Organization
of Petroleum Exporting Countries) for many years; to keep oil prices high, it cut
back on its output when one of the smaller producers, such as Libya, expanded.
As with the OPEC example, leadership tends to be observed more often in
games between nations than in games between firms or individual persons.
Thus, our example of a game in which leadership may be used to solve the prisoners’ dilemma is one played between countries. Imagine that the populations
of two countries, Dorminica and Soporia, are threatened by a disease, Sudden
Acute Narcoleptic Episodes (SANE). This disease strikes 1 person in every 2,000,
or 0.05% of the population, and causes the victim to fall into a deep sleep state
for a year.6 There are no aftereffects of the disease, but the cost of a worker being
removed from the economy for a year is $32,000. Each country has a population of 100 million workers, so the expected number of cases in each is 50,000
(0.0005 × 100,000,000), and the expected cost of the disease is $1.6 billion to each (50,000 × $32,000). The total expected cost of the disease worldwide—that
is, in both Dorminica and Soporia—is then $3.2 billion.
Scientists are confident that a crash research program costing $2 billion
will lead to a vaccine that is 100% effective. Comparing the cost of the research
program with the worldwide cost of the disease shows that, from the perspective of the entire population, the research program is clearly worth pursuing.
However, the government in each country must consider whether to fund the
full research program on its own. The two governments decide separately, but their
decisions affect the outcomes for both countries. Specifically, if only one government chooses to fund the research, the population of the other country can
access the information and use the vaccine without cost. But each government’s
payoff depends only on the costs incurred by its own population.
The payoff matrix for the noncooperative game between Dorminica and Soporia is shown in Figure 10.6. Each country chooses from two strategies, Research and No Research; payoffs show the costs to the countries, in billions of dollars, of the various strategy combinations. It is straightforward to verify that this game is a prisoners’ dilemma and that each country has a dominant strategy to do no research.

                                SOPORIA
                        Research        No Research
DORMINICA  Research     –2, –2          –2, 0
           No Research  0, –2           –1.6, –1.6

FIGURE 10.6 Payoffs for Equal-Population SANE Research Game ($billions)
6 Think of Rip Van Winkle or of Woody Allen in the movie Sleeper, but the duration is much shorter.
But now suppose that the populations of the two countries are unequal, with 150 million in Dorminica and 50 million in Soporia. Then, if no research is funded by either government, the cost to Dorminica of SANE will be $2.4 billion (0.0005 × 150,000,000 × $32,000) and the cost to Soporia will be $0.8 billion (0.0005 × 50,000,000 × $32,000). The payoff matrix changes to the one illustrated in Figure 10.7.

                                SOPORIA
                        Research        No Research
DORMINICA  Research     –2, –2          –2, 0
           No Research  0, –2           –2.4, –0.8

FIGURE 10.7 Payoffs for Unequal-Population SANE Research Game ($billions)
In this version of the game, No Research is still the dominant strategy for
Soporia. But Dorminica’s best response is now Research. What has happened to
change Dorminica’s choice of strategy? Clearly, the answer lies in the unequal
distribution of the population in this revised version of the game. Dorminica
now stands to suffer such a large portion of the total cost of the disease that it
finds it worthwhile to do the research on its own. This is true even though Dorminica knows full well that Soporia is going to be a free rider and get a share of the
full benefit of the research.
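The payoffs in Figures 10.6 and 10.7 follow mechanically from the numbers in the text. The sketch below, a minimal illustration, rebuilds both matrices, assuming (as the figures suggest) that a country funding the research bears the full $2 billion cost even if both fund it.

```python
# Rebuild the SANE payoff matrices from the chapter's primitives.
RATE = 0.0005            # 1 case per 2,000 workers
COST_PER_CASE = 32_000   # dollars lost per afflicted worker
RESEARCH_COST = 2.0e9    # $2 billion crash research program

def disease_cost(pop):
    """Expected cost of SANE to an unvaccinated population, in dollars."""
    return RATE * pop * COST_PER_CASE

def payoff_matrix(pop_dorminica, pop_soporia):
    """Payoffs (negative costs, $billions); Dorminica is the row player.
    The disease cost is avoided if either country funds the research."""
    table = {}
    for d in ("Research", "No Research"):
        for s in ("Research", "No Research"):
            pay_d = -RESEARCH_COST if d == "Research" else 0.0
            pay_s = -RESEARCH_COST if s == "Research" else 0.0
            if d == s == "No Research":          # no vaccine gets developed
                pay_d -= disease_cost(pop_dorminica)
                pay_s -= disease_cost(pop_soporia)
            table[(d, s)] = (pay_d / 1e9, pay_s / 1e9)
    return table

print(payoff_matrix(100e6, 100e6))   # Figure 10.6: (No, No) -> (-1.6, -1.6)
print(payoff_matrix(150e6, 50e6))    # Figure 10.7: (No, No) -> (-2.4, -0.8)
```

Running it reproduces both matrices, including the unequal-population cell in which Dorminica's loss is large enough to make Research its best response.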
The research game in Figure 10.7 is no longer a prisoners’ dilemma. Here we
see that the dilemma has, in a sense, been “solved” by the size asymmetry. The
larger country chooses to take on a leadership role and provide the benefit for
the whole world.
Situations of leadership in what would otherwise be prisoners’ dilemma
games are common in international diplomacy. The role of leader often falls
naturally to the biggest or most well established of the players, a phenomenon
labeled “the exploitation of the great by the small.”7 For many decades after
World War II, for instance, the United States carried a disproportionate share of
the expenditures of our defense alliances such as NATO and maintained a policy
of relatively free international trade even when our partners, such as Japan and
Europe, were much more protectionist. In such situations, it might be reasonable
to suggest further that a large or well-established player may accept the role of
leader because its own interests are closely tied to those of the players as a whole;
if the large player makes up a substantial fraction of the whole group, such a convergence of interests would seem unmistakable. The large player would then be
expected to act more cooperatively than might otherwise be the case.
7 Mancur Olson, The Logic of Collective Action (Cambridge, Mass.: Harvard University Press, 1965), p. 29.
5 EXPERIMENTAL EVIDENCE
Many researchers have conducted experiments in which subjects play prisoners’ dilemma games against one another.8 Such experiments show that
cooperation can and does occur in such games, even in repeated versions of