But, Ben, our monetary policy leaders aren’t stupid. They know what happened in the 1930s just as well as you do. Don’t they see that there is a strategic interaction at work here – a game, in the formal sense of the word – that requires them to take into account other leaders’ decision-making within their own decision-making process, understanding (and this is the crucial bit for game theory) that the other leaders are making exactly the same sort of contingent policy evaluations?

Yes, of course the Fed can see that there’s a strategic interaction here, and of course they’re playing the game as best they can. But they’re playing the wrong game. They’re still playing a Coordination Game, which is ALWAYS the game that’s played in the immediate aftermath of a global crisis like a Great War or a Great Recession. They have yet to adopt the strategies necessary for a Competition Game, which is ALWAYS the game that’s played after you survive the post-apocalyptic period.

Here’s what a Coordination Game looks like in the typical game theoretic 2x2 matrix framework. If you want to read more about this, look up the “Stag Hunt” game on Wikipedia or the like. It’s an old concept, first written about by Rousseau and Hume, and more recently explored (brilliantly, I think) by Brian Skyrms.

Fig. 1 Coordination Game (Stag Hunt)


The basic idea here is that each player can choose either to cooperate (hunt together for a stag, in Rousseau’s example) or to defect (hunt independently for a rabbit, in Rousseau’s example), but neither player knows what the other player is going to choose. If you defect, you’re guaranteed to bag a rabbit (so, for example, if the Row Player chooses Defect, he gets 1 point regardless of the Column Player’s choice), but if you cooperate, you get a big deer if the other player also cooperates (worth 2 points to both players) and nothing if the other player defects. There are two Nash equilibria for the Coordination Game, marked by the blue ovals in the figure above. A Nash equilibrium is a stable outcome because once both players get there, neither player has any incentive to change his strategy unilaterally. If both players are defecting, both will get rabbits (bottom right quadrant), and neither player will switch to a Cooperate strategy, since a lone cooperator goes home with nothing. And if both players are cooperating, both will share a stag (top left quadrant), and neither player will switch to a Defect strategy, since you’d be worse off getting a rabbit instead of sharing a stag (the other player would be even worse off if you switched to Defect, but you don’t care about that).
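If you’d rather see the arithmetic than squint at a figure, here’s a minimal sketch in Python that builds the Stag Hunt payoffs from the numbers above (2 points each for a shared stag, 1 point for a rabbit no matter what, 0 for the lone cooperator) and brute-forces the pure-strategy Nash equilibria. To be clear, the strategy labels and the nash_equilibria helper are my own illustrative construction, not anything from a game theory library.

```python
from itertools import product

# payoffs[(row_choice, col_choice)] = (row_payoff, col_payoff)
# Values taken from the Stag Hunt description above.
stag_hunt = {
    ("Cooperate", "Cooperate"): (2, 2),   # share the stag
    ("Cooperate", "Defect"):    (0, 1),   # lone cooperator gets nothing
    ("Defect",    "Cooperate"): (1, 0),   # defector always bags a rabbit
    ("Defect",    "Defect"):    (1, 1),   # both settle for rabbits
}

def nash_equilibria(payoffs):
    """Return pure-strategy outcomes where neither player gains by
    unilaterally switching, holding the other player's choice fixed."""
    strategies = ["Cooperate", "Defect"]
    equilibria = []
    for row, col in product(strategies, strategies):
        row_pay, col_pay = payoffs[(row, col)]
        row_can_improve = any(payoffs[(r, col)][0] > row_pay for r in strategies)
        col_can_improve = any(payoffs[(row, c)][1] > col_pay for c in strategies)
        if not row_can_improve and not col_can_improve:
            equilibria.append((row, col))
    return equilibria

print(nash_equilibria(stag_hunt))
# -> [('Cooperate', 'Cooperate'), ('Defect', 'Defect')]
```

Run it and you get the two blue ovals: mutual cooperation and mutual defection are both stable resting points of this game.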

The point of the Coordination Game is that mutual cooperation is a stable outcome, so long as the payoffs from defecting are always less than the payoff of mutual cooperation. This is exactly the payoff structure we got in the aftermath of the Great Recession, as global trade volumes increased across the board, and every country could enjoy greater benefits from monetary policy coordination than from going it alone. As a result, we got every politician and every central banker in the world – Missionaries, in game theory parlance – wagging their fingers at us and telling us how to think about the truly extraordinary monetary policies all countries adopted in unison.

But when global trade volumes begin to shrink, the payoffs from monetary policy defection are no longer always less than the payoff of monetary policy cooperation, and we get a game like this:

Fig. 2 Competition Game (Prisoner’s Dilemma)


Here, the payoff from defecting while everyone else continues to cooperate is no longer a mere 1-point rabbit, but is a truly extraordinary payoff where you get the “free rider” benefits of everyone else’s cooperation AND you go out to get a rabbit on your own. It’s essentially the payoff that Europe and Japan got in 2015 by seeing the euro and the yen depreciate against the dollar, and it’s the payoff that China hopes it can get through yuan devaluation in 2016. Ultimately, every country sees where this is going, and so every country stops cooperating and starts defecting, even though every country is worse off in the end: no one collects the +3 free-rider payoff once everyone is defecting, and everyone ends up with a measly rabbit instead of the bigger payoff from mutual cooperation. To make matters worse, the “everyone defect” outcome of the bottom right quadrant is a Nash equilibrium – the only Nash equilibrium in a Competition Game like the Prisoner’s Dilemma – meaning that once you get to this point you are well and truly stuck until you have another crisis that forces you back into the survival mode of a Coordination Game. Sigh.
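Same sketch as before, swapping in Prisoner’s Dilemma payoffs. The +3 free-rider payoff comes straight from the discussion above (the benefits of everyone else’s cooperation plus your own rabbit); the other numbers – 2 for mutual cooperation, 1 for the mutual-defection rabbit, 0 for the sucker who keeps cooperating – are my own illustrative assumptions, consistent with the story but not necessarily the exact values in the figure.

```python
from itertools import product

# Prisoner's Dilemma payoffs. The +3 "free rider" value is from the text;
# the 2 / 1 / 0 values are illustrative assumptions.
prisoners_dilemma = {
    ("Cooperate", "Cooperate"): (2, 2),   # everyone keeps cooperating
    ("Cooperate", "Defect"):    (0, 3),   # sucker vs. free rider
    ("Defect",    "Cooperate"): (3, 0),   # free rider vs. sucker
    ("Defect",    "Defect"):    (1, 1),   # everyone scrambles for rabbits
}

def nash_equilibria(payoffs):
    """Pure-strategy outcomes where neither player gains by switching alone."""
    strategies = ["Cooperate", "Defect"]
    equilibria = []
    for row, col in product(strategies, strategies):
        row_pay, col_pay = payoffs[(row, col)]
        row_can_improve = any(payoffs[(r, col)][0] > row_pay for r in strategies)
        col_can_improve = any(payoffs[(row, c)][1] > col_pay for c in strategies)
        if not row_can_improve and not col_can_improve:
            equilibria.append((row, col))
    return equilibria

print(nash_equilibria(prisoners_dilemma))
# -> [('Defect', 'Defect')]  the only stable outcome, and everyone is worse off
```

Run the same brute-force check and only one equilibrium falls out: mutual defection, the bottom right quadrant, which is exactly why you stay stuck there until a crisis resets the game.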