GAME THEORY AND INDUSTRIAL ORGANIZATION
Updated A.Y. 2022-2023
Part I: Static and Dynamic Games of Complete Information
The course starts by introducing the basics of Game Theory: the notions of a game, players, actions, and payoffs. We will begin our investigation in the simplest possible environment, namely, static games of complete information.
The main goal of the course is to provide solution concepts that predict the players' behavior and the game's outcome. We will first investigate a preliminary solution concept, Iterated Elimination of Strictly Dominated Strategies, and argue that it is not completely satisfactory. We will then turn to the central solution concept of Game Theory: the Nash equilibrium. Many examples will be provided, along with applications to economic theory such as Cournot and Bertrand competition.
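To give a flavor of the kind of application covered, here is a small illustrative sketch (not taken from the course materials) of a Cournot duopoly with an assumed linear inverse demand P(Q) = a - b*Q and a common constant marginal cost c. Iterating each firm's best response converges to the Nash equilibrium quantity q* = (a - c) / (3b); all parameter values below are hypothetical.

```python
# Cournot duopoly sketch with assumed linear demand P(Q) = a - b*Q and cost c.
# Each firm's best response to its rival's quantity is
#   BR(q_rival) = (a - c - b * q_rival) / (2 * b),
# and iterating best responses converges to the Cournot-Nash equilibrium.

a, b, c = 100.0, 1.0, 10.0  # hypothetical demand and cost parameters

def best_response(q_rival):
    # A firm never produces a negative quantity.
    return max(0.0, (a - c - b * q_rival) / (2 * b))

q1 = q2 = 0.0
for _ in range(100):  # simultaneous best-response iteration
    q1, q2 = best_response(q2), best_response(q1)

q_star = (a - c) / (3 * b)  # closed-form Cournot-Nash quantity
print(round(q1, 4), round(q2, 4), q_star)  # all three coincide at 30.0
```

The fixed point of the best-response map is exactly the Nash equilibrium: each quantity is a best response to the other.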
We will then extend our notion of a game to account for dynamics. We will therefore be able to investigate games in which players move sequentially and can observe the past actions of their opponents. With some adjustments, we will be able to apply the Nash equilibrium concept presented earlier. It will become clear, however, that this solution concept is not satisfactory in dynamic environments, as it predicts behavior that is sequentially irrational. After introducing the notion of a subgame, we will present the Subgame-Perfect Nash Equilibrium and show that it is a more satisfactory solution concept in dynamic environments. This extension will be applied to several economic problems, notably the Stackelberg duopoly.
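The logic of backward induction behind subgame perfection can be sketched numerically. In the example below (same assumed linear-demand setup as before, with hypothetical parameters), the Stackelberg leader chooses its quantity anticipating the follower's best response, which is solved first.

```python
# Backward-induction sketch for the Stackelberg duopoly, assuming linear
# inverse demand P(Q) = a - b*Q and common marginal cost c (hypothetical values).
# Step 1: solve the follower's problem for any leader quantity.
# Step 2: the leader maximizes profit anticipating that best response.

a, b, c = 100.0, 1.0, 10.0

def follower_br(qL):
    # Follower's best response to the leader's quantity qL.
    return max(0.0, (a - c - b * qL) / (2 * b))

def leader_profit(qL):
    qF = follower_br(qL)            # leader anticipates the follower's reaction
    price = a - b * (qL + qF)
    return (price - c) * qL

# Grid search over leader quantities in steps of 0.01.
qL_star = max((i * 0.01 for i in range(10001)), key=leader_profit)
print(qL_star, follower_br(qL_star))  # 45.0 and 22.5: (a-c)/(2b) and (a-c)/(4b)
```

Note that the leader produces the same quantity a monopolist would, while the follower accommodates: the subgame-perfect outcome differs from the simultaneous-move Cournot outcome.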
To conclude our study of complete information games, we will look at a specific class of dynamic games: repeated games. A repeated game is simply the repetition of a basic static game for a finite or infinite number of periods. We will discuss how to solve these games and some of their properties, and apply the results to the sustainability of collusion in a repeated Bertrand setting.
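The collusion-sustainability calculation is a standard textbook exercise and can be sketched as follows (with hypothetical profit numbers). Under grim-trigger strategies, colluding firms share the monopoly profit each period, while a deviator grabs the whole monopoly profit once before play reverts to the static Bertrand equilibrium with zero profit forever.

```python
# Sustainability of collusion under grim-trigger strategies in an infinitely
# repeated Bertrand game (standard textbook calculation, hypothetical numbers).

pi_m = 100.0  # hypothetical per-period monopoly profit

def collusion_sustainable(delta, n_firms=2):
    # Discounted value of sharing the monopoly profit forever...
    collude = (pi_m / n_firms) / (1 - delta)
    # ...versus grabbing the whole profit once, then zero forever.
    deviate = pi_m
    return collude >= deviate

# With two firms the condition reduces to delta >= 1/2;
# with n firms it becomes delta >= 1 - 1/n.
print(collusion_sustainable(0.6))              # True
print(collusion_sustainable(0.4))              # False
print(collusion_sustainable(0.6, n_firms=3))   # False: three firms need delta >= 2/3
```

The comparison makes the key message of the Folk theorem concrete: collusion is sustainable only when firms are patient enough, and more firms require more patience.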
Part II: Static and Dynamic Games of Incomplete Information
The second part of the course will be dedicated to the analysis of static and dynamic games in incomplete information settings; that is, we will explore games similar to the ones seen in the first part of the course, while now allowing for some uncertainty. To that end, we will have to enrich the previous models and develop new equilibrium concepts, namely, the Bayesian Nash equilibrium and the Perfect Bayesian equilibrium.
To account for the uncertainty, we will have to introduce several new concepts such as types and beliefs. That way, we will be able to define how players evaluate their payoffs in incomplete information settings. We will then present the equilibrium concept for static games of incomplete information: Bayesian Nash equilibrium.
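As an illustration of a Bayesian Nash equilibrium (an assumed setup, not necessarily the course's specific example), consider a first-price sealed-bid auction with two bidders whose private values are drawn independently from Uniform[0, 1]. In the symmetric equilibrium each bidder bids half their value, b(v) = v/2; the sketch below verifies numerically that this is a best response.

```python
# First-price auction sketch: two bidders, values ~ Uniform[0, 1] (assumed setup).
# If the opponent bids v_opp / 2, then a bid b wins with probability
#   P(v_opp / 2 < b) = min(1, 2b),
# so we check that bidding v / 2 maximizes the bidder's expected payoff.

def expected_payoff(v, b):
    win_prob = min(1.0, max(0.0, 2.0 * b))  # probability of outbidding the opponent
    return (v - b) * win_prob

v = 0.8
bids = [i / 1000 for i in range(1001)]  # grid of candidate bids on [0, 1]
best_bid = max(bids, key=lambda b: expected_payoff(v, b))
print(best_bid)  # 0.4, i.e. v / 2
```

Bidding below one's value ("bid shading") is optimal because the trade-off between the probability of winning and the surplus when winning is resolved exactly at b = v/2 under these assumptions.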
Moving to dynamic games of incomplete information, we will face new challenges in defining sequential rationality, as players might be uninformed about other players' past moves. We will have to consider the notion of beliefs more carefully, and we will see that beliefs become part of the definition of the equilibrium itself. This leads to our solution concept for dynamic games of incomplete information: the Perfect Bayesian equilibrium.
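The belief-updating requirement at the heart of a Perfect Bayesian equilibrium is just Bayes' rule applied along the equilibrium path. A minimal sketch with hypothetical numbers: a receiver holds a prior over the sender's type, conjectures the sender's strategy, observes a message, and forms a posterior.

```python
# Bayes-rule belief updating, the key ingredient of a Perfect Bayesian
# equilibrium (hypothetical types, priors, and strategy probabilities).

prior = {"strong": 0.3, "weak": 0.7}        # receiver's prior over sender types
strategy_m = {"strong": 0.9, "weak": 0.2}   # conjectured prob. each type sends message m

def posterior(message_probs, prior):
    # P(type | message) = prior(type) * P(message | type) / P(message)
    total = sum(prior[t] * message_probs[t] for t in prior)
    return {t: prior[t] * message_probs[t] / total for t in prior}

beliefs = posterior(strategy_m, prior)
print(round(beliefs["strong"], 4))  # 0.6585: observing m shifts beliefs toward "strong"
```

In equilibrium, these posterior beliefs must be consistent with the sender's actual strategy, and the receiver's action must be optimal given them, which is precisely what ties beliefs into the definition of the equilibrium.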
Course outline
Normal-form games. Nash equilibrium. Mixed strategies. Cournot and Bertrand duopoly.
Extensive-form games. Subgames. Subgame-perfect Nash equilibrium. Stackelberg Duopoly.
Repeated games. The Prisoner’s dilemma and coordination. The Folk theorem.
Static games of incomplete information. Bayesian Nash equilibrium. An auction game. Harsanyi's interpretation of mixed strategies.
Dynamic games of incomplete information. Perfect Bayesian equilibrium. Basics of signalling.
References and useful content
The reference book for the class is Gibbons, R. (1992), "Game Theory for Applied Economists", Princeton University Press.
Check this YouTube channel: Game Theory Online