Summary: Consider an $n$-person game that is played repeatedly, but by different agents. In each period, $n$ players are drawn at random from a large finite population. Each player chooses an optimal strategy based on a sample of information about what other players have done in the past. The sampling defines a stochastic process that, for a large class of games that includes coordination games and common interest games, converges almost surely to a pure strategy Nash equilibrium. Such an equilibrium can be interpreted as the “conventional” way of playing the game. If, in addition, the players sometimes experiment or make mistakes, then society occasionally switches from one convention to another. As the likelihood of mistakes goes to zero, only some conventions (equilibria) have positive probability in the limit. These are known as stochastically stable equilibria. They are essentially the same as the risk dominant equilibria in $2\times 2$ games, but for general games the two concepts differ. The stochastically stable equilibria are computed by finding a path of least resistance from every equilibrium to every other, and then finding the equilibrium that has lowest overall resistance. This is a special case of a general theorem on perturbed Markov processes that characterizes their stochastically stable states graph-theoretically.
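The graph-theoretic characterization in the last two sentences can be sketched computationally. The following is a minimal brute-force illustration, not the paper's own procedure: it assumes the pairwise least-resistance values between equilibria have already been computed and are given as a matrix, and it identifies the stochastically stable states as those whose minimum-resistance spanning tree directed into that state (in the Freidlin–Wentzell sense) has the lowest total resistance. The function name and interface are illustrative.

```python
from itertools import product

def stochastically_stable(resistance):
    """Return the set of stochastically stable states.

    resistance[i][j] is the resistance of the least-resistance path
    from equilibrium i to equilibrium j (assumed precomputed).
    A state r is stochastically stable iff the cheapest spanning tree
    directed into r has minimal total resistance over all roots.
    Brute force: only suitable for a handful of equilibria.
    """
    n = len(resistance)

    def min_tree_cost(root):
        others = [i for i in range(n) if i != root]
        best = float("inf")
        # Each non-root state picks one successor; a valid in-tree
        # is a choice where every state's successor chain reaches root.
        for succ in product(range(n), repeat=len(others)):
            edges = dict(zip(others, succ))
            if any(i == j for i, j in edges.items()):
                continue  # skip self-loops
            ok = True
            for i in others:
                seen, cur = set(), i
                while cur != root:
                    if cur in seen:   # cycle avoiding the root: not a tree
                        ok = False
                        break
                    seen.add(cur)
                    cur = edges[cur]
                if not ok:
                    break
            if ok:
                best = min(best, sum(resistance[i][j]
                                     for i, j in edges.items()))
        return best

    costs = [min_tree_cost(r) for r in range(n)]
    m = min(costs)
    return {r for r in range(n) if costs[r] == m}
```

For two equilibria this reduces to comparing the two one-edge trees, i.e. the direct resistances in each direction, which is how stochastic stability coincides with risk dominance in $2\times 2$ coordination games: with `resistance = [[0, 2], [1, 0]]`, state 0 is harder to leave than to reach and is selected.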
|91A20||Multistage and repeated games|
|92D15||Problems related to evolution|