Algorithmic Learning in Local and Global Public Goods Games

Research output: Working paper › Discussion paper


Abstract

In this paper, we consider two variations of an infinitely repeated public goods game. In the global version, the agents’ stage-game rewards depend on the contributions of all other agents, while in the local variant, introduced by Eshel, Samuelson, and Shaked (1998), these rewards depend only on the contributions of their neighbors. We define three nested solution concepts: one-shot deviation optimality (OSDO), Nash equilibrium in private stationary strategies (SNE), and subgame-perfect Nash equilibrium in private stationary strategies (SSPE), and derive all symmetric SSPEs for the local and global public goods games, depending on the altruism cost, discount factor, and tremble probability. For each solution concept, we develop a measure of how close a given strategy profile is to satisfying that concept. We then simulate the outcomes of interactions between Q-learning agents under different parameter settings. Almost no simulations converge to an SSPE. In the global variant, we obtain moderate convergence rates to an SNE, whereas, especially in the local variant, many simulation outcomes fail even to satisfy OSDO. Typically, Q-learners end up with conditional cooperation strategies supported by moderate punishment. Coordination on these strategies is easier in the global public goods game. Consequently, unlike learning-by-imitation, Q-learning results in higher cooperation levels in the global variant.
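The simulation setup described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: independent Q-learners play a repeated global public goods game with binary contribute/free-ride actions, conditioning on the previous round's action profile. All parameter names and values (number of agents `N`, altruism cost `C`, learning rate, exploration probability) are assumptions chosen for illustration.

```python
import random

N = 4            # number of agents (illustrative)
C = 0.6          # assumed altruism cost of contributing, 0 < C < 1
ALPHA = 0.1      # Q-learning rate
GAMMA = 0.9      # discount factor
EPS = 0.05       # exploration ("tremble") probability
ACTIONS = (0, 1) # 0 = free-ride, 1 = contribute

def stage_reward(own, others_total):
    # Global variant: every contribution benefits all agents equally
    # (here: 1/N each), while a contributor additionally pays cost C.
    return (own + others_total) / N - C * own

def run(rounds=5000, seed=0):
    rng = random.Random(seed)
    # State = last round's action profile; start from all-contribute.
    q = [dict() for _ in range(N)]       # one Q-table per agent
    state = (1,) * N
    for _ in range(rounds):
        acts = []
        for i in range(N):
            qi = q[i].setdefault(state, [0.0, 0.0])
            if rng.random() < EPS:       # epsilon-greedy exploration
                acts.append(rng.choice(ACTIONS))
            else:
                acts.append(0 if qi[0] >= qi[1] else 1)
        nxt = tuple(acts)
        total = sum(acts)
        for i in range(N):
            r = stage_reward(acts[i], total - acts[i])
            qn = q[i].setdefault(nxt, [0.0, 0.0])
            qi = q[i][state]
            # Standard Q-learning update toward reward plus discounted
            # value of the best action in the successor state.
            qi[acts[i]] += ALPHA * (r + GAMMA * max(qn) - qi[acts[i]])
        state = nxt
    return state

if __name__ == "__main__":
    print(run())
```

The local variant would differ only in `stage_reward`, which would sum contributions over an agent's network neighbors rather than over all agents.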
Original language: English
Place of Publication: Tilburg
Publisher: CentER, Center for Economic Research
Pages: 1-46
Volume: 2026-002
Publication status: Published - 26 Jan 2026

Publication series

Name: CentER Discussion Paper
Volume: 2026-002

Keywords

  • Public goods
  • Network
  • Equilibrium
  • Q-learning
