Strategic Experimentation: A Revision

P. Bolton, C. Harris

Research output: Working paper › Discussion paper › Other research output


Abstract

This paper extends the classic two-armed bandit problem to a many-agent setting in which I players each face the same experimentation problem. The main change from the single-agent problem is that an agent can now learn from the current experimentation of other agents. Information is therefore a public good, and a free-rider problem in experimentation naturally arises. More interestingly, the prospect of future experimentation by others encourages agents to increase current experimentation, in order to bring forward the time at which the extra information generated by such experimentation becomes available. The paper provides an analysis of the set of stationary Markov equilibria in terms of the free-rider effect and the encouragement effect. The paper is a revision of our earlier paper, Bolton and Harris [7]. The main modification concerns the formulation of randomization in continuous time; cf. Harris [12]. The earlier paper explored one formulation based on the idea of rapid alternation over the state space. The current paper explores a formulation which is the closest analogue of the discrete-time formulation: it is based on the idea of randomization at each instant of time.
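
For intuition only, the following is a minimal Python sketch, not drawn from the paper, of the informational structure the abstract describes: several agents face the same two-armed bandit, publicly observe one another's risky-arm payoffs, and therefore share a common posterior belief. The decision rule, the information bonus, and all numerical values are illustrative assumptions; they stand in for, rather than reproduce, the stationary Markov equilibria analysed in the paper.

```python
import math
import random

SAFE_PAYOFF = 0.5               # known payoff of the safe arm
HIGH_MEAN, LOW_MEAN = 1.0, 0.0  # possible means of the risky arm
NOISE = 0.5                     # std. dev. of risky-arm payoff noise
I_PLAYERS = 3                   # number of agents
PRIOR = 0.5                     # common prior that the risky arm is high


def likelihood(x, mean):
    """Gaussian likelihood of observing payoff x given the arm's mean."""
    return math.exp(-0.5 * ((x - mean) / NOISE) ** 2)


def update(belief, x):
    """Bayesian update of the common belief after one risky-arm payoff."""
    ph = belief * likelihood(x, HIGH_MEAN)
    pl = (1 - belief) * likelihood(x, LOW_MEAN)
    return ph / (ph + pl)


def wants_to_experiment(belief, n_others):
    """Hypothetical rule of thumb (not the paper's equilibrium strategy):
    experiment if the expected risky payoff plus an information bonus beats
    the safe payoff; the bonus shrinks when others already experiment,
    which is the free-rider effect in crude form."""
    expected_risky = belief * HIGH_MEAN + (1 - belief) * LOW_MEAN
    info_bonus = 0.2 / (1 + n_others)
    return expected_risky + info_bonus > SAFE_PAYOFF


def simulate(true_high, periods=30, seed=0):
    rng = random.Random(seed)
    belief = PRIOR
    for t in range(periods):
        # Count how many agents choose the risky arm this period.
        experimenters = 0
        for _ in range(I_PLAYERS):
            if wants_to_experiment(belief, experimenters):
                experimenters += 1
        # Risky-arm payoffs are publicly observed, so a single common
        # belief is updated on every experimenter's outcome.
        mean = HIGH_MEAN if true_high else LOW_MEAN
        for _ in range(experimenters):
            belief = update(belief, rng.gauss(mean, NOISE))
        print(f"t={t:2d}  experimenters={experimenters}  belief={belief:.3f}")


if __name__ == "__main__":
    simulate(true_high=True)
```

The sequential count inside each period is only a crude device for letting one agent's choice depend on how many others are already experimenting; it is not meant to capture the continuous-time randomization discussed in the abstract.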
Original language: English
Place of publication: Tilburg
Publisher: Microeconomics
Number of pages: 60
Volume: 1996-27
Publication status: Published - 1996

Publication series

Name: CentER Discussion Paper
Volume: 1996-27

Keywords

  • game theory
