Strategic Experimentation: A Revision

P. Bolton, C. Harris

Research output: Working paper (Discussion paper)



This paper extends the classic two-armed bandit problem to a many-agent setting in which I players each face the same experimentation problem. The main change from the single-agent problem is that an agent can now learn from the current experimentation of other agents. Information is therefore a public good, and a free-rider problem in experimentation naturally arises. More interestingly, the prospect of future experimentation by others encourages agents to increase current experimentation, in order to bring forward the time at which the extra information generated by such experimentation becomes available. The paper provides an analysis of the set of stationary Markov equilibria in terms of the free-rider effect and the encouragement effect.

The paper is a revision of our earlier paper, Bolton and Harris [7]. The main modification concerns the formulation of randomization in continuous time; cf. Harris [12]. The earlier paper explored one formulation, based on the idea of rapid alternation over the state space. The current paper explores a formulation which is the closest analogue of the discrete-time formulation: it is based on the idea of randomization at each instant of time.
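The free-rider mechanism described in the abstract can be illustrated with a minimal sketch. This is not the paper's model: the paper works in continuous time with a richer payoff structure, whereas the sketch below uses a discrete-time Bernoulli two-armed bandit with myopic agents, and all names and parameter values (`q_good`, `q_bad`, the safe payoff) are illustrative assumptions. The key feature it does capture is that the belief about the risky arm is a public good: every agent's experimentation updates one shared posterior.

```python
import random


def posterior_good(p, successes, trials, q_good=0.8, q_bad=0.2):
    """Bayes update for the belief that the risky arm is 'good',
    after observing `successes` in `trials` risky pulls.
    (Illustrative Bernoulli likelihoods, not the paper's diffusion model.)"""
    like_good = (q_good ** successes) * ((1 - q_good) ** (trials - successes))
    like_bad = (q_bad ** successes) * ((1 - q_bad) ** (trials - successes))
    num = p * like_good
    return num / (num + (1 - p) * like_bad)


def simulate(n_agents, horizon, p0=0.5, safe=0.5,
             q_good=0.8, q_bad=0.2, arm_is_good=True, seed=0):
    """Run `horizon` rounds with `n_agents` myopic agents sharing one belief.
    Because the belief is common, each risky pull informs everyone:
    information is a public good, and no single agent internalizes
    the value its experimentation creates for the others."""
    rng = random.Random(seed)
    p = p0
    for _ in range(horizon):
        trials = successes = 0
        for _ in range(n_agents):
            # Myopic rule: pull the risky arm iff its expected payoff,
            # under the current public belief, weakly beats the safe arm.
            expected_risky = p * q_good + (1 - p) * q_bad
            if expected_risky >= safe:
                trials += 1
                q = q_good if arm_is_good else q_bad
                successes += rng.random() < q
        if trials:
            # One shared Bayesian update from everyone's experimentation.
            p = posterior_good(p, successes, trials, q_good, q_bad)
    return p
```

Because the agents here are myopic, the sketch shows only the informational externality; the paper's encouragement effect, by which forward-looking agents experiment more to hasten others' future experimentation, requires dynamic best responses and is not reproduced by this simple rule.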
Original language: English
Place of publication: Tilburg
Number of pages: 60
Publication status: Published - 1996

Publication series

Name: CentER Discussion Paper


  • game theory

