This paper extends the classic two-armed bandit problem to a many-agent setting in which I players each face the same experimentation problem. The main change from the single-agent problem is that an agent can now learn from the current experimentation of other agents. Information is therefore a public good, and a free-rider problem in experimentation naturally arises. More interestingly, the prospect of future experimentation by others encourages agents to increase current experimentation, in order to bring forward the time at which the extra information generated by such experimentation becomes available. The paper provides an analysis of the set of stationary Markov equilibria in terms of the free-rider effect and the encouragement effect. The paper is a revision of our earlier paper, Bolton and Harris. The main modification concerns the formulation of randomization in continuous time (cf. Harris). The earlier paper explored one formulation based on the idea of rapid alternation over the state space. The current paper explores a formulation which is the closest analogue of the discrete-time formulation: it is based on the idea of randomization at each instant of time.
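The setting described above (a common two-armed bandit whose risky-arm outcomes are publicly observed by all agents) can be illustrated with a minimal discrete-time simulation. This is only a sketch under illustrative assumptions: the Bernoulli payoff parameters, the myopic decision rule, and all names below are hypothetical, and the sketch does not reproduce the paper's continuous-time equilibrium analysis.

```python
import random


def simulate(n_agents=3, horizon=100, p_good=0.6, p_bad=0.3,
             safe=0.45, seed=0):
    """Simulate myopic agents sharing observations of one risky arm.

    Illustrative assumptions (not from the paper): the risky arm is
    'good' (success prob p_good) or 'bad' (p_bad) with prior 1/2,
    the safe arm pays `safe` for sure, and each agent plays the
    myopically optimal arm given the shared public belief.
    """
    rng = random.Random(seed)
    true_p = p_good if rng.random() < 0.5 else p_bad
    belief = 0.5  # shared public belief that the risky arm is good

    for _ in range(horizon):
        for _ in range(n_agents):
            # Myopic rule: experiment iff the risky arm's expected
            # payoff under the current public belief beats the safe arm.
            expected = belief * p_good + (1 - belief) * p_bad
            if expected >= safe:
                success = rng.random() < true_p
                # Bayes update on the publicly observed outcome;
                # every agent learns from every experiment.
                lik_good = p_good if success else 1 - p_good
                lik_bad = p_bad if success else 1 - p_bad
                belief = (belief * lik_good
                          / (belief * lik_good + (1 - belief) * lik_bad))
    return belief


print(round(simulate(), 3))
```

Because outcomes are public, each experiment moves the shared belief for everyone; once the belief falls low enough that the myopic comparison favors the safe arm, all agents stop experimenting, which is the sense in which information is a public good here.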
| Place of Publication | Tilburg |
| Number of pages | 60 |
| Publication status | Published - 1996 |
| Name | CentER Discussion Paper |