### Abstract

Simulated annealing is a probabilistic algorithm for approximately solving large combinatorial optimization problems. The algorithm can mathematically be described as the generation of a series of Markov chains, in which each Markov chain can be viewed as the outcome of a random experiment with unknown parameters (the probability of sampling a cost function value). Assuming a probability distribution on the values of the unknown parameters (the prior distribution) and given the sequence of configurations resulting from the generation of a Markov chain, we use Bayes's theorem to derive the posterior distribution on the values of the parameters. Numerical experiments are described which show that the posterior distribution can be used to predict accurately the behavior of the algorithm corresponding to the next Markov chain. This information is also used to derive optimal rules for choosing some of the parameters governing the convergence of the algorithm.

| | |
|---|---|
| Original language | English |
| Pages (from-to) | 453-475 |
| Number of pages | 23 |
| Journal | Probability in the Engineering and Informational Sciences |
| Volume | 3 |
| Issue number | 4 |
| DOIs | https://doi.org/10.1017/S0269964800001327 |
| Publication status | Published - 1989 |
| Externally published | Yes |

### Cite this

Laarhoven van, P.J.M., Boender, C.G.E., Aarts, E.H.L., & Rinnooy Kan, A.H.G. (1989). A Bayesian approach to simulated annealing. *Probability in the Engineering and Informational Sciences*, *3*(4), 453-475. https://doi.org/10.1017/S0269964800001327

Research output: Contribution to journal › Article › Scientific › peer-review

TY - JOUR

T1 - A Bayesian approach to simulated annealing

AU - Laarhoven van, P.J.M.

AU - Boender, C.G.E.

AU - Aarts, E.H.L.

AU - Rinnooy Kan, A.H.G.

PY - 1989

Y1 - 1989

N2 - Simulated annealing is a probabilistic algorithm for approximately solving large combinatorial optimization problems. The algorithm can mathematically be described as the generation of a series of Markov chains, in which each Markov chain can be viewed as the outcome of a random experiment with unknown parameters (the probability of sampling a cost function value). Assuming a probability distribution on the values of the unknown parameters (the prior distribution) and given the sequence of configurations resulting from the generation of a Markov chain, we use Bayes's theorem to derive the posterior distribution on the values of the parameters. Numerical experiments are described which show that the posterior distribution can be used to predict accurately the behavior of the algorithm corresponding to the next Markov chain. This information is also used to derive optimal rules for choosing some of the parameters governing the convergence of the algorithm.

AB - Simulated annealing is a probabilistic algorithm for approximately solving large combinatorial optimization problems. The algorithm can mathematically be described as the generation of a series of Markov chains, in which each Markov chain can be viewed as the outcome of a random experiment with unknown parameters (the probability of sampling a cost function value). Assuming a probability distribution on the values of the unknown parameters (the prior distribution) and given the sequence of configurations resulting from the generation of a Markov chain, we use Bayes's theorem to derive the posterior distribution on the values of the parameters. Numerical experiments are described which show that the posterior distribution can be used to predict accurately the behavior of the algorithm corresponding to the next Markov chain. This information is also used to derive optimal rules for choosing some of the parameters governing the convergence of the algorithm.

U2 - 10.1017/S0269964800001327

DO - 10.1017/S0269964800001327

M3 - Article

VL - 3

SP - 453

EP - 475

JO - Probability in the Engineering and Informational Sciences

JF - Probability in the Engineering and Informational Sciences

SN - 0269-9648

IS - 4

ER -
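
The abstract's framing of simulated annealing as a sequence of fixed-temperature Markov chains, each a random experiment whose outcomes are the sampled cost values, can be illustrated with a minimal sketch. This is a generic textbook implementation on a toy bit-flip problem, not the Bayesian scheme of the paper; the cooling factor, chain length, and problem are hypothetical choices for illustration.

```python
import math
import random

def simulated_annealing(cost, neighbor, x0, t0=10.0, alpha=0.9,
                        chain_length=100, n_chains=50, seed=0):
    """Generic simulated annealing: one homogeneous Markov chain per
    temperature, with geometric cooling between chains."""
    rng = random.Random(seed)
    x, cx = x0, cost(x0)
    best, best_cost = x, cx
    t = t0
    for _ in range(n_chains):
        # Each fixed-temperature chain is the "random experiment" of the
        # abstract: a sequence of configurations and their cost values.
        for _ in range(chain_length):
            y = neighbor(x, rng)
            cy = cost(y)
            # Metropolis acceptance: always take improvements, take
            # uphill moves with probability exp(-delta / t).
            if cy <= cx or rng.random() < math.exp((cx - cy) / t):
                x, cx = y, cy
                if cx < best_cost:
                    best, best_cost = x, cx
        t *= alpha  # cool between chains
    return best, best_cost

# Toy combinatorial problem: minimize the number of 1-bits in a string.
def cost(bits):
    return sum(bits)

def neighbor(bits, rng):
    flipped = list(bits)
    i = rng.randrange(len(flipped))
    flipped[i] ^= 1
    return flipped

x0 = [1] * 20
best, best_cost = simulated_annealing(cost, neighbor, x0)
print(best_cost)
```

The paper's contribution sits on top of such a loop: observing the cost values sampled within one chain and using Bayes's theorem to predict the behavior of the next chain and tune the control parameters.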