Over the last 40 years, the field of model-based performance and dependability evaluation has seen important developments, successes and scientific breakthroughs. However, the field has not matured into a key engineering discipline that is heavily called upon by computer system and software engineers, even though it is well known that even the use of simple analytical models can yield better insight into system performance. In the area of communication system design, performance evaluation has become more of a mainstream activity, albeit almost exclusively using discrete-event simulation techniques. What circumstances caused almost all of our excellent work on analytical performance and dependability evaluation to fall short of the acceptance and use we think it deserves? On the basis of a historical account of the major developments in the area over the last 40 years, I will address probable reasons for the relatively moderate success and acceptance of model-based performance and dependability evaluation. What did we do right, and what did we do wrong? Which circumstances led to successes, and where did we fail? Based on the gathered insights, I will discuss upcoming challenges for the field and recommend research directions for the decade to come.
Keywords: dependability evaluation - performance evaluation - Markov chains - model checking - scalability - security - verification