Multi-objective hyperparameter optimization with performance uncertainty

Alejandro Morales-Hernández, Inneke Van Nieuwenhuyse, Gonzalo Nápoles

    Research output: Contribution to journal › Article › Scientific

    Abstract

    The performance of any Machine Learning (ML) algorithm is impacted by the choice of its hyperparameters. As training and evaluating an ML algorithm is usually expensive, the hyperparameter optimization (HPO) method needs to be computationally efficient to be useful in practice. Most existing approaches to multi-objective HPO use evolutionary strategies or metamodel-based optimization, yet few methods account for uncertainty in the performance measurements. This paper presents results on multi-objective hyperparameter optimization under uncertainty in the evaluation of ML algorithms. We combine the sampling strategy of Tree-structured Parzen Estimators (TPE) with the metamodel obtained by training a Gaussian Process Regression (GPR) model with heterogeneous noise. Experimental results on three analytical test functions and three ML problems show an improvement over multi-objective TPE and GPR with respect to the hypervolume indicator.
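
    A minimal sketch of two ingredients named in the abstract, assuming a Python/scikit-learn setting (an illustration, not the paper's implementation): a GPR metamodel with heterogeneous observation noise, encoded through scikit-learn's per-sample `alpha` parameter, and the hypervolume indicator for a bi-objective minimization front. The helper `hypervolume_2d`, the toy data, and the noise values `noise_var` are hypothetical.

        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import RBF

        X = np.array([[0.1], [0.4], [0.6], [0.9]])      # hyperparameter configurations (1-D toy example)
        y = np.array([0.80, 0.30, 0.35, 0.70])          # noisy performance measurements
        noise_var = np.array([0.01, 0.05, 0.02, 0.10])  # hypothetical per-point noise variances

        # Heterogeneous noise: `alpha` accepts one value per sample, added to the
        # diagonal of the kernel matrix during fitting.
        gpr = GaussianProcessRegressor(kernel=RBF(length_scale=0.2), alpha=noise_var)
        gpr.fit(X, y)
        mean, std = gpr.predict(np.array([[0.5]]), return_std=True)

        def hypervolume_2d(front, ref):
            """Area dominated by `front` up to the reference point `ref`
            (both objectives minimized; `front` assumed mutually non-dominated)."""
            hv, prev_f2 = 0.0, ref[1]
            for f1, f2 in sorted(front):                # ascending in objective 1
                if f2 < prev_f2:                        # each point adds a new slab
                    hv += (ref[0] - f1) * (prev_f2 - f2)
                    prev_f2 = f2
            return hv

        front = [(0.2, 0.9), (0.5, 0.4), (0.8, 0.1)]    # toy Pareto front
        print(mean, std, hypervolume_2d(front, ref=(1.0, 1.0)))  # HV here: 0.39

    Passing a per-observation vector to `alpha` is one standard way to represent heteroscedastic measurement noise in a GPR metamodel; a full method along the abstract's lines would additionally use a TPE-style sampler to propose candidate hyperparameter configurations.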
    Original language: English
    Pages (from-to): 1-11
    Number of pages: 11
    Journal: arXiv
    Publication status: Published - 9 Sept 2022

    Keywords

    • cs.LG
    • cs.AI
    • Hyperparameter Optimization
    • Multi-objective Optimization
    • Bayesian Optimization
