A Tutorial on Bayesian Optimization. (arXiv:1807.02811v1 [stat.ML])
Source: arXiv
Bayesian optimization is an approach to optimizing objective functions that
take a long time (minutes or hours) to evaluate. It is best-suited for
optimization over continuous domains of less than 20 dimensions, and tolerates
stochastic noise in function evaluations. It builds a surrogate for the
objective and quantifies the uncertainty in that surrogate using a Bayesian
machine learning technique, Gaussian process regression, and then uses an
acquisition function defined from this surrogate to decide where to sample. In
this tutorial, we describe how Bayesian optimization works, including Gaussian
process regression and three common acquisition functions: expected
improvement, entropy search, and knowledge gradient. We then discuss more
advanced techniques, including running multiple function evaluations in
parallel, multi-fidelity and multi-information source optimization,
expensive-to-evaluate constraints, random environmental conditions, multi-task
Bayesian optimization, and the in…