
Finite-time optimality of Bayesian predictors. (arXiv:1812.08292v1 [cs.LG])

The problem of sequential probability forecasting is considered in the most general setting: a model set C is given, and it is required to predict as well as possible if any of the measures (environments) in C is chosen to generate the data. No assumptions whatsoever are made on the model class C; in particular, no independence or mixing assumptions are required, C may not be measurable, there may be no predictor whose loss is sublinear, etc. It is shown that the cumulative loss of any possible predictor can be matched by that of a Bayesian predictor whose prior is discrete and is concentrated on C, up to an additive term of order $\log n$, where $n$ is the time step. The bound holds for every $n$ and every measure in C. This is the first non-asymptotic result of this kind. In addition, a non-matching lower bound is established: it goes to infinity with $n$, but may do so arbitrarily slowly.
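The flavor of the result can be illustrated in a much simpler special case than the paper treats. A minimal sketch, assuming a finite class C of Bernoulli measures with a uniform discrete prior (both are illustrative choices, not from the paper): the Bayesian mixture's cumulative log loss exceeds that of any single measure in C by at most the negative log of that measure's prior weight, an additive constant here, whereas the paper's $O(\log n)$ bound covers arbitrary, possibly non-measurable model classes.

```python
import math

# Illustrative model class C: a few Bernoulli measures, identified by
# their success probabilities, with a uniform discrete prior over them.
class_C = [0.1, 0.3, 0.5, 0.7, 0.9]
prior = {p: 1.0 / len(class_C) for p in class_C}

def bernoulli_prob(p, seq):
    """Probability a Bernoulli(p) measure assigns to a binary sequence."""
    prob = 1.0
    for x in seq:
        prob *= p if x == 1 else 1.0 - p
    return prob

def mixture_prob(seq):
    """Probability the Bayesian mixture predictor assigns to the sequence."""
    return sum(prior[p] * bernoulli_prob(p, seq) for p in class_C)

# Data generated by some measure in C, say Bernoulli(0.7).
seq = [1, 1, 0, 1, 1, 1, 0, 1, 1, 0]

# Cumulative log loss of the mixture vs. the best measure in C.
loss_mixture = -math.log(mixture_prob(seq))
loss_best = min(-math.log(bernoulli_prob(p, seq)) for p in class_C)

# Since the mixture dominates each component times its prior weight,
# the regret is bounded by -log of that weight: here log(5).
regret = loss_mixture - loss_best
bound = -math.log(1.0 / len(class_C))
```

The key step is that `mixture_prob(seq) >= prior[p] * bernoulli_prob(p, seq)` for every p in C, so taking negative logs gives the additive regret bound directly; the paper's contribution is making this kind of guarantee hold uniformly in n for completely arbitrary model classes.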
