This paper presents a new algorithm for regression estimation, in both the inductive and the transductive setting. The estimator is defined as a linear combination of functions in a given dictionary. The coefficients of this combination are computed sequentially by projection onto simple sets, defined as confidence regions provided by a deviation (PAC) inequality on an estimator in one-dimensional models. We prove that every projection performed by the algorithm actually improves the performance...
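The following is a minimal Python sketch of the kind of sequential projection scheme this abstract describes, not the paper's exact algorithm: the confidence regions are assumed to be per-coordinate intervals around a one-dimensional least-squares fit of the current residual, the deviation half-width is a generic normal-type bound of our choosing, and all names are hypothetical.

```python
import numpy as np

def sequential_projection_estimator(X, y, n_passes=5, delta=0.05):
    """Sketch: coefficients of a linear combination of dictionary elements
    (the columns of X) are updated one at a time by projecting the current
    coefficient onto a confidence interval for that coordinate.

    The interval construction below is an illustrative assumption, not the
    paper's PAC inequality."""
    n, d = X.shape
    theta = np.zeros(d)
    residual = y.astype(float).copy()   # residual = y - X @ theta
    for _ in range(n_passes):
        for j in range(d):
            xj = X[:, j]
            norm2 = xj @ xj
            if norm2 == 0.0:
                continue
            # One-dimensional model: best full coefficient for coordinate j
            # given the current residual.
            beta_hat = theta[j] + (xj @ residual) / norm2
            # Hypothetical deviation half-width for this 1-d estimator.
            half_width = np.sqrt(2.0 * np.log(2.0 * d / delta) / norm2)
            # Projection of the current coefficient onto the interval.
            new_coef = np.clip(theta[j], beta_hat - half_width,
                               beta_hat + half_width)
            residual -= (new_coef - theta[j]) * xj
            theta[j] = new_coef
    return theta
```

Because each update is a projection onto a region that contains the target coefficient with high probability, each step can only move the estimator closer to that region, which is the intuition behind the "every projection improves the performance" claim.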
We propose a feature selection method for density estimation with quadratic loss. The method relies on the study of one-dimensional approximation models and on confidence regions for the density derived from these models. It is quite general and covers cases of interest such as the detection of relevant wavelet coefficients or the selection of support vectors in SVMs. In the general case, we prove that every selected feature actually improves the performance of the estimator. In the case...
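A hedged Python sketch of a selection rule in the spirit of this abstract: each dictionary coefficient of the density is estimated by an empirical mean, and a feature is kept only if a confidence interval around that estimate excludes zero. The Hoeffding-type interval and the orthonormal-dictionary assumption (so that the coefficient of the density f on phi_j is E[phi_j(X)]) are our illustrative choices, not the paper's exact construction.

```python
import numpy as np

def select_density_features(sample, dictionary, delta=0.05, bound=1.0):
    """sample     : array of shape (n,) of i.i.d. draws from the density
       dictionary : list of vectorized functions phi_j with |phi_j| <= bound
       Keeps feature j only if the level-(1 - delta/d) Hoeffding interval
       around the empirical coefficient excludes zero."""
    n = len(sample)
    d = len(dictionary)
    half_width = bound * np.sqrt(np.log(2.0 * d / delta) / (2.0 * n))
    selected = []
    for j, phi in enumerate(dictionary):
        coef = np.mean(phi(sample))     # empirical estimate of <f, phi_j>
        if abs(coef) > half_width:      # confidence region excludes zero
            selected.append((j, coef))
    return selected

# Example with a small cosine dictionary on [0, 1]; for a uniform sample
# all coefficients are near zero, so (almost) nothing should be selected.
rng = np.random.default_rng(0)
sample = rng.uniform(0.0, 1.0, size=2000)
dictionary = [lambda x, k=k: np.sqrt(2) * np.cos(np.pi * k * x)
              for k in range(1, 6)]
print(select_density_features(sample, dictionary, bound=np.sqrt(2)))
```

Under this rule, a selected coefficient is (with high probability) of the same sign as the true one and closer to it than zero is, which is one way to make "every selected feature improves the estimator" precise for the quadratic loss.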
We establish rates of convergence in statistical learning for time series forecasting. Using the PAC-Bayesian approach, slow rates of convergence of order $\sqrt{d/n}$ for the Gibbs estimator under the absolute loss were given in a previous work [7], where $n$ is the sample size and $d$ the dimension of the set of predictors. Under the same weak-dependence conditions, we extend this result to any convex Lipschitz loss function. We also identify a condition on the parameter space that ensures similar rates for the...
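As a reminder of the objects named here, in standard PAC-Bayesian notation that may differ in its details from [7]:

```latex
% Gibbs (exponentially weighted) posterior; details may differ from [7].
\[
  \hat{\rho}_\lambda(d\theta) \;\propto\;
    \exp\bigl(-\lambda\, r_n(\theta)\bigr)\,\pi(d\theta),
  \qquad
  r_n(\theta) \;=\; \frac{1}{n}\sum_{i=1}^{n}
    \ell\bigl(\hat{y}_i(\theta),\, y_i\bigr),
\]
% where \pi is a prior on the set of predictors and \ell is the loss
% (absolute in [7], any convex Lipschitz loss here). The slow rate reads
\[
  R(\hat{\theta}) \;-\; \inf_{\theta} R(\theta)
    \;\le\; C\,\sqrt{\frac{d}{n}}
\]
% with high probability, for a constant C depending on the loss and on
% the weak-dependence coefficients of the series.
```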