On the Optimality of Sample-Based Estimates of the Expectation of the Empirical Minimizer
Peter L. Bartlett; Shahar Mendelson; Petra Philips
ESAIM: Probability and Statistics (2010)
- Volume: 14, pages 315-337
- ISSN: 1292-8100
Abstract
We study sample-based estimates of the expectation of the function produced by the empirical minimization algorithm. We investigate the extent to which one can estimate the rate of convergence of the empirical minimizer in a data-dependent manner. We establish three main results. First, we provide an algorithm that upper bounds the expectation of the empirical minimizer in a completely data-dependent manner. This bound is based on a structural result due to Bartlett and Mendelson, which relates expectations to sample averages. Second, we show that these structural upper bounds can be loose, compared to previous bounds. In particular, we demonstrate a class for which the expectation of the empirical minimizer decreases as O(1/n) for sample size n, although the upper bound based on structural properties is Ω(1). Third, we show that this looseness of the bound is inevitable: we present an example showing that a sharp bound cannot be universally recovered from empirical data.
How to cite
Bartlett, Peter L., Mendelson, Shahar, and Philips, Petra. "On the Optimality of Sample-Based Estimates of the Expectation of the Empirical Minimizer." ESAIM: Probability and Statistics 14 (2010): 315-337. <http://eudml.org/doc/250853>.
@article{Bartlett2010,
abstract = {
We study sample-based estimates of the expectation of the function
produced by the empirical minimization algorithm. We investigate the
extent to which one can estimate the rate of convergence of the
empirical minimizer in a data-dependent manner. We establish three
main results. First, we provide an algorithm that upper bounds the
expectation of the empirical minimizer in a completely
data-dependent manner. This bound is based on a structural result
due to Bartlett and Mendelson, which relates expectations to sample
averages. Second, we show that these structural upper bounds can be
loose, compared to previous bounds. In particular, we demonstrate a
class for which the expectation of the empirical minimizer decreases
as O(1/n) for sample size n, although the upper bound based on
structural properties is Ω(1). Third, we show that this
looseness of the bound is inevitable: we present an example that
shows that a sharp bound cannot be universally recovered from
empirical data.
},
author = {Bartlett, Peter L. and Mendelson, Shahar and Philips, Petra},
journal = {ESAIM: Probability and Statistics},
keywords = {error bounds; empirical minimization; data-dependent complexity},
language = {eng},
month = {10},
pages = {315-337},
publisher = {EDP Sciences},
title = {On the Optimality of Sample-Based Estimates of the Expectation of the Empirical Minimizer},
url = {http://eudml.org/doc/250853},
volume = {14},
year = {2010},
}
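To fix ideas for the abstract above, here is a short LaTeX sketch of the objects involved. The notation (F, P, P_n, \hat{f}) is the standard empirical minimization setup used by Bartlett and Mendelson (2006); it is an editorial gloss, not text quoted from the paper:

% Standard empirical minimization setup (assumed, not quoted from the
% paper): F is a class of functions, and X, X_1, ..., X_n are i.i.d.
% with distribution P.
\[
  \hat{f} \in \operatorname*{argmin}_{f \in F} P_n f,
  \qquad
  P_n f = \frac{1}{n}\sum_{i=1}^{n} f(X_i),
  \qquad
  P f = \mathbb{E}\, f(X).
\]

The quantity estimated from data is $P\hat{f}$, the expectation of the empirical minimizer. In this notation, the second result exhibits a class $F$ with $\mathbb{E}\, P\hat{f} = O(1/n)$ even though any bound built from the structural sample-average result is $\Omega(1)$, and the third result shows that no data-dependent procedure can close this gap universally.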
TY - JOUR
AU - Bartlett, Peter L.
AU - Mendelson, Shahar
AU - Philips, Petra
TI - On the Optimality of Sample-Based Estimates of the Expectation of the Empirical Minimizer
JO - ESAIM: Probability and Statistics
DA - 2010/10//
PB - EDP Sciences
VL - 14
SP - 315
EP - 337
AB - We study sample-based estimates of the expectation of the function
produced by the empirical minimization algorithm. We investigate the
produced by the empirical minimization algorithm. We investigate the
extent to which one can estimate the rate of convergence of the
empirical minimizer in a data-dependent manner. We establish three
main results. First, we provide an algorithm that upper bounds the
expectation of the empirical minimizer in a completely
data-dependent manner. This bound is based on a structural result
due to Bartlett and Mendelson, which relates expectations to sample
averages. Second, we show that these structural upper bounds can be
loose, compared to previous bounds. In particular, we demonstrate a
class for which the expectation of the empirical minimizer decreases
as O(1/n) for sample size n, although the upper bound based on
structural properties is Ω(1). Third, we show that this
looseness of the bound is inevitable: we present an example that
shows that a sharp bound cannot be universally recovered from
empirical data.
LA - eng
KW - error bounds; empirical minimization; data-dependent complexity
UR - http://eudml.org/doc/250853
ER -
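For intuition about why estimating $P\hat{f}$ from the sample alone is delicate, the following Python sketch (an editorial toy construction, not the paper's algorithm nor its Ω(1) lower-bound example) builds a class in which every function has the same true expectation, yet the naive plug-in estimate $P_n\hat{f}$ is systematically optimistic because the same sample both selects $\hat{f}$ and evaluates it:

# Toy illustration of the optimism of the plug-in estimate P_n(f_hat).
# Hypothetical construction for intuition only; it is NOT the paper's
# data-dependent bounding algorithm or its lower-bound example.
import random

random.seed(0)
K, n, trials = 50, 100, 200

# Class F = {f_0, ..., f_{K-1}} with f_j(x) = 1 unless x == j.
# Under P = Uniform{0, ..., K-1}, every f_j has the same expectation
# P f_j = 1 - 1/K, so selecting within F cannot improve the true risk.
function_class = [(lambda x, j=j: 0.0 if x == j else 1.0) for j in range(K)]

def empirical_minimizer(sample):
    """Return the f in F minimizing the sample average P_n f."""
    return min(function_class, key=lambda f: sum(map(f, sample)) / len(sample))

plugin = 0.0
for _ in range(trials):
    sample = [random.randrange(K) for _ in range(n)]
    f_hat = empirical_minimizer(sample)
    plugin += sum(map(f_hat, sample)) / len(sample)  # P_n(f_hat): sample reused
plugin /= trials

print(f"true expectation P f_hat = {1 - 1 / K:.3f} (same for every f in F)")
print(f"mean plug-in  P_n f_hat  = {plugin:.3f} (biased downward)")

The gap between the two printed values is pure selection bias; it is this bias that makes a completely data-dependent upper bound on the expectation of the empirical minimizer nontrivial.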
References
- P.L. Bartlett and S. Mendelson, Empirical minimization. Probab. Theory Relat. Fields 135 (2006) 311–334.
- P.L. Bartlett and M.H. Wegkamp, Classification with a reject option using a hinge loss. J. Machine Learn. Res. 9 (2008) 1823–1840.
- P.L. Bartlett, O. Bousquet and S. Mendelson, Local Rademacher complexities. Ann. Statist. 33 (2005) 1497–1537.
- P.L. Bartlett, M.I. Jordan and J.D. McAuliffe, Convexity, classification, and risk bounds. J. Am. Statist. Assoc. 101 (2006) 138–156.
- G. Blanchard, G. Lugosi and N. Vayatis, On the rate of convergence of regularized boosting classifiers. J. Mach. Learn. Res. 4 (2003) 861–894.
- S. Boucheron, G. Lugosi and P. Massart, Concentration inequalities using the entropy method. Ann. Probab. 31 (2003) 1583–1614.
- O. Bousquet, Concentration Inequalities and Empirical Processes Theory Applied to the Analysis of Learning Algorithms. Ph.D. thesis, École Polytechnique (2002).
- R.M. Dudley, Uniform Central Limit Theorems. Cambridge University Press (1999).
- D. Haussler, Sphere packing numbers for subsets of the Boolean n-cube with bounded Vapnik-Chervonenkis dimension. J. Combin. Theory Ser. A 69 (1995) 217–232.
- T. Klein, Une inégalité de concentration à gauche pour les processus empiriques. C. R. Math. Acad. Sci. Paris 334 (2002) 501–504.
- V. Koltchinskii, Local Rademacher complexities and oracle inequalities in risk minimization. Ann. Statist. 34 (2006).
- V. Koltchinskii and D. Panchenko, Rademacher processes and bounding the risk of function learning. High Dimensional Probability, Vol. II (2000) 443–459.
- M. Ledoux, The Concentration of Measure Phenomenon. Volume 89 of Mathematical Surveys and Monographs. American Mathematical Society (2001).
- W.S. Lee, P.L. Bartlett and R.C. Williamson, The importance of convexity in learning with squared loss. IEEE Trans. Inform. Theory 44 (1998) 1974–1980.
- G. Lugosi and N. Vayatis, On the Bayes-risk consistency of regularized boosting methods (with discussion). Ann. Statist. 32 (2004) 30–55.
- G. Lugosi and M. Wegkamp, Complexity regularization via localized random penalties. Ann. Statist. 32 (2004) 1679–1697.
- P. Massart, About the constants in Talagrand's concentration inequalities for empirical processes. Ann. Probab. 28 (2000) 863–884.
- P. Massart, Some applications of concentration inequalities to statistics. Ann. Fac. Sci. Toulouse Math. IX (2000) 245–303.
- P. Massart and E. Nédélec, Risk bounds for statistical learning. Ann. Statist. 34 (2006) 2326–2366.
- S. Mendelson, Improving the sample complexity using global data. IEEE Trans. Inform. Theory 48 (2002) 1977–1991.
- S. Mendelson, A few notes on statistical learning theory, in Proc. of the Machine Learning Summer School, Canberra 2002, S. Mendelson and A.J. Smola (Eds.), LNCS 2600. Springer (2003).
- E. Rio, Inégalités de concentration pour les processus empiriques de classes de parties. Probab. Theory Relat. Fields 119 (2001) 163–175.
- M. Rudelson and R. Vershynin, Combinatorics of random processes and sections of convex bodies. Ann. Math. 164 (2006) 603–648.
- M. Talagrand, Sharper bounds for Gaussian and empirical processes. Ann. Probab. 22 (1994) 28–76.
- M. Talagrand, New concentration inequalities in product spaces. Inventiones Mathematicae 126 (1996) 505–563.
- B. Tarigan and S.A. van de Geer, Adaptivity of support vector machines with ℓ1 penalty. Technical Report MI 2004-14, University of Leiden (2004).
- A. Tsybakov, Optimal aggregation of classifiers in statistical learning. Ann. Statist. 32 (2004) 135–166.
- S.A. van de Geer, A new approach to least squares estimation, with applications. Ann. Statist. 15 (1987) 587–602.
- S.A. van de Geer, Empirical Processes in M-Estimation. Cambridge University Press (2000).
- A. van der Vaart and J. Wellner, Weak Convergence and Empirical Processes. Springer (1996).
- V.N. Vapnik and A.Y. Chervonenkis, On the uniform convergence of relative frequencies of events to their probabilities. Theory Probab. Appl. 16 (1971) 264–280.