Theory of Classification: a Survey of Some Recent Advances
Stéphane Boucheron; Olivier Bousquet; Gábor Lugosi
ESAIM: Probability and Statistics (2010)
- Volume: 9, pages 323-375
- ISSN: 1292-8100
Abstract
The last few years have witnessed important new developments in the theory and practice of pattern classification. We intend to survey some of the main new ideas that have led to these recent results.
How to cite
Boucheron, Stéphane, Bousquet, Olivier, and Lugosi, Gábor. "Theory of Classification: a Survey of Some Recent Advances." ESAIM: Probability and Statistics 9 (2010): 323-375. <http://eudml.org/doc/104340>.
@article{Boucheron2010,
abstract = {
The last few years have witnessed important new developments in
the theory and practice of pattern classification. We intend to
survey some of the main new ideas that have led to these
recent results.
},
author = {Boucheron, Stéphane and Bousquet, Olivier and Lugosi, Gábor},
journal = {ESAIM: Probability and Statistics},
keywords = {Pattern recognition; statistical learning theory; concentration inequalities; empirical processes; model selection},
language = {eng},
month = {3},
pages = {323-375},
publisher = {EDP Sciences},
title = {Theory of Classification: a Survey of Some Recent Advances},
url = {http://eudml.org/doc/104340},
volume = {9},
year = {2010},
}
TY - JOUR
AU - Boucheron, Stéphane
AU - Bousquet, Olivier
AU - Lugosi, Gábor
TI - Theory of Classification: a Survey of Some Recent Advances
JO - ESAIM: Probability and Statistics
DA - 2010/3//
PB - EDP Sciences
VL - 9
SP - 323
EP - 375
AB - The last few years have witnessed important new developments in the theory and practice of pattern classification. We intend to survey some of the main new ideas that have led to these recent results.
LA - eng
KW - Pattern recognition
KW - statistical learning theory
KW - concentration inequalities
KW - empirical processes
KW - model selection
UR - http://eudml.org/doc/104340
ER -