PAC Learning under Helpful Distributions

François Denis; Rémi Gilleron

RAIRO - Theoretical Informatics and Applications (2010)

  • Volume: 35, Issue: 2, page 129-148
  • ISSN: 0988-3754

Abstract

A PAC teaching model, under helpful distributions, is proposed which introduces the classical ideas of teaching models within the PAC setting: a polynomial-sized teaching set is associated with each target concept; the criterion of success is PAC identification; an additional parameter, namely the inverse of the minimum probability assigned to any example in the teaching set, is associated with each distribution; and the learning algorithm's running time takes this new parameter into account. An Occam's razor theorem and its converse are proved. Some classical classes of Boolean functions, such as Decision Lists and DNF and CNF formulas, are proved learnable in this model. Comparisons with other teaching models are made: learnability in the Goldman and Mathias model implies PAC learnability under helpful distributions; note that Decision Lists and DNF are not known to be learnable in the Goldman and Mathias model. A new simple PAC model, where "simple" refers to Kolmogorov complexity, is introduced. We show that most learnability results obtained within previously defined simple PAC models can be derived from more general results in our model.
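The abstract's "additional parameter" can be sketched as follows (the notation below is assumed for illustration and is not taken from the paper): for a target concept c with teaching set T(c), a distribution D is helpful when it assigns positive probability to every example of T(c), and the parameter is the inverse of the smallest such probability.

```latex
% Hedged sketch, with assumed notation:
%   c    = target concept,
%   T(c) = its polynomial-sized teaching set,
%   D    = a helpful distribution, i.e. D(x) > 0 for every x in T(c).
\[
  \mu_{D,c} \;=\; \Bigl(\min_{x \in T(c)} D(x)\Bigr)^{-1}
\]
% The learner must PAC-identify c with running time polynomial in
% 1/\varepsilon, 1/\delta, the size of c, and \mu_{D,c}.
```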

How to cite


Denis, François, and Gilleron, Rémi. "PAC Learning under Helpful Distributions." RAIRO - Theoretical Informatics and Applications 35.2 (2010): 129-148. <http://eudml.org/doc/222076>.

@article{Denis2010,
abstract = { A PAC teaching model, under helpful distributions, is proposed which introduces the classical ideas of teaching models within the PAC setting: a polynomial-sized teaching set is associated with each target concept; the criterion of success is PAC identification; an additional parameter, namely the inverse of the minimum probability assigned to any example in the teaching set, is associated with each distribution; and the learning algorithm's running time takes this new parameter into account. An Occam's razor theorem and its converse are proved. Some classical classes of Boolean functions, such as Decision Lists and DNF and CNF formulas, are proved learnable in this model. Comparisons with other teaching models are made: learnability in the Goldman and Mathias model implies PAC learnability under helpful distributions; note that Decision Lists and DNF are not known to be learnable in the Goldman and Mathias model. A new simple PAC model, where "simple" refers to Kolmogorov complexity, is introduced. We show that most learnability results obtained within previously defined simple PAC models can be derived from more general results in our model. },
author = {Denis, François and Gilleron, Rémi},
journal = {RAIRO - Theoretical Informatics and Applications},
keywords = {PAC learning; teaching model; Kolmogorov complexity; PAC teaching model; learnability},
language = {eng},
month = {3},
number = {2},
pages = {129-148},
publisher = {EDP Sciences},
title = {PAC Learning under Helpful Distributions},
url = {http://eudml.org/doc/222076},
volume = {35},
year = {2010},
}

TY - JOUR
AU - Denis, François
AU - Gilleron, Rémi
TI - PAC Learning under Helpful Distributions
JO - RAIRO - Theoretical Informatics and Applications
DA - 2010/3//
PB - EDP Sciences
VL - 35
IS - 2
SP - 129
EP - 148
AB - A PAC teaching model, under helpful distributions, is proposed which introduces the classical ideas of teaching models within the PAC setting: a polynomial-sized teaching set is associated with each target concept; the criterion of success is PAC identification; an additional parameter, namely the inverse of the minimum probability assigned to any example in the teaching set, is associated with each distribution; and the learning algorithm's running time takes this new parameter into account. An Occam's razor theorem and its converse are proved. Some classical classes of Boolean functions, such as Decision Lists and DNF and CNF formulas, are proved learnable in this model. Comparisons with other teaching models are made: learnability in the Goldman and Mathias model implies PAC learnability under helpful distributions; note that Decision Lists and DNF are not known to be learnable in the Goldman and Mathias model. A new simple PAC model, where "simple" refers to Kolmogorov complexity, is introduced. We show that most learnability results obtained within previously defined simple PAC models can be derived from more general results in our model.
LA - eng
KW - PAC learning; teaching model; Kolmogorov complexity; PAC teaching model; learnability
UR - http://eudml.org/doc/222076
ER -

References

  1. D. Angluin, Learning Regular Sets from Queries and Counterexamples. Inform. and Comput. 75 (1987) 87-106.
  2. D. Angluin, Queries and Concept Learning. Machine Learning 2 (1988) 319-342.
  3. G.M. Benedek and A. Itai, Nonuniform Learnability, in ICALP (1988) 82-92.
  4. A. Blumer, A. Ehrenfeucht, D. Haussler and M.K. Warmuth, Occam's Razor. Inform. Process. Lett. 24 (1987) 377-380.
  5. R. Board and L. Pitt, On the Necessity of Occam Algorithms. Theoret. Comput. Sci. 100 (1992) 157-184.
  6. N.H. Bshouty, Exact Learning Boolean Functions via the Monotone Theory. Inform. and Comput. 123 (1995) 146-153.
  7. J. Castro and J.L. Balcázar, Simple PAC learning of simple decision lists, in ALT 95, 6th International Workshop on Algorithmic Learning Theory. Springer, Lecture Notes in Comput. Sci. 997 (1995) 239-250.
  8. J. Castro and D. Guijarro, PACS, simple-PAC and query learning. Inform. Process. Lett. 73 (2000) 11-16.
  9. F. Denis, Learning regular languages from simple positive examples, Machine Learning. Technical Report LIFL 321 - 1998; http://www.lifl.fr/denis (to appear).
  10. F. Denis, C. D'Halluin and R. Gilleron, PAC Learning with Simple Examples, in 13th Annual Symposium on Theoretical Aspects of Computer Science. Springer-Verlag, Lecture Notes in Comput. Sci. 1046 (1996) 231-242.
  11. F. Denis and R. Gilleron, PAC learning under helpful distributions, in Proc. of the 8th International Workshop on Algorithmic Learning Theory (ALT-97), edited by M. Li and A. Maruoka. Springer-Verlag, Berlin, Lecture Notes in Comput. Sci. 1316 (1997) 132-145.
  12. E.M. Gold, Complexity of Automaton Identification from Given Data. Inform. and Control 37 (1978) 302-320.
  13. S.A. Goldman and M.J. Kearns, On the Complexity of Teaching. J. Comput. System Sci. 50 (1995) 20-31.
  14. S.A. Goldman and H.D. Mathias, Teaching a Smarter Learner. J. Comput. System Sci. 52 (1996) 255-267.
  15. T. Hancock, T. Jiang, M. Li and J. Tromp, Lower Bounds on Learning Decision Lists and Trees. Inform. and Comput. 126 (1996) 114-122.
  16. D. Haussler, M. Kearns, N. Littlestone and M.K. Warmuth, Equivalence of Models for Polynomial Learnability. Inform. and Comput. 95 (1991) 129-161.
  17. C. de la Higuera, Characteristic Sets for Polynomial Grammatical Inference. Machine Learning 27 (1997) 125-137.
  18. M. Kearns, M. Li, L. Pitt and L.G. Valiant, Recent Results on Boolean Concept Learning, in Proc. of the Fourth International Workshop on Machine Learning (1987) 337-352.
  19. M.J. Kearns and U.V. Vazirani, An Introduction to Computational Learning Theory. MIT Press (1994).
  20. M. Li and P.M.B. Vitányi, Learning simple concepts under simple distributions. SIAM J. Comput. 20 (1991) 911-935.
  21. M. Li and P. Vitányi, An Introduction to Kolmogorov Complexity and its Applications, 2nd Edition. Springer-Verlag (1997).
  22. H.D. Mathias, DNF: If You Can't Learn 'em, Teach 'em: An Interactive Model of Teaching, in Proc. of the 8th Annual Conference on Computational Learning Theory (COLT'95). ACM Press, New York (1995) 222-229.
  23. B.K. Natarajan, Machine Learning: A Theoretical Approach. Morgan Kaufmann, San Mateo, CA (1991).
  24. B.K. Natarajan, On Learning Boolean Functions, in Proc. of the 19th Annual ACM Symposium on Theory of Computing. ACM Press (1987) 296-304.
  25. J. Oncina and P. Garcia, Inferring regular languages in polynomial update time, in Pattern Recognition and Image Analysis (1992) 49-61.
  26. R. Parekh and V. Honavar, On the Relationships between Models of Learning in Helpful Environments, in Proc. Fifth International Conference on Grammatical Inference (2000).
  27. R. Parekh and V. Honavar, Learning DFA from simple examples, in Proc. of the 8th International Workshop on Algorithmic Learning Theory (ALT-97), edited by M. Li and A. Maruoka. Springer, Berlin, Lecture Notes in Artificial Intelligence 1316 (1997) 116-131.
  28. R. Parekh and V. Honavar, Simple DFA are polynomially probably exactly learnable from simple examples, in Proc. 16th International Conf. on Machine Learning (1999) 298-306.
  29. R.L. Rivest, Learning Decision Lists. Machine Learning 2 (1987) 229-246.
  30. K. Romanik, Approximate Testing and Learnability, in Proc. of the 5th Annual ACM Workshop on Computational Learning Theory, edited by D. Haussler. ACM Press, Pittsburgh, PA (1992) 327-332.
  31. S. Salzberg, A. Delcher, D. Heath and S. Kasif, Learning with a Helpful Teacher, in Proc. of the 12th International Joint Conference on Artificial Intelligence, edited by J. Mylopoulos and R. Reiter. Morgan Kaufmann, Sydney, Australia (1991) 705-711.
  32. R.E. Schapire, The Strength of Weak Learnability. Machine Learning 5 (1990) 197-227.
  33. A. Shinohara and S. Miyano, Teachability in Computational Learning. New Generation Computing 8 (1991).
  34. L.G. Valiant, A Theory of the Learnable. Commun. ACM 27 (1984) 1134-1142.
