Towards a theory of practice in metaheuristics design: A machine learning perspective
Mauro Birattari; Mark Zlochin; Marco Dorigo
RAIRO - Theoretical Informatics and Applications (2006)
- Volume: 40, Issue: 2, page 353-369
- ISSN: 0988-3754
Abstract
How to cite
Birattari, Mauro, Zlochin, Mark, and Dorigo, Marco. "Towards a theory of practice in metaheuristics design: A machine learning perspective." RAIRO - Theoretical Informatics and Applications 40.2 (2006): 353-369. <http://eudml.org/doc/249732>.
@article{Birattari2006,
abstract = {
A number of methodological papers published in recent years
testify that the operations research community feels the need for a
thorough revision of its research methodology – see, for example,
[Barr et al., J. Heuristics 1 (1995) 9–32; Eiben and Jelasity,
Proceedings of the 2002 Congress on Evolutionary Computation
(CEC'2002) 582–587; Hooker, J. Heuristics 1 (1995) 33–42;
Rardin and Uzsoy, J. Heuristics 7 (2001) 261–304]. In particular, the
performance evaluation of nondeterministic methods, including widely
studied metaheuristics such as evolutionary computation and ant colony
optimization, requires the definition of new experimental protocols.
A careful and thorough analysis of the problem of evaluating
metaheuristics reveals strong similarities between this problem and
the problem of evaluating learning methods in the machine learning
field.
In this paper, we show that several conceptual tools commonly used in
machine learning – such as, for example, the probabilistic notion of
class of instances and the separation between the training and the
testing datasets – fit naturally in the context of metaheuristics
evaluation.
Accordingly, we propose and discuss some principles inspired by the
experimental practice in machine learning for guiding the performance
evaluation of optimization algorithms.
Among these principles, a clear separation between the instances that
are used for tuning algorithms and those that are used in the actual
evaluation is particularly important for a proper assessment.
},
author = {Birattari, Mauro and Zlochin, Mark and Dorigo, Marco},
journal = {RAIRO - Theoretical Informatics and Applications},
language = {eng},
month = {7},
number = {2},
pages = {353-369},
publisher = {EDP Sciences},
title = {Towards a theory of practice in metaheuristics design: A machine learning perspective},
url = {http://eudml.org/doc/249732},
volume = {40},
year = {2006},
}
TY - JOUR
AU - Birattari, Mauro
AU - Zlochin, Mark
AU - Dorigo, Marco
TI - Towards a theory of practice in metaheuristics design: A machine learning perspective
JO - RAIRO - Theoretical Informatics and Applications
DA - 2006/7//
PB - EDP Sciences
VL - 40
IS - 2
SP - 353
EP - 369
AB -
A number of methodological papers published in recent years
testify that the operations research community feels the need for a
thorough revision of its research methodology – see, for example,
[Barr et al., J. Heuristics 1 (1995) 9–32; Eiben and Jelasity,
Proceedings of the 2002 Congress on Evolutionary Computation
(CEC'2002) 582–587; Hooker, J. Heuristics 1 (1995) 33–42;
Rardin and Uzsoy, J. Heuristics 7 (2001) 261–304]. In particular, the
performance evaluation of nondeterministic methods, including widely
studied metaheuristics such as evolutionary computation and ant colony
optimization, requires the definition of new experimental protocols.
A careful and thorough analysis of the problem of evaluating
metaheuristics reveals strong similarities between this problem and
the problem of evaluating learning methods in the machine learning
field.
In this paper, we show that several conceptual tools commonly used in
machine learning – such as, for example, the probabilistic notion of
class of instances and the separation between the training and the
testing datasets – fit naturally in the context of metaheuristics
evaluation.
Accordingly, we propose and discuss some principles inspired by the
experimental practice in machine learning for guiding the performance
evaluation of optimization algorithms.
Among these principles, a clear separation between the instances that
are used for tuning algorithms and those that are used in the actual
evaluation is particularly important for a proper assessment.
LA - eng
UR - http://eudml.org/doc/249732
ER -
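The central principle of the abstract – tune an algorithm's parameters on one set of instances, then report performance only on held-out instances – can be illustrated with a minimal sketch. The toy instance generator, the stochastic `solve` routine, and the candidate step sizes below are all hypothetical stand-ins, not anything from the paper itself:

```python
import random

def make_instance(rng):
    # Toy instance: minimize |x - target| over x in [0, 100].
    return {"target": rng.uniform(0, 100)}

def solve(instance, step, rng, iters=50):
    # Simple stochastic local search whose behavior depends on `step`.
    x = rng.uniform(0, 100)
    best = abs(x - instance["target"])
    for _ in range(iters):
        cand = x + rng.uniform(-step, step)
        cost = abs(cand - instance["target"])
        if cost < best:
            x, best = cand, cost
    return best

rng = random.Random(42)
instances = [make_instance(rng) for _ in range(40)]
# Strict separation: tuning instances never overlap testing instances.
tuning, testing = instances[:20], instances[20:]

# Tune: select the step size with the best mean cost on the tuning set only.
candidates = [0.5, 2.0, 8.0]
best_step = min(
    candidates,
    key=lambda s: sum(solve(i, s, random.Random(0)) for i in tuning) / len(tuning),
)

# Evaluate: report performance on instances never seen during tuning.
test_cost = sum(solve(i, best_step, random.Random(1)) for i in testing) / len(testing)
print(best_step, test_cost)
```

Reporting `test_cost` rather than the tuning-set cost is the point: reusing the tuning instances for evaluation would overstate performance, exactly as evaluating a learned model on its training data would.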
References
- R.S. Barr, B.L. Golden, J.P. Kelly, M.G.C. Resende and W.R. Stewart, Designing and reporting computational experiments with heuristic methods. J. Heuristics 1 (1995) 9–32.
- M. Birattari, The Problem of Tuning Metaheuristics, as Seen from a Machine Learning Perspective. Ph.D. thesis, Université Libre de Bruxelles, Brussels, Belgium (2004).
- M. Birattari, T. Stützle, L. Paquete and K. Varrentrapp, A racing algorithm for configuring metaheuristics, in Proceedings of the Genetic and Evolutionary Computation Conference (GECCO-2002), edited by W.B. Langdon, et al. Morgan Kaufmann Publishers, San Francisco, CA (2002) 11–18.
- M. Demange, P. Grisoni and V.Th. Paschos, Differential approximation algorithms for some combinatorial optimization problems. Theoret. Comput. Sci. 209 (1998) 107–122.
- M. Dorigo and C. Blum, Ant colony optimization theory: A survey. Theoret. Comput. Sci. 344 (2005) 243–278.
- M. Dorigo and G. Di Caro, The Ant Colony Optimization meta-heuristic, in New Ideas in Optimization, edited by D. Corne, M. Dorigo and F. Glover. McGraw Hill, London, UK (1999) 11–32.
- M. Dorigo, G. Di Caro and L.M. Gambardella, Ant algorithms for discrete optimization. Artificial Life 5 (1999) 137–172.
- M. Dorigo, V. Maniezzo and A. Colorni, Ant System: Optimization by a colony of cooperating agents. IEEE Trans. Systems, Man, and Cybernetics – Part B 26 (1996) 29–41.
- M. Dorigo and T. Stützle, Ant Colony Optimization. MIT Press, Cambridge, MA (2004).
- A.E. Eiben and M. Jelasity, A Critical Note on Experimental Research Methodology in EC, in Proceedings of the 2002 Congress on Evolutionary Computation (CEC'2002), Piscataway, NJ, IEEE Press (2002) 582–587.
- L.J. Fogel, A.J. Owens and M.J. Walsh, Artificial Intelligence Through Simulated Evolution. John Wiley & Sons, New York, NY (1966).
- M.R. Garey and D.S. Johnson, Computers and Intractability: A Guide to the Theory of NP-Completeness. Freeman, San Francisco, CA (1979).
- F. Glover, Tabu search – part I. ORSA J. Comput. 1 (1989) 190–206.
- F. Glover, Tabu search – part II. ORSA J. Comput. 2 (1990) 4–32.
- D.E. Goldberg, Genetic Algorithms in Search, Optimization, and Machine Learning. Addison-Wesley, Reading, MA (1989).
- R. Hassin and S. Khuller, Z-approximations. J. Algorithms 41 (2001) 429–442.
- J. Holland, Adaptation in Natural and Artificial Systems. University of Michigan Press, Ann Arbor, MI (1975).
- J.N. Hooker, Testing heuristics: we have it all wrong. J. Heuristics 1 (1995) 33–42.
- D.S. Johnson, A theoretician's guide to the experimental analysis of algorithms, in Data structures, near neighbor searches, and methodology: 5th and 6th DIMACS implementation challenges. American Mathematical Society, Providence, RI (2002) 215–250.
- S.A. Kauffman, The Origins of Order. Self-Organization and Selection in Evolution. Oxford University Press, Oxford, UK (1993).
- S. Kirkpatrick, C.D. Gelatt Jr. and M.P. Vecchi, Optimization by simulated annealing. Science 220 (1983) 671–680.
- K. Liang, X. Yao and C. Newton, Adapting self-adaptive parameters in evolutionary algorithms. Appl. Intell. 15 (2001) 171–180.
- H.R. Lourenço, O. Martin and T. Stützle, Iterated local search, in Handbook of Metaheuristics. International Series in Operations Research & Management Science, edited by F. Glover and G. Kochenberger. Kluwer Academic Publishers, Norwell, MA 57 (2002) 321–353.
- D.G. Luenberger, Introduction to Linear and Nonlinear Programming. Addison-Wesley Publishing Company, Reading, MA (1973).
- O. Maron and A.W. Moore, Hoeffding races: Accelerating model selection search for classification and function approximation, in Advances in Neural Information Processing Systems, edited by J.D. Cowan, G. Tesauro and J. Alspector. Morgan Kaufmann Publishers, San Francisco, CA 6 (1994) 59–66.
- C.C. McGeoch, Toward an experimental method for algorithm simulation. INFORMS J. Comput. 2 (1996) 1–15.
- Z. Michalewicz and D.B. Fogel, How to Solve it: Modern Heuristics. Springer-Verlag, Berlin, Germany (2000).
- A.W. Moore and M.S. Lee, Efficient algorithms for minimizing cross validation error, in International Conference on Machine Learning. Morgan Kaufmann Publishers, San Francisco, CA (1994) 190–198.
- B.L. Nelson, J. Swann, D. Goldsman and W. Song, Simple procedures for selecting the best simulated system when the number of alternatives is large. Oper. Res. 49 (2001) 950–963.
- R.R. Rardin and R. Uzsoy, Experimental evaluation of heuristic optimization algorithms: A tutorial. J. Heuristics 7 (2001) 261–304.
- I. Rechenberg, Evolutionsstrategie – Optimierung technischer Systeme nach Prinzipien der biologischen Evolution. Fromman Verlag, Freiburg, Germany (1973).
- H.-P. Schwefel, Numerical Optimization of Computer Models. John Wiley & Sons, Chichester, UK (1981).
- I. Sommerville, Software Engineering. Addison Wesley, Harlow, UK, sixth edition (2001).
- M. Toussaint, Self-adaptive exploration in evolutionary search. Technical Report IRINI-2001-05, Institut für Neuroinformatik, Ruhr-Universität Bochum, Bochum, Germany (2001).
- D.H. Wolpert and W.G. Macready, No free lunch theorems for optimization. IEEE Transactions on Evolutionary Computation 1 (1997) 67–82.
- E. Zemel, Measuring the quality of approximate solutions to zero-one programming problems. Math. Oper. Res. 6 (1981) 319–332.
- M. Zlochin, M. Birattari, N. Meuleau and M. Dorigo, Model-based search for combinatorial optimization: A critical survey. Ann. Oper. Res. 131 (2004) 375–395.
- M. Zlochin and M. Dorigo, Model based search for combinatorial optimization: A comparative study, in Parallel Problem Solving from Nature – PPSN VII, edited by J.J. Merelo Guervós et al. Springer Verlag, Berlin, Germany (2002) 651–661.