
zbMATH — the first resource for mathematics

COCO: a platform for comparing continuous optimizers in a black-box setting. (English) Zbl 1464.90001
Summary: We introduce COCO, an open-source platform for Comparing Continuous Optimizers in a black-box setting. COCO aims to automate, to the greatest possible extent, the tedious and repetitive task of benchmarking numerical optimization algorithms. The platform and the underlying methodology make it possible to benchmark deterministic and stochastic solvers for both single- and multiobjective optimization within the same framework. We present the rationale behind the (decade-long) development of the platform as a general proposition for guidelines towards better benchmarking. We detail the fundamental concepts underlying COCO, such as the definition of a problem as a function instance, the underlying idea of instances, the use of target values, and runtime, defined as the number of function calls, as the central performance measure. Finally, we give a quick overview of the basic code structure and the currently available test suites.
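The two central notions named in the summary, a problem as a function *instance* and runtime measured in function calls until a target value is reached, can be illustrated with a minimal sketch. This is not the actual COCO (`cocoex`) API; the function and parameter names below (`make_sphere_instance`, `runtime_to_target`) are hypothetical, and random search stands in for an arbitrary black-box solver.

```python
import random

def make_sphere_instance(dim, seed):
    """An 'instance' of the sphere function: the base function combined
    with an instance-specific transformation (here, a seeded shift of
    the optimum), mirroring COCO's instance concept.  Hypothetical helper,
    not part of the cocoex API."""
    rng = random.Random(seed)
    shift = [rng.uniform(-4, 4) for _ in range(dim)]

    def f(x):
        return sum((xi - si) ** 2 for xi, si in zip(x, shift))

    return f

def runtime_to_target(problem, target, dim, budget, seed=0):
    """Run pure random search and return the 'runtime': the number of
    function calls until the target value is reached, or None if the
    budget is exhausted first."""
    rng = random.Random(seed)
    for evals in range(1, budget + 1):
        x = [rng.uniform(-5, 5) for _ in range(dim)]
        if problem(x) <= target:
            return evals  # runtime in number of function evaluations
    return None  # target not reached within the budget

problem = make_sphere_instance(dim=2, seed=1)
rt = runtime_to_target(problem, target=1.0, dim=2, budget=10000, seed=2)
print(rt)
```

Averaging such runtimes over several instances of the same function, and over a range of target values, is what turns raw function evaluations into the aggregated performance displays (e.g. empirical runtime distributions) that the platform produces.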
MSC:
90-04 Software, source code, etc. for problems pertaining to operations research and mathematical programming