Discovering the suitability of optimisation algorithms by learning from evolved instances. (English) Zbl 1236.49008
Summary: The suitability of an optimisation algorithm selected from an algorithm portfolio depends on the features of the particular instance to be solved. Understanding the relative strengths and weaknesses of the algorithms in the portfolio is crucial for effective performance prediction and automated algorithm selection, and for generating knowledge about the ideal conditions under which each algorithm performs well, which in turn can inform better algorithm design. Relying on well-studied benchmark instances, or on randomly generated instances, limits our ability to truly challenge each algorithm in a portfolio and to determine these ideal conditions. Instead, we use an evolutionary algorithm to evolve instances that are uniquely easy or hard for each algorithm, providing a more direct method for studying the relative strengths and weaknesses of each algorithm. The proposed methodology ensures that the metadata are sufficient for learning the instance features that uniquely characterise the ideal conditions for each algorithm. A case study is presented, based on a comprehensive study of the performance of two heuristics on the Travelling Salesman Problem. The results show that both the search effort and the best-performing algorithm for a given instance can be predicted with high accuracy.
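
The core loop described in the summary, an evolutionary search over problem instances whose fitness is the performance gap between two target algorithms, can be illustrated with a small toy example. The Python sketch below is a minimal, illustrative rendering under assumed simplifications: the two heuristics (a nearest-neighbour constructor and a randomised 2-opt improver), the use of tour length as the performance measure in place of the paper's search-effort measure, the coordinate-perturbation mutation, and all parameter values are placeholders rather than the authors' actual experimental setup.

```python
import math
import random

# Minimal, illustrative sketch: evolve TSP instances (sets of city coordinates)
# whose fitness is the performance gap between two placeholder heuristics.

def tour_length(cities, tour):
    """Total length of a closed tour over the given city coordinates."""
    return sum(math.dist(cities[tour[i]], cities[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def nearest_neighbour(cities):
    """Heuristic A (placeholder): greedy nearest-neighbour construction."""
    unvisited = set(range(1, len(cities)))
    tour = [0]
    while unvisited:
        last = tour[-1]
        nxt = min(unvisited, key=lambda j: math.dist(cities[last], cities[j]))
        unvisited.remove(nxt)
        tour.append(nxt)
    return tour

def random_two_opt(cities, iters=200):
    """Heuristic B (placeholder): random start plus improving 2-opt moves."""
    tour = list(range(len(cities)))
    random.shuffle(tour)
    for _ in range(iters):
        i, j = sorted(random.sample(range(len(cities)), 2))
        candidate = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]
        if tour_length(cities, candidate) < tour_length(cities, tour):
            tour = candidate
    return tour

def performance_gap(cities):
    """Instance fitness: how much longer B's tour is than A's on this instance.
    Maximising it evolves instances that are easy for A and hard for B
    (tour length stands in for the search-effort measure used in the paper)."""
    a = tour_length(cities, nearest_neighbour(cities))
    b = tour_length(cities, random_two_opt(cities))
    return b - a

def mutate(cities, sigma=5.0, size=100.0):
    """Perturb one randomly chosen city, clipped to the [0, size] square."""
    new = list(cities)
    k = random.randrange(len(new))
    x, y = new[k]
    new[k] = (min(max(x + random.gauss(0, sigma), 0.0), size),
              min(max(y + random.gauss(0, sigma), 0.0), size))
    return new

def evolve_instance(n_cities=30, pop_size=10, generations=30):
    """(mu + lambda)-style loop over instances; returns the largest-gap instance."""
    pop = [[(random.uniform(0, 100), random.uniform(0, 100)) for _ in range(n_cities)]
           for _ in range(pop_size)]
    for _ in range(generations):
        offspring = [mutate(random.choice(pop)) for _ in range(pop_size)]
        # Fitness is noisy because heuristic B is randomised; a real study would
        # average over repeated runs before selecting survivors.
        pop = sorted(pop + offspring, key=performance_gap, reverse=True)[:pop_size]
    return pop[0]

if __name__ == "__main__":
    random.seed(0)
    instance = evolve_instance()
    print("performance gap on evolved instance:", round(performance_gap(instance), 2))
```

Maximising the gap yields instances that are comparatively easy for the first heuristic and hard for the second; repeating the search with the fitness negated yields instances with the opposite character, and the features measured on the two evolved sets provide the metadata for the learning stage described in the summary.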

MSC:
49-04 Software, source code, etc. for problems pertaining to calculus of variations and optimal control
90C27 Combinatorial optimization
68Q25 Analysis of algorithms and problem complexity
68Q87 Probability in computer science (algorithm analysis, random structures, phase transitions, etc.)
68T05 Learning and adaptive systems in artificial intelligence
68T20 Problem solving in the context of artificial intelligence (heuristics, search strategies, etc.)
90B99 Operations research and management science
Software:
Hyperheuristics
References:
[1] Applegate, D., Cook, W., Rohe, A.: Chained Lin-Kernighan for large traveling salesman problems. INFORMS J. Comput. 15(1), 82–92 (2003) · Zbl 1238.90125
[2] Bachelet, V.: Métaheuristiques parallèles hybrides: application au problème d’affectation quadratique. Ph.D. thesis, Université des Sciences et Technologies de Lille (1999)
[3] Battiti, R.: Using mutual information for selecting features in supervised neural net learning. IEEE Trans. Neural Netw. 5(4), 537–550 (1994)
[4] Burke, E., Kendall, G., Newall, J., Hart, E., Ross, P., Schulenburg, S.: Hyper-heuristics: an emerging direction in modern search technology. International Series in Operations Research and Management Science, pp. 457–474 (2003) · Zbl 1102.90377
[5] Cheeseman, P., Kanefsky, B., Taylor, W.: Where the really hard problems are. In: Proceedings of the 12th International Joint Conference on Artificial Intelligence (IJCAI), pp. 331–337 (1991) · Zbl 0747.68064
[6] Cho, Y., Moore, J., Hill, R., Reilly, C.: Exploiting empirical knowledge for bi-dimensional knapsack problem heuristics. International Journal of Industrial and Systems Engineering 3(5), 530–548 (2008)
[7] Corne, D., Reynolds, A.: Optimisation and generalisation: footprints in instance space. Parallel Problem Solving from Nature–PPSN XI, pp. 22–31 (2010)
[8] Gaertner, D., Clark, K.: On optimal parameters for ant colony optimization algorithms. In: Proceedings of the 2005 International Conference on Artificial Intelligence, vol. 1, pp. 83–89 (2005)
[9] Gent, I., Walsh, T.: The TSP phase transition. Artif. Intell. 88(1–2), 349–358 (1996) · Zbl 0907.68177
[10] Gras, R.: How efficient are genetic algorithms to solve high epistasis deceptive problems? In: IEEE Congress on Evolutionary Computation. CEC 2008. (IEEE World Congress on Computational Intelligence), pp. 242–249 (2008)
[11] Hall, N., Posner, M.: Performance prediction and preselection for optimization and heuristic solution procedures. Oper. Res. 55(4), 703–716 (2007) · Zbl 1167.90689
[12] van Hemert, J.: Property analysis of symmetric travelling salesman problem instances acquired through evolution. In: Proceedings of the European Conference on Evolutionary Computation in Combinatorial Optimization (EvoCop 2005). LNCS, vol. 3448, pp. 122–131, Springer (2005) · Zbl 1119.90368
[13] van Hemert, J.: Evolving combinatorial problem instances that are difficult to solve. Evol. Comput. 14(4), 433–462 (2006) · Zbl 05412892
[14] van Hemert, J., Urquhart, N.: Phase transition properties of clustered travelling salesman problem instances generated with evolutionary computation. In: Parallel Problem Solving from Nature-PPSN VIII. LNCS, vol. 3242, pp. 151–160, Springer (2004)
[15] Johnson, D., McGeoch, L.: The traveling salesman problem: a case study. In: Aarts, E., Lenstra, J. (eds.) Local Search in Combinatorial Optimization, chap. 8, pp. 215–310. John Wiley & Sons, Inc (1997) · Zbl 0947.90612
[16] Kilby, P., Slaney, J., Walsh, T.: The backbone of the travelling salesperson. In: International Joint Conference on Artificial Intelligence, vol. 19, pp. 175–180 (2005)
[17] Kohonen, T.: The self-organizing map. Proc. IEEE 78, 1464–1480 (1990)
[18] Kratica, J., Ljubić, I., Tošić, D.: A genetic algorithm for the index selection problem. In: Raidl, G., et al. (eds.) Applications of Evolutionary Computation, vol. 2611, pp. 281–291. Springer-Verlag (2003)
[19] Leyton-Brown, K., Nudelman, E., Shoham, Y.: Learning the empirical hardness of optimization problems: The case of combinatorial auctions. In: Principles and Practice of Constraint Programming–CP 2002. Lecture Notes in Computer Science, vol. 2470, pp. 556–572, Springer (2002)
[20] Leyton-Brown, K., Nudelman, E., Shoham, Y.: Empirical hardness models: Methodology and a case study on combinatorial auctions. J. ACM (JACM) 56(4), 1–52 (2009) · Zbl 1325.68110
[21] Lin, S., Kernighan, B.: An efficient heuristic algorithm for the traveling salesman problem. Oper. Res. 21(2), 498–516 (1973) · Zbl 0256.90038
[22] Locatelli, M., Wood, G.: Objective Function Features Providing Barriers to Rapid Global Optimization. J. Glob. Optim. 31(4), 549–565 (2005) · Zbl 1093.90093
[23] Macready, W., Wolpert, D.: What makes an optimization problem hard. Complexity 5, 40–46 (1996) · Zbl 1455.90156
[24] Nudelman, E., Leyton-Brown, K., Hoos, H., Devkar, A., Shoham, Y.: Understanding random SAT: beyond the clauses-to-variables ratio. Principles and Practice of Constraint Programming–CP, 2004. Lecture Notes in Computer Science, vol. 3258, pp. 438–452 (2004) · Zbl 1152.68569
[25] Pfahringer, B., Bensusan, H., Giraud-Carrier, C.: Meta-learning by landmarking various learning algorithms. In: Proceedings of the Seventeenth International Conference on Machine Learning, pp. 743–750. Morgan Kaufmann Publishers Inc., San Francisco, CA, USA (2000)
[26] Reeves, C.: Landscapes, operators and heuristic search. Ann. Oper. Res. 86, 473–490 (1999) · Zbl 0921.90095
[27] Rice, J.: The Algorithm Selection Problem. Adv. Comput. 15, 65–118 (1976)
[28] Ridge, E., Kudenko, D.: An analysis of problem difficulty for a class of optimisation heuristics. Evolutionary Computation in Combinatorial Optimization. Lecture Notes in Computer Science, vol. 4446, pp. 198 (2007) · Zbl 1159.90529
[29] Sander, J., Ester, M., Kriegel, H., Xu, X.: Density-based clustering in spatial databases: The algorithm gdbscan and its applications. Data Mining and Knowledge Discovery 2(2), 169–194 (1998) · Zbl 05470544
[30] Schiavinotto, T., Stützle, T.: A review of metrics on permutations for search landscape analysis. Comput. Oper. Res. 34(10), 3143–3153 (2007) · Zbl 1185.90115
[31] Smith-Miles, K.: Towards insightful algorithm selection for optimisation using meta-learning concepts. In: IEEE International Joint Conference on Neural Networks, 2008. IJCNN 2008. (IEEE World Congress on Computational Intelligence), pp. 4118–4124 (2008)
[32] Smith-Miles, K., van Hemert, J., Lim, X.: Understanding TSP difficulty by learning from evolved instances. In: Proceedings of the 4th Learning and Intelligent Optimization conference. Lecture Notes in Computer Science, vol. 6073, pp. 266–280 (2010)
[33] Smith-Miles, K., James, R., Giffin, J., Tu, Y.: A knowledge discovery approach to understanding relationships between scheduling problem structure and heuristic performance. In: Proceedings of the 3rd Learning and Intelligent Optimization conference. Lecture Notes in Computer Science, vol. 5851, pp. 89–103 (2009)
[34] Smith-Miles, K.A., Lopes, L.B.: Measuring Instance Difficulty for Combinatorial Optimization Problems. Computers and Operations Research, under revision (2011) · Zbl 1251.90339
[35] Viscovery SOMine: Enterprise Edition, Version 3.0. Eudaptics Software GmbH (1999)
[36] Stadler, P., Schnabl, W.: The landscape of the traveling salesman problem. Phys. Lett. A 161(4), 337–344 (1992) · Zbl 0979.90509
[37] Thiebaux, S., Slaney, J., Kilby, P.: Estimating the hardness of optimisation. In: Proceedings of the European Conference on Artificial Intelligence, pp. 123–130 (2000)
[38] Vasconcelos, N.: Feature selection by maximum marginal diversity: optimality and implications for visual recognition. In: 2003 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2003. Proceedings, vol. 1 (2003)
[39] Xin, B., Chen, J., Pan, F.: Problem difficulty analysis for particle swarm optimization: deception and modality. In: Proceedings of the first ACM/SIGEVO Summit on Genetic and Evolutionary Computation, pp. 623–630 (2009)
[40] Xu, L., Hutter, F., Hoos, H., Leyton-Brown, K.: SATzilla-07: The design and analysis of an algorithm portfolio for SAT. In: Proceedings of the 13th International Conference on Principles and Practice of Constraint Programming. Lecture Notes in Computer Science, vol. 4741, pp. 712–727 (2007)
[41] Zhang, W.: Phase transitions and backbones of the asymmetric traveling salesman problem. J. Artif. Intell. Res. 21, 471–497 (2004) · Zbl 1081.90055
[42] Zhang, W., Korf, R.: A study of complexity transitions on the asymmetric traveling salesman problem. Artif. Intell. 81(1–2), 223–239 (1996)
This reference list is based on information provided by the publisher or from digital mathematics libraries. Its items are heuristically matched to zbMATH identifiers and may contain data conversion errors. It attempts to reflect the references listed in the original paper as accurately as possible without claiming the completeness or perfect precision of the matching.