
Solving dual problems using a coevolutionary optimization algorithm. (English) Zbl 1286.90141

Summary: In solving certain optimization problems, the corresponding Lagrangian dual problem is often solved simply because in these problems the dual problem is easier to solve than the original primal problem. Another reason is the weak duality theorem, which guarantees that the optimal dual function value is smaller than or equal to the optimal primal objective value, so the dual provides a lower bound on the primal optimum. The dual problem is a special case of a bilevel programming problem involving Lagrange multipliers as upper-level variables and decision variables as lower-level variables. Another interesting aspect of dual problems is that both the lower- and upper-level optimization problems involve only box constraints and no other equality or inequality constraints. In this paper, we propose a coevolutionary dual optimization (CEDO) algorithm for co-evolving two populations, one of Lagrange multipliers and the other of decision variables, to find the dual solution. On 11 test problems taken from the optimization literature, we demonstrate the efficacy of the CEDO algorithm by comparing it with a couple of nested smooth and nonsmooth algorithms and a couple of previously suggested coevolutionary algorithms. The performance of the CEDO algorithm is also compared with two classical methods involving nonsmooth (bundle) optimization methods. As a by-product, we analyze the test problems to determine their associated duality gaps and classify them into three categories having zero, finite or infinite duality gaps. The development of a coevolutionary approach, the identification of the presence or absence of duality gaps in a number of commonly used test problems, and the demonstrated efficacy of the proposed coevolutionary algorithm compared to the usual nested smooth and nonsmooth algorithms and other existing coevolutionary approaches are the hallmarks of the current study.
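The dual problem described above is a max-min (bilevel) problem: maximize \(\theta(\lambda)=\min_{x}\{f(x)+\lambda^\top g(x)\}\) over \(\lambda\ge 0\), with only box constraints at both levels. The sketch below illustrates the generic co-evolutionary max-min idea on which such approaches rest; it is not the authors' CEDO algorithm, and the example problem, variation operators, population sizes and parameter values are illustrative assumptions only.

```python
# A minimal sketch (not the authors' CEDO algorithm) of the generic
# co-evolutionary max-min idea for Lagrangian dual problems: one population of
# Lagrange multipliers is evolved to maximize the dual function, while a second
# population of decision variables is evolved to minimize the Lagrangian that
# defines it. Problem, operators and parameters are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Illustrative convex primal: minimize f(x) subject to g(x) <= 0 and box bounds.
f = lambda x: (x[0] - 2.0) ** 2 + (x[1] - 1.0) ** 2
g = lambda x: np.array([x[0] ** 2 - x[1],       # x0^2 - x1 <= 0
                        x[0] + x[1] - 2.0])     # x0 + x1 - 2 <= 0

def lagrangian(x, lam):
    """Lagrangian L(x, lambda) = f(x) + lambda^T g(x)."""
    return f(x) + lam @ g(x)

X_LO, X_HI = np.array([-3.0, -3.0]), np.array([3.0, 3.0])  # box on x (lower level)
L_LO, L_HI = np.zeros(2), 10.0 * np.ones(2)                # box on lambda >= 0 (upper level)

def next_generation(pop, scores, lo, hi, minimize):
    """Keep the better half, then create children by blend crossover + mutation."""
    order = np.argsort(scores if minimize else -scores)
    parents = pop[order[: len(pop) // 2]]
    children = []
    while len(children) < len(pop):
        a, b = parents[rng.integers(len(parents), size=2)]
        w = rng.uniform(-0.25, 1.25, size=a.shape)          # blend (BLX-like) crossover
        child = w * a + (1.0 - w) * b
        child += rng.normal(0.0, 0.05 * (hi - lo))          # Gaussian mutation
        children.append(np.clip(child, lo, hi))
    return np.array(children)

pop_x = rng.uniform(X_LO, X_HI, size=(40, 2))   # decision-variable population
pop_l = rng.uniform(L_LO, L_HI, size=(40, 2))   # multiplier population

for gen in range(200):
    # Fitness of a multiplier vector: estimate of the dual function
    # theta(lambda) = min_x L(x, lambda), taken over the current x-population.
    fit_l = np.array([min(lagrangian(x, lam) for x in pop_x) for lam in pop_l])
    best_lam = pop_l[np.argmax(fit_l)]
    # Fitness of a decision vector: Lagrangian value at the current best lambda.
    fit_x = np.array([lagrangian(x, best_lam) for x in pop_x])
    pop_l = next_generation(pop_l, fit_l, L_LO, L_HI, minimize=False)
    pop_x = next_generation(pop_x, fit_x, X_LO, X_HI, minimize=True)

print("estimated dual optimum:", fit_l.max(), "at lambda =", best_lam)
```

Because the example primal is convex, strong duality holds and the estimated dual optimum should approach the primal optimum (zero duality gap); for nonconvex test problems the same loop would only bound the primal optimum from below.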

MSC:

90C30 Nonlinear programming
90C46 Optimality conditions and duality in mathematical programming
90C59 Approximation methods and heuristics in mathematical programming

Software:

DGM; LMBM; LDGB

References:

[1] Bagirov A., Karasozen B., Sezer M.: Discrete gradient method: derivative-free method for nonsmooth optimization. J. Optim. Theory Appl. 137(2), 317-334 (2008) · Zbl 1165.90021 · doi:10.1007/s10957-007-9335-5
[2] Barbosa, H.J.C.: A genetic algorithm for min-max problems. In: Proceedings of the First International Conference on Evolutionary Computation and Its Application (EvCA’96), pp. 99-109 (1996)
[3] Barbosa, H.J.C.: A coevolutionary genetic algorithm for constrained optimization. In: Proceedings of the 1999 Congress on Evolutionary Computation (CEC-99), pp. 1605-1611 (1999)
[4] Bazaraa M.S., Sherali H.D., Shetty C.M.: Nonlinear Programming: Theory and Algorithms. Wiley, Singapore (2004) · Zbl 1140.90040
[5] Bradley S., Hax A., Magnanti T.: Applied Mathematical Programming. Addison-Wesley, Reading (1977)
[6] Branke, J., Rosenbusch, J.: New approaches to coevolutionary worst-case optimization. In: Proceedings of the Parallel Problem Solving from Nature Conference (PPSN-X), (LNCS 5199), pp. 144-153 (2008)
[7] Deb K.: Optimization for Engineering Design: Algorithms and Examples. Prentice-Hall, New Delhi (1995)
[8] Deb K.: Multi-Objective Optimization Using Evolutionary Algorithms. Wiley, Chichester (2001) · Zbl 0970.90091
[9] Deb K., Agrawal R.B.: Simulated binary crossover for continuous search space. Complex Syst. 9(2), 115-148 (1995) · Zbl 0843.68023
[10] Deb, K., Datta, R.: A fast and accurate solution of constrained optimization problems using a hybrid bi-objective and penalty function approach. In: Proceedings of the IEEE World Congress on Computational Intelligence (WCCI-2010), pp. 165-172 (2010)
[11] Goldberg D.E., Deb K., Clark J. H.: Genetic algorithms, noise, and the sizing of populations. Complex Syst. 6(4), 333-362 (1992) · Zbl 0800.68600
[12] Goldfarb D., Idnani A.: A numerically stable dual method for solving strictly convex quadratic programs. Math. Program. 27, 1-33 (1983) · Zbl 0537.90081 · doi:10.1007/BF02591962
[13] Haarala M., Miettinen K., Mäkelä M.M.: New limited memory bundle method for large-scale nonsmooth optimization. Optim. Methods Softw. 19(6), 673-692 (2004) · Zbl 1068.90101 · doi:10.1080/10556780410001689225
[14] Herrmann, J.W.: A genetic algorithm for min-max optimization problems. In: Proceedings of the 1999 Congress on Evolutionary Computation (CEC-99), pp. 1099-1103 (1999)
[15] Jensen, M.T.: A new look at solving minmax problems with coevolution. In: Proceedings of the Fourth Metaheuristics International Conference (MIC-2001), pp. 103-107 (2001)
[16] Karmitsa, N.: LMBM FORTRAN subroutines for large-scale nonsmooth minimization: user’s manual. TUCS Technical Report No. 856, Turku Centre for Computer Science, Turku, Finland (2007)
[17] Kiwiel, K.C.: Restricted step and Levenberg-Marquardt techniques in proximal bundle methods in nondifferentiable optimization. Technical Report, Polish Academy of Sciences (1994) · Zbl 0846.65028
[18] Lemarechal, C., Zowe, J.: A condensed introduction to bundle methods in nonsmooth optimization. In: Algorithms for Continuous Optimization: The State of the Art, Proceedings of the NATO Advanced Study Institute, pp. 357-382 (1993) · Zbl 0828.90124
[19] Lewis R.M., Kolda T.G., Torczon V.: Optimization by direct search: new perspectives on some classical and modern methods. SIAM Rev. 45, 385-482 (2003) · Zbl 1059.90146 · doi:10.1137/S003614450242889
[20] Malek A., Yari A.: Primal-dual solution for the linear programming problems using neural networks. Appl. Math. Comput. 167(1), 198-211 (2005) · Zbl 1079.65069 · doi:10.1016/j.amc.2004.06.081
[21] Miettinen K., Mäkelä M. M.: Interactive bundle-based method for nondifferentiable multiobjective optimization: NIMBUS. Optimization 34, 231-246 (1995) · Zbl 0855.90114 · doi:10.1080/02331939508844109
[22] Mordukhovich B.S., Nam N. M.: Subgradients of distance functions at out-of-set points. Taiwan. J. Math. 10(2), 299-326 (2006) · Zbl 1103.49008
[23] Oduguwa, V., Roy, R.: Bi-level optimisation using genetic algorithm. In: Proceedings of the 2002 IEEE International Conference on Artificial Intelligence Systems (ICAIS-02), pp. 322-327 (2002)
[24] Rao, S.S.: Genetic algorithmic approach for multiobjective optimization of structures. In: Proceedings of the ASME Annual Winter Meeting on Structures and Controls Optimization, vol. 38, pp. 29-38 (1993)
[25] Reklaitis G.V., Ravindran A., Ragsdell K.M.: Engineering Optimization Methods and Applications. Wiley, New York (1983)
[26] Rockafellar R.T.: Convex Analysis. Princeton University Press, Princeton (1996) · Zbl 0878.49012
[27] Rudolph G.: Convergence analysis of canonical genetic algorithms. IEEE Trans. Neural Netw. 5(1), 96-101 (1994) · doi:10.1109/72.265964
[28] Ruszczyński A.: Nonlinear Optimization. Princeton University Press, Princeton (2006) · Zbl 1108.90001
[29] Schramm H., Zowe J.: A version of the bundle idea for minimizing a nonsmooth function: conceptual idea, convergence analysis, numerical results. SIAM J. Optim. 2, 121-152 (1992) · Zbl 0761.90090 · doi:10.1137/0802008
[30] Tulshyan, R., Arora, R., Deb, K., Dutta, J.: Investigating EA solutions for approximate KKT conditions for smooth problems. In: Proceedings of the Genetic and Evolutionary Computation Conference (GECCO-2010), pp. 689-696 (2010)
[31] Zhao X., Luh P.B.: New bundle methods for solving Lagrangian relaxation dual problems. J. Optim. Theory Appl. 113(2), 373-397 (2002) · Zbl 1015.90089 · doi:10.1023/A:1014839227049
[32] Zhou, A., Zhang, Q.: A surrogate assisted evolutionary algorithm for minimax optimization. In: Proceedings of the 2010 Congress on Evolutionary Computation (CEC-10) (2010)
This reference list is based on information provided by the publisher or from digital mathematics libraries. Its items are heuristically matched to zbMATH identifiers and may contain data conversion errors. In some cases, these data have been complemented or enhanced by data from zbMATH Open. The list attempts to reflect the references in the original paper as accurately as possible, without claiming completeness or perfect matching.