Recent zbMATH articles in MSC 90Chttps://zbmath.org/atom/cc/90C2022-09-13T20:28:31.338867ZUnknown authorWerkzeugBook review of: L. Lovász, Graphs and geometryhttps://zbmath.org/1491.000162022-09-13T20:28:31.338867Z"Meunier, Frédéric"https://zbmath.org/authors/?q=ai:meunier.fredericReview of [Zbl 1425.05001].Book review of: H. Sato, Riemannian optimization and its applicationshttps://zbmath.org/1491.000292022-09-13T20:28:31.338867Z"Sembach, Lena"https://zbmath.org/authors/?q=ai:sembach.lenaReview of [Zbl 1456.90001].The relation between polynomial calculus, Sherali-Adams, and sum-of-squares proofshttps://zbmath.org/1491.030772022-09-13T20:28:31.338867Z"Berkholz, Christoph"https://zbmath.org/authors/?q=ai:berkholz.christophSummary: We relate different approaches for proving the unsatisfiability of a system of real polynomial equations over Boolean variables. On the one hand, there are the static proof systems Sherali-Adams and sum-of-squares (a.k.a. Lasserre), which are based on linear and semi-definite programming relaxations. On the other hand, we consider polynomial calculus, which is a dynamic algebraic proof system that models Gröbner basis computations.
Our first result is that sum-of-squares simulates polynomial calculus: any polynomial calculus refutation of degree \(d\) can be transformed into a sum-of-squares refutation of degree \(2d\) with only a polynomial increase in size. In contrast, our second result shows that this is not the case for Sherali-Adams: there are systems of polynomial equations that have polynomial calculus refutations of degree 3 and polynomial size, but require Sherali-Adams refutations of large degree \(\Omega(\sqrt{n}/\log n)\) and exponential size.
A corollary of our first result is that the proof systems Positivstellensatz and Positivstellensatz Calculus, which have been separated over non-Boolean polynomials, simulate each other in the presence of Boolean axioms.
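As a toy illustration (not from the paper) of how such static certificates look: the Boolean-axiom system \(f_1 = x\), \(f_2 = x - 1\) is unsatisfiable, and a degree-1 certificate with a zero sum-of-squares part already refutes it:

```latex
% Hypothetical toy example: refute {x = 0, x - 1 = 0} by exhibiting
% multipliers and an SOS part sigma with  -1 = lambda_1 f_1 + lambda_2 f_2 + sigma.
\[
  -1 \;=\; (-1)\cdot\underbrace{x}_{f_1} \;+\; 1\cdot\underbrace{(x-1)}_{f_2} \;+\; \underbrace{0}_{\sigma\ \text{(SOS)}} .
\]
```

In a polynomial calculus refutation the same contradiction is derived dynamically: from the axioms \(x\) and \(x-1\) one infers the constant polynomial \(1\) by a single linear combination.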
For the entire collection see [Zbl 1381.68010].On the facets of stable set polytopes of circular interval graphshttps://zbmath.org/1491.051582022-09-13T20:28:31.338867Z"Oriolo, Gianpaolo"https://zbmath.org/authors/?q=ai:oriolo.gianpaolo"Stauffer, Gautier"https://zbmath.org/authors/?q=ai:stauffer.gautierSummary: A linear description of the stable set polytope \(STAB (G)\) of a quasi-line graph \(G\) is given in [\textit{F. Eisenbrand} et al., Combinatorica 28, No. 1, 45--67 (2008; Zbl 1246.05138)], where the so-called Ben Rebea Theorem [\textit{G. Oriolo}, Discrete Appl. Math. 132, No. 1--3, 185--201 (2003; Zbl 1052.90108)] is proved. Such a theorem establishes that, for quasi-line graphs, \( STAB (G)\) is completely described by non-negativity constraints, clique inequalities, and clique family inequalities (CFIs). As quasi-line graphs are a superclass of line graphs, the Ben Rebea Theorem can be seen as a generalization of Edmonds' characterization of the matching polytope [\textit{J. Edmonds}, J. Res. Natl. Bur. Stand., Sect. B 69, 125--130 (1965; Zbl 0141.21802)], showing that the matching polytope can be described by non-negativity constraints, degree constraints and odd-set inequalities. Unfortunately, the description given by the Ben Rebea Theorem is not minimal, i.e., it is not known which (non-rank) clique family inequalities are facet defining for \(STAB (G)\). On the contrary, it would be highly desirable to have a minimal description of \(STAB (G)\), paralleling that of \textit{W. Pulleyblank} and \textit{J. Edmonds} [Lect. Notes Math. 411, 214--242 (1974; Zbl 0317.05119)] for the matching polytope. In this paper, we start the investigation of a minimal linear description for the stable set polytope of quasi-line graphs. We focus on circular interval graphs, a subclass of quasi-line graphs that is central in the proof of the Ben Rebea Theorem.
For this class of graphs, we take an important step forward, showing some strong sufficient conditions for a CFI to induce a facet of \(STAB (G)\). In particular, these conditions turn out to be related to the existence of certain proper circulant graphs as subgraphs of \(G\). These results allow us to settle two conjectures on the structure of facet defining inequalities of the stable set polytope of circulant graphs [\textit{A. Pêcher} and \textit{A. K. Wagler}, Math. Program. 105, No. 2--3 (B), 311--328 (2006; Zbl 1083.05032)] and of (fuzzy) circular graphs [\textit{G. Oriolo} and \textit{G. Stauffer}, Math. Program. 115, No. 2 (A), 291--317 (2008; Zbl 1176.90655)], and to slightly refine the Ben Rebea Theorem itself.Arbitrarily regularizable graphshttps://zbmath.org/1491.051832022-09-13T20:28:31.338867Z"Bozzo, Enrico"https://zbmath.org/authors/?q=ai:bozzo.enrico"Franceschet, Massimo"https://zbmath.org/authors/?q=ai:franceschet.massimoSummary: A graph is regularizable if it is possible to assign weights to its edges so that all nodes have the same degree. Weights can be positive, nonnegative or arbitrary, provided the regularization degree is not null. Positive and nonnegative regularizable graphs have been thoroughly investigated in the literature. In this work, we propose and study arbitrarily regularizable graphs. In particular, we investigate necessary and sufficient regularization conditions on the topology of the graph and of the corresponding adjacency matrix.
Moreover, we study the computational complexity of the regularization problem and characterize it as a linear programming model.Systems of polynomials with at least one positive real zerohttps://zbmath.org/1491.120012022-09-13T20:28:31.338867Z"Wang, Jie"https://zbmath.org/authors/?q=ai:wang.jie|wang.jie.2|wang.jie.1|wang.jie.3|wang.jie.4Convergence of asymptotic costs for random Euclidean matching problemshttps://zbmath.org/1491.351382022-09-13T20:28:31.338867Z"Goldman, Michael"https://zbmath.org/authors/?q=ai:goldman.michael"Trevisan, Dario"https://zbmath.org/authors/?q=ai:trevisan.darioThe authors prove the existence of the thermodynamic limit for the matching problem of a Poisson point process to reference measure. Then they give the analog theorem for the average minimum cost of a bipartite matching between two Poisson point processes on the unit cube in \(d\) dimensions (\(d\ge 3\)). Stronger convergence results for the corresponding deterministic problem are finally presented.
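To make the regularization question in the arbitrarily-regularizable-graphs entry above concrete, here is a hedged sketch (my own toy formulation, not the authors' model): for *arbitrary* edge weights, regularizability to a prescribed nonzero degree reduces to feasibility of the linear system \(Bw = r\mathbf{1}\) over the node-edge incidence matrix \(B\), checked here by exact Gaussian elimination; the nonnegative variant would instead require the linear programming model mentioned in the abstract.

```python
from fractions import Fraction

def is_arbitrarily_regularizable(n, edges, degree=1):
    """Do edge weights (of any sign) exist giving every node the same weighted
    degree?  Feasibility of B w = degree * 1 for the node-edge incidence
    matrix B, decided by exact Gaussian elimination over the rationals."""
    m = len(edges)
    # Augmented matrix [B | degree * 1], one row per node.
    rows = [[Fraction(0)] * m + [Fraction(degree)] for _ in range(n)]
    for j, (u, v) in enumerate(edges):
        rows[u][j] = rows[v][j] = Fraction(1)
    r = 0
    for c in range(m):
        piv = next((i for i in range(r, n) if rows[i][c] != 0), None)
        if piv is None:
            continue
        rows[r], rows[piv] = rows[piv], rows[r]
        for i in range(n):
            if i != r and rows[i][c] != 0:
                f = rows[i][c] / rows[r][c]
                rows[i] = [a - f * b for a, b in zip(rows[i], rows[r])]
        r += 1
        if r == n:
            break
    # Inconsistent iff some row reads [0 ... 0 | nonzero].
    return all(any(row[c] != 0 for c in range(m)) or row[m] == 0 for row in rows)

# The 4-cycle is regularizable (weight 1/2 on every edge gives degree 1),
# while the 3-node path forces the common degree to be zero.
print(is_arbitrarily_regularizable(4, [(0, 1), (1, 2), (2, 3), (3, 0)]))  # True
print(is_arbitrarily_regularizable(3, [(0, 1), (1, 2)]))                  # False
```

Exact rational arithmetic avoids the floating-point rank decisions that a numerical feasibility check would have to make.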
Reviewer: Rodica Luca (Iaşi)An orthogonalization-free parallelizable framework for all-electron calculations in density functional theoryhttps://zbmath.org/1491.354022022-09-13T20:28:31.338867Z"Gao, Bin"https://zbmath.org/authors/?q=ai:gao.bin"Hu, Guanghui"https://zbmath.org/authors/?q=ai:hu.guanghui"Kuang, Yang"https://zbmath.org/authors/?q=ai:kuang.yang"Liu, Xin"https://zbmath.org/authors/?q=ai:liu.xin.1Conditions of discreteness of the spectrum for Schrödinger operator and some optimization problems for capacity and measureshttps://zbmath.org/1491.470362022-09-13T20:28:31.338867Z"Zelenko, Leonid"https://zbmath.org/authors/?q=ai:zelenko.leonidSummary: For the Schrödinger operator \(H=-\Delta+V(\mathbf{x})\cdot\), acting in the space \(L_2(\mathbb{R}^d)\) (\(d\ge 3\)), with \(V(\mathbf{x})\ge 0\) and \(V(\cdot)\in L_{1,\mathrm{loc}}(\mathbb{R}^d)\), we obtain some constructive conditions for discreteness of its spectrum. Based on the Mazya-Shubin criterion for discreteness of the spectrum of \(H\) and using the isocapacity inequality and the concept of base polyhedron for the harmonic capacity, we have estimated from below the cost functional of an optimization problem involved in this criterion, replacing a submodular constraint (in terms of the harmonic capacity) by a weaker but additive constraint (in terms of a measure). In this way, we obtain an optimization problem which can be considered as an infinite-dimensional analogue of the optimal covering problem. We have solved this problem for the case of a non-atomic measure. This approach enables us to obtain for the operator \(H\) some sufficient conditions for discreteness of its spectrum in terms of non-increasing rearrangements with respect to measures from the base polyhedron, for some functions connected with the potential \(V(\mathbf{x})\).
We construct some counterexamples, which allow us to compare our results with one another and with the results of other authors.On circumcenter mappings induced by nonexpansive operatorshttps://zbmath.org/1491.470422022-09-13T20:28:31.338867Z"Bauschke, Heinz H."https://zbmath.org/authors/?q=ai:bauschke.heinz-h"Ouyang, Hui"https://zbmath.org/authors/?q=ai:ouyang.hui"Wang, Xianfu"https://zbmath.org/authors/?q=ai:wang.xianfuSummary: We introduce the circumcenter mapping induced by a set of (usually nonexpansive) operators. One prominent example of a circumcenter mapping is the celebrated Douglas-Rachford splitting operator. Our study is motivated by the circumcentered-Douglas-Rachford method recently introduced by \textit{R. Behling} et al. [Numer. Algorithms 78, No. 3, 759--776 (2018; Zbl 1395.49023)] in order to accelerate the Douglas-Rachford method for solving certain classes of feasibility problems. We systematically explore the properness of the circumcenter mapping induced by reflectors or projectors. Numerous examples are presented. We also present a version of Browder's demiclosedness principle for circumcenter mappings.Split systems of general nonconvex variational inequalities and fixed point problemshttps://zbmath.org/1491.470582022-09-13T20:28:31.338867Z"Chen, Jin-Zuo"https://zbmath.org/authors/?q=ai:chen.jinzuoSummary: The purpose of this paper is to introduce and study split systems of general nonconvex variational inequalities. Taking advantage of the projection technique over uniformly prox-regular sets and utilizing two nonlinear operators, we propose and analyze an iterative scheme for solving the split systems of general nonconvex variational inequalities and fixed point problems. We prove that the sequence generated by the suggested iterative algorithm converges strongly to a common solution of the foregoing split problem and fixed point problem. The result presented in this paper extends and improves some well-known results in the literature.
A numerical example illustrates the theoretical result.Accelerated modified inertial Mann and viscosity algorithms to find a fixed point of \(\alpha\)-inverse strongly monotone operatorshttps://zbmath.org/1491.470622022-09-13T20:28:31.338867Z"Hammad, Hasanen A."https://zbmath.org/authors/?q=ai:hammad.hasanen-abuelmagd"Rehman, Habib ur"https://zbmath.org/authors/?q=ai:rehman.habib-ur"de la Sen, Manuel"https://zbmath.org/authors/?q=ai:de-la-sen.manuelSummary: In this paper, strong convergence results for \(\alpha \)-inverse strongly monotone operators under new algorithms in the framework of Hilbert spaces are discussed. Our algorithms are the combination of the inertial Mann forward-backward method with the CQ-shrinking projection method and the viscosity algorithm. Our methods lead to an acceleration of modified inertial Mann Halpern and viscosity algorithms. Finally, numerical examples are presented to illustrate the applications, performance, and effectiveness of our algorithms.A splitting algorithm for equilibrium problem given by the difference of two bifunctionshttps://zbmath.org/1491.470692022-09-13T20:28:31.338867Z"Pham Ky Anh"https://zbmath.org/authors/?q=ai:pham-ky-anh."Trinh Ngoc Hai"https://zbmath.org/authors/?q=ai:trinh-ngoc-hai.Summary: In this paper, we introduce a splitting algorithm for solving equilibrium problems given by the difference of two bifunctions in a real Hilbert space. Under suitable assumptions on the component bifunctions, we prove strong convergence of the proposed algorithm. In contrast to most existing projection-type methods for equilibrium problems, our algorithm does not require any convexity or monotonicity conditions on the resulting bifunction.
Some numerical experiments and comparisons are given to illustrate the efficiency of the proposed algorithm.Strong convergence analysis of a monotone projection algorithm in a Banach spacehttps://zbmath.org/1491.470712022-09-13T20:28:31.338867Z"Qin, Xiaolong"https://zbmath.org/authors/?q=ai:qin.xiaolong"Bin Dehaish, B. A."https://zbmath.org/authors/?q=ai:bin-dehaish.buthinah-abdullatif"Latif, Abdul"https://zbmath.org/authors/?q=ai:latif.abdul"Cho, Sun Young"https://zbmath.org/authors/?q=ai:cho.sun-youngSummary: An uncountably infinite family of generalized asymptotically quasi-\(\phi\)-nonexpansive mappings and bifunctions is investigated based on a monotone projection algorithm in this article. Strong convergence of the algorithm is obtained in the framework of Banach spaces.Quasi-variational problems with non-self map on Banach spaces: existence and applicationshttps://zbmath.org/1491.490072022-09-13T20:28:31.338867Z"Allevi, Elisabetta"https://zbmath.org/authors/?q=ai:allevi.elisabetta"De Giuli, Maria Elena"https://zbmath.org/authors/?q=ai:de-giuli.maria-elena"Milasi, Monica"https://zbmath.org/authors/?q=ai:milasi.monica"Scopelliti, Domenico"https://zbmath.org/authors/?q=ai:scopelliti.domenicoSummary: This paper focuses on the analysis of generalized quasi-variational inequality problems with non-self constraint map. To study such problems, in [\textit{D. Aussel} et al., J. Optim. Theory Appl. 170, No. 3, 818--837 (2016; Zbl 1350.49006)] the authors introduced the concept of the projected solution and proved its existence in finite-dimensional spaces. The main contribution of this paper is to prove the existence of a projected solution for generalized quasi-variational inequality problems with non-self constraint map on real Banach spaces. Then, following the multistage stochastic variational approach introduced in [\textit{R. T. Rockafellar} and \textit{R. J.-B. Wets}, Math. Program. 165, No.
1 (B), 331--360 (2017; Zbl 1378.49010)], we introduce the concept of the projected solution in a multistage stochastic setting, and we prove the existence of such a solution. We apply this theoretical result in studying an electricity market with renewable power sources.Adjoint-based SQP method with block-wise quasi-Newton Jacobian updates for nonlinear optimal controlhttps://zbmath.org/1491.490152022-09-13T20:28:31.338867Z"Hespanhol, Pedro"https://zbmath.org/authors/?q=ai:hespanhol.pedro"Quirynen, Rien"https://zbmath.org/authors/?q=ai:quirynen.rienSummary: Nonlinear model predictive control (NMPC) generally requires the solution of a non-convex dynamic optimization problem at each sampling instant under strict timing constraints, based on a set of differential equations that can often be stiff and/or that may include implicit algebraic equations. This paper provides a local convergence analysis for the recently proposed adjoint-based sequential quadratic programming (SQP) algorithm that is based on a block-structured variant of the two-sided rank-one (TR1) quasi-Newton update formula to efficiently compute Jacobian matrix approximations in a sparsity preserving fashion. A particularly efficient algorithm implementation is proposed in case an implicit integration scheme is used for discretization of the optimal control problem, in which matrix factorization and matrix-matrix operations can be avoided entirely. 
The convergence analysis results as well as the computational performance of the proposed optimization algorithm are illustrated for two simulation case studies of NMPC.McKean-Vlasov optimal control: the dynamic programming principlehttps://zbmath.org/1491.490182022-09-13T20:28:31.338867Z"Djete, Mao Fabrice"https://zbmath.org/authors/?q=ai:djete.mao-fabrice"Possamaï, Dylan"https://zbmath.org/authors/?q=ai:possamai.dylan"Tan, Xiaolu"https://zbmath.org/authors/?q=ai:tan.xiaoluThe paper is devoted to the problem of optimal control of McKean-Vlasov stochastic differential equations. The authors develop a general theory for the non-Markovian case of the McKean-Vlasov stochastic control problem with common noise. They propose and investigate weak and strong formulations of the McKean-Vlasov optimal control problem and show their equivalence. The dynamic programming principle is established for the weak and strong formulations as well as for the strong formulation where the control is adapted to the common noise. Based on their results for the non-Markovian case, the authors obtain the dynamic programming principle for the Markovian control problem. Associated Hamilton-Jacobi-Bellman equations are derived for the common noise strong formulation and for the general strong formulation.
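The block-wise TR1 update in the adjoint-based SQP entry above is specific to that paper; as a generic illustration of the underlying quasi-Newton idea, here is a minimal sketch (my own toy example, not the authors' algorithm) of the classical symmetric rank-one (SR1) update, which recovers the exact Hessian of a quadratic after updates along independent directions:

```python
def sr1_update(B, s, y, tol=1e-12):
    """One symmetric rank-one update: B += r r^T / (r . s) with r = y - B s.
    The update is skipped when the denominator is negligibly small."""
    n = len(s)
    r = [y[i] - sum(B[i][j] * s[j] for j in range(n)) for i in range(n)]
    d = sum(r[i] * s[i] for i in range(n))
    if abs(d) < tol:
        return B
    return [[B[i][j] + r[i] * r[j] / d for j in range(n)] for i in range(n)]

# For a quadratic with Hessian H, exact curvature pairs satisfy y = H s;
# two updates along the coordinate directions recover H itself.
H = [[2.0, 1.0], [1.0, 3.0]]
B = [[1.0, 0.0], [0.0, 1.0]]                     # initial approximation
for s in ([1.0, 0.0], [0.0, 1.0]):
    y = [sum(H[i][j] * s[j] for j in range(2)) for i in range(2)]
    B = sr1_update(B, s, y)
print(B)  # [[2.0, 1.0], [1.0, 3.0]]
```

The paper's TR1 variant generalizes this rank-one idea to two-sided, block-structured Jacobian approximations; the hereditary exactness on quadratics shown here is the property such updates try to preserve.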
Reviewer: Svetlana A. Kravchenko (Minsk)Algebraic core and convex calculus without topologyhttps://zbmath.org/1491.520032022-09-13T20:28:31.338867Z"Van Cuong, Dang"https://zbmath.org/authors/?q=ai:van-cuong.dang"Mordukhovich, Boris S."https://zbmath.org/authors/?q=ai:mordukhovich.boris-s"Nam, Nguyen Mau"https://zbmath.org/authors/?q=ai:nam.nguyen-mau"Cartmell, Addison"https://zbmath.org/authors/?q=ai:cartmell.addisonThe authors present a new approach, concentrating on the concept of the algebraic core for convex sets in general vector spaces without any topological structure, and then present its applications to problems of convex analysis and optimization. They use a proper version of the convex separation theorem, which holds in any vector space and is formulated via algebraic core conditions instead of the conventional interiority assumptions in topological settings. They also show that a simple `extreme' version of this result is equivalent to the Hahn-Banach theorem in vector spaces; this allows them to develop a geometric approach to generalized differential calculus for convex sets, set-valued mappings, and extended real-valued functions with qualification conditions formulated in terms of algebraic cores for such objects. They also obtain a precise formula for computing the subdifferential of optimal value functions associated with convex problems of parametric optimization in vector spaces. They emphasize that functions of this type play a crucial role in many aspects of convex optimization and its applications.
Reviewer: Ugur Sengul (İstanbul)Extension complexity of low-dimensional polytopeshttps://zbmath.org/1491.520122022-09-13T20:28:31.338867Z"Kwan, Matthew"https://zbmath.org/authors/?q=ai:kwan.matthew"Sauermann, Lisa"https://zbmath.org/authors/?q=ai:sauermann.lisa"Zhao, Yufei"https://zbmath.org/authors/?q=ai:zhao.yufeiSummary: Sometimes, it is possible to represent a complicated polytope as a projection of a much simpler polytope. To quantify this phenomenon, the extension complexity of a polytope \(P\) is defined to be the minimum number of facets of a (possibly higher-dimensional) polytope from which \(P\) can be obtained as a (linear) projection. This notion is motivated by its relevance to combinatorial optimisation, and has been studied intensively for various specific polytopes associated with important optimisation problems. In this paper we study extension complexity as a parameter of general polytopes, more specifically considering various families of low-dimensional polytopes. First, we prove that for a fixed dimension \(d\), the extension complexity of a random \(d\)-dimensional polytope (obtained as the convex hull of random points in a ball or on a sphere) is typically on the order of the square root of its number of vertices. Second, we prove that any cyclic \(n\)-vertex polygon (whose vertices lie on a circle) has extension complexity at most \(24\sqrt n\). This bound is tight up to the constant factor 24. Finally, we show that there exists an \(n^{o(1)}\)-dimensional polytope with at most \(n\) vertices and extension complexity \(n^{1-o(1)}\). 
Our theorems are proved with a range of different techniques, which we hope will be of further interest.On some properties of intuitionistic fuzzy soft boundaryhttps://zbmath.org/1491.540082022-09-13T20:28:31.338867Z"Hussain, Sabir"https://zbmath.org/authors/?q=ai:hussain.sabirSummary: The concept of intuitionistic fuzzy soft sets has previously been applied to a decision-making problem, solved with the help of the 'similarity measurement' technique. The purpose of this paper is to initiate the concept of intuitionistic fuzzy (IF) soft boundary. We discuss and explore the characterizations and properties of IF soft boundary in general as well as in terms of IF soft interior and IF soft closure. Examples and counterexamples are also presented to validate the discussed results.Optimal stopping time on semi-Markov processes with finite horizonhttps://zbmath.org/1491.600552022-09-13T20:28:31.338867Z"Chen, Fang"https://zbmath.org/authors/?q=ai:chen.fang"Guo, Xianping"https://zbmath.org/authors/?q=ai:guo.xianping"Liao, Zhong-Wei"https://zbmath.org/authors/?q=ai:liao.zhongweiSummary: In this paper, we consider the optimal stopping problems on semi-Markov processes (sMPs) with finite horizon and aim to establish the existence and algorithm of optimal stopping times. The key method is the equivalence between optimal stopping problems on sMPs and a special class of semi-Markov decision processes (sMDPs). We first introduce the optimality equation and show the existence of the optimal policies of finite-horizon sMDPs with additional terminal costs. Based on the optimal stopping problems on sMPs, we give an explicit construction of sMDPs such that the optimal stopping times of sMPs are equivalent to the optimal policies of the constructed sMDPs. Then, using the results of sMDPs developed here, we not only prove the existence of the optimal stopping times of sMPs, but also provide an algorithm for computing the optimal stopping times of sMPs.
Moreover, we show that the optimal and \(\varepsilon\)-optimal stopping time can be characterized by the hitting time of some special sets. Finally, we give an example to illustrate the effectiveness of our conclusions.A note on optimization formulations of Markov decision processeshttps://zbmath.org/1491.601232022-09-13T20:28:31.338867Z"Ying, Lexing"https://zbmath.org/authors/?q=ai:ying.lexing"Zhu, Yuhua"https://zbmath.org/authors/?q=ai:zhu.yuhuaThe paper summarizes the primal, primal-dual and dual problems for discounted standard Markov decision processes, discounted regularized Markov decision processes, undiscounted standard Markov decision processes and undiscounted regularized Markov decision processes. Moreover, the paper shows the equivalence between the dual problem and policy gradient as well as the equivalence between the primal problem and Bellman equation for the above four Markov decision processes. These optimization formulations are helpful for the theoretical study of Markov decision processes algorithms.
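The sMDP construction in the optimal-stopping entry above is beyond a short snippet, but the backward-induction principle it relies on can be sketched for a plain discrete-time Markov chain (a simplified stand-in, with hypothetical data, for the semi-Markov setting):

```python
def optimal_stopping(P, g, horizon, beta=1.0):
    """Backward induction for finite-horizon optimal stopping on a Markov
    chain: V_T = g and V_t = max(g, beta * P V_{t+1}).  Returns the value
    functions V[0..T] and, per stage, the set of states where stopping is
    optimal (at the horizon one must stop)."""
    n = len(g)
    V = [list(g)]                      # V[-1] stays V_T; earlier stages are prepended
    stop = [set(range(n))]
    for _ in range(horizon):
        cont = [beta * sum(P[s][j] * V[0][j] for j in range(n)) for s in range(n)]
        V.insert(0, [max(g[s], cont[s]) for s in range(n)])
        stop.insert(0, {s for s in range(n) if g[s] >= cont[s]})
    return V, stop

# Hypothetical 3-state chain drifting toward the best stopping reward.
P = [[0.5, 0.5, 0.0],
     [0.0, 0.5, 0.5],
     [0.0, 0.0, 1.0]]
g = [0.0, 1.0, 4.0]                    # reward for stopping in each state
V, stop = optimal_stopping(P, g, horizon=3, beta=0.9)
```

The per-stage sets `stop[t]` are the discrete analogue of the "special sets" whose hitting times characterize optimal stopping in the abstract: an optimal rule stops the first time the state enters the current stage's set.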
Reviewer: Yan-Hong Song (Wuhan)Asymptotics of the optimum in discrete sequential assignmenthttps://zbmath.org/1491.601872022-09-13T20:28:31.338867Z"Járai, Antal A."https://zbmath.org/authors/?q=ai:jarai.antal-aSummary: We consider the stochastic sequential assignment problem of \textit{C. Derman} et al. [Manage. Sci., Theory 18, 349--355 (1972; Zbl 0238.90054)] corresponding to a discrete distribution supported on a finite set of points. We use large deviation estimates to compute the asymptotics of the optimal policy as the number of tasks \(N \to \infty \).Estimating the parameters of fuzzy linear regression model with crisp inputs and Gaussian fuzzy outputs: a goal programming approachhttps://zbmath.org/1491.620712022-09-13T20:28:31.338867Z"Hosseinzadeh, E."https://zbmath.org/authors/?q=ai:hosseinzadeh.elham"Hassanpour, H."https://zbmath.org/authors/?q=ai:hassanpour.hassanSummary: In this paper, we offer a new method to fit a fuzzy linear regression model to a set of crisp inputs and Gaussian fuzzy outputs, by considering its parameters as Gaussian fuzzy numbers. To calculate the regression coefficients, a nonlinear programming model is formulated based on a new distance between Gaussian fuzzy numbers. The nonlinear programming model is converted to a goal programming model by choosing appropriate deviation variables, and then to a linear program which can be solved simply by the simplex method. To show the efficiency of the proposed model, some illustrative examples are solved and three simulation studies are performed.
The computational results are compared with those of some earlier methods.A parametric recurrent neural network scheme for solving a class of fuzzy regression models with some real-world applicationshttps://zbmath.org/1491.620722022-09-13T20:28:31.338867Z"Karbasi, Delara"https://zbmath.org/authors/?q=ai:karbasi.delara"Nazemi, Alireza"https://zbmath.org/authors/?q=ai:nazemi.alireza"Rabiei, Mohammadreza"https://zbmath.org/authors/?q=ai:rabiei.mohammadrezaSummary: In this paper, a hybrid scheme based on recurrent neural networks for approximating the fuzzy coefficients (parameters) of fuzzy linear and polynomial regression models with fuzzy output and crisp inputs is presented. Here, a neural network is first constructed based on some concepts of convex optimization and stability theory. The suggested neural network model is guaranteed to find the approximate parameters of the fuzzy regression problem. The existence and convergence of the trajectories of the neural network are studied. Lyapunov stability of the neural network is also shown. Some illustrative examples provide a further demonstration of the effectiveness of the method.Estimation of dynamic discrete choice models using artificial neural network approximationshttps://zbmath.org/1491.622492022-09-13T20:28:31.338867Z"Norets, Andriy"https://zbmath.org/authors/?q=ai:norets.andriySummary: I propose a method for inference in dynamic discrete choice models (DDCM) that utilizes Markov chain Monte Carlo (MCMC) and artificial neural networks (ANNs). MCMC is intended to handle high-dimensional integration in the likelihood function of richly specified DDCMs. ANNs approximate the dynamic-program (DP) solution as a function of the parameters and state variables prior to estimation to avoid having to solve the DP on each iteration. Potential applications of the proposed methodology include inference in DDCMs with random coefficients, serially correlated unobservables, and dependence across individual observations.
The article discusses MCMC estimation of DDCMs, provides relevant background on ANNs, and derives a theoretical justification for the method. Experiments suggest this to be a promising approach.A self-adaptive Tseng extragradient method for solving monotone variational inequality and fixed point problems in Banach spaceshttps://zbmath.org/1491.650382022-09-13T20:28:31.338867Z"Jolaoso, Lateef Olakunle"https://zbmath.org/authors/?q=ai:jolaoso.lateef-olakunleSummary: In this paper, we introduce a self-adaptive projection method for finding a common element in the solution set of variational inequalities (VIs) and fixed point set for relatively nonexpansive mappings in 2-uniformly convex and uniformly smooth real Banach spaces. We prove a strong convergence result for the sequence generated by our algorithm without imposing a Lipschitz condition on the cost operator of the VIs. We also provide some numerical examples to illustrate the performance of the proposed algorithm by comparing with related methods in the literature. This result extends and improves some recent results in the literature in this direction.Ritz-like values in steplength selections for stochastic gradient methodshttps://zbmath.org/1491.650422022-09-13T20:28:31.338867Z"Franchini, Giorgia"https://zbmath.org/authors/?q=ai:franchini.giorgia"Ruggiero, Valeria"https://zbmath.org/authors/?q=ai:ruggiero.valeria"Zanni, Luca"https://zbmath.org/authors/?q=ai:zanni.lucaSummary: The steplength selection is a crucial issue for the effectiveness of the stochastic gradient methods for large-scale optimization problems arising in machine learning. In a recent paper, \textit{R. Bollapragada} et al. [SIAM J. Optim. 28, No. 4, 3312--3343 (2018; Zbl 1461.65131)] propose to include an adaptive subsampling strategy into a stochastic gradient scheme, with the aim to assure the descent feature in expectation of the stochastic gradient directions. 
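As a one-dimensional toy of the extragradient machinery in the Tseng-method entry above (a simplified Euclidean sketch with made-up data, not the paper's Banach-space algorithm): for a monotone operator \(F\) and convex set \(C\), Tseng's forward-backward-forward step is \(y = P_C(x - \lambda F(x))\), \(x^+ = y - \lambda(F(y) - F(x))\).

```python
def tseng_step(x, F, proj, lam):
    """One forward-backward-forward (Tseng) step for the VI: find x* in C with
    <F(x*), y - x*> >= 0 for all y in C.  Requires lam < 1/L for L-Lipschitz F."""
    y = proj(x - lam * F(x))
    return y - lam * (F(y) - F(x))

# Toy VI: F(x) = x - 2 on C = [0, 1]; the solution is x* = 1 (the boundary
# point of C closest to the unconstrained root 2 of F).
F = lambda x: x - 2.0
proj = lambda x: min(1.0, max(0.0, x))   # projection onto [0, 1]
x = 0.0
for _ in range(60):
    x = tseng_step(x, F, proj, lam=0.5)
print(x)  # -> 1.0
```

The appeal of the forward-backward-forward pattern, in this sketch as in the paper, is that it needs only one projection per iteration and no Lipschitz constant inside a line search when a self-adaptive stepsize is used.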
In this approach, theoretical convergence properties are preserved under the assumption that the positive steplength satisfies at any iteration a suitable bound depending on the inverse of the Lipschitz constant of the objective function gradient. In this paper, we propose to tailor for the stochastic gradient scheme the steplength selection adopted in the full-gradient method known as the limited memory steepest descent method. This strategy, based on the Ritz-like values of a suitable matrix, gives a local estimate of the inverse of the local Lipschitz parameter without introducing line search techniques, while the possible increase in the size of the subsample used to compute the stochastic gradient makes it possible to control the variance of this direction. Extensive numerical experimentation highlights that the new rule makes the tuning of the parameters less expensive than the trial procedure for the efficient selection of a constant step in standard and mini-batch stochastic gradient methods.Convergence analysis of a nonmonotone projected gradient method for multiobjective optimization problems on Riemannian manifoldshttps://zbmath.org/1491.650432022-09-13T20:28:31.338867Z"Li, Xiaobo"https://zbmath.org/authors/?q=ai:li.xiaobo"Krishan Lal, Manish"https://zbmath.org/authors/?q=ai:krishan-lal.manishSummary: We propose two nonmonotone projected gradient methods for multiobjective optimization problems on Riemannian manifolds. The global convergence to the Pareto critical point is proved under some reasonable conditions.
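The Ritz-value rule in the steplength entry above builds on steplengths of Barzilai-Borwein type; as a deterministic, full-gradient toy (my own sketch, not the paper's stochastic rule), the BB1 steplength \(\alpha_k = s_k^\top s_k / s_k^\top y_k\) estimates the inverse of the local Lipschitz constant from successive iterates and gradients:

```python
def bb_descent(grad, x0, iters=50, alpha0=0.01):
    """Gradient descent with the BB1 steplength alpha = (s.s)/(s.y), where
    s = x_{k+1} - x_k and y = grad_{k+1} - grad_k; the ratio acts as a local
    estimate of the inverse Lipschitz constant of the gradient."""
    x, g = list(x0), grad(x0)
    alpha = alpha0
    for _ in range(iters):
        x_new = [xi - alpha * gi for xi, gi in zip(x, g)]
        g_new = grad(x_new)
        s = [a - b for a, b in zip(x_new, x)]
        y = [a - b for a, b in zip(g_new, g)]
        sy = sum(a * b for a, b in zip(s, y))
        if sy <= 1e-16:               # no usable curvature: stop
            return x_new
        alpha = sum(a * a for a in s) / sy
        x, g = x_new, g_new
    return x

# Ill-conditioned quadratic f(x) = 0.5*(x1^2 + 10*x2^2): the BB steps adapt
# to the two curvatures without any line search.
sol = bb_descent(lambda v: [v[0], 10.0 * v[1]], [1.0, 1.0])
```

The Ritz-like rule of the paper refines this idea by extracting several curvature estimates at once from a small matrix built out of recent gradients, which is what makes it robust enough for the stochastic setting.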
Moreover, under some convexity assumptions, we also establish the full convergence to the Pareto critical point produced by the proposed algorithm on Riemannian manifolds with nonnegative curvature.Two limited-memory optimization methods with minimum violation of the previous secant conditionshttps://zbmath.org/1491.650442022-09-13T20:28:31.338867Z"Vlček, Jan"https://zbmath.org/authors/?q=ai:vlcek.jan"Lukšan, Ladislav"https://zbmath.org/authors/?q=ai:luksan.ladislavSummary: Limited-memory variable metric methods based on the well-known Broyden-Fletcher-Goldfarb-Shanno (BFGS) update are widely used for large scale optimization. The block version of this update, derived for general objective functions in [\textit{J. Vlček} and \textit{L. Lukšan}, Numer. Algorithms 80, No. 3, 957--987 (2019; Zbl 1440.90076)], satisfies the secant conditions with all used difference vectors and for quadratic objective functions gives the best improvement of convergence in some sense, but the corresponding direction vectors are not descent directions generally. To guarantee the descent property of direction vectors and simultaneously violate the secant conditions as little as possible in some sense, two methods based on the block BFGS update are proposed. They can be advantageously used together with methods based on vector corrections for conjugacy. Here we combine two types of these corrections to satisfy the secant conditions with both the corrected and uncorrected (original) latest difference vectors. Global convergence of the proposed algorithm is established for convex and sufficiently smooth functions. 
Numerical experiments demonstrate the efficiency of the new methods.Bregman Itoh-Abe methods for sparse optimisationhttps://zbmath.org/1491.650452022-09-13T20:28:31.338867Z"Benning, Martin"https://zbmath.org/authors/?q=ai:benning.martin"Riis, Erlend Skaldehaug"https://zbmath.org/authors/?q=ai:riis.erlend-skaldehaug"Schönlieb, Carola-Bibiane"https://zbmath.org/authors/?q=ai:schonlieb.carola-bibianeSummary: In this paper we propose optimisation methods for variational regularisation problems based on discretising the inverse scale space flow with discrete gradient methods. Inverse scale space flow generalises gradient flows by incorporating a generalised Bregman distance as the underlying metric. Its discrete-time counterparts, Bregman iterations and linearised Bregman iterations are popular regularisation schemes for inverse problems that incorporate a priori information without loss of contrast. Discrete gradient methods are tools from geometric numerical integration for preserving energy dissipation of dissipative differential systems. The resultant Bregman discrete gradient methods are unconditionally dissipative and achieve rapid convergence rates by exploiting structures of the problem such as sparsity. Building on previous work on discrete gradients for non-smooth, non-convex optimisation, we prove convergence guarantees for these methods in a Clarke subdifferential framework. 
Numerical results for convex and non-convex examples are presented.Curve based approximation of measures on manifolds by discrepancy minimizationhttps://zbmath.org/1491.650482022-09-13T20:28:31.338867Z"Ehler, Martin"https://zbmath.org/authors/?q=ai:ehler.martin"Gräf, Manuel"https://zbmath.org/authors/?q=ai:graf.manuel"Neumayer, Sebastian"https://zbmath.org/authors/?q=ai:neumayer.sebastian"Steidl, Gabriele"https://zbmath.org/authors/?q=ai:steidl.gabrieleSummary: The approximation of probability measures on compact metric spaces and in particular on Riemannian manifolds by atomic or empirical ones is a classical task in approximation and complexity theory with a wide range of applications. Instead of point measures we are concerned with the approximation by measures supported on Lipschitz curves. Special attention is paid to push-forward measures of Lebesgue measures on the unit interval by such curves. Using the discrepancy as distance between measures, we prove optimal approximation rates in terms of the curve's length and Lipschitz constant. Having established the theoretical convergence rates, we are interested in the numerical minimization of the discrepancy between a given probability measure and the set of push-forward measures of Lebesgue measures on the unit interval by Lipschitz curves. We present numerical examples for measures on the \(2\)- and \(3\)-dimensional torus, the \(2\)-sphere, the rotation group on \(\mathbb{R}^3\) and the Grassmannian of all \(2\)-dimensional linear subspaces of \(\mathbb{R}^4\). Our algorithm of choice is a conjugate gradient method on these manifolds, which incorporates second-order information. 
For efficient gradient and Hessian evaluations within the algorithm, we approximate the given measures by truncated Fourier series and use fast Fourier transform techniques on these manifolds.Metrically regular mapping and its utilization to convergence analysis of a restricted inexact Newton-type methodhttps://zbmath.org/1491.650502022-09-13T20:28:31.338867Z"Rashid, Mohammed Harunor"https://zbmath.org/authors/?q=ai:rashid.mohammed-harunorSummary: In the present paper, we study the restricted inexact Newton-type method for solving the generalized equation \(0\in f(x)+F(x)\), where \(X\) and \(Y\) are Banach spaces, \(f:X\to Y\) is a Fréchet differentiable function and \(F\colon X\rightrightarrows Y\) is a set-valued mapping with closed graph. We establish convergence criteria for the restricted inexact Newton-type method, which guarantee the existence of a sequence generated by this method, and show that the generated sequence converges linearly or quadratically according to the particular assumptions on the Fréchet derivative of \(f\). Indeed, we obtain semilocal and local convergence results of the restricted inexact Newton-type method for solving the above generalized equation when the Fréchet derivative of \(f\) is continuous and Lipschitz continuous as well as \(f+F\) is metrically regular. An application of this method to variational inequalities is given.
In addition, a numerical experiment is given which illustrates the theoretical result.Solving differential equations with artificial bee colony programminghttps://zbmath.org/1491.650752022-09-13T20:28:31.338867Z"Boudouaoui, Yassine"https://zbmath.org/authors/?q=ai:boudouaoui.yassine"Habbi, Hacene"https://zbmath.org/authors/?q=ai:habbi.hacene"Ozturk, Celal"https://zbmath.org/authors/?q=ai:ozturk.celal"Karaboga, Dervis"https://zbmath.org/authors/?q=ai:karaboga.dervisSummary: Relying on artificial bee colony programming (ABCP), we present in this paper, for the first time, a novel methodology for solving differential equations. The three-phase evolving process of ABCP is adapted to the task of recovering the exact solution of differential equations through a well-posed problem. In fact, the original ABCP model, which was initially developed for symbolic regression, cannot be used directly, as differential problems might have multiple outputs. Moreover, the definition of the fitness function is a critical problem-dependent issue for model design. In this sense, a problem-specific ABCP algorithm is worked out in the present contribution. With the proposed algorithm, a solution with multiple outputs can evolve under a multiple-tree framework toward the exact solution. For fitness function evaluation, different forms are derived for ordinary and partial differential equations by performing experiments with multiple runs. Results on several differential equations are reported and compared to other advanced methods to assess the feasibility and the potential of the proposed method.
A computational performance evaluation is provided for the considered examples and completed with an additional study on the impact of key control parameters.Ensemble Kalman filter for neural network-based one-shot inversionhttps://zbmath.org/1491.651202022-09-13T20:28:31.338867Z"Guth, Philipp A."https://zbmath.org/authors/?q=ai:guth.philipp-a"Schillings, Claudia"https://zbmath.org/authors/?q=ai:schillings.claudia"Weissmann, Simon"https://zbmath.org/authors/?q=ai:weissmann.simonSummary: We study the use of novel techniques arising in machine learning for inverse problems. Our approach replaces the complex forward model by a neural network, which is trained simultaneously in a one-shot sense when estimating the unknown parameters from data, i.e., the neural network is trained only for the unknown parameter. By establishing a link to the Bayesian approach to inverse problems we develop an algorithmic framework that ensures the feasibility of the parameter estimate with respect to the forward model. We propose an efficient, derivative-free optimization method based on variants of the ensemble Kalman inversion. Numerical experiments show that the ensemble Kalman filter for neural network-based one-shot inversion is a promising direction combining optimization and machine learning techniques for inverse problems.
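The derivative-free ensemble Kalman update at the heart of such schemes is short enough to write out. The following is a minimal illustrative sketch on a toy linear inverse problem, not the authors' neural-network-coupled one-shot method; all names and the test problem are assumptions of this sketch.

```python
import numpy as np

def eki_step(ensemble, forward, y, gamma):
    """One derivative-free ensemble Kalman inversion update.

    ensemble: (J, d) array of parameter particles
    forward:  parameter -> observation map (no derivatives required)
    y:        observed data, shape (m,)
    gamma:    observation noise covariance, shape (m, m)
    """
    G = np.array([forward(u) for u in ensemble])     # (J, m) forward evaluations
    du = ensemble - ensemble.mean(axis=0)
    dg = G - G.mean(axis=0)
    J = len(ensemble)
    C_ug = du.T @ dg / J                             # parameter/output cross-covariance
    C_gg = dg.T @ dg / J                             # output covariance
    K = C_ug @ np.linalg.inv(C_gg + gamma)           # Kalman gain
    return ensemble + (y - G) @ K.T                  # nudge particles toward the data

# Toy linear inverse problem: recover u from y = A u (noise-free).
rng = np.random.default_rng(0)
A = np.array([[1.0, 2.0], [0.5, -1.0], [2.0, 0.3]])
u_true = np.array([1.0, -0.5])
y = A @ u_true
ens = rng.normal(size=(200, 2))                      # initial ensemble
for _ in range(50):
    ens = eki_step(ens, lambda u: A @ u, y, 1e-4 * np.eye(3))
```

For a linear forward map with full column rank, the ensemble mean collapses onto the least-squares solution; the paper replaces the forward model itself by a neural network trained simultaneously with the parameter estimate.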
For the entire collection see [Zbl 1491.49002].A unified FFT-based approach to maximum assignment problems related to transitive finite group actionshttps://zbmath.org/1491.651762022-09-13T20:28:31.338867Z"Clausen, Michael"https://zbmath.org/authors/?q=ai:clausen.michaelSummary: This paper studies the cross-correlation of two real-valued functions whose common domain is a set on which a finite group acts transitively. Such a group action generalizes, e.g., circular time shifts for discrete periodic time signals or spatial translations in the context of image processing. Cross-correlations can be reformulated as convolutions in group algebras and are thus transformable into the spectral domain via Fast Fourier Transforms. The resulting general discrete cross-correlation theorem will serve as the basis for a unified FFT-based approach to maximum assignment problems related to transitive finite group actions. Our main focus is on those assignment problems arising from the actions of symmetric groups on the cosets of Young subgroups. The ``simplest'' nontrivial representatives of the latter class of problems are the NP-hard symmetric and asymmetric maximum quadratic assignment problem. We present a systematic spectral approach in terms of the representation theory of symmetric groups to solve such assignment problems. This generalizes and algorithmically improves Kondor's spectral branch-and-bound approach [\textit{R. Kondor}, in: Proceedings of the 21st annual ACM-SIAM symposium on discrete algorithms, SODA 2010, Austin, TX, USA, January 17--19, 2010. Philadelphia, PA: Society for Industrial and Applied Mathematics (SIAM); New York, NY: Association for Computing Machinery (ACM). 
1017--1028 (2010; Zbl 1288.68130)] to the exact solution of the asymmetric maximum quadratic assignment problem.How to play hot and cold on a linehttps://zbmath.org/1491.680552022-09-13T20:28:31.338867Z"Haverkort, Herman"https://zbmath.org/authors/?q=ai:haverkort.herman-j"Kübel, David"https://zbmath.org/authors/?q=ai:kubel.david"Langetepe, Elmar"https://zbmath.org/authors/?q=ai:langetepe.elmar"Schwarzwald, Barbara"https://zbmath.org/authors/?q=ai:schwarzwald.barbaraSummary: Suppose we are searching for a target point \(t\) in the unit interval. To pinpoint the location of \(t\), we issue query points \(q_1, \ldots , q_n \in [0,1]\). As a response, we obtain an ordering of the query points by distance to \(t\). This restricts possible locations of \(t\) to a subinterval. We define the accuracy of a query strategy as the reciprocal of the size of the subinterval to which we can pinpoint \(t\) in the worst case. We describe a strategy with accuracy \(\Theta (n^2)\), which is at most a factor two from optimal if all query points are generated at once. With query points generated one by one depending on the response received on previous query points, we achieve accuracy \(\Omega (2.29^n)\), and prove that no strategy can achieve \(\Omega (3.66^n)\).
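The way a response narrows down the target can be made concrete: each comparison between two query points confines the target to one side of their bisector, and the feasible subinterval is the intersection of these constraints. A small illustrative sketch (the function name and example values are mine, not the authors'):

```python
def feasible_interval(queries, target):
    """Subinterval of [0, 1] consistent with the hot-and-cold response.

    The response to queries q_1, ..., q_n is their ordering by distance
    to the target; each comparison "q_i is closer than q_j" confines the
    target to q_i's side of the bisector (q_i + q_j) / 2.  Ties are
    broken arbitrarily, which only affects interval endpoints.
    """
    order = sorted(queries, key=lambda q: abs(q - target))
    lo, hi = 0.0, 1.0
    for i, qi in enumerate(order):
        for qj in order[i + 1:]:
            m = (qi + qj) / 2
            if qi <= qj:
                hi = min(hi, m)   # target lies left of the bisector
            else:
                lo = max(lo, m)   # target lies right of the bisector
    return lo, hi

lo, hi = feasible_interval([0.1, 0.3, 0.5, 0.7, 0.9], 0.42)
```

With the five equally spaced queries above, the target 0.42 is pinned down to the interval [0.4, 0.5]; the paper's one-shot strategy places the points so that the worst such interval has length \(\Theta(1/n^2)\).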
For the entire collection see [Zbl 1369.68023].A parallel naive approach for non-dominated sorting: a theoretical study considering PRAM CREW modelhttps://zbmath.org/1491.680572022-09-13T20:28:31.338867Z"Mishra, Sumit"https://zbmath.org/authors/?q=ai:mishra.sumit-chandra"Coello Coello, Carlos A."https://zbmath.org/authors/?q=ai:coello-coello.carlos-aSummary: Pareto-based multi-objective evolutionary algorithms use non-dominated sorting as an intermediate step. These algorithms are easy to parallelize as various steps of these algorithms are independent of each other. Researchers have focused on the parallelization of non-dominated sorting in order to reduce the execution time of these algorithms. In this paper, we focus on one of the initial approaches for non-dominated sorting, also known as the naive approach, proposed by Srinivas et al., and explore the scope of parallelism in this approach. Parallelism is explored in the considered approach in three different ways, considering the Parallel Random Access Machine with Concurrent Read Exclusive Write (PRAM CREW) model. The time and space complexities of three different parallel versions are also analyzed. Analysis of parallel algorithms is usually carried out under the assumption that an unbounded number of processors are available. Thus, the same assumption has been considered in our analysis too, and we have obtained the maximum number of processors required for the three parallel versions.Train scheduling on a unidirectional pathhttps://zbmath.org/1491.680862022-09-13T20:28:31.338867Z"Garg, Apoorv"https://zbmath.org/authors/?q=ai:garg.apoorv"Ranade, Abhiram G."https://zbmath.org/authors/?q=ai:ranade.abhiram-gSummary: We formulate what might be the simplest train scheduling problem considered in the literature and show it to be NP-hard. We also give a log-factor randomised algorithm for it. In our problem we have a unidirectional train track with equidistant stations, each station initially having at most one train.
In addition, there can be at most one train poised to enter each station. The trains must move to their destinations subject to the constraint that at every time instant there can be at most one train at each station and on the track between stations. The goal is to minimise the maximum delay of any train. Our problem can also be interpreted as a packet routing problem, and our work strengthens the hardness results from that literature.
For the entire collection see [Zbl 1388.68010].Posimodular function optimizationhttps://zbmath.org/1491.680872022-09-13T20:28:31.338867Z"Halldórsson, Magnús M."https://zbmath.org/authors/?q=ai:halldorsson.magnus-m"Ishii, Toshimasa"https://zbmath.org/authors/?q=ai:ishii.toshimasa"Makino, Kazuhisa"https://zbmath.org/authors/?q=ai:makino.kazuhisa"Takazawa, Kenjiro"https://zbmath.org/authors/?q=ai:takazawa.kenjiroSummary: A function \(f: 2^V \longrightarrow \mathbb {R}\) on a finite set \(V\) is posimodular if \(f(X)+f(Y) \geq f(X\setminus Y)+f(Y\setminus X)\), for all \(X,Y\subseteq V\). Posimodular functions often arise in combinatorial optimization such as undirected cut functions. We consider the problem of finding a nonempty subset \(X\) minimizing \(f(X)\), when the posimodular function \(f\) is given by oracle access.
We show that posimodular function minimization requires exponential time, contrasting with the polynomial solvability of submodular function minimization that forms another generalization of cut functions. On the other hand, the problem is fixed-parameter tractable in terms of the size of the image (or range) of \(f\).
In more detail, we show that \(\varOmega (2^{0.3219n} T_f)\) time is necessary and \(O(2^{0.92n}T_f)\) sufficient, where \(T_f\) denotes the time for one function evaluation. When the image of \(f\) is \(D=\{0,1,\ldots ,d\}\), \(O(2^{1.271d}nT_f)\) time is sufficient and \(\varOmega (2^{0.1609d}T_f)\) necessary. We can also generate all sets minimizing \(f\) in time \(2^{O(d)} n^2 T_f\).
Finally, we also consider the problem of maximizing a given posimodular function, showing that it requires at least \(2^{n-1}T_f\) time in general, while it has time complexity \(\varTheta (n^{d-1}T_f)\) when \(D=\{0,1,\ldots,d\}\) is the image of \(f\), for integer \(d\).
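In the oracle model used for these bounds, an algorithm sees \(f\) only through evaluations, each costing \(T_f\). A brute-force minimizer and the standard posimodular example (an undirected cut function) can be sketched as follows; the search here is restricted to nonempty proper subsets, since for a cut function the full vertex set trivially has cut value 0 (the paper's general problem allows \(X = V\)). Names are illustrative.

```python
from itertools import combinations

def min_nonempty_proper_subset(V, f):
    """Brute-force oracle minimization using 2^|V| - 2 evaluations of f.

    The lower bounds above say that for a posimodular f given only by
    oracle access, exponentially many evaluations are unavoidable in
    general.
    """
    best_set, best_val = None, float("inf")
    for r in range(1, len(V)):
        for X in combinations(V, r):
            v = f(frozenset(X))            # one oracle call, cost T_f
            if v < best_val:
                best_set, best_val = frozenset(X), v
    return best_set, best_val

# Undirected cut functions are the standard posimodular example:
# f(X) = number of edges with exactly one endpoint in X.
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
def cut(X):
    return sum((u in X) != (v in X) for u, v in edges)

X, val = min_nonempty_proper_subset(range(4), cut)
```

On this 4-vertex graph (a cycle plus the chord 0--2) the minimum cut over nonempty proper subsets is 2, attained e.g. by the singleton \(\{1\}\).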
For the entire collection see [Zbl 1369.68023].Posimodular function optimizationhttps://zbmath.org/1491.680882022-09-13T20:28:31.338867Z"Halldórsson, Magnús M."https://zbmath.org/authors/?q=ai:halldorsson.magnus-m"Ishii, Toshimasa"https://zbmath.org/authors/?q=ai:ishii.toshimasa"Makino, Kazuhisa"https://zbmath.org/authors/?q=ai:makino.kazuhisa"Takazawa, Kenjiro"https://zbmath.org/authors/?q=ai:takazawa.kenjiroSummary: A function \(f: 2^V \rightarrow \mathbb{R}\) on a finite set \(V\) is \textit{posimodular} if \(f(X)+f(Y) \ge f(X{\setminus } Y)+f(Y{\setminus } X)\), for all \(X,Y\subseteq V\). Posimodular functions often arise in combinatorial optimization such as undirected cut functions. We consider the problem of finding a nonempty subset \(X\) minimizing \(f(X)\), when the posimodular function \(f\) is given by oracle access. We show that posimodular function minimization requires exponential time, contrasting with the polynomial solvability of submodular function minimization that forms another generalization of cut functions. On the other hand, the problem is fixed-parameter tractable in terms of the size \(D\) of the image (or range) of \(f\). In more detail, we show that \(\varOmega (2^{0.32n} T_f)\) time is necessary and \(O(2^{0.92n}T_f)\) sufficient, where \(T_f\) denotes the time for one function evaluation and \(n = |V|\). When the image of \(f\) is \(D=\{0,1,\ldots ,d\}\) for integer \(d\), \(O(2^{1.218d}nT_f)\) time is sufficient. We can also generate all sets minimizing \(f\) in time \(2^{O(d)} n^2 T_f\). 
Finally, we also consider the problem of maximizing a given posimodular function, showing that it requires at least \(2^{n-1}T_f\) time in general, while it has time complexity \(\varTheta (\binom{n}{d-1}T_f)\) when \(D=\{0,1,\ldots , d\}\) is the image of \(f\), for integer \(d=O(n^{1/4})\).Representative sets and irrelevant vertices: new tools for kernelizationhttps://zbmath.org/1491.680922022-09-13T20:28:31.338867Z"Kratsch, Stefan"https://zbmath.org/authors/?q=ai:kratsch.stefan"Wahlström, Magnus"https://zbmath.org/authors/?q=ai:wahlstrom.magnusTweaking the odds in probabilistic timed automatahttps://zbmath.org/1491.680982022-09-13T20:28:31.338867Z"Hartmanns, Arnd"https://zbmath.org/authors/?q=ai:hartmanns.arnd"Katoen, Joost-Pieter"https://zbmath.org/authors/?q=ai:katoen.joost-pieter"Kohlen, Bram"https://zbmath.org/authors/?q=ai:kohlen.bram"Spel, Jip"https://zbmath.org/authors/?q=ai:spel.jipSummary: We consider probabilistic timed automata (PTA) in which probabilities can be parameters, i.e. symbolic constants. They are useful to model randomised real-time systems where exact probabilities are unknown, or where the probability values should be optimised. We prove that existing techniques to transform probabilistic timed automata into equivalent finite-state Markov decision processes (MDPs) remain correct in the parametric setting, using a systematic proof pattern. We implemented two of these parameter-preserving transformations -- using digital clocks and backwards reachability -- in the \textsc{Modest Toolset}. Using \textsc{Storm}'s parameter space partitioning approach, parameter values can be efficiently synthesized in the resulting parametric MDPs. We use several case studies from the literature of varying state and parameter space sizes to experimentally evaluate the performance and scalability of this novel analysis trajectory for parametric PTA.
For the entire collection see [Zbl 1482.68004].Probabilistic disclosure: maximisation vs. minimisationhttps://zbmath.org/1491.681062022-09-13T20:28:31.338867Z"Bérard, Béatrice"https://zbmath.org/authors/?q=ai:berard.beatrice"Haddad, Serge"https://zbmath.org/authors/?q=ai:haddad.serge"Lefaucheux, Engel"https://zbmath.org/authors/?q=ai:lefaucheux.engelSummary: We consider opacity questions where an observation function provides to an external attacker a view of the states along executions and secret executions are those visiting some state from a fixed subset. Disclosure occurs when the observer can deduce from a finite observation that the execution is secret, the \(\varepsilon\)-disclosure variant corresponding to the execution being secret with probability greater than \(1-\varepsilon\). In a probabilistic and non deterministic setting, where an internal agent can choose between actions, there are two points of view, depending on the status of this agent: the successive choices can either help the attacker trying to disclose the secret, if the system has been corrupted, or they can prevent disclosure as much as possible if these choices are part of the system design. In the former situation, corresponding to a worst case, the disclosure value is the supremum over the strategies of the probability to disclose the secret (maximisation), whereas in the latter case, the disclosure is the infimum (minimisation). We address quantitative problems (comparing the optimal value with a threshold) and qualitative ones (when the threshold is zero or one) related to both forms of disclosure for a fixed or finite horizon. For all problems, we characterise their decidability status and their complexity. We discover a surprising asymmetry: on the one hand optimal strategies may be chosen among deterministic ones in maximisation problems, while it is not the case for minimisation. 
On the other hand, for the questions addressed here, more minimisation problems than maximisation ones are decidable.
For the entire collection see [Zbl 1388.68010].More on change-making and related problemshttps://zbmath.org/1491.681262022-09-13T20:28:31.338867Z"Chan, Timothy M."https://zbmath.org/authors/?q=ai:chan.timothy-m-y"He, Qizheng"https://zbmath.org/authors/?q=ai:he.qizhengThis paper considers the change-making or coin-changing problem: given a set of \(n\) types of coins, each of integer value, the goal is to select the minimum number of coins whose total value is equal to a specified target value \(t\). This problem is known to be weakly NP-hard, and the best previously known algorithm for it had time complexity \(O(t \log t \log \log t)\).
The paper also considers the all-targets version of the problem, where the goal is to solve the change-making problem for all target values \(j\) between 1 and a specified upper bound \(t\). The best previously known algorithm for this problem has time complexity \(\tilde O(t^{3/2})\), where as usual the \(\tilde O\) notation hides polylogarithmic factors.
The paper gives a simple FFT-based algorithm for the all-targets version of the problem that has time complexity \(\tilde O(t^{4/3})\). The complexity of the problem is also studied with respect to the largest coin value \(u\), and an algorithm is given for the problem with time complexity \(O(u^2 \log u + t)\). This is a very simple 3-line algorithm that interestingly has a rather complex correctness proof using a theorem on the Frobenius problem. This algorithm can be modified to solve the all-capacities unbounded knapsack problem within the same time bound.
The final results presented in the paper are
\begin{itemize}
\item an \(\tilde O((t \sigma)^{2/3}+t)\) algorithm for the all-targets change-making problem, where \(\sigma\) is the sum of values of the \(n\) coin types,
\item an \(\tilde O(u)\) algorithm for the change-making problem,
\item an \(\tilde O(nu)\) algorithm for the unbounded knapsack problem, and
\item a proof that an algorithm of Bringmann et al. can solve the minimum word-break problem in time \(\tilde O(nm^{1/3}+m)\), where the minimum word-break problem consists in expressing a string \(s\) of length \(n\) as the concatenation of the smallest number of words from a set \(D\) of strings with total length \(m\).
\end{itemize}
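For context, the baseline all these results improve on is the textbook dynamic program, which solves the all-targets version in \(O(nt)\) time; the FFT-based \(\tilde O(t^{4/3})\) algorithm itself is more involved and not reproduced here. A sketch of the baseline:

```python
def min_coins_all_targets(coins, t):
    """Textbook O(n*t) dynamic program for the all-targets change-making
    problem: dp[j] = minimum number of coins (with repetition allowed)
    of the given values summing exactly to j, or None if j is not
    representable.  The paper's FFT-based algorithms beat this baseline."""
    INF = float("inf")
    dp = [0] + [INF] * t
    for j in range(1, t + 1):
        for c in coins:
            if c <= j and dp[j - c] + 1 < dp[j]:
                dp[j] = dp[j - c] + 1
    return [v if v < INF else None for v in dp]

table = min_coins_all_targets([1, 5, 12], 15)
# table[15] == 3 (5 + 5 + 5), whereas greedy would use 12 + 1 + 1 + 1, i.e. 4 coins.
```

The example coin system \(\{1, 5, 12\}\) with target 15 is the classical instance on which the greedy choice fails, which is why the dynamic program (and the paper's faster algorithms) are needed at all.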
Reviewer: Roberto Solis-Oba (London)Replica placement on bounded treewidth graphshttps://zbmath.org/1491.681292022-09-13T20:28:31.338867Z"Aggarwal, Anshul"https://zbmath.org/authors/?q=ai:aggarwal.anshul"Chakaravarthy, Venkatesan T."https://zbmath.org/authors/?q=ai:chakaravarthy.venkatesan-t"Gupta, Neelima"https://zbmath.org/authors/?q=ai:gupta.neelima"Sabharwal, Yogish"https://zbmath.org/authors/?q=ai:sabharwal.yogish"Sharma, Sachin"https://zbmath.org/authors/?q=ai:sharma.sachin-s"Thakral, Sonika"https://zbmath.org/authors/?q=ai:thakral.sonikaSummary: We consider the Replica Placement problem: given a graph and a set of clients, place replicas on a minimum set of nodes of the graph to serve all the clients; each client is associated with a request and maximum distance that it can travel to get served; there is a maximum limit (capacity) on the amount of request a replica can serve. The problem falls under the general framework of capacitated set cover. It admits an \(O(\log n)\)-approximation and it is NP-hard to approximate within a factor of \(o(\log n)\). We study the problem in terms of the treewidth \(t\) of the graph and present an \(O(t)\)-approximation algorithm.
For the entire collection see [Zbl 1369.68023].Safe learning for near-optimal schedulinghttps://zbmath.org/1491.681512022-09-13T20:28:31.338867Z"Busatto-Gaston, Damien"https://zbmath.org/authors/?q=ai:busatto-gaston.damien"Chakraborty, Debraj"https://zbmath.org/authors/?q=ai:chakraborty.debraj"Guha, Shibashis"https://zbmath.org/authors/?q=ai:guha.shibashis"Pérez, Guillermo A."https://zbmath.org/authors/?q=ai:perez.guillermo-a"Raskin, Jean-François"https://zbmath.org/authors/?q=ai:raskin.jean-francoisSummary: In this paper, we investigate the combination of synthesis, model-based learning, and online sampling techniques to obtain safe and near-optimal schedulers for a preemptible task scheduling problem. Our algorithms can handle Markov decision processes (MDPs) that have \(10^{20}\) states and beyond which cannot be handled with state-of-the art probabilistic model-checkers. We provide probably approximately correct (PAC) guarantees for learning the model. Additionally, we extend Monte-Carlo tree search with advice, computed using safety games or obtained using the earliest-deadline-first scheduler, to safely explore the learned model online. Finally, we implemented and compared our algorithms empirically against shielded deep \(Q\)-learning on large task systems.
For the entire collection see [Zbl 1482.68004].Improving kernel online learning with a snapshot memoryhttps://zbmath.org/1491.681622022-09-13T20:28:31.338867Z"Le, Trung"https://zbmath.org/authors/?q=ai:le.trung-bao|le.trung-hieu|le.trung-kien"Nguyen, Khanh"https://zbmath.org/authors/?q=ai:nguyen.khanh-ngoc|nguyen.khanh-p|nguyen.khanh-hieu|nguyen.khanh-hoan|nguyen.khanh-quoc"Phung, Dinh"https://zbmath.org/authors/?q=ai:phung.dinh-qSummary: We propose in this paper the \textit{Stochastic Variance-reduced Gradient Descent for Kernel Online Learning} (DualSVRG), which obtains the \(\varepsilon \)-approximate \textit{linear} convergence rate and is not vulnerable to the curse of kernelization. Our approach uses a variance reduction technique to reduce the variance when estimating full gradient, and further exploits recent work in dual space gradient descent for online learning to achieve model optimality. This is achieved by introducing \textit{the concept of an instant memory}, which is a snapshot storing the most recent incoming data instances and proposing\textit{ three transformer oracles}, namely budget, coverage, and always-move oracles. We further develop rigorous theoretical analysis to demonstrate that our proposed approach can obtain the \(\varepsilon \)-approximate \textit{linear} convergence rate, while maintaining model sparsity, hence encourages fast training. We conduct extensive experiments on several benchmark datasets to compare our DualSVRG with state-of-the-art baselines in both batch and online settings. 
The experimental results show that our DualSVRG yields superior predictive performance, while spending training time comparable to the baselines.Landscape analysis for shallow neural networks: complete classification of critical points for affine target functionshttps://zbmath.org/1491.681792022-09-13T20:28:31.338867Z"Cheridito, Patrick"https://zbmath.org/authors/?q=ai:cheridito.patrick"Jentzen, Arnulf"https://zbmath.org/authors/?q=ai:jentzen.arnulf"Rossmannek, Florian"https://zbmath.org/authors/?q=ai:rossmannek.florianSummary: In this paper, we analyze the landscape of the true loss of neural networks with one hidden layer and ReLU, leaky ReLU, or quadratic activation. In all three cases, we provide a complete classification of the critical points in the case where the target function is affine and one-dimensional. In particular, we show that there exist no local maxima and clarify the structure of saddle points. Moreover, we prove that non-global local minima can only be caused by `dead' ReLU neurons. In particular, they do not appear in the case of leaky ReLU or quadratic activation. Our approach is of a combinatorial nature and builds on a careful analysis of the different types of hidden neurons that can occur.Optimization under uncertainty explains empirical success of deep learning heuristicshttps://zbmath.org/1491.681812022-09-13T20:28:31.338867Z"Kreinovich, Vladik"https://zbmath.org/authors/?q=ai:kreinovich.vladik-ya"Kosheleva, Olga"https://zbmath.org/authors/?q=ai:kosheleva.olga-mThe authors attempt to offer some theoretical reasons for the success of deep neural networks. A few components of typical artificial neural networks are discussed:
\begin{itemize}
\item Activation functions.
\item Why sigmoid functions, once widely used in DNNs, have largely been displaced by ReLU.
\item Why ReLU is ``efficient''.
\item Max pooling as a good shift-invariant operator for image processing.
\item Why taking the softmax is a better mechanism than taking the maximum.
\item Why taking the geometric mean is a better way to compute an average than taking the arithmetic mean.
\end{itemize}
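The components in the list above are all elementary to write down; the following standard definitions (the framing and names are mine, for illustration only) show the objects being compared:

```python
import numpy as np

def relu(x):
    """Piecewise linear, cheap to evaluate, does not saturate for x > 0."""
    return np.maximum(x, 0.0)

def sigmoid(x):
    """Saturates for large |x|, which is linked to vanishing gradients."""
    return 1.0 / (1.0 + np.exp(-x))

def softmax(x):
    """A smooth, differentiable alternative to the hard maximum."""
    e = np.exp(x - x.max())          # subtract the max for numerical stability
    return e / e.sum()

x = np.array([1.0, 2.0, 4.0])
hard_max = x.max()                   # max pooling keeps a single value
soft = softmax(x)                    # softmax keeps a weighting over all values
geo_mean = np.exp(np.log(x).mean())  # geometric mean: (1 * 2 * 4) ** (1/3) = 2
arith_mean = x.mean()                # arithmetic mean: 7/3
```

The contrast the review highlights is visible here: the hard maximum discards all but one input, while the softmax preserves a differentiable weighting; the geometric mean is the arithmetic mean taken in log space.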
For the entire collection see [Zbl 1471.90002].
Reviewer: Hongshan Li (Mountain View)CMD: controllable matrix decomposition with global optimization for deep neural network compressionhttps://zbmath.org/1491.681862022-09-13T20:28:31.338867Z"Zhang, Haonan"https://zbmath.org/authors/?q=ai:zhang.haonan"Liu, Longjun"https://zbmath.org/authors/?q=ai:liu.longjun"Zhou, Hengyi"https://zbmath.org/authors/?q=ai:zhou.hengyi"Sun, Hongbin"https://zbmath.org/authors/?q=ai:sun.hongbin.1"Zheng, Nanning"https://zbmath.org/authors/?q=ai:zheng.nanningSummary: The compression and acceleration of deep neural networks (DNNs) are necessary steps to deploy sophisticated networks into resource-constrained hardware systems. Because the weight matrix tends to be low-rank and sparse, several low-rank and sparse compression schemes have been leveraged to reduce the overwhelming number of weight parameters of DNNs. In these previous schemes, how to make the most of the low-rank and sparse components of weight matrices and how to globally decompose the weight matrix of different layers for efficient compression need to be further investigated. In this paper, in order to effectively utilize the low-rank and sparse characteristics of the weight matrix, we first introduce a sparse coefficient to dynamically control the allocation between the low-rank and sparse components, and an efficient reconstructed network is designed to reduce the inference time. Secondly, since the results of low-rank decomposition can affect the compression ratio and accuracy of DNNs, we establish an optimization problem to automatically select the optimal hyperparameters of the compressed network and achieve global compression for all the layers of the network synchronously. Finally, to solve the optimization problem, we present a decomposition-searching algorithm to search for the optimal solution. The algorithm can dynamically keep the balance between the compression ratio and accuracy.
Extensive experiments with AlexNet, VGG-16 and ResNet-18 on CIFAR-10 and ImageNet are employed to evaluate the effectiveness of the proposed approach. After slight fine-tuning, compressed networks have gained \(1.2\times\) to \(11.3\times\) speedup and our method reduces the size of different networks by \(1.4\times\) to \(14.6\times \).Split packing: packing circles into triangles with optimal worst-case densityhttps://zbmath.org/1491.682582022-09-13T20:28:31.338867Z"Fekete, Sándor P."https://zbmath.org/authors/?q=ai:fekete.sandor-p"Morr, Sebastian"https://zbmath.org/authors/?q=ai:morr.sebastian"Scheffer, Christian"https://zbmath.org/authors/?q=ai:scheffer.christianSummary: In the circle packing problem for triangular containers, one asks whether a given set of circles can be packed into a given triangle. Packing problems like this have been shown to be \(\mathsf {NP}\)-hard. In this paper, we present a new sufficient condition for packing circles into any right or obtuse triangle using only the circles' combined area: it is possible to pack any circle instance whose combined area does not exceed the area of the triangle's incircle. This area condition is tight, in the sense that for any larger area, there are instances which cannot be packed.
A similar result for square containers has been established earlier this year, using the versatile, divide-and-conquer-based split packing algorithm. In this paper, we present a generalized, weighted version of this approach, allowing us to construct packings of circles into asymmetric triangles. It seems crucial to the success of these results that split packing does not depend on an orthogonal subdivision structure. Besides realizing all packings below the critical density bound, our algorithm can also be used as a constant-factor approximation algorithm when looking for the smallest non-acute triangle of a given side ratio in which a given set of circles can be packed.
An interactive visualization of the Split Packing approach and other related material can be found at \url{https://morr.cc/split-packing/}.
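The sufficient condition is simple to evaluate: compute the incircle area via Heron's formula and compare it with the circles' combined area. A small sketch of this check for right or obtuse triangles, as the theorem requires (function names are mine):

```python
import math

def incircle_area(a, b, c):
    """Area of the incircle of a triangle with side lengths a, b, c."""
    s = (a + b + c) / 2                                  # semi-perimeter
    area = math.sqrt(s * (s - a) * (s - b) * (s - c))    # Heron's formula
    return math.pi * (area / s) ** 2                     # inradius r = area / s

def packable_by_area(radii, a, b, c):
    """Sufficient condition from the paper, valid for right or obtuse
    triangles: the circles pack whenever their combined area does not
    exceed the area of the triangle's incircle."""
    return sum(math.pi * r * r for r in radii) <= incircle_area(a, b, c)

# 3-4-5 right triangle: semi-perimeter 6, area 6, inradius 1, incircle area pi.
ok = packable_by_area([0.5, 0.5, 0.5], 3, 4, 5)
```

Note that the condition is only sufficient: instances exceeding the incircle area may still be packable, but some are not, which is the tightness statement above.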
For the entire collection see [Zbl 1369.68023].Near-optimal approximate shortest paths and transshipment in distributed and streaming modelshttps://zbmath.org/1491.682642022-09-13T20:28:31.338867Z"Becker, Ruben"https://zbmath.org/authors/?q=ai:becker.ruben"Forster, Sebastian"https://zbmath.org/authors/?q=ai:forster.sebastian"Karrenbauer, Andreas"https://zbmath.org/authors/?q=ai:karrenbauer.andreas"Lenzen, Christoph"https://zbmath.org/authors/?q=ai:lenzen.christophApproximation algorithms for stochastic \(k\)-TSPhttps://zbmath.org/1491.682682022-09-13T20:28:31.338867Z"Ene, Alina"https://zbmath.org/authors/?q=ai:ene.alina"Nagarajan, Viswanath"https://zbmath.org/authors/?q=ai:nagarajan.viswanath"Saket, Rishi"https://zbmath.org/authors/?q=ai:saket.rishiSummary: This paper studies the stochastic variant of the classical \(k\)-TSP problem where rewards at the vertices are independent random variables which are instantiated upon the tour's visit. The objective is to minimize the expected length of a tour that collects reward at least \(k\). The solution is a policy describing the tour which may (adaptive) or may not (non-adaptive) depend on the observed rewards. Our work presents an adaptive \(O(\log k)\)-approximation algorithm for Stochastic \(k\)-TSP, along with a non-adaptive \(O(\log^2k)\)-approximation algorithm which also upper bounds the adaptivity gap by \(O(\log^2k)\). We also show that the adaptivity gap of Stochastic \(k\)-TSP is at least \(e\), even in the special case of stochastic knapsack cover.
For the entire collection see [Zbl 1388.68010].Online strongly convex optimization with unknown delayshttps://zbmath.org/1491.682722022-09-13T20:28:31.338867Z"Wan, Yuanyu"https://zbmath.org/authors/?q=ai:wan.yuanyu"Tu, Wei-Wei"https://zbmath.org/authors/?q=ai:tu.weiwei"Zhang, Lijun"https://zbmath.org/authors/?q=ai:zhang.lijunSummary: We investigate the problem of online convex optimization with unknown delays, in which the feedback of a decision arrives with an arbitrary delay. Previous studies have presented delayed online gradient descent (DOGD), and achieved the regret bound of \(O(\sqrt{D})\) by utilizing only the convexity condition, where \(D\ge T\) is the sum of delays over \(T\) rounds. In this paper, we further exploit strong convexity to improve the regret bound. Specifically, we first propose a variant of DOGD for strongly convex functions, and establish a better regret bound of \(O(d\log T)\), where \(d\) is the maximum delay. The essential idea is to let the learning rate decay linearly with the total amount of feedback received. Furthermore, we extend the strongly convex variant of DOGD and its theoretical guarantee to the more challenging bandit setting by combining it with the classical \((n+1)\)-point and two-point gradient estimators, where \(n\) is the dimensionality. To the best of our knowledge, this is the first work that solves online strongly convex optimization under the general delayed setting.Discrete variable optimization of structures subjected to dynamic loads using equivalent static loads and metaheuristic algorithmshttps://zbmath.org/1491.740842022-09-13T20:28:31.338867Z"Al-Bazoon, Mustafa"https://zbmath.org/authors/?q=ai:al-bazoon.mustafa"Arora, Jasbir S."https://zbmath.org/authors/?q=ai:arora.jasbir-sSummary: This paper presents a new computational procedure for optimization of structures subjected to dynamic loads.
The optimization problem is formulated with discrete design variables that represent the members from a table of commercially available members. Also, the requirements in the American Institute of Steel Construction (AISC) manual are formulated as constraints. This results in a nondifferentiable optimization problem. In the new procedure, the dynamic load is transformed into equivalent static loads (ESLs). Then the static response optimization problem having discrete design variables is solved using a metaheuristic optimization algorithm. Three methods to calculate the ESLs are investigated. It is found that the ESL cycles alone cannot converge to the final design. Therefore, after a few ESL cycles, the original dynamic loads need to be used in the optimization process. Four example problems are solved to analyze the procedure. Based on this analysis, it is concluded that the new procedure is more efficient than a procedure that does not use the ESL cycles, because it reduces the total CPU effort needed to obtain the final design. Also, better final designs are found. The reason is that many more designs are analyzed very efficiently with the ESL procedure.Lagrangian dual framework for conservative neural network solutions of kinetic equationshttps://zbmath.org/1491.820102022-09-13T20:28:31.338867Z"Hwang, Hyung Ju"https://zbmath.org/authors/?q=ai:hwang.hyung-ju"Son, Hwijae"https://zbmath.org/authors/?q=ai:son.hwijaeSummary: In this paper, we propose a novel conservative formulation for solving kinetic equations via neural networks. More precisely, we formulate the learning problem as a constrained optimization problem with constraints that represent the physical conservation laws. The constraints are relaxed toward the residual loss function via Lagrangian duality.
By imposing physical conservation properties of the solution as constraints of the learning problem, we demonstrate far more accurate approximations of the solutions in terms of errors and the conservation laws, for the kinetic Fokker-Planck equation and the homogeneous Boltzmann equation.Facility layout. Mathematical optimization techniques and engineeringhttps://zbmath.org/1491.900012022-09-13T20:28:31.338867Z"Anjos, Miguel F."https://zbmath.org/authors/?q=ai:anjos.miguel-f"Vieira, Manuel V. C."https://zbmath.org/authors/?q=ai:vieira.manuel-v-cThe book considers the mathematical modeling of several types of facility layout problems as optimization problems. The first two chapters present an overview of the problems and challenges in this area and introduce the facility layout problem on a single row. This is followed by a series of extensions to more complicated layouts such as layout problems on several rows or on a single floor. The mathematical optimization techniques presented include mixed integer programming, second-order conic programming and semi-definite programming. Several advanced topics are outlined, such as inequality separation, semi-definite optimization formulations, multi-stage approaches and dynamic facility layout problems. The last chapter considers interesting applications of the facility layout problem to nine selected industries, such as metallurgy, the defence industry, shoe manufacturing, the automotive industry and hospital layout problems. The authors conclude with an annexe containing the mathematical technicalities of semi-definite and conic optimization.
Reviewer: Efstratios Rappos (Aubonne)Mathematical programming for power systems operation. From theory to applications in Pythonhttps://zbmath.org/1491.900022022-09-13T20:28:31.338867Z"Garcés, Alejandro"https://zbmath.org/authors/?q=ai:garces.alejandroThis book explores the use of the Python programming language for the modeling and solution of power system operations models, in particular with the use of convex optimization techniques. The first chapter introduces the problems related to the operation of power systems and the related terminology. This is followed by five chapters on mathematical programming where a number of relevant optimization techniques such as convex, conic and robust optimization are presented, including any relevant Python code. The second part of the book focuses on the operation of power systems. The problems that are modeled in this section include the economic dispatch of thermal units, the unit commitment problem, the hydrothermal scheduling, optimal power flow and more modern approaches such as active distribution networks and demand-side management. Each model contains the problem description, followed by its mathematical formulation and relevant Python code, and ends with suggestions for extensions and further reading. The code presented contains the programming of the core element or key idea in each problem and of course further work would be required for a stand-alone, full implementation. The book concludes with a very useful annex containing the more complex mathematical concepts and an introduction to relevant Python packages (numpy, pandas and matplotlib).
Reviewer: Efstratios Rappos (Aubonne)Intelligent decision support systems. Combining operations research and artificial intelligence -- essays in honor of Roman Słowińskihttps://zbmath.org/1491.900052022-09-13T20:28:31.338867ZPublisher's description: This book presents a collection of essays written by leading researchers to honor Roman Slowinski's major scholarly interests and contributions. He is well-known for conducting extensive research on methodologies and techniques for intelligent decision support, where he combines operational research and artificial intelligence. The book reconstructs his main contributions, presents cutting-edge research and provides an outlook on the most promising and advanced domains of computer science and multiple criteria decision aiding.
The respective chapters cover a wide range of related research areas, including decision sciences, ordinal data mining, preference learning and multiple criteria decision aiding, modeling of uncertainty and imprecision in decision problems, rough set theory, fuzzy set theory, multi-objective optimization, project scheduling and decision support applications. As such, the book will appeal to researchers and scholars in related fields.
The articles of this volume will be reviewed individually.Meta-heuristic optimization techniques. Applications in engineeringhttps://zbmath.org/1491.900062022-09-13T20:28:31.338867ZPublisher's description: This book offers a thorough overview of the most popular and researched meta-heuristic optimization techniques and nature-inspired algorithms. Their wide applicability makes them a hot research topic and an efficient tool for the solution of complex optimization problems in various fields of science and engineering, and in numerous industries.
The articles of mathematical interest will be reviewed individually.Artificial bee colony optimization-inspired synergetic study of fractional-order economic production quantity modelhttps://zbmath.org/1491.900112022-09-13T20:28:31.338867Z"Rahaman, Mostafijur"https://zbmath.org/authors/?q=ai:rahaman.mostafijur"Mondal, Sankar Prasad"https://zbmath.org/authors/?q=ai:mondal.sankar-prasad"Shaikh, Ali Akbar"https://zbmath.org/authors/?q=ai:shaikh.ali-akbar"Pramanik, Prasenjit"https://zbmath.org/authors/?q=ai:pramanik.prasenjit"Roy, Samarjit"https://zbmath.org/authors/?q=ai:roy.samarjit"Maiti, Manas Kumar"https://zbmath.org/authors/?q=ai:maiti.manas-kumar"Mondal, Rituparna"https://zbmath.org/authors/?q=ai:mondal.rituparna"De, Debashis"https://zbmath.org/authors/?q=ai:de.debashisSummary: Inventory control is one of the most widely recognized real-world problems. This investigation deals with the application of fractional derivatives and integration to an inventory control problem. The memory of a dynamical model is an important issue that is commonly neglected by models described in terms of integer-order differential equations. The memory-capturing power of the fractional derivative (in Caputo's sense) is utilized here to describe an economic production quantity model with deterioration, where demand depends on price and stock and production is stock dependent. Also, this study covers the integer-order model with the same assumptions as a memoryless model and a particular case of the fractional model. Due to the complex nature of the model, numerical optimization with the help of a modified artificial bee colony algorithm is performed instead of an analytical approach. Finally, we have performed a sensitivity analysis in order to draw a fruitful conclusion.A novel fuzzy mathematical model for an integrated supply chain planning using multi-objective evolutionary algorithmhttps://zbmath.org/1491.900142022-09-13T20:28:31.338867Z"Alavidoost, M.
H."https://zbmath.org/authors/?q=ai:alavidoost.m-h"Jafarnejad, A."https://zbmath.org/authors/?q=ai:jafarnejad.a"Babazadeh, Hossein"https://zbmath.org/authors/?q=ai:babazadeh.hosseinSummary: Due to the high competition in today's global market, the design of an efficient supply chain is necessary to achieve competitive advantages. Concerning this issue, scheduling is one of the critical concepts in supply chain management, as it increases efficiency and customer satisfaction while decreasing costs. Recently, the build-to-order supply chain (BTO-SC) system has attracted considerable attention because of its successful implementation in high-tech companies such as Dell, BMW, Compaq, and Gateway. Despite several studies in the context of BTO-SC, several gaps remain that this study attempts to eliminate. In this paper, a fuzzy mixed-integer linear programming optimization model is proposed for an integrated supply, production, and distribution supply chain that is multi-product, multi-level, and multi-plant. Because of the uncertainty, imprecision and variability associated with real data, a fuzzy approach is utilized for processing such information in modeling the problem. Usually, supply chain problems have conflicting objectives that should be optimized simultaneously. Thus, the proposed model aims to optimize customer satisfaction as well as supply chain costs simultaneously. Due to the high complexity of this problem and the lack of a proper benchmark for it, the model is solved by some popular multi-objective meta-heuristic algorithms. Also, the Taguchi method is deployed to calibrate and control the parameters of the four mentioned algorithms. Finally, a new framework is proposed for ranking these algorithms using the fuzzy VIKOR method.
The presented framework is an excellent comparison dashboard, and it is applicable not only for comparing multi-objective genetic algorithms but also for comparing all multi-objective meta-heuristic algorithms.Credit linked two-stage multi-objective transportation problem in rough and bi-rough environmentshttps://zbmath.org/1491.900152022-09-13T20:28:31.338867Z"Bera, Raj Kumar"https://zbmath.org/authors/?q=ai:bera.raj-kumar"Mondal, Shyamal Kumar"https://zbmath.org/authors/?q=ai:mondal.shyamal-kumarSummary: With the rapid growth of research on the multi-objective transportation problem, rough and bi-rough sets are two new mathematical ideas for formulating real-world problems involving uncertain data. In this study, we have investigated a two-stage multi-objective transportation problem by considering a credit period policy under rough and bi-rough environments. In this regard, three conflicting objective functions have been optimized simultaneously under the same restrictions. In the first objective function, we have presented the minimization of the transportation cost of a production house. In the second objective function, the total transportation cost of the retailers has been minimized. In the last one, we have maximized the total profit of the distributors. Besides, due to the existence of different types of uncertainties in real-life problems, in the proposed model, independent parameters (including actual transportation cost, requirement of the retailers, and cost per unit distance) have been considered rough in nature, and dependent parameters such as demanded transportation cost and demand of the distributors have been considered bi-rough in nature. Moreover, to convert the uncertain model into an equivalent deterministic form, a rough and bi-rough programming approach has been derived along with the expected value approach. Finally, by using these ideas, the mathematical model of our considered transportation problem has been illustrated.
After that, the proposed model has been solved by applying the NSGA-II algorithm (elitist non-dominated sorting genetic algorithm) with some simulated numerical data. A sensitivity analysis associated with our proposed model has also been discussed to show the effectiveness of the model.A sustainable-resilience healthcare network for handling COVID-19 pandemichttps://zbmath.org/1491.900162022-09-13T20:28:31.338867Z"Goodarzian, Fariba"https://zbmath.org/authors/?q=ai:goodarzian.fariba"Ghasemi, Peiman"https://zbmath.org/authors/?q=ai:ghasemi.peiman"Gunasekaren, Angappa"https://zbmath.org/authors/?q=ai:gunasekaren.angappa"Taleizadeh, Ata Allah"https://zbmath.org/authors/?q=ai:taleizadeh.ata-allah"Abraham, Ajith"https://zbmath.org/authors/?q=ai:abraham.ajithSummary: In this paper, a new production, allocation, location, inventory holding, distribution, and flow problem for a sustainable-resilient health care network related to the COVID-19 pandemic under uncertainty is developed that also integrates sustainability aspects and resiliency concepts. Then, a multi-period, multi-product, multi-objective, and multi-echelon mixed-integer linear programming model for this network is formulated and designed. Formulating a new MILP model to design a sustainable-resilient healthcare network during the COVID-19 pandemic and developing three hybrid meta-heuristic algorithms are among the most important contributions of this research. In order to estimate the values of the required demand for medicines, a simulation approach is employed. To cope with uncertain parameters, stochastic chance-constrained programming is proposed. This paper also proposes three meta-heuristic methods, including multi-objective teaching-learning-based optimization (TLBO), particle swarm optimization (PSO), and a genetic algorithm (GA), to find Pareto solutions. Since heuristic approaches are sensitive to input parameters, the Taguchi approach is suggested to control and tune the parameters.
A comparison is performed by using eight assessment metrics to validate the quality of the Pareto frontiers obtained by the heuristic methods on the test problems. To validate the current model, a set of sensitivity analyses on important parameters and a real case study in the United States are provided. Based on the empirical results, the computational time and the eight assessment metrics, the proposed methodology seems to work well for the considered problems. The results show that, by raising the transportation costs, the total cost and the environmental impacts of sustainability increased steadily, and the trend of the social responsibility of staff rose gradually between \(- 20\) and 0\% but dropped suddenly from 0 to +20\%. Also, in terms of the on-resiliency of the proposed network, the trends climbed slightly and steadily. Applications of this paper can be useful for hospitals, pharmacies, distributors, medicine manufacturers and the Ministry of Health.JMD method for transforming an unbalanced fully intuitionistic fuzzy transportation problem into a balanced fully intuitionistic fuzzy transportation problemhttps://zbmath.org/1491.900212022-09-13T20:28:31.338867Z"Mishra, Akansha"https://zbmath.org/authors/?q=ai:mishra.akansha"Kumar, Amit"https://zbmath.org/authors/?q=ai:kumar.amit.1|kumar.amit.2|kumar.amit-n|kumar.amitSummary: \textit{A. Mahmoodirad} et al. [ibid. 23, No. 12, 4521--4530 (2019; Zbl 1418.90036)] proposed an approach for solving fully intuitionistic fuzzy transportation problems (FIFTPs) (transportation problems in which each parameter is represented as a triangular intuitionistic fuzzy number). In this approach, firstly, an unbalanced fully intuitionistic fuzzy transportation problem (FIFTP) is transformed into a balanced FIFTP, and then the intuitionistic fuzzy (IF) optimal solution of the transformed balanced FIFTP is obtained.
In this paper, it is shown that Mahmoodirad et al.'s approach fails to transform an unbalanced FIFTP and hence fails to find the IF optimal solution of unbalanced FIFTPs. It is obvious that, to overcome this limitation of Mahmoodirad et al.'s approach, there is a need to propose a method to transform an unbalanced FIFTP into a balanced FIFTP. Therefore, in this paper, a new method (named the JMD method) is proposed to transform an unbalanced FIFTP into a balanced FIFTP.An algorithmic approach to solve unbalanced triangular fuzzy transportation problemshttps://zbmath.org/1491.900222022-09-13T20:28:31.338867Z"Muthuperumal, S."https://zbmath.org/authors/?q=ai:muthuperumal.s"Titus, P."https://zbmath.org/authors/?q=ai:titus.p"Venkatachalapathy, M."https://zbmath.org/authors/?q=ai:venkatachalapathy.mSummary: This paper presents problems in fuzzy transportation, which deal with fuzzy costs, fuzzy supplies, and demands of a fuzzy nature on any quantity transported. The paper deals with the minimization of the total fuzzy cost under fuzzified decision variables. The proposed method gives a better optimum for fuzzy transportation problems (FTPs) of both unbalanced and balanced types. The method is intended to obtain a basic feasible solution (or ``initial basic feasible solution'') (IBFS) of unbalanced fuzzy problems with triangular fuzzy numbers. A new and simple heuristic approach for obtaining an optimum solution of the triangular fuzzy unbalanced transportation problem is proposed that reduces the number of iterations in the optimization process. A given triangular fuzzy unbalanced TP is converted into a modified triangular unbalanced FTP by increasing the fuzzy demand/fuzzy supply of an origin and a destination, and the latter is then solved by the new method.
Also, an illustrative numerical example is discussed for the proposed method, solving a triangular FTP with \(m\) origins and \(n\) destinations, respectively.Uncertain four-dimensional multi-objective multi-item transportation models via GP techniquehttps://zbmath.org/1491.900232022-09-13T20:28:31.338867Z"Sahoo, Palash"https://zbmath.org/authors/?q=ai:sahoo.palash"Jana, Dipak Kumar"https://zbmath.org/authors/?q=ai:jana.dipak-kumar"Pramanik, Sutapa"https://zbmath.org/authors/?q=ai:pramanik.sutapa"Panigrahi, Goutam"https://zbmath.org/authors/?q=ai:panigrahi.goutamSummary: In this paper, a new type of four-dimensional multi-objective multi-item transportation problem is established using uncertainty theory. We formulate and derive the expected value goal programming model and the chance-constrained goal programming model based on uncertainty theory, where unit transportation cost, availabilities, capacities of conveyances, demands, unit transportation time, and unit loading and unloading time are represented as uncertain matrices. Based on some properties of uncertainty theory, the expected value goal programming model and the chance-constrained goal programming model are transformed into their corresponding deterministic equivalent forms via a soft computing technique, i.e., the generalized reduced gradient technique of LINGO-14.0. After that, a real-life numerical example is given to illustrate the performance of the models. Finally, the sensitivity analysis of the proposed model is presented through the chance-constrained goal programming method with respect to different confidence levels.Solving an integrated cell formation and group layout problem using a simulated annealing enhanced by linear programminghttps://zbmath.org/1491.900802022-09-13T20:28:31.338867Z"Forghani, Kamran"https://zbmath.org/authors/?q=ai:forghani.kamran"Fatemi Ghomi, S. M.
T."https://zbmath.org/authors/?q=ai:fatemi-ghomi.seyyed-mohammad-taghi"Kia, Reza"https://zbmath.org/authors/?q=ai:kia.rezaSummary: This paper proposes a new approach to integrating the cell formation, group layout of rectangle-shaped machines, and routing selection problems. The problem is formulated as a mixed-integer program with the objective of minimizing the handling costs. Due to the computational complexity of the problem, a hybrid simulated annealing (SA) algorithm is employed to solve the problem. The sequence-pair representation, originally proposed for block placement, is utilized for solution encoding. Two placement algorithms are developed to evaluate the objective function value of an encoded solution in the SA. The first placement algorithm is based on solving a linear program embedded with a constraint reduction algorithm. The second one is a fast heuristic that can evaluate an encoded solution in much less computational time. Benchmarks selected from the literature are solved to verify the performance of the SA and to make comparisons. The computational results demonstrate the high performance of both designed hybrid SA algorithms.
The comparison against the literature also indicated that the flexibility incorporated into the model results in better layouts, even if the model is solved only by a solver such as CPLEX.Multi-objective evolutionary algorithm for solving energy-aware fuzzy job shop problemshttps://zbmath.org/1491.900832022-09-13T20:28:31.338867Z"González-Rodríguez, Inés"https://zbmath.org/authors/?q=ai:gonzalez-rodriguez.ines"Puente, Jorge"https://zbmath.org/authors/?q=ai:puente.jorge"Palacios, Juan José"https://zbmath.org/authors/?q=ai:palacios.juan-jose"Vela, Camino R."https://zbmath.org/authors/?q=ai:vela.camino-rSummary: A growing concern about the environmental impact of manufacturing processes and in particular the associated energy consumption has recently driven some researchers within the scheduling community to consider energy costs in addition to more traditional performance-related measures, such as satisfaction of due-date commitments. Recent research is also devoted to narrowing the gap between real-world applications and academic problems by handling uncertainty in some input data. In this paper, we address the job shop scheduling problem, a well-known hard problem with many applications, using fuzzy sets to model uncertainty in processing times and with the target of finding solutions that perform well with respect to both due-date fulfilment and energy efficiency. The resulting multi-objective problem is solved using an evolutionary algorithm based on the NSGA-II procedure, where the decoding operator incorporates a new heuristic procedure in order to improve the solutions' energy consumption. This heuristic is based on a theoretical analysis of the changes in energy consumption when a solution is subject to slight changes, referred to as local right shifts. 
The experimental results support the theoretical study and show the potential of the proposal.Solving technician routing and scheduling problem using improved particle swarm optimizationhttps://zbmath.org/1491.900872022-09-13T20:28:31.338867Z"Pekel, Engin"https://zbmath.org/authors/?q=ai:pekel.enginSummary: In this paper, an improved particle swarm optimization (IPSO) algorithm is proposed to solve the technician routing and scheduling problem (TRSP). The TRSP consists of the assignment of technicians into teams, the assignment of teams to tasks, the construction of routes, and the selection of the day on which a service is provided, considering the proficiency level of the workers and the proficiency requirement of the task. The paper considers the planning horizon as a multi-period one covering 5 days, which further increases the complexity of the problem; a task can then be fulfilled on any one of the 5 days. The IPSO algorithm combines a particle swarm optimization (PSO) algorithm with one neighborhood operator. The neighborhood operator is used to escape local optima, since the global best solution found by PSO can fall into a local solution trap. Further, the proposed algorithm's performance is experimentally compared with the branch-and-cut algorithm for the solution of the TRSP, on benchmark instances generated from the literature.
The computational results show that IPSO provides better solutions than the branch-and-cut algorithm within reasonable computing time.Scheduling problem in seru production system considering DeJong's learning effect and job splittinghttps://zbmath.org/1491.900892022-09-13T20:28:31.338867Z"Zhang, Zhe"https://zbmath.org/authors/?q=ai:zhang.zhe"Song, Xiaoling"https://zbmath.org/authors/?q=ai:song.xiaoling"Huang, Huijun"https://zbmath.org/authors/?q=ai:huang.huijun"Yin, Yong"https://zbmath.org/authors/?q=ai:yin.yong"Lev, Benjamin"https://zbmath.org/authors/?q=ai:lev.benjaminSummary: \textit{Seru} is a relatively new type of Japanese production mode that originated in the electronic assembly industry. In practice, \textit{seru} production has been proven to be efficient, flexible and quick to respond, and it can cope with fluctuating production demands in the current volatile market. This paper focuses on scheduling problems in the \textit{seru} production system. Motivated by the reality of the labor-intensive assembly industry, we consider the learning effect of workers and job splitting, with the objective of minimizing the total completion time. A nonlinear integer programming model for the \textit{seru} scheduling problem is provided, and it is proved to be polynomially solvable. Therefore, a branch and bound algorithm is designed for small-sized \textit{seru} scheduling problems, while a local search-based hybrid genetic algorithm employing the shortest processing time rule is provided for large-sized problems.
Finally, computational experiments are conducted, and the results demonstrate the practicability of the proposed \textit{seru} scheduling model and the efficiency of our solution methods.Development of two-stage parallel-series system with fuzzy data: a fuzzy DEA approachhttps://zbmath.org/1491.900912022-09-13T20:28:31.338867Z"Arya, Alka"https://zbmath.org/authors/?q=ai:arya.alka"Singh, Sanjeet"https://zbmath.org/authors/?q=ai:singh.sanjeetSummary: We consider a two-stage parallel-series system having three sub-systems. The two independent sub-systems of the first stage are linked in parallel and then linked to the third sub-system of the second stage in series. The deterministic two-stage parallel-series system approach is extended to an uncertain/imprecise environment where the data are represented as fuzzy numbers. Using the Zadeh extension principle, we develop a fuzzy two-stage parallel-series system to determine the lower and upper bound fuzzy efficiencies of the decision-making units with the help of \(\alpha \)-cuts and rank the DMUs using the ranking index approach. The proposed methodology is illustrated using the case of Taiwan's non-life insurance companies.Interval network Malmquist productivity index for examining productivity changes of insurance companies under data uncertainty: a case studyhttps://zbmath.org/1491.900932022-09-13T20:28:31.338867Z"Esmaeili, Fatemeh Sadat Seyed"https://zbmath.org/authors/?q=ai:esmaeili.fatemeh-sadat-seyed"Rostamy-Malkhalifeh, Mohsen"https://zbmath.org/authors/?q=ai:rostamy-malkhalifeh.mohsen"Lotfi, Farhad Hosseinzadeh"https://zbmath.org/authors/?q=ai:hosseinzadeh-lotfi.farhadSummary: The insurance industry is one of the important financial institutions that have a significant place in the economic growth and development of a country. Given the industry's influential role in the financial markets, it is imperative to evaluate the performance of insurance companies and calculate the changes in their productivity over time.
It is necessary to explain that the internal structure of insurance companies can be considered a two-stage process involving marketing and investment. The purpose of the current study is to propose a novel approach to calculating the changes in insurance companies' productivity by considering their two-stage structure as well as the inherent uncertainties in the data. It should be noted that, in order to propose the new interval network Malmquist productivity index, the network data envelopment analysis (NDEA) approach, the Malmquist productivity index (MPI), and interval programming are applied. The implementation of the proposed research approach is also evaluated using real data of 10 insurance companies in Iran. According to the obtained results, most of the companies have regressed in the first stage, from the marketing perspective, but in the second stage, from the investment perspective, the majority of companies have shown an acceptable improvement in their productivity.Decision making using new category of similarity measures and study their applications in medical diagnosis problemshttps://zbmath.org/1491.900952022-09-13T20:28:31.338867Z"Khalil, Shuker Mahmood"https://zbmath.org/authors/?q=ai:khalil.shuker-mahmoodSummary: The aim of this paper is to propose a new category of similarity measures. We begin by introducing the concept of the effect matrix \(\overset{\wedge}{T} (R_1 \times R_2 \times R_3 ,p_m ,q_n ,f_v)\) of type 3-tuple and study some of its properties. Moreover, from the soft set, the pictures of regular and irregular type are found using the effect matrix \(\overset{\wedge}{T}(R_1 \times R_2 \times R_3 ,p_m ,q_n ,f_v)\) of type 3-tuple.
An effect matrix \(\overset{\wedge}{T} (R_1 \times R_2 \times R_3 ,p_m ,q_n ,f_v )\) of type 3-tuple is better than an effect matrix \(\overset{\wedge}{T}(\mathfrak{R}_1 \times \mathfrak{R}_2 ,p_n ,q_m)\) of type 2-tuple, because we can deal with three different sets \(R_1\) (a family of objectives), \(R_2\) (a family of parameters), \(R_3\) (a family of second parameters) in the same problem. This burden can be alleviated by applying the 3-tuple type. Some applications of the soft effect matrix of type 3-tuple \(\overset{\wedge}{T} (R_1 \times R_2 \times R_3 ,p_m ,q_n ,f_v)\) in decision-making problems are studied and explained. In this work we deal with pictures. Moreover, the similarity measure between two different soft sets under the same universal soft set \((R_1 \times R_2 \times R_3)\) can be studied, and its applications in medical diagnosis problems are explained.A new soft computing approach for green supplier selection problem with interval type-2 trapezoidal fuzzy statistical group decision and avoidance of information losshttps://zbmath.org/1491.900962022-09-13T20:28:31.338867Z"Mousavi, S. M."https://zbmath.org/authors/?q=ai:mousavi.s-meysam"Foroozesh, N."https://zbmath.org/authors/?q=ai:foroozesh.n"Zavadskas, E. K."https://zbmath.org/authors/?q=ai:zavadskas.edmundas-kazimieras"Antucheviciene, J."https://zbmath.org/authors/?q=ai:antucheviciene.jurgitaSummary: The green supplier selection problem (GSSP) is viewed as a multiple-attribute group decision-making (MAGDM) issue that includes green growth and influential factors of both subjective and objective natures. Because of the expanding uncertain conditions of social and economic environments, some assessment factors are not sufficiently described by numerical appraisals and classic fuzzy sets. Moreover, supply chain decision makers (DMs) may not provide complete rationality under numerous viable choice circumstances.
In this research, a new MAGDM model based on interval type-2 trapezoidal fuzzy numbers (IT2TrFNs) is proposed, built on matrices of possibilistic mean and standard deviation statistical concepts. A new method for weighting experts within the group decision-making process is developed based on possibilistic statistical information. Also, a new ranking process based on relative-closeness coefficients is presented to rank all green supplier candidates under IT2TrF uncertainty. Finally, an illustrative example from supply chain networks is presented in which green supplier candidates are appraised with respect to several factors by the proposed model, together with a comparison to a recent decision method.The multi-zone location-routing problem with pricing: a flow-based formulation and two heuristic approacheshttps://zbmath.org/1491.900972022-09-13T20:28:31.338867Z"Sadeghi Dastaki, Mohsen"https://zbmath.org/authors/?q=ai:sadeghi-dastaki.mohsen"Setak, Mostafa"https://zbmath.org/authors/?q=ai:setak.mostafa"Karimi, Hossein"https://zbmath.org/authors/?q=ai:karimi.hosseinSummary: This paper integrates the concept of pricing into the location-routing problem. The problem concerns a firm that optimizes its multi-zone network in order to maximize its profit. The firm divides its large market into zones and determines the optimal zonal price of the products. It then dispatches its products from its own location to a depot in each zone, from which a vehicle distributes the products among customers. This setting matches real-world distribution systems for goods such as fruit, textiles, and leather. In other words, the main objectives of the firm are to obtain the optimal price, vehicle routes, and depot location in each zone. Therefore, a flow-based model, formulated as a mixed-integer nonlinear program, is proposed to solve the problem. In light of this nonlinearity, we employ a piecewise linearization approximation method.
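As a hedged illustration of the linearization step just mentioned (not the paper's actual model), the sketch below approximates a hypothetical concave revenue term \(f(p) = p(a - bp)\) by linear interpolation over a grid of breakpoints. In a MINLP this would be encoded with SOS2/lambda variables, but the approximation error behaves the same way:

```python
# Illustrative sketch (not the paper's model): piecewise linear
# approximation of a nonlinear revenue term f(p) = p * (a - b * p),
# the kind of term replaced when linearizing a mixed-integer
# nonlinear program.
def piecewise_linear(f, lo, hi, k):
    """Return a function interpolating f on k equal segments of [lo, hi]."""
    xs = [lo + (hi - lo) * i / k for i in range(k + 1)]
    ys = [f(x) for x in xs]
    def approx(p):
        # locate the segment containing p and interpolate linearly
        i = min(int((p - lo) / (hi - lo) * k), k - 1)
        t = (p - xs[i]) / (xs[i + 1] - xs[i])
        return (1 - t) * ys[i] + t * ys[i + 1]
    return approx

a, b = 10.0, 0.5
revenue = lambda p: p * (a - b * p)          # concave nonlinear term
approx4 = piecewise_linear(revenue, 0.0, 20.0, 4)
approx64 = piecewise_linear(revenue, 0.0, 20.0, 64)

# Approximation error decreases as the number of segments grows.
err = lambda g: max(abs(revenue(p) - g(p)) for p in [i * 0.1 for i in range(201)])
print(err(approx4), err(approx64))
```

For a quadratic term the maximum interpolation error on a segment of length \(h\) is \(h^2 \max |f''|/8\), so refining the grid quickly makes the linearized model an accurate surrogate for the nonlinear one.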
In addition, to cope with large-scale problems, two heuristic algorithms with three combinations of local-search operators are suggested. To evaluate their performance, some test instances and a case study are solved. Based on the results, the algorithms are compared with each other and with the flow-based formulation. The results show that the algorithms outperform the flow-based model. Moreover, the results show that the linearization approach can efficiently approximate the nonlinear model. Furthermore, the impact of the shipping cost ratio on the depot selection, vehicles' routes, and firm's profit is revealed.Polynomial-time algorithm for linear programming based on a kernel function with hyperbolic-logarithmic barrier termhttps://zbmath.org/1491.900982022-09-13T20:28:31.338867Z"Touil, Imene"https://zbmath.org/authors/?q=ai:touil.imene"Chikouche, Wided"https://zbmath.org/authors/?q=ai:chikouche.widedSummary: In this work, we present an interior point algorithm for linear optimization problems based on a kernel function which has a hyperbolic-logarithmic function in its barrier term. This kernel function was first proposed by the authors themselves for semi-definite programming (SDP) problems in [the authors, Filomat 34, No. 12, 3957--3969 (2020; Zbl 07541463)]. Using simple analysis tools, several properties of the proposed kernel function are exploited to compute the total number of iterations. We show that the worst-case iteration complexity of our algorithm for large-update methods improves the iteration bounds obtained with the first trigonometric kernel function [\textit{M. El Ghami} et al., J. Comput. Appl. Math. 236, No. 15, 3613--3623 (2012; Zbl 1242.90292)] as well as with the classic kernel functions. For small-update methods, we derive the best known iteration bound.
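For readers unfamiliar with kernel functions in interior-point methods, the sketch below evaluates the classic logarithmic kernel \(\psi(t) = (t^2-1)/2 - \ln t\) (chosen only as a familiar stand-in, not the hyperbolic-logarithmic kernel of the paper) and checks the defining properties every such kernel shares: \(\psi(1) = \psi'(1) = 0\), strict convexity, and barrier growth as \(t \to 0^+\):

```python
import math

# Hedged illustration: the classic logarithmic kernel function
# psi(t) = (t^2 - 1)/2 - ln t. This is NOT the hyperbolic-logarithmic
# kernel of the paper, but it exhibits the defining properties shared by
# all kernel functions used in interior-point methods.
def psi(t):
    return (t * t - 1.0) / 2.0 - math.log(t)

def dpsi(t):          # psi'(t) = t - 1/t
    return t - 1.0 / t

# Global minimum at t = 1 with value 0.
print(psi(1.0), dpsi(1.0))           # -> 0.0 0.0
# Barrier term: psi grows without bound near the boundary t = 0.
print(psi(1e-6) > 1e1)               # -> True
```

Iteration complexity analyses of the kind summarized in the abstract are driven by growth estimates on such a \(\psi\) and its derivatives.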
Numerical tests reveal that the proposed kernel function yields promising results compared with some existing kernel functions.Fair packing and covering on a relative scalehttps://zbmath.org/1491.900992022-09-13T20:28:31.338867Z"Diakonikolas, Jelena"https://zbmath.org/authors/?q=ai:diakonikolas.jelena"Fazel, Maryam"https://zbmath.org/authors/?q=ai:fazel.maryam"Orecchia, Lorenzo"https://zbmath.org/authors/?q=ai:orecchia.lorenzoImproved distributed algorithms are obtained for constructing \(\varepsilon\)-approximate solutions to \(\alpha\)-fair packing problems. For \(\alpha\in [0,1)\), one of the main results (Theorem 4.4) shows that a solution with a \((1+\varepsilon)\)-relative error is reached within \(O(\log(n\rho)\log(mn\rho/\varepsilon)/(1-\alpha)^3\varepsilon^2)\) iterations. For \(\alpha=1\), one of the main results (Theorem 4.8) yields \(\varepsilon\)-approximate convergence in \(O(\log^3(mn\rho/\varepsilon)/\varepsilon^2)\) iterations. For \(\alpha>1\), one of the main results (Theorem 4.14) shows that a solution with a \((1-\varepsilon)\)-relative error is reached within
\(O(\max\{\alpha^3\log(n\rho/\varepsilon)\log(mn\rho/\varepsilon)/\varepsilon,\log(1/\varepsilon(\alpha-1))\log(mn\rho/\varepsilon)/\varepsilon(\alpha-1)\})\) iterations.
Reviewer: Yisheng Song (Hong Kong)NMR assignment through linear programminghttps://zbmath.org/1491.901002022-09-13T20:28:31.338867Z"Bravo-Ferreira, José F. S."https://zbmath.org/authors/?q=ai:ferreira.jose-f-s-bravo"Cowburn, David"https://zbmath.org/authors/?q=ai:cowburn.david"Khoo, Yuehaw"https://zbmath.org/authors/?q=ai:khoo.yuehaw"Singer, Amit"https://zbmath.org/authors/?q=ai:singer.amitSummary: Nuclear Magnetic Resonance (NMR) Spectroscopy is the second most used technique (after X-ray crystallography) for structural determination of proteins. A computational challenge in this technique involves solving a discrete optimization problem that assigns the resonance frequency to each atom in the protein. This paper introduces LIAN (LInear programming Assignment for NMR), a novel linear programming formulation of the problem which yields state-of-the-art results in simulated and experimental datasets.Inverse multiple criteria sorting problem with fuzzy parameters: an application of building energy labelling improvementhttps://zbmath.org/1491.901012022-09-13T20:28:31.338867Z"Ecer, Billur"https://zbmath.org/authors/?q=ai:ecer.billur"Kabak, Mehmet"https://zbmath.org/authors/?q=ai:kabak.mehmet"Dagdeviren, Metin"https://zbmath.org/authors/?q=ai:dagdeviren.metinSummary: Classification is the problem of assigning objects to predefined classes. Broadly, such problems divide into two groups: classification problems, in which the classes are unordered, and sorting problems, in which the classes are ordered. Besides these two groups, the Inverse Multiple Criteria Sorting Problem (IMSCP) has also been introduced into the literature in recent years. The IMSCP deals with finding the possible actions that can change the assignment of objects to classes in order to obtain a desired classification of the objects.
The main aim of this study is to propose an extension of the IMSCP with fuzzy parameters, together with a suitable solution approach. A case study of building energy labelling improvement for an existing building site in Ankara is solved using the parametric fuzzy solution approach of Carlsson and Korhonen. The results of the application present the possible actions for improving the energy labels of the buildings within the site. The results also show that the proposed model can be used to extend the current Building Energy Performance model in Turkey with efficiency improvement suggestions.Notes on \(\{a,b,c\}\)-modular matriceshttps://zbmath.org/1491.901022022-09-13T20:28:31.338867Z"Glanzer, Christoph"https://zbmath.org/authors/?q=ai:glanzer.christoph"Stallknecht, Ingo"https://zbmath.org/authors/?q=ai:stallknecht.ingo"Weismantel, Robert"https://zbmath.org/authors/?q=ai:weismantel.robertSummary: Let \(A \in \mathbb{Z}^{m \times n}\) be an integral matrix and \(a, b, c \in \mathbb{Z}\) satisfy \(a \geq b \geq c \geq 0\). The question is to recognize whether \(A\) is {\(a,b,c\)}\textit{-modular}, i.e., whether the set of \(n \times n\) subdeterminants of \(A\) in absolute value is \(\{a,b,c\}\). We succeed in solving this problem in polynomial time unless \(A\) possesses a \textit{duplicative relation}, that is, \(A\) has nonzero \(n \times n\) subdeterminants \(k_1\) and \(k_2\) satisfying \(2 \cdot|k_1| = |k_2|\). This is an extension of the well-known recognition algorithm for totally unimodular matrices.
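To make the definition concrete, the following brute-force sketch computes the set of absolute \(n \times n\) subdeterminants of a tiny integral matrix. This exponential enumeration is purely illustrative; the point of the paper is a polynomial-time recognition algorithm:

```python
from itertools import combinations

# Hedged sketch: brute-force computation of the set of absolute n x n
# subdeterminants of an integral m x n matrix A, enumerating all row
# subsets. Only meant to make the definition of {a,b,c}-modularity
# concrete on tiny instances.
def det(M):
    """Integer determinant by cofactor expansion (fine for tiny n)."""
    n = len(M)
    if n == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(n))

def abs_subdeterminants(A):
    m, n = len(A), len(A[0])
    return {abs(det([A[i] for i in rows])) for rows in combinations(range(m), n)}

# A totally unimodular example: its subdeterminant set lies inside {0, 1}.
A = [[1, 0], [0, 1], [1, 0]]
print(sorted(abs_subdeterminants(A)))   # -> [0, 1]
```

In this vocabulary, total unimodularity is exactly \(\{1,1,0\}\)- or \(\{1,0,0\}\)-style behaviour, and the paper asks when the analogous question for general \(\{a,b,c\}\) is polynomially decidable.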
As a consequence of our analysis, we present a polynomial time algorithm to solve integer programs in standard form over \(\{a,b,c\}\)-modular constraint matrices for any constants \(a, b\) and \(c\).Certifiably optimal sparse inverse covariance estimationhttps://zbmath.org/1491.901032022-09-13T20:28:31.338867Z"Bertsimas, Dimitris"https://zbmath.org/authors/?q=ai:bertsimas.dimitris-j"Lamperski, Jourdain"https://zbmath.org/authors/?q=ai:lamperski.jourdain-b"Pauphilet, Jean"https://zbmath.org/authors/?q=ai:pauphilet.jeanSummary: We consider the maximum likelihood estimation of sparse inverse covariance matrices. We demonstrate that current heuristic approaches primarily encourage robustness, instead of the desired sparsity. We give a novel approach that solves the cardinality constrained likelihood problem to certifiable optimality. The approach uses techniques from mixed-integer optimization and convex optimization, and provides a high-quality solution with a guarantee on its suboptimality, even if the algorithm is terminated early. Using a variety of synthetic and real datasets, we demonstrate that our approach can solve problems where the dimension of the inverse covariance matrix is in the 1,000s. We also demonstrate that our approach produces significantly sparser solutions than Glasso and other popular learning procedures and makes fewer false discoveries, while still maintaining state-of-the-art accuracy.Risk and complexity in scenario optimizationhttps://zbmath.org/1491.901042022-09-13T20:28:31.338867Z"Garatti, S."https://zbmath.org/authors/?q=ai:garatti.simone"Campi, M. C."https://zbmath.org/authors/?q=ai:campi.marco-claudioScenario optimization is a broad methodology to perform optimization based on empirical knowledge. In this paper the authors open a new direction of investigation: the risk that a performance is not achieved, or that constraints are violated, is studied jointly with the complexity of the solution.
The core achievement of this paper is showing that there exists a profound, and quite general, link between the two concepts of risk and complexity. Exploiting this link furnishes fundamental tools to evaluate the risk of scenario solutions, thus complementing the heuristic use of data in decision-making with a solid theory that enables one to certify the dependability of the decision. They show that the joint probability distribution of risk and complexity of scenario optimization problems is concentrated in such a way that the complexity carries fundamental information to tightly judge the risk.
Reviewer: I. M. Stancu-Minasian (Bucureşti)Problem-based optimal scenario generation and reduction in stochastic programminghttps://zbmath.org/1491.901052022-09-13T20:28:31.338867Z"Henrion, R."https://zbmath.org/authors/?q=ai:henrion.rene"Römisch, W."https://zbmath.org/authors/?q=ai:romisch.wernerIn this paper the authors study a problem-based approach to scenario generation and reduction for stochastic programming models without information constraints. The generation of scenarios is an important issue for solving applied stochastic programming models. Presently, Monte Carlo sampling methods are the preferred approach, but besides quasi-Monte Carlo and sparse grid methods, best approximation methods are also in use. The authors show that the optimal scenario generation problem can be formulated as a generalized semi-infinite program (Theorem 1) which is convex in some cases (Theorem 2), enjoys stability (Theorem 3), and allows a transformation into a standard semi-infinite program in a number of cases. The authors also revisit the problem of optimal scenario reduction for two-stage models and provide a new formulation as a mixed-integer linear semi-infinite program. In the last part, the authors present a mixed-integer linear semi-infinite program for optimal scenario generation in chance constrained programming. In the ``Appendix'' the authors provide a short description of the discretization method due to \textit{R. Reemtsen} [J. Optim. Theory Appl. 71, No. 1, 85--103 (1991; Zbl 0793.90088)]. Finally, the authors illustrate the approach to scenario generation on the classical newsvendor problem with random demand.
Reviewer: I. M. Stancu-Minasian (Bucureşti)Distributionally-robust machine learning using locally differentially-private datahttps://zbmath.org/1491.901062022-09-13T20:28:31.338867Z"Farokhi, Farhad"https://zbmath.org/authors/?q=ai:farokhi.farhadSummary: We consider machine learning, particularly regression, using locally-differentially private datasets. The Wasserstein distance is used to define an ambiguity set centered at the empirical distribution of the dataset corrupted by local differential privacy noise. The radius of the ambiguity set is selected based on the privacy budget, the spread of the data, and the size of the problem. Machine learning with a private dataset is rewritten as a distributionally-robust optimization problem. For general distributions, the distributionally-robust optimization problem can be relaxed as a regularized machine learning problem with the Lipschitz constant of the machine learning model as a regularizer. For Gaussian data, the distributionally-robust optimization problem can be solved exactly to find an optimal regularizer. Training with this regularizer can be posed as a semi-definite program.On distributionally robust optimization problems with \(k\)-th order stochastic dominance constraints induced by full random quadratic recoursehttps://zbmath.org/1491.901072022-09-13T20:28:31.338867Z"Zhang, Sainan"https://zbmath.org/authors/?q=ai:zhang.sainan"Guo, Shaoyan"https://zbmath.org/authors/?q=ai:guo.shaoyan"Zhang, Liwei"https://zbmath.org/authors/?q=ai:zhang.liwei"Zhang, Hongwei"https://zbmath.org/authors/?q=ai:zhang.hongwei.2Summary: In this paper, we consider optimization problems with a \(k\)-th order stochastic dominance constraint on the objective function of two-stage stochastic programs with full random quadratic recourse.
By establishing the Lipschitz continuity of the feasible set mapping under some pseudo-metric, we show the Lipschitz continuity of the optimal value function and the upper semicontinuity of the optimal solution mapping of the problem. Furthermore, by the Hölder continuity of the parameterized ambiguity set under the pseudo-metric, we demonstrate quantitative stability results for the feasible set mapping, the optimal value function and the optimal solution mapping of the corresponding distributionally robust problem.Maximum feasible subsystems of distance geometry constraintshttps://zbmath.org/1491.901082022-09-13T20:28:31.338867Z"Bruglieri, Maurizio"https://zbmath.org/authors/?q=ai:bruglieri.maurizio"Cordone, Roberto"https://zbmath.org/authors/?q=ai:cordone.roberto"Liberti, Leo"https://zbmath.org/authors/?q=ai:liberti.leoSummary: We study the problem of satisfying the maximum number of distance geometry constraints with minimum experimental error. This models the determination of the shape of proteins from atomic distance data which are obtained from nuclear magnetic resonance experiments and exhibit experimental and systematic errors. Experimental errors are represented by interval constraints on Euclidean distances. Systematic errors occur from a misassignment of distances to wrong atomic pairs: we represent such errors by maximizing the number of satisfiable distance constraints. We present many mathematical programming formulations, as well as a ``matheuristic'' algorithm based on reformulations, relaxations, restrictions and refinement.
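A hedged sketch of the quantity being maximized: given candidate atomic coordinates, count how many interval distance constraints \(l_{ij} \le \Vert x_i - x_j \Vert \le u_{ij}\) hold. All data below is made up for illustration; the paper's formulations and matheuristic are far more involved:

```python
import math

# Illustrative sketch of the maximum-feasible-subsystem objective for
# distance geometry: count how many interval constraints on pairwise
# Euclidean distances a candidate point configuration satisfies.
# The coordinates and intervals below are hypothetical toy data.
def satisfied(coords, constraints):
    count = 0
    for i, j, lo, hi in constraints:
        d = math.dist(coords[i], coords[j])
        if lo <= d <= hi:
            count += 1
    return count

coords = {0: (0.0, 0.0, 0.0), 1: (1.0, 0.0, 0.0), 2: (0.0, 1.0, 0.0)}
constraints = [
    (0, 1, 0.9, 1.1),   # satisfied: distance 1.0
    (1, 2, 1.3, 1.5),   # satisfied: distance sqrt(2) ~ 1.414
    (0, 2, 2.0, 2.5),   # violated (a "misassigned" distance): 1.0
]
print(satisfied(coords, constraints))   # -> 2
```

The optimization problem seeks coordinates maximizing this count, with the interval widths modelling experimental error and the discarded constraints modelling systematic misassignments.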
We show that this algorithm works on protein graphs with hundreds of atoms and thousands of distances.Comparison of active-set and gradient projection-based algorithms for box-constrained quadratic programminghttps://zbmath.org/1491.901092022-09-13T20:28:31.338867Z"Crisci, Serena"https://zbmath.org/authors/?q=ai:crisci.serena"Kružík, Jakub"https://zbmath.org/authors/?q=ai:kruzik.jakub"Pecha, Marek"https://zbmath.org/authors/?q=ai:pecha.marek"Horák, David"https://zbmath.org/authors/?q=ai:horak.davidSummary: On four chosen benchmarks, this paper presents experimental evidence of the efficiency of active-set-based algorithms and of a gradient projection scheme exploiting a Barzilai-Borwein-based steplength rule for box-constrained quadratic programming problems, both of which have a theoretically proven rate of convergence. The crucial phase of active-set-based algorithms is the identification of the appropriate active set by combining three types of steps -- a classical minimization step, a step expanding the active set, and a step reducing it. The presented algorithms employ various strategies, based on the components of the gradient, to update this active set so that the process is fast, reliable, and avoids undesirable oscillations of the active set size.A study of piecewise linear-quadratic programshttps://zbmath.org/1491.901102022-09-13T20:28:31.338867Z"Cui, Ying"https://zbmath.org/authors/?q=ai:cui.ying"Chang, Tsung-Hui"https://zbmath.org/authors/?q=ai:chang.tsung-hui"Hong, Mingyi"https://zbmath.org/authors/?q=ai:hong.mingyi"Pang, Jong-Shi"https://zbmath.org/authors/?q=ai:pang.jong-shiSummary: Motivated by a growing list of nontraditional statistical estimation problems of the piecewise kind, this paper provides a survey of known results supplemented with new results for the class of piecewise linear-quadratic programs. These are linearly constrained optimization problems with piecewise linear-quadratic objective functions.
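A familiar concrete member of this class is the Huber function, quadratic on \([-\delta, \delta]\) and linear outside. It is used below purely as an illustration of a piecewise linear-quadratic function whose pieces join smoothly even though the second derivative jumps:

```python
# A concrete piecewise linear-quadratic function of the kind the survey
# studies: the Huber function (quadratic on [-delta, delta], linear
# outside). Chosen here only as a familiar illustration; the survey
# treats the class in full generality.
def huber(x, delta=1.0):
    return 0.5 * x * x if abs(x) <= delta else delta * (abs(x) - 0.5 * delta)

# The two pieces agree in value and slope at the breakpoints +-delta,
# so the function is C^1 although its second derivative jumps there.
eps = 1e-7
print(huber(1.0))                                    # -> 0.5
slope_left  = (huber(1.0) - huber(1.0 - eps)) / eps  # ~ 1.0
slope_right = (huber(1.0 + eps) - huber(1.0)) / eps  # ~ 1.0
print(abs(slope_left - slope_right) < 1e-5)          # -> True
```

First- and second-order directional derivatives of such functions are piecewise formulas of exactly this type, which is what makes the local optimality theory surveyed in the paper tractable.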
We first summarize some local properties of a piecewise linear-quadratic function in terms of its first- and second-order directional derivatives. We next extend some well-known necessary and sufficient second-order conditions for local optimality of a quadratic program to a piecewise linear-quadratic program and provide a dozen such equivalent conditions for strong, strict, and isolated local optimality, showing in particular that a piecewise linear-quadratic program has the same characterizations for local minimality as a standard quadratic program. As a consequence of one such condition, we show that the number of strong, strict, or isolated local minima of a piecewise linear-quadratic program is finite; this result supplements a recent result about the finite number of directional stationary objective values. We also consider a special class of unconstrained composite programs involving a non-differentiable norm function, for which we show that the task of verifying the second-order stationary condition can be converted to the problem of checking the copositivity of a certain Schur complement on the nonnegative orthant.On standard quadratic programs with exact and inexact doubly nonnegative relaxationshttps://zbmath.org/1491.901112022-09-13T20:28:31.338867Z"Gökmen, Y. Görkem"https://zbmath.org/authors/?q=ai:gokmen.y-gorkem"Yıldırım, E. Alper"https://zbmath.org/authors/?q=ai:yildirim.emre-alperSummary: The problem of minimizing a (nonconvex) quadratic form over the unit simplex, referred to as a standard quadratic program, admits an exact convex conic formulation over the computationally intractable cone of completely positive matrices. Replacing the intractable cone in this formulation by the larger but tractable cone of doubly nonnegative matrices, i.e., the cone of positive semidefinite and componentwise nonnegative matrices, one obtains the so-called doubly nonnegative relaxation, whose optimal value yields a lower bound on that of the original problem.
We present a full algebraic characterization of the set of instances of standard quadratic programs that admit an exact doubly nonnegative relaxation. This characterization yields an algorithmic recipe for constructing such an instance. In addition, we explicitly identify three families of instances for which the doubly nonnegative relaxation is exact. We establish several relations between the so-called convexity graph of an instance and the tightness of the doubly nonnegative relaxation. We also provide an algebraic characterization of the set of instances for which the doubly nonnegative relaxation has a positive gap and show how to construct such an instance using this characterization.A geometrical analysis on convex conic reformulations of quadratic and polynomial optimization problemshttps://zbmath.org/1491.901122022-09-13T20:28:31.338867Z"Kim, Sunyoung"https://zbmath.org/authors/?q=ai:kim.sunyoung"Kojima, Masakazu"https://zbmath.org/authors/?q=ai:kojima.masakazu"Toh, Kim-Chuan"https://zbmath.org/authors/?q=ai:toh.kimchuanAn accelerated minimal gradient method with momentum for strictly convex quadratic optimizationhttps://zbmath.org/1491.901132022-09-13T20:28:31.338867Z"Oviedo, Harry"https://zbmath.org/authors/?q=ai:oviedo.harry"Dalmau, Oscar"https://zbmath.org/authors/?q=ai:dalmau.oscar"Herrera, Rafael"https://zbmath.org/authors/?q=ai:herrera.rafaelSummary: In this article we address the problem of minimizing a strictly convex quadratic function using a novel iterative method. The new algorithm is based on the well-known Nesterov's accelerated gradient method. At each iteration of our scheme, the new point is computed by performing a line-search scheme using a search direction given by a linear combination of three terms, whose parameters are chosen so that the residual norm is minimized at each step of the process. 
We establish the linear convergence of the proposed method and show that its convergence rate factor is analogous to the one available for other gradient methods. Finally, we present preliminary numerical results on some sets of synthetic and real strictly convex quadratic problems, showing that the proposed method outperforms, in terms of efficiency, a wide collection of state-of-the-art gradient methods, and that it is competitive with the conjugate gradient method in terms of CPU time and number of iterations.On the tightness of SDP relaxations of QCQPshttps://zbmath.org/1491.901142022-09-13T20:28:31.338867Z"Wang, Alex L."https://zbmath.org/authors/?q=ai:wang.alex-l"Kılınç-Karzan, Fatma"https://zbmath.org/authors/?q=ai:kilinc-karzan.fatmaSummary: Quadratically constrained quadratic programs (QCQPs) are a fundamental class of optimization problems well-known to be NP-hard in general. In this paper we study conditions under which the standard semidefinite program (SDP) relaxation of a QCQP is tight. We begin by outlining a general framework for proving such sufficient conditions. Then using this framework, we show that the SDP relaxation is tight whenever the quadratic eigenvalue multiplicity, a parameter capturing the amount of symmetry present in a given problem, is large enough. We present similar sufficient conditions under which the projected epigraph of the SDP gives the convex hull of the epigraph in the original QCQP.
Our results also imply new sufficient conditions for the tightness (as well as convex hull exactness) of a second order cone program relaxation of simultaneously diagonalizable QCQPs.Closing the gap between necessary and sufficient conditions for local nonglobal minimizer of trust region subproblemhttps://zbmath.org/1491.901152022-09-13T20:28:31.338867Z"Wang, Jiulin"https://zbmath.org/authors/?q=ai:wang.jiulin"Xia, Yong"https://zbmath.org/authors/?q=ai:xia.yongDuality of sum of nonnegative circuit polynomials and optimal SONC boundshttps://zbmath.org/1491.901162022-09-13T20:28:31.338867Z"Papp, Dávid"https://zbmath.org/authors/?q=ai:papp.davidSummary: Circuit polynomials are polynomials with properties that make it easy to compute sharp and certifiable global lower bounds for them. Consequently, one may use them to find certifiable lower bounds for any polynomial by writing it as a sum of circuit polynomials with known lower bounds. Recent work has shown that sums of nonnegative circuit polynomials (or SONC polynomials for short) can be used to compute global lower bounds (called SONC bounds) for polynomials in this manner very efficiently both in theory and in practice, if the polynomial is bounded from below and its support satisfies a certain nondegeneracy assumption. The quality of the SONC bound depends on the circuits used in the computation but finding the set of circuits that yield the best attainable SONC bound among the astronomical number of candidate circuits is a non-trivial task that has not been addressed so far. We propose an efficient method to compute the optimal SONC lower bound by iteratively identifying the optimal circuits to use in the SONC bounding process. The method is derived from a new proof of the result that every SONC polynomial decomposes into SONC polynomials on the same support. 
This proof is based on convex programming duality and motivates a column generation approach that is particularly attractive for sparse polynomials of high degree and with many unknowns. The method is implemented and tested on a large set of sparse polynomial optimization problems with up to 40 unknowns, of degree up to 60, and up to 3000 monomials in the support. The results indicate that the method is efficient in practice and requires only a small number of iterations to identify the optimal circuits.Local convergence of tensor methodshttps://zbmath.org/1491.901172022-09-13T20:28:31.338867Z"Doikov, Nikita"https://zbmath.org/authors/?q=ai:doikov.nikita"Nesterov, Yurii"https://zbmath.org/authors/?q=ai:nesterov.yuriiSummary: In this paper, we study local convergence of high-order Tensor Methods for solving convex optimization problems with composite objective. We justify local superlinear convergence under the assumption that the smooth component is uniformly convex and has a Lipschitz-continuous high-order derivative. The convergence both in function value and in the norm of minimal subgradient is established. Global complexity bounds for the Composite Tensor Method in convex and uniformly convex cases are also discussed. Lastly, we show how local convergence of the methods can be globalized using inexact proximal iterations.Geometrical convergence rate for distributed optimization with time-varying directed graphs and uncoordinated step-sizeshttps://zbmath.org/1491.901182022-09-13T20:28:31.338867Z"Lü, Qingguo"https://zbmath.org/authors/?q=ai:lu.qingguo"Li, Huaqing"https://zbmath.org/authors/?q=ai:li.huaqing"Xia, Dawen"https://zbmath.org/authors/?q=ai:xia.dawenSummary: This paper studies a class of distributed optimization algorithms executed by a set of agents, where each agent has access only to its own local convex objective function, and the goal of the agents is to jointly minimize the sum of all the local functions.
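A toy sketch of this distributed setting (not the paper's algorithm, which handles time-varying digraphs with column-stochastic matrices and uncoordinated step-sizes): each agent holds a hypothetical local objective \(f_i(x) = \frac{1}{2}(x - c_i)^2\), so the network-wide minimizer of \(\sum_i f_i\) is the mean of the \(c_i\); agents mix with ring neighbours using doubly stochastic weights and take diminishing gradient steps:

```python
# Toy sketch of the distributed setup only: each agent i holds
# f_i(x) = 0.5 * (x - c_i)^2, so the minimizer of sum_i f_i is mean(c).
# Agents average with their two ring neighbours (doubly stochastic
# weights) and take diminishing local gradient steps -- basic "DGD",
# much simpler than the paper's scheme for time-varying digraphs.
def dgd(c, iters=2000):
    n = len(c)
    x = [0.0] * n
    for k in range(1, iters + 1):
        step = 1.0 / k
        mixed = [0.5 * x[i] + 0.25 * x[(i - 1) % n] + 0.25 * x[(i + 1) % n]
                 for i in range(n)]                              # consensus
        x = [mixed[i] - step * (x[i] - c[i]) for i in range(n)]  # local gradient
    return x

c = [1.0, 5.0, 3.0, 7.0]
x = dgd(c)
print(x)   # every agent approaches the global minimizer mean(c) = 4.0
```

With diminishing steps this simple scheme converges only sublinearly; obtaining a geometric (linear) rate with constant uncoordinated step-sizes is precisely the contribution the abstract describes.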
Communication among the agents is described by a sequence of time-varying directed graphs which are assumed to be uniformly strongly connected. Column-stochastic mixing matrices are employed in the algorithm, which exactly steers all the agents to asymptotically converge to a global, consensual optimal solution even when the step-sizes are uncoordinated. Two fairly standard conditions for achieving the geometric convergence rate are established under the assumption that the objective functions are strongly convex and have Lipschitz continuous gradients. The theoretical analysis shows that the distributed algorithm is capable of driving the whole network to geometrically converge to an optimal solution of the convex optimization problem as long as the uncoordinated step-sizes do not exceed some upper bound. We also give an explicit analysis of the convergence rate of our algorithm through a different approach. Finally, simulation results illustrate the feasibility of the proposed algorithm and the effectiveness of the theoretical analysis throughout this paper.Accelerated methods with fastly vanishing subgradients for structured non-smooth minimizationhttps://zbmath.org/1491.901192022-09-13T20:28:31.338867Z"Maingé, Paul-Emile"https://zbmath.org/authors/?q=ai:mainge.paul-emile"Labarre, Florian"https://zbmath.org/authors/?q=ai:labarre.florianSummary: In a real Hilbert space, we study a new class of forward-backward algorithms for structured non-smooth minimization problems. As a special case of the parameters, we recover the method AFB (\textit{Accelerated Forward-Backward}) that was recently discussed as an enhanced variant of FISTA (\textit{Fast Iterative Soft Thresholding Algorithm}). Our algorithms enjoy the well-known properties of AFB. Namely, they generate convergent sequences \((x_n)\) that minimize the function values at the rate \(o(n^{-2})\).
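The forward-backward family referred to here can be illustrated with plain FISTA (not the AFB variant of the paper) applied to the lasso problem \(\min_x \frac{1}{2}\Vert Ax-b\Vert^2 + \lambda \Vert x\Vert_1\): a gradient step on the smooth part followed by the soft-thresholding proximal step of the \(\ell_1\) part, with Nesterov momentum. The data below is a made-up toy instance:

```python
# Sketch of the forward-backward family the paper builds on (plain
# FISTA, not the paper's AFB variant): minimize
# F(x) = 0.5 * ||A x - b||^2 + lam * ||x||_1 via a gradient step on the
# smooth part, the soft-thresholding prox of the l1 part, and Nesterov
# momentum with the classical t_k update.
def soft(v, t):
    return [max(abs(vi) - t, 0.0) * (1.0 if vi > 0 else -1.0) for vi in v]

def fista(A, b, lam, L, iters=500):
    n = len(A[0])
    matvec = lambda M, v: [sum(Mi[j] * v[j] for j in range(len(v))) for Mi in M]
    At = [list(col) for col in zip(*A)]
    x, y, t = [0.0] * n, [0.0] * n, 1.0
    for _ in range(iters):
        r = [ri - bi for ri, bi in zip(matvec(A, y), b)]
        grad = matvec(At, r)                       # gradient of smooth part at y
        x_new = soft([y[i] - grad[i] / L for i in range(n)], lam / L)
        t_new = (1.0 + (1.0 + 4.0 * t * t) ** 0.5) / 2.0
        y = [x_new[i] + (t - 1.0) / t_new * (x_new[i] - x[i]) for i in range(n)]
        x, t = x_new, t_new
    return x

A = [[1.0, 0.3], [0.3, 1.0]]
b = [1.0, -0.2]
x = fista(A, b, lam=0.1, L=2.0)   # L >= largest eigenvalue of A^T A
print(x)
```

FISTA guarantees an \(O(n^{-2})\) decay of the objective gap; the \(o(n^{-2})\) rate and the convergence of the iterates quoted in the abstract are the sharper properties of the AFB-type variants.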
Another important feature of our processes is that they can be regarded as discrete models suggested by first-order formulations of Newton-like dynamical systems. This permits us to extend to the non-smooth setting a property of fast convergence of the gradients to zero, established so far only for discrete Newton-like dynamics with smooth potentials. In particular, as a new result, we show that the latter property also applies to AFB. To prove this stability phenomenon, we develop a technical analysis that can also be useful for many other related developments. Numerical experiments are furthermore performed so as to illustrate the properties of the considered algorithms in comparison with other existing ones.Duality for quasiconvex minimization over closed convex coneshttps://zbmath.org/1491.901202022-09-13T20:28:31.338867Z"Martínez-Legaz, Juan Enrique"https://zbmath.org/authors/?q=ai:martinez-legaz.juan-enrique"Sosa, Wilfredo"https://zbmath.org/authors/?q=ai:sosa.wilfredoSummary: We establish a general duality theorem in a generalized conjugacy framework, which generalizes a classical result on the minimization of a convex function over a closed convex cone. Our theorem yields two quasiconvex duality schemes; one of them is of the surrogate duality type and is applicable to problems having an evenly quasiconvex objective function, whereas the other one is applicable to problems with Lipschitz quasiconvex objective functions and yields duals whose objective functions do not involve any surrogate constraint.Tight bounds on Lyapunov rankhttps://zbmath.org/1491.901212022-09-13T20:28:31.338867Z"Orlitzky, Michael"https://zbmath.org/authors/?q=ai:orlitzky.michael-jThis note deals with the problem of bounding the Lyapunov rank of a cone in a Euclidean space, providing \(\frac{n^2 - n}{2} + 1\) as the maximal bound in the case of the Lorentz cone.
Reviewer: Sorin-Mihai Grad (Paris)Linear convergence rate of the generalized alternating direction method of multipliers for a class of convex minimization problemshttps://zbmath.org/1491.901222022-09-13T20:28:31.338867Z"Peng, Jianwen"https://zbmath.org/authors/?q=ai:peng.jianwen"Zhang, Xueqing"https://zbmath.org/authors/?q=ai:zhang.xueqingSummary: Recently, the generalized alternating direction method of multipliers (GADMM) proposed by Eckstein and Bertsekas has received intensive attention from a broad spectrum of areas. In this paper, we first prove some important inequalities for the sequence generated by the GADMM for solving convex optimization problems and establish the local linear convergence rate of the GADMM, which generalizes the corresponding results in [\textit{D. Han} and \textit{X. Yuan}, SIAM J. Numer. Anal. 51, No. 6, 3446--3457 (2013; Zbl 1285.90033)] from the quadratic programming case to general convex optimization problems. Second, we establish the global linear convergence rate of the GADMM for convex optimization problems in which the subdifferentials of the underlying functions are piecewise linear multifunctions.Applications of accelerated computational methods for quasi-nonexpansive operators to optimization problemshttps://zbmath.org/1491.901232022-09-13T20:28:31.338867Z"Sahu, D. R."https://zbmath.org/authors/?q=ai:sahu.daya-ramSummary: This paper studies the convergence rates of two accelerated computational methods without assuming nonexpansivity of the underlying operators with convex and affine domains in infinite-dimensional Hilbert spaces. One method is a noninertial method, and its convergence rate is estimated as \(R_{T,\{x_n\}}(n)=o\left( \frac{1}{\sqrt{n}}\right)\) in the worst case. The other is an inertial method, and its convergence rate is estimated as \(R_{T,\{y_n\}}(n)=o\left( \frac{1}{\sqrt{n}}\right)\) under practical conditions.
We then apply our results to obtain new convergence rates for solving generalized split common fixed-point problems for the class of demimetric operators. We also apply our results to variational inclusion problems and convex optimization problems. Our results significantly improve and/or extend previous results on fixed-point problems, splitting problems, and the related algorithms. To demonstrate the applicability of our methods, we provide numerical examples for comparisons and numerical experiments on regression problems for publicly available high-dimensional real datasets taken from different application domains.Characterizing the solution set of convex optimization problems without convexity of constraintshttps://zbmath.org/1491.901242022-09-13T20:28:31.338867Z"Sisarat, Nithirat"https://zbmath.org/authors/?q=ai:sisarat.nithirat"Wangkeeree, Rabian"https://zbmath.org/authors/?q=ai:wangkeeree.rabianSummary: Some characterizations of solution sets of a convex optimization problem with a convex feasible set described by tangentially convex constraints are given. The results are expressed in terms of convex subdifferentials, tangential subdifferentials, and Lagrange multipliers. In order to characterize the solution set, we first introduce the so-called pseudo Lagrangian-type function and establish a constant pseudo Lagrangian-type property for the solution set. This property remains valid in the case of a pseudoconvex locally Lipschitz objective function, and is then used to derive Lagrange multiplier-based characterizations of the solution set. 
Some examples are given to illustrate the significance of our theoretical results.Global exact optimization for covering a rectangle with 6 circleshttps://zbmath.org/1491.901252022-09-13T20:28:31.338867Z"Cafieri, Sonia"https://zbmath.org/authors/?q=ai:cafieri.sonia"Hansen, Pierre"https://zbmath.org/authors/?q=ai:hansen.pierre"Messine, Frédéric"https://zbmath.org/authors/?q=ai:messine.fredericSummary: We address the problem of covering a rectangle with six identical circles, whose radius is to be minimized. We focus on open cases from \textit{J. B. M. Melissen} and \textit{P. C. Schuur} [Discrete Appl. Math. 99, No. 1--3, 149--156 (2000; Zbl 0951.52017)]. Depending on the rectangle side lengths, different configurations of the circles, corresponding to the different ways they are placed, yield the optimal covering. We prove the optimality of the two configurations corresponding to open cases. For the first one, we propose a mathematical mixed-integer nonlinear optimization formulation that allows one to compute global optimal solutions. For the second one, we provide an analytical expression of the optimal radius as a function of one of the rectangle side lengths. All open cases are thus closed for the optimal covering of a rectangle with six circles.A dimensionality reduction technique for unconstrained global optimization of functions with low effective dimensionalityhttps://zbmath.org/1491.901262022-09-13T20:28:31.338867Z"Cartis, Coralia"https://zbmath.org/authors/?q=ai:cartis.coralia"Otemissov, Adilet"https://zbmath.org/authors/?q=ai:otemissov.adiletSummary: We investigate the unconstrained global optimization of functions with low effective dimensionality, which are constant along certain (unknown) linear subspaces. Extending the technique of random subspace embeddings in [\textit{Z. Wang} et al., J. Artif. Intell. Res. 
(JAIR) 55, 361--387 (2016; Zbl 1358.90089)], we study a generic Random Embeddings for Global Optimization (REGO) framework that is compatible with any global minimization algorithm. Instead of the original, potentially large-scale optimization problem, within REGO, a Gaussian random, low-dimensional problem with bound constraints is formulated and solved in a reduced space. We provide novel probabilistic bounds for the success of REGO in solving the original, low effective-dimensionality problem, which show its independence of the (potentially large) ambient dimension and its precise dependence on the dimensions of the effective and embedding subspaces. These results significantly improve existing theoretical analyses by providing the exact distribution of a reduced minimizer and its Euclidean norm, and by requiring only general assumptions on the problem. We validate our theoretical findings by extensive numerical testing of REGO with three types of global optimization solvers, illustrating the improved scalability of REGO compared with the full-dimensional application of the respective solvers.On obtaining the convex hull of quadratic inequalities via aggregationshttps://zbmath.org/1491.901272022-09-13T20:28:31.338867Z"Dey, Santanu S."https://zbmath.org/authors/?q=ai:dey.santanu-s"Muñoz, Gonzalo"https://zbmath.org/authors/?q=ai:munoz.gonzalo"Serrano, Felipe"https://zbmath.org/authors/?q=ai:serrano.felipeExtremely non-convex optimization problems: the case of the multiple obnoxious facilities locationhttps://zbmath.org/1491.901282022-09-13T20:28:31.338867Z"Kalczynski, Pawel"https://zbmath.org/authors/?q=ai:kalczynski.pawel-jan"Drezner, Zvi"https://zbmath.org/authors/?q=ai:drezner.zviSummary: The multiple obnoxious facilities location problem is an extremely non-convex optimization problem with millions of local optima. It is a very challenging problem. We improved the best known solution for 33 out of 76 test instances. 
We believe that the results of many instances reported here are still not optimal and thus better objective function values exist. We challenge the optimization community to design procedures that will further improve some of the results reported here. Optimality can be proven for a small number of new facilities. Proving optimality for a large number of facilities would be an achievement.Generic property of the partial calmness condition for bilevel programming problemshttps://zbmath.org/1491.901292022-09-13T20:28:31.338867Z"Ke, Rongzhu"https://zbmath.org/authors/?q=ai:ke.rongzhu"Yao, Wei"https://zbmath.org/authors/?q=ai:yao.wei.1|yao.wei"Ye, Jane J."https://zbmath.org/authors/?q=ai:ye.jane-j"Zhang, Jin"https://zbmath.org/authors/?q=ai:zhang.jin.2Block-coordinate and incremental aggregated proximal gradient methods for nonsmooth nonconvex problemshttps://zbmath.org/1491.901302022-09-13T20:28:31.338867Z"Latafat, Puya"https://zbmath.org/authors/?q=ai:latafat.puya"Themelis, Andreas"https://zbmath.org/authors/?q=ai:themelis.andreas"Patrinos, Panagiotis"https://zbmath.org/authors/?q=ai:patrinos.panagiotisSummary: This paper analyzes block-coordinate proximal gradient methods for minimizing the sum of a separable smooth function and a (nonseparable) nonsmooth function, both of which are allowed to be nonconvex. The main tool in our analysis is the forward-backward envelope, which serves as a particularly suitable continuous and real-valued Lyapunov function. Global and linear convergence results are established when the cost function satisfies the Kurdyka-Łojasiewicz property without imposing convexity requirements on the smooth function. 
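For orientation, the basic forward-backward (proximal gradient) step that such methods build on can be sketched for an \(\ell_1\)-regularized toy problem; this is a generic single-block illustration under our own naming, not the block-coordinate or incremental Finito/MISO schemes analyzed in the paper:

```python
import math

def soft_threshold(v, t):
    # prox of t*|.|: shrink v toward zero by t
    return math.copysign(max(abs(v) - t, 0.0), v)

def proximal_gradient(grad_f, gamma, x0, lam, iters=500):
    """Forward-backward iteration x+ = prox_{gamma*lam*||.||_1}(x - gamma*grad_f(x))."""
    x = list(x0)
    for _ in range(iters):
        x = [soft_threshold(xi - gamma * gi, gamma * lam)
             for xi, gi in zip(x, grad_f(x))]
    return x

# Toy objective: minimize 0.5*||x - b||^2 + lam*||x||_1,
# whose closed-form solution is soft_threshold applied to b componentwise.
b = [3.0, -0.2, 1.0]
grad = lambda x: [xi - bi for xi, bi in zip(x, b)]
sol = proximal_gradient(grad, gamma=0.5, x0=[0.0, 0.0, 0.0], lam=0.5)
```

Here the prox of the \(\ell_1\) term is soft thresholding, so the iteration converges to the closed-form solution of the toy problem, componentwise `soft_threshold(b_i, lam)`.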
Two prominent special cases of the investigated setting are regularized finite sum minimization and the sharing problem; in particular, an immediate byproduct of our analysis leads to novel convergence results and rates for the popular Finito/MISO algorithm in the nonsmooth and nonconvex setting with very general sampling strategies.Role of sparsity and structure in the optimization landscape of non-convex matrix sensinghttps://zbmath.org/1491.901312022-09-13T20:28:31.338867Z"Molybog, Igor"https://zbmath.org/authors/?q=ai:molybog.igor"Sojoudi, Somayeh"https://zbmath.org/authors/?q=ai:sojoudi.somayeh"Lavaei, Javad"https://zbmath.org/authors/?q=ai:lavaei.javadSummary: In this work, we study the optimization landscape of the non-convex matrix sensing problem that is known to have many local minima in the worst case. Since the existing results are related to the notion of restricted isometry property (RIP) that cannot directly capture the underlying structure of a given problem, they can hardly be applied to real-world problems where the amount of data is not exorbitantly high. To address this issue, we develop the notion of kernel structure property to obtain necessary and sufficient conditions for the inexistence of spurious local solutions for any class of matrix sensing problems over a given search space. This notion precisely captures the underlying sparsity and structure of the problem, based on tools in conic optimization. 
For a certain class of problems, we simplify the conditions so that their satisfaction can be verified, and we apply them to data analytics for power systems.An active-set algorithm for norm constrained quadratic problemshttps://zbmath.org/1491.901322022-09-13T20:28:31.338867Z"Rontsis, Nikitas"https://zbmath.org/authors/?q=ai:rontsis.nikitas"Goulart, Paul J."https://zbmath.org/authors/?q=ai:goulart.paul-j"Nakatsukasa, Yuji"https://zbmath.org/authors/?q=ai:nakatsukasa.yujiSummary: We present an algorithm for the minimization of a nonconvex quadratic function subject to linear inequality constraints and a two-sided bound on the 2-norm of its solution. The algorithm minimizes the objective using an active-set method by solving a series of trust-region subproblems (TRS). Underpinning the efficiency of this approach is that the global solution of the TRS has been widely studied in the literature, resulting in remarkably efficient algorithms and software. We extend these results by proving that nonglobal minimizers of the TRS, or a certificate of their absence, can also be calculated efficiently by computing the two rightmost eigenpairs of an eigenproblem. 
We demonstrate the usefulness and scalability of the algorithm in a series of experiments in which it often outperforms state-of-the-art approaches; these include calculation of high-quality search directions arising in Sequential Quadratic Programming on problems of the \texttt{CUTEst} collection, and Sparse Principal Component Analysis on a large text corpus problem (70 million nonzeros) that can help organize documents in a user-interpretable way.Safe global optimization of expensive noisy black-box functions in the \(\delta \)-Lipschitz frameworkhttps://zbmath.org/1491.901332022-09-13T20:28:31.338867Z"Sergeyev, Yaroslav D."https://zbmath.org/authors/?q=ai:sergeyev.yaroslav-d"Candelieri, Antonio"https://zbmath.org/authors/?q=ai:candelieri.antonio"Kvasov, Dmitri E."https://zbmath.org/authors/?q=ai:kvasov.dmitri-e"Perego, Riccardo"https://zbmath.org/authors/?q=ai:perego.riccardoSummary: In this paper, the problem of safe global maximization (it should not be confused with robust optimization) of expensive noisy black-box functions satisfying the Lipschitz condition is considered. The notion ``safe'' means that the objective function \(f(x)\) during optimization should not violate a ``safety'' threshold, for instance, a certain a priori given value \(h\) in a maximization problem. Thus, any new function evaluation (possibly corrupted by noise) must be performed at ``safe points'' only, namely, at points \(y\) for which it is known that the objective function \(f(y) > h\). The main difficulty here consists in the fact that the optimization algorithm used must ensure that the safety constraint will be satisfied at a point \(y\) \textit{before} evaluation of \(f(y)\) will be executed. Thus, it is required both to determine the safe region \(\varOmega\) within the search domain \(D\) and to find the global maximum within \(\varOmega \). An additional difficulty consists in the fact that these problems should be solved in the presence of the noise. 
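As a toy illustration of the kind of safety certificate such a setting requires (our own minimal sketch, assuming a known Lipschitz constant \(L\) and a noise bound \(\delta\); the \(\delta\)-Lipschitz machinery of the paper is more refined):

```python
def is_provably_safe(y, evaluations, L, delta, h):
    """Certify f(y) > h from past noisy observations: the Lipschitz
    condition |f(y) - f(x)| <= L*|y - x| plus a noise bound delta give
    f(y) >= observed - delta - L*|y - x| for each evaluated point x."""
    return any(obs - delta - L * abs(y - x) > h for x, obs in evaluations)

# Toy setting: maximize f(x) = 10 - x^2 (Lipschitz with L = 6 on [-3, 3])
# above the safety threshold h = 5, observations corrupted by noise <= 0.1.
evals = [(0.0, 9.95), (1.0, 8.9)]   # (point, noisy value) pairs
safe_near = is_provably_safe(0.5, evals, L=6.0, delta=0.1, h=5.0)  # certifiable
safe_far = is_provably_safe(2.9, evals, L=6.0, delta=0.1, h=5.0)   # not certifiable
```

Only points whose noise-corrected Lipschitz minorant already exceeds the threshold may be evaluated next; far-away points remain uncertified until closer safe evaluations exist.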
This paper starts with a theoretical study of the problem, and it is shown that even though the objective function \(f(x)\) satisfies the Lipschitz condition, traditional Lipschitz minorants and majorants cannot be used due to the presence of the noise. Then, a \(\delta \)-Lipschitz framework and two algorithms using it are proposed to solve the safe global maximization problem. The first method determines the safe area within the search domain, and the second one executes the global maximization over the found safe region. For both methods, a number of theoretical results related to their functioning and convergence are established. Finally, numerical experiments confirming the reliability of the proposed procedures are performed.Global optimization method with dual Lipschitz constant estimates for problems with non-convex constraintshttps://zbmath.org/1491.901342022-09-13T20:28:31.338867Z"Strongin, Roman"https://zbmath.org/authors/?q=ai:strongin.roman-g"Barkalov, Konstantin"https://zbmath.org/authors/?q=ai:barkalov.konstantin"Bevzuk, Semen"https://zbmath.org/authors/?q=ai:bevzuk.semenSummary: This paper considers constrained global optimization problems in which the functions are of the ``black-box'' type and satisfy the Lipschitz condition. The algorithms for solving the problems of this class require the use of adequate estimates of the a priori unknown Lipschitz constants for the problem functions. A novel approach presented in this paper is based on a simultaneous use of two estimates of the Lipschitz constant: an overestimated and an underestimated one. The upper estimate provides the global convergence, whereas the lower one reduces the number of trials necessary to find the global optimizer with the required accuracy. The considered algorithm for solving the constrained problems does not use the ideas of the penalty function method; each constraint of the problem is accounted for separately. 
The convergence conditions of the proposed algorithm are formulated in the corresponding theorem. The results of numerical experiments on a series of multiextremal problems with non-convex constraints, demonstrating the efficiency of the proposed scheme of dual Lipschitz constant estimates, are presented.Solving polyhedral d.c. optimization problems via concave minimizationhttps://zbmath.org/1491.901352022-09-13T20:28:31.338867Z"vom Dahl, Simeon"https://zbmath.org/authors/?q=ai:vom-dahl.simeon"Löhne, Andreas"https://zbmath.org/authors/?q=ai:lohne.andreasSummary: The problem of minimizing the difference of two convex functions is called a polyhedral d.c. optimization problem if at least one of the two component functions is polyhedral. We characterize the existence of global optimal solutions of polyhedral d.c. optimization problems. This result is used to show that, whenever the existence of an optimal solution can be certified, polyhedral d.c. optimization problems can be solved by certain concave minimization algorithms. No further assumptions are necessary in the case of the first component being polyhedral, and just some mild assumptions on the first component are required for the case where the second component is polyhedral. In the case of both component functions being polyhedral, we obtain a primal and dual existence test and a primal and dual solution procedure. Numerical examples are discussed.DTSMA: dominant swarm with adaptive T-distribution mutation-based slime mould algorithmhttps://zbmath.org/1491.901362022-09-13T20:28:31.338867Z"Yin, Shihong"https://zbmath.org/authors/?q=ai:yin.shihong"Luo, Qifang"https://zbmath.org/authors/?q=ai:luo.qifang"Du, Yanlian"https://zbmath.org/authors/?q=ai:du.yanlian"Zhou, Yongquan"https://zbmath.org/authors/?q=ai:zhou.yongquanSummary: The slime mould algorithm (SMA) is a recently proposed metaheuristic algorithm inspired by the oscillations of slime mould.
Like other metaheuristics, the SMA has some disadvantages, such as an insufficient balance between exploration and exploitation and a tendency to fall into local optima. In this paper, an improved SMA based on a dominant swarm with adaptive t-distribution mutation (DTSMA) is proposed. In DTSMA, the dominant swarm is used to improve the convergence speed of the SMA, and the adaptive t-distribution mutation is used to balance the exploration and exploitation abilities. In addition, a new exploitation mechanism is hybridized to increase the diversity of the population. The performance of DTSMA is verified on the CEC2019 functions and eight engineering design problems. The results show that DTSMA performs best on the CEC2019 functions; for the engineering problems, DTSMA obtains better results than SMA and many algorithms in the literature when the constraints are satisfied. Furthermore, DTSMA is used to solve the inverse kinematics problem for a 7-DOF robot manipulator. The overall results show that DTSMA has a strong optimization ability. Therefore, DTSMA is a promising metaheuristic for global optimization problems.Convergence rate of a rectangular subdivision-based optimization algorithm for smooth multivariate functionshttps://zbmath.org/1491.901372022-09-13T20:28:31.338867Z"Zheng, Cuicui"https://zbmath.org/authors/?q=ai:zheng.cuicui"Calvin, James"https://zbmath.org/authors/?q=ai:calvin.james-a|calvin.james-mSummary: In [\textit{C. Zheng} et al., J. Glob. Optim. 79, No. 2, 431--445 (2021; Zbl 1465.90081)] the authors described a global optimization algorithm for multivariate continuous functions and applied it to an image processing problem. While the algorithm was shown to converge for all continuous functions, the convergence rate was not established. 
In this paper we assume that the objective function is smooth, and establish the asymptotic convergence rate when the algorithm is applied to such a function.An improved arithmetic optimization algorithm with forced switching mechanism for global optimization problemshttps://zbmath.org/1491.901382022-09-13T20:28:31.338867Z"Zheng, Rong"https://zbmath.org/authors/?q=ai:zheng.rong"Jia, Heming"https://zbmath.org/authors/?q=ai:jia.heming"Abualigah, Laith"https://zbmath.org/authors/?q=ai:abualigah.laith"Liu, Qingxin"https://zbmath.org/authors/?q=ai:liu.qingxin"Wang, Shuang"https://zbmath.org/authors/?q=ai:wang.shuangSummary: The arithmetic optimization algorithm (AOA) is a newly proposed meta-heuristic method inspired by the arithmetic operators in mathematics. However, the AOA suffers from insufficient exploration capability and is likely to fall into local optima. To improve the search quality of the original AOA, this paper presents an improved AOA (IAOA) integrated with a proposed forced switching mechanism (FSM). The enhanced algorithm uses the random math optimizer probability (\textit{RMOP}) to increase the population diversity for a better global search. The forced switching mechanism is then introduced into the AOA to help the search agents jump out of local optima. When the search agents cannot find better positions within a certain number of iterations, the proposed FSM makes them perform exploratory behavior. Thus, being trapped in local optima can be effectively avoided. The proposed IAOA is extensively tested on twenty-three classical benchmark functions and ten CEC2020 test functions and compared with the AOA and other well-known optimization algorithms. The experimental results show that the proposed algorithm is superior to other comparative algorithms on most of the test functions. 
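The forced switching idea itself is simple to illustrate: when an agent fails to improve for a fixed number of iterations, it is forced to jump to a random point. The following toy search is our own sketch of that mechanism, not the IAOA:

```python
import random

def search_with_forced_switch(f, lo, hi, iters=2000, stall_limit=50, seed=1):
    """Local random search plus a forced switch: after `stall_limit`
    non-improving iterations, jump to a fresh uniform random point."""
    rng = random.Random(seed)
    best_x = rng.uniform(lo, hi)
    best_f, stall = f(best_x), 0
    for _ in range(iters):
        if stall >= stall_limit:                  # forced switching step
            x, stall = rng.uniform(lo, hi), 0
        else:                                     # local exploitation step
            x = min(max(best_x + rng.gauss(0.0, 0.1), lo), hi)
        fx = f(x)
        if fx < best_f:
            best_x, best_f, stall = x, fx, 0
        else:
            stall += 1
    return best_x, best_f

# Toy objective with two global minimizers (x = -2 and x = 2, value 0)
g = lambda x: (abs(x) - 2.0) ** 2
x_star, f_star = search_with_forced_switch(g, -5.0, 5.0)
```

The switch only resets the current trial point; the best-so-far solution is retained, so forced exploration cannot lose progress.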
Furthermore, the test results on two multi-layer perceptron (MLP) training problems and three classical engineering design problems also indicate that the proposed IAOA is highly effective when dealing with real-world problems.An improved remora optimization algorithm with autonomous foraging mechanism for global optimization problemshttps://zbmath.org/1491.901392022-09-13T20:28:31.338867Z"Zheng, Rong"https://zbmath.org/authors/?q=ai:zheng.rong"Jia, Heming"https://zbmath.org/authors/?q=ai:jia.heming"Abualigah, Laith"https://zbmath.org/authors/?q=ai:abualigah.laith"Wang, Shuang"https://zbmath.org/authors/?q=ai:wang.shuang"Wu, Di"https://zbmath.org/authors/?q=ai:wu.diSummary: The remora optimization algorithm (ROA) is a newly proposed metaheuristic algorithm for solving global optimization problems. In ROA, each search agent searches new space according to the position of the host, which makes the algorithm suffer from the drawbacks of a slow convergence rate, poor solution accuracy, and local optima for some optimization problems. To tackle these problems, this study proposes an improved ROA (IROA) by introducing a new mechanism named the autonomous foraging mechanism (AFM), which is inspired by the fact that the remora can also find food on its own. In AFM, each remora has a small chance to search for food randomly or according to the current food position. Thus, the AFM can effectively expand the search space and improve the accuracy of the solution. To substantiate the efficacy of the proposed IROA, twenty-three classical benchmark functions and ten of the latest CEC 2021 test functions with various types and dimensions were employed to test the performance of IROA. Compared with seven metaheuristic and six modified algorithms, the results on the test functions show that the IROA has superior performance in solving these optimization problems. 
Moreover, the results on five representative engineering design optimization problems also reveal that the IROA has the capability to obtain optimal results for real-world optimization problems. To sum up, these test results confirm the effectiveness of the proposed mechanism.Optimal search for locally feasible solutions of a linear function on permutationshttps://zbmath.org/1491.901402022-09-13T20:28:31.338867Z"Donets, G. P."https://zbmath.org/authors/?q=ai:donets.g-p"Biletskyi, V. I."https://zbmath.org/authors/?q=ai:biletskyi.v-iSummary: We consider the problem of optimal search for locally feasible solutions of a linear function on permutations, where the linear function takes values from a given interval. We describe a new method for solving this problem by a targeted search over the permutations that provides locally feasible solutions with a minimal amount of search.Side-constrained minimum sum-of-squares clustering: mathematical programming and random projectionshttps://zbmath.org/1491.901412022-09-13T20:28:31.338867Z"Liberti, Leo"https://zbmath.org/authors/?q=ai:liberti.leo"Manca, Benedetto"https://zbmath.org/authors/?q=ai:manca.benedettoSummary: This paper investigates a mathematical programming based methodology for solving the minimum sum-of-squares clustering problem, also known as the ``k-means problem'', in the presence of side constraints. We propose several exact and approximate mixed-integer linear and nonlinear formulations. The approximations are based on norm inequalities and random projections, the approximation guarantees of which are based on an additive version of the Johnson-Lindenstrauss lemma. We perform computational testing (with fixed CPU time) on a range of randomly generated and real data instances of medium size, but with high dimensionality. 
We show that when side constraints make k-means inapplicable, our proposed methodology -- which is easy and fast to implement and deploy -- can obtain good solutions in limited amounts of time.A faster FPTAS for knapsack problem with cardinality constrainthttps://zbmath.org/1491.901422022-09-13T20:28:31.338867Z"Li, Wenxin"https://zbmath.org/authors/?q=ai:li.wenxin"Lee, Joohyun"https://zbmath.org/authors/?q=ai:lee.joohyun"Shroff, Ness"https://zbmath.org/authors/?q=ai:shroff.ness-bSummary: We study the \(K\)-item knapsack problem (\textit{i.e.}, 1.5-dimensional knapsack problem), a generalization of the famous 0-1 knapsack problem (\textit{i.e.}, 1-dimensional knapsack problem) in which an upper bound \(K\) is imposed on the number of items selected. This problem is of fundamental importance and is known to have a broad range of applications in various fields. It is well known that there is no \textit{fully polynomial time approximation scheme} (FPTAS) for the \(d\)-dimensional knapsack problem when \(d \geq 2\), unless P = NP. While the \(K\)-item knapsack problem is known to admit an FPTAS, the complexity of all existing FPTASs has a high dependency on the cardinality bound \(K\) and approximation error \(\varepsilon\), which could result in inefficiencies especially when \(K\) and \(\varepsilon^{- 1}\) increase. The current best results are due to \textit{M. Mastrolilli} and \textit{M. Hutter} [Discrete Appl. Math. 154, No. 4, 640--649 (2006; Zbl 1120.90050)], in which two schemes are presented exhibiting a space-time tradeoff: one scheme with time complexity \(O (n + K z^2 / \varepsilon^2)\) and space complexity \(O (n + z^3 / \varepsilon)\), and another scheme that requires a run-time of \(O (n + (K z^2 + z^4) / \varepsilon^2)\) but only needs \(O (n + z^2 / \varepsilon)\) space, where \(z = \min \{ K , 1 / \varepsilon \}\).
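For reference, the exact pseudo-polynomial dynamic program for the \(K\)-item knapsack (the baseline that FPTASs such as those discussed here approximate, not the schemes of the paper) can be written as:

```python
def k_item_knapsack(values, weights, capacity, K):
    """Max total value using at most K items within the weight capacity.
    dp[c][k] = best value achievable with capacity c and at most k items."""
    dp = [[0] * (K + 1) for _ in range(capacity + 1)]
    for v, w in zip(values, weights):
        for c in range(capacity, w - 1, -1):   # descending: 0-1 semantics
            for k in range(K, 0, -1):
                dp[c][k] = max(dp[c][k], dp[c - w][k - 1] + v)
    return dp[capacity][K]

# With capacity 50, the best two-item choice is the second and third item:
best_two = k_item_knapsack([60, 100, 120], [10, 20, 30], 50, K=2)   # 220
# A cardinality bound of one forces the single most valuable fitting item:
best_one = k_item_knapsack([60, 100, 120], [10, 20, 30], 50, K=1)   # 120
```

The cardinality bound adds one dimension to the classical knapsack table, which is exactly the source of the extra \(K\)-dependence that the FPTAS literature tries to shave off.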
In this paper we close the space-time tradeoff exhibited in [loc. cit.] by designing a new FPTAS with a running time of \(\widetilde{O} (n + z^2 / \varepsilon^2)\), while simultaneously reaching a space complexity of \(O (n + z^2 / \varepsilon)\). Our scheme provides \(\widetilde{O} (K)\) and \(O (z)\) improvements on the state-of-the-art algorithms in time and space complexity respectively, and is the \textit{first} scheme that achieves a running time that is \textit{independent} of the cardinality bound \(K\) (up to logarithmic factors) under fixed \(\varepsilon\). Another salient feature of our algorithm is that it is the \textit{first} FPTAS that achieves better time and space complexity bounds than the very first standard FPTAS \textit{over all parameter regimes}.Approximation algorithms for the \(k\)-depots Hamiltonian path problemhttps://zbmath.org/1491.901432022-09-13T20:28:31.338867Z"Yang, Yichen"https://zbmath.org/authors/?q=ai:yang.yichen"Liu, Zhaohui"https://zbmath.org/authors/?q=ai:liu.zhaohui"Yu, Wei"https://zbmath.org/authors/?q=ai:yu.weiSummary: We consider a multiple-depots extension of the classic Hamiltonian path problem where \(k\) salesmen are initially located at different depots. To the best of our knowledge, no algorithm for this problem with a constant approximation ratio has been previously proposed, except for some special cases. We present a polynomial algorithm with a tight approximation ratio of \(\max \left\{ \frac{3}{2},2-\frac{1}{k}\right\}\) for arbitrary \(k\geqslant 1\), and an algorithm with approximation ratio \(\frac{5}{3}\) that runs in polynomial time for fixed \(k\). Moreover, we develop a recursive framework to improve the approximation ratio to \(\frac{3}{2}+\epsilon \). 
This framework is polynomial for fixed \(k\) and \(\epsilon \), and may be useful in improving the Christofides-like heuristics for other related multiple salesmen routing problems.Second-order optimality conditions in locally Lipschitz inequality-constrained multiobjective optimizationhttps://zbmath.org/1491.901442022-09-13T20:28:31.338867Z"Constantin, Elena"https://zbmath.org/authors/?q=ai:constantin.elenaSummary: The main goal of this paper is to give some primal and dual Karush-Kuhn-Tucker second-order necessary conditions for the existence of a strict local Pareto minimum of order two for an inequality-constrained multiobjective optimization problem. Dual Karush-Kuhn-Tucker second-order sufficient conditions are provided too. We suppose that the objective function and the active inequality constraints are only locally Lipschitz in the primal necessary conditions and only strictly differentiable in the sense of Clarke at the extremum point in the dual conditions. Examples illustrate the applicability of the obtained results.Sequential Pareto subdifferential multi-composition rule and application to multiobjective minimax location problemshttps://zbmath.org/1491.901452022-09-13T20:28:31.338867Z"Dali, Issam"https://zbmath.org/authors/?q=ai:dali.issam"Laghdir, Mohamed"https://zbmath.org/authors/?q=ai:laghdir.mohamed"Moustaid, Mohamed B."https://zbmath.org/authors/?q=ai:moustaid.mohamed-bilalSummary: In the absence of any qualification condition, we provide a sequential formula for the weak/proper Pareto subdifferential of multi-composed convex vector mappings. To illustrate this formula, we derive sequential optimality conditions characterizing weakly/properly efficient solutions of multiobjective minimax location problems with infimal distances. 
We present an example of multiobjective bilevel programming problems with an extremal value function, where the standard Lagrange multiplier conditions cannot be derived due to the lack of a constraint qualification, while the sequential conditions hold.Properly efficient solutions to non-differentiable multiobjective optimization problemshttps://zbmath.org/1491.901462022-09-13T20:28:31.338867Z"dos Santos, L. Batista"https://zbmath.org/authors/?q=ai:batista-dos-santos.lucelina"Rojas-Medar, M. A."https://zbmath.org/authors/?q=ai:rojas-medar.marko-antonio"Vivanco-Orellana, V."https://zbmath.org/authors/?q=ai:vivanco-orellana.violetaSummary: In this work, sufficient conditions are established to ensure that, in nontrivial situations, all feasible points are (properly) efficient solutions for a class of non-differentiable, non-convex multiobjective minimization problems, considering locally Lipschitz functions and results of the non-differentiable analysis introduced by F. H. Clarke.Necessary optimality conditions for a semivectorial bilevel optimization problem using the kth-objective weighted-constraint approachhttps://zbmath.org/1491.901472022-09-13T20:28:31.338867Z"Gadhi, Nazih Abderrazzak"https://zbmath.org/authors/?q=ai:gadhi.nazih-abderrazzak"El Idrissi, Mohammed"https://zbmath.org/authors/?q=ai:el-idrissi.mohammed"Hamdaoui, Khadija"https://zbmath.org/authors/?q=ai:hamdaoui.khadijaSummary: In this paper, we point out that the proof of Theorem 11 in the recent paper [\textit{L. Lafhim}, Positivity 24, No. 2, 395--413 (2020; Zbl 1476.90297)] is erroneous. Using techniques from variational analysis, we propose alternative proofs that establish necessary optimality conditions in terms of Karush-Kuhn-Tucker multipliers. Our main results are given in terms of the limiting subdifferentials and the limiting normal cones. 
Completely detailed first-order necessary optimality conditions are then given in the smooth setting, using the generalized differentiation calculus of Mordukhovich.A note on solving multi-objective integer indefinite quadratic fractional programshttps://zbmath.org/1491.901482022-09-13T20:28:31.338867Z"Kushwah, Prerna"https://zbmath.org/authors/?q=ai:kushwah.prerna"Sharma, Vikas"https://zbmath.org/authors/?q=ai:sharma.vikas-kumarSummary: In this note, we show that the simplex-like algorithm for solving an indefinite quadratic fractional programming problem proposed by \textit{A. Mekhilef} et al. [Ann. Oper. Res. 296, No. 1--2, 821--840 (2021; Zbl 1460.90171)] can fail to find an optimal solution, and so it may not generate the actual set of efficient points of the corresponding multi-objective integer indefinite quadratic fractional programs. A counterexample in support of this argument is also given.On multi-objective optimization problems and vector variational-like inequalitieshttps://zbmath.org/1491.901492022-09-13T20:28:31.338867Z"Laha, Vivek"https://zbmath.org/authors/?q=ai:laha.vivek"Singh, Harsh Narayan"https://zbmath.org/authors/?q=ai:singh.harsh-narayanSummary: This paper deals with nonsmooth multi-objective optimization problems involving locally Lipschitz \(V-r\)-invexity using the Michel-Penot subdifferential. We consider vector variational-like inequalities of Stampacchia and Minty type and establish some results, which give necessary and sufficient conditions for a feasible point to be a Pareto optimal solution of the MOP. We also establish various results related to weak Pareto optimal solutions of the MOP and the corresponding weak versions of the vector variational-like inequalities.
For the entire collection see [Zbl 1443.00026].Goal programming technique for solving fully interval-valued intuitionistic fuzzy multiple objective transportation problemshttps://zbmath.org/1491.901502022-09-13T20:28:31.338867Z"Malik, Manisha"https://zbmath.org/authors/?q=ai:malik.manisha"Gupta, S. K."https://zbmath.org/authors/?q=ai:gupta.sushil-k|gupta.samit-kumar|gupta.sanjeev-k|gupta.sanjiv-kumar|gupta.sudhir-kumar|gupta.sunil-kumar|gupta.satish-kumar|gupta.surender-k|gupta.sunit-k|gupta.sulaxana-k|gupta.santosh-k|gupta.shiv-kumar|gupta.satyandra-k|gupta.shiv-k|gupta.sarvesh-kumar|gupta.suresh-k|gupta.sandeep-k-s|gupta.sumant-kSummary: In transportation problems, the cost depends on various uncontrollable factors like climatic conditions, fuel expenses, etc. Consequently, transportation problems with crisp parameters fail to handle such situations. However, formulating the problems in an imprecise environment can effectively handle these circumstances. The intuitionistic fuzzy number associated with a point is framed by two parameters, namely membership and non-membership degrees. The membership degree determines its acceptance level, while the non-membership measures its non-belongingness (rejection level). However, a person, because of some hesitation, instead of giving a fixed real number to the acceptance and rejection levels, may assign them intervals. This new construction not only generalizes the concept of intuitionistic fuzzy theory but also gives wider scope with more flexibility. In the present article, a balanced transportation problem having all the parameters and variables as interval-valued intuitionistic fuzzy numbers is formulated. Then, a solution methodology based on a goal programming approach is proposed. This algorithm not only seeks to maximize the acceptance level of the objective functions but also simultaneously minimizes the deviational variables attached to each goal. 
To tackle the interval-valued intuitionistic fuzzy constraints corresponding to each objective function, three membership and non-membership functions, linear, exponential and hyperbolic, are used. Further, a numerical example is solved to demonstrate the computational steps of the algorithm, and a comparison is drawn among the linear, exponential and hyperbolic membership functions.Piecewise linear bounding functions in univariate global optimizationhttps://zbmath.org/1491.901512022-09-13T20:28:31.338867Z"Posypkin, Mikhail"https://zbmath.org/authors/?q=ai:posypkin.mikhail-a"Usov, Alexander"https://zbmath.org/authors/?q=ai:usov.aleksandr-aleksandrovich"Khamisov, Oleg"https://zbmath.org/authors/?q=ai:khamisov.oleg-valerevichSummary: The paper addresses the problem of constructing lower and upper estimators for univariate functions. This problem is of crucial importance in global optimization, where such bounds are used to reduce the search area. We propose to use piecewise linear estimators for bounding univariate functions and show how such estimators can be derived from the function's algebraic expression. The basic properties of such estimators are formulated and proved. We implemented the algorithms for the automated construction of lower and upper piecewise linear estimators and experimentally compared the proposed approach with first-order interval bounds, the Pijavskij method, and slope arithmetic. Numerical examples demonstrate that the piecewise linear estimators are more accurate than these approaches. We also show that global optimization algorithms can significantly benefit from using piecewise linear estimators. Another advantage of the proposed approach is that the objective function does not have to be differentiable. 
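As an illustrative aside (not the authors' expression-based construction), the flavor of such piecewise linear bounds can be sketched with a Pijavskij-type sawtooth underestimator, assuming a known Lipschitz constant:

```python
# Illustrative sketch only: a Pijavskij-type piecewise linear lower bound
# for a univariate f on [a, b], assuming a known Lipschitz constant L.
# (The paper derives its estimators from f's algebraic expression instead.)

def pw_linear_lower_bound(f, a, b, L, n_samples=5):
    """Return lb with lb(x) <= f(x) for all x in [a, b]."""
    xs = [a + k * (b - a) / (n_samples - 1) for k in range(n_samples)]
    fs = [f(x) for x in xs]

    def lb(x):
        # Lipschitz continuity gives f(x) >= f(x_k) - L*|x - x_k| for each
        # sample; the pointwise maximum of these sawtooth pieces is a
        # piecewise linear lower bound.
        return max(fk - L * abs(x - xk) for xk, fk in zip(xs, fs))

    return lb

lb = pw_linear_lower_bound(lambda x: x * x, -1.0, 1.0, L=2.0)
assert all(lb(x) <= x * x + 1e-12 for x in (-0.9, -0.3, 0.0, 0.4, 0.8))
```

Refining the sample grid tightens the bound; the estimators in the paper are instead assembled recursively from the algebraic expression of \(f\).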
This feature can favorably distinguish this method from other methods where the first and second derivatives are used.Archiving strategies for evolutionary multi-objective optimization algorithmshttps://zbmath.org/1491.901522022-09-13T20:28:31.338867Z"Schütze, Oliver"https://zbmath.org/authors/?q=ai:schutze.oliver"Hernández, Carlos"https://zbmath.org/authors/?q=ai:hernandez-garciadiego.carlos|hernandez.carlos-angosto|hernandez.carlos-gThis monograph concentrates on improving stochastic population-based methods developed for multiobjective optimization, in particular, evolutionary algorithms.
Here, solving multiobjective optimization problems is understood as generating a representative set of Pareto optimal solutions. The selection of which solutions to save during the solution process of a stochastic population-based method is called archiving. The main focus of the monograph is on developing archiving strategies, also called archivers, for computing discretizations of Pareto optimal solutions with certain properties. The authors also consider nearly optimal solutions; thus, representing both Pareto optimal and so-called epsilon-approximate Pareto solutions is of interest.
Finally, some evolutionary multiobjective optimization methods are coupled with the proposed archivers, and encouraging results on the strengths of such a combination are reported. Here, one representative method of each of the three types of evolutionary multiobjective optimization methods is considered: dominance-based, indicator-based and decomposition-based. In addition, a fourth method is considered that aims to approximate the set of epsilon-Pareto optimal solutions.
Many of the examples considered have only two objective functions, which enables visualizing the solutions. Some attention is also paid to the benefits of archiving in single-objective optimization.
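As an illustrative aside (not taken from the monograph), the core operation of an unbounded archiver, keeping exactly the mutually nondominated points seen so far, can be sketched as:

```python
# Illustrative sketch only: an unbounded Pareto archive for minimization.

def dominates(u, v):
    """u dominates v: no worse in every objective, strictly better in one."""
    return all(a <= b for a, b in zip(u, v)) and any(a < b for a, b in zip(u, v))

def update_archive(archive, candidate):
    """Keep the archive mutually nondominated after inserting candidate."""
    if any(dominates(p, candidate) for p in archive):
        return archive                      # candidate is dominated: reject
    return [p for p in archive if not dominates(candidate, p)] + [candidate]

archive = []
for point in [(3, 3), (1, 4), (2, 2), (5, 1), (2, 5)]:
    archive = update_archive(archive, point)
# (3, 3) is displaced by (2, 2), and (2, 5) is rejected because of (1, 4)
```

The archivers studied in the monograph refine this basic step with bounded size, spacing, and epsilon-approximation guarantees.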
Reviewer: Kaisa Miettinen (Helsinki)Optimality and duality for weak quasi efficiency of multiobjective fractional problems via convexificatorshttps://zbmath.org/1491.901532022-09-13T20:28:31.338867Z"van Luu, Do"https://zbmath.org/authors/?q=ai:do-van-luu."Linh, Pham Thi"https://zbmath.org/authors/?q=ai:linh.pham-thiSummary: Fritz John and Kuhn-Tucker necessary conditions for weak quasi-efficiency of multiobjective fractional optimization problems with equality, inequality and set constraints are derived. Under assumptions on asymptotic pseudoinvexity of the objective and asymptotic quasiinvexity of the constraint functions, sufficient conditions for weak quasi-efficiency are also given together with duality theorems of Wolfe and Mond-Weir types.Scalarization in vector optimization with arbitrary domination setshttps://zbmath.org/1491.901542022-09-13T20:28:31.338867Z"Weidner, Petra"https://zbmath.org/authors/?q=ai:weidner.petraSummary: In this paper, vector optimization problems with arbitrary domination sets are studied. Scalarization results for efficient and weakly efficient elements of these problems are proved, where especially Gerstewitz functionals and order unit norms are used. A surrogate for the weakly efficient point set is investigated, which is of special interest for domination sets with an empty algebraic interior. The study includes the consideration of perturbed problems. In general, we admit any linear space as image space of the vector optimization problem and do not assume convexity of the feasible point set or of the domination set. Nevertheless, several statements refer to the case that the domination set is an ordering cone. The results can be applied to determine optimal elements which are defined by relations, especially in decision making. Furthermore, they constitute a base for the systematic investigation of many scalarizing problems in multiobjective optimization. 
This is illustrated for the Hurwicz Rule in decision making under uncertainty.Stochastic multilevel composition optimization algorithms with level-independent convergence rateshttps://zbmath.org/1491.901552022-09-13T20:28:31.338867Z"Balasubramanian, Krishnakumar"https://zbmath.org/authors/?q=ai:balasubramanian.krishnakumar"Ghadimi, Saeed"https://zbmath.org/authors/?q=ai:ghadimi.saeed"Nguyen, Anthony"https://zbmath.org/authors/?q=ai:nguyen.anthony-gThe KKT optimality conditions for optimization problem with interval-valued objective function on Hadamard manifoldshttps://zbmath.org/1491.901562022-09-13T20:28:31.338867Z"Chen, Sheng-lan"https://zbmath.org/authors/?q=ai:chen.shenglanSome Karush-Kuhn-Tucker criteria on Hadamard manifolds are developed and illustrated by appropriate numerical applications. The results can be extended to infinite ordered vector spaces, with practical consequences.
Reviewer: Vasile Postolică (Piatra Neamţ)A subspace acceleration method for minimization involving a group sparsity-inducing regularizerhttps://zbmath.org/1491.901572022-09-13T20:28:31.338867Z"Curtis, Frank E."https://zbmath.org/authors/?q=ai:curtis.frank-e"Dai, Yutong"https://zbmath.org/authors/?q=ai:dai.yutong"Robinson, Daniel P."https://zbmath.org/authors/?q=ai:robinson.daniel-pExact penalty functions with multidimensional penalty parameter and adaptive penalty updateshttps://zbmath.org/1491.901582022-09-13T20:28:31.338867Z"Dolgopolik, M. V."https://zbmath.org/authors/?q=ai:dolgopolik.maxim-vladimirovichSummary: We present a general theory of exact penalty functions with vectorial (multidimensional) penalty parameter for optimization problems in infinite dimensional spaces. In comparison with the scalar case, the use of vectorial penalty parameters provides much more flexibility, allows one to adaptively and independently take into account the violation of each constraint during optimization process, and often leads to a better overall performance of an optimization method using an exact penalty function. We obtain sufficient conditions for the local and global exactness of penalty functions with vectorial penalty parameters and study convergence of global exact penalty methods with several different penalty updating strategies. 
In particular, we present a new algorithmic approach to the analysis of the global exactness of penalty functions, which contains a novel characterisation of the global exactness property in terms of the behaviour of sequences generated by certain optimization methods.Unassigned distance geometry and molecular conformation problemshttps://zbmath.org/1491.901592022-09-13T20:28:31.338867Z"Duxbury, Phil"https://zbmath.org/authors/?q=ai:duxbury.phil"Lavor, Carlile"https://zbmath.org/authors/?q=ai:lavor.carlile-campos"Liberti, Leo"https://zbmath.org/authors/?q=ai:liberti.leo"de Salles-Neto, Luiz Leduino"https://zbmath.org/authors/?q=ai:de-salles-neto.luiz-leduinoSummary: 3D protein structures and nanostructures can be obtained by exploiting distance information provided by experimental techniques, such as nuclear magnetic resonance and the pair distribution function method. These are examples of instances of the unassigned distance geometry problem (uDGP), where the aim is to calculate the position of some points using a list of associated distance values not previously assigned to pairs of points. We propose new mathematical programming formulations and a new heuristic to solve the uDGP related to molecular structure calculations. In addition to theoretical results, computational experiments are also provided.Robust food-energy-water-environmental security management: Stochastic quasigradient procedure for linkage of distributed optimization models under asymmetric information and uncertaintyhttps://zbmath.org/1491.901602022-09-13T20:28:31.338867Z"Ermoliev, Y."https://zbmath.org/authors/?q=ai:ermoliev.yuri-m"Zagorodny, A. G."https://zbmath.org/authors/?q=ai:zagorodny.a-g"Bogdanov, V. 
L."https://zbmath.org/authors/?q=ai:bogdanov.v-l"Ermolieva, T."https://zbmath.org/authors/?q=ai:ermolieva.tatiana-y|ermoleva.t-yu"Havlik, P."https://zbmath.org/authors/?q=ai:havlik.petr"Rovenskaya, E."https://zbmath.org/authors/?q=ai:rovenskaya.elena-a"Komendantova, N."https://zbmath.org/authors/?q=ai:komendantova.n"Obersteiner, M."https://zbmath.org/authors/?q=ai:obersteiner.michaelSummary: The paper presents a consistent algorithm for regional and sectoral distributed models' linkage and optimization under asymmetric information based on iterative stochastic quasigradient (SQG) solution procedure of, in general, nonsmooth nondifferentiable optimization. The procedure is used for linking individual sectoral and regional models for integrated and interdependent Food-Energy-Water-Environmental security analysis and management.New sharp necessary optimality conditions for mathematical programs with equilibrium constraintshttps://zbmath.org/1491.901612022-09-13T20:28:31.338867Z"Gfrerer, Helmut"https://zbmath.org/authors/?q=ai:gfrerer.helmut"Ye, Jane J."https://zbmath.org/authors/?q=ai:ye.jane-jSummary: In this paper, we study the mathematical program with equilibrium constraints formulated as a mathematical program with a parametric generalized equation involving the regular normal cone. 
We derive a new necessary optimality condition which is sharper than the usual M-stationary condition and is applicable even when no constraint qualifications hold for the corresponding mathematical program with complementarity constraints reformulation.A general asymptotic function with applications in nonconvex optimizationhttps://zbmath.org/1491.901622022-09-13T20:28:31.338867Z"Hadjisavvas, Nicolas"https://zbmath.org/authors/?q=ai:hadjisavvas.nicolas"Lara, Felipe"https://zbmath.org/authors/?q=ai:lara.felipe"Luc, Dinh The"https://zbmath.org/authors/?q=ai:dinh-the-luc.Summary: We introduce a new concept of asymptotic functions which allows us to simultaneously study convex and concave functions as well as quasiconvex and quasiconcave functions. We provide some calculus rules and most relevant properties of the new asymptotic functions for application purpose. We also compare them with the classical asymptotic functions of convex analysis. By using the new concept of asymptotic functions we establish sufficient conditions for the nonemptiness and for the boundedness of the solution set of quasiconvex minimization problems and quasiconcave maximization problems. Applications are given for quadratic and fractional quadratic problems.Scaled constraint qualifications and necessary optimality conditions for nonsmooth mathematical programs with second-order cone complementarity constraintshttps://zbmath.org/1491.901632022-09-13T20:28:31.338867Z"Hajheidari, A."https://zbmath.org/authors/?q=ai:hajheidari.a"Movahedian, N."https://zbmath.org/authors/?q=ai:movahedian.nooshinThe following minimization problem with second-order cone complementarity constraints is considered:
\[
f(z) \rightarrow \min \text{ subject to } G_i(z) \in K_{m_i},~ H_i(z) \in K_{m_i},~ \langle G_i(z), H_i(z)\rangle = 0,~ i = 1,\dots, J,
\]
where \(f: \mathbb{R}^n \rightarrow \mathbb{R},~ G_i, H_i: \mathbb{R}^n \rightarrow \mathbb{R}^{m_i},~ i = 1,\dots, J\) and \(K_{m_i}\) denotes for all \(i\) the \(m_i\)-dimensional second-order cone. Applications of the minimization problem and their modifications are outlined. It is shown that commonly used constraint qualifications, such as the Mangasarian-Fromovitz constraint qualification, cannot be used for this problem. The authors propose new constraint qualifications and study properties of the corresponding new stationarity concepts.
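For illustration only (hypothetical helpers, not from the review), membership in \(K_m\) and the complementarity of a pair \((G_i(z), H_i(z))\) can be checked numerically as:

```python
# Illustrative helpers only: second-order cone membership
# K_m = {x in R^m : x_0 >= ||(x_1, ..., x_{m-1})||} and the
# complementarity residual <g, h> appearing in the constraints above.

import math

def in_second_order_cone(x, tol=1e-12):
    return x[0] + tol >= math.hypot(*x[1:])

def complementarity_residual(g, h):
    """|<g, h>|; zero with g, h in K_m means the pair is complementary."""
    return abs(sum(a * b for a, b in zip(g, h)))

g = (1.0, 1.0, 0.0)   # boundary point of K_3
h = (1.0, -1.0, 0.0)  # boundary point of K_3
assert in_second_order_cone(g) and in_second_order_cone(h)
assert complementarity_residual(g, h) < 1e-12   # <g, h> = 1 - 1 + 0 = 0
```

As the example shows, complementary pairs with both members nonzero lie on the cone boundary, which is precisely the situation that breaks the classical constraint qualifications.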
Reviewer: Karel Zimmermann (Praha)MPCC: strongly stable C-stationary points when the number of active constraints is \(n + 1\)https://zbmath.org/1491.901642022-09-13T20:28:31.338867Z"Hernández Escobar, Daniel"https://zbmath.org/authors/?q=ai:hernandez-escobar.daniel"Rückmann, Jan-J."https://zbmath.org/authors/?q=ai:ruckmann.jan-joachimSummary: We consider the class of mathematical programs with complementarity constraints (MPCC) and apply Kojima's concept of strongly stable stationary points (originally introduced for a standard optimization problem) to C-stationary points of MPCC under certain assumptions. This concept refers to local existence and uniqueness of a stationary point for each sufficiently small perturbed problem. Assuming that the number of active constraints is \(n+1\) and an appropriate constraint qualification holds at the considered point, the goal of this paper is twofold: For MPCC we will present necessary conditions for strong stability as well as equivalent algebraic characterizations for this topological concept.Homogeneous polynomials and spurious local minima on the unit spherehttps://zbmath.org/1491.901652022-09-13T20:28:31.338867Z"Lasserre, Jean B."https://zbmath.org/authors/?q=ai:lasserre.jean-bernardSummary: We consider forms on the Euclidean unit sphere. We obtain a simple and complete characterization of all points that satisfy the standard second-order necessary condition of optimality. It is stated solely in terms of the value of (i) \(f\), (ii) the norm of its gradient, and (iii) the two smallest eigenvalues of its Hessian, all evaluated at the point. In fact this property also holds for twice continuously differentiable functions that are positively homogeneous. 
We also characterize a class of degree-\(d\) forms with no spurious local minima on \(\mathbb{S}^{n-1}\) by using a property of gradient ideals in algebraic geometry.Technical and economic analysis of potential steam supply from waste treatment plants to industries in Aichi Prefecture, Japanhttps://zbmath.org/1491.901662022-09-13T20:28:31.338867Z"Maki, Seiya"https://zbmath.org/authors/?q=ai:maki.seiya"Ohnishi, Satoshi"https://zbmath.org/authors/?q=ai:ohnishi.satoshi"Fujii, Minoru"https://zbmath.org/authors/?q=ai:fujii.minoru"Goto, Naohiro"https://zbmath.org/authors/?q=ai:goto.naohiro"Sun, Lu"https://zbmath.org/authors/?q=ai:sun.luSummary: Waste-to-energy treatment is an efficient energy recovery method and an important countermeasure against global warming. The supply of steam from waste treatment plants to industrial sectors presents a higher energy recovery efficiency than traditional waste-to-energy methods. However, its technical potential and economic benefits are not fully understood by policymakers. Furthermore, the regional characteristics and effects of neighboring land use, which affect the heat supply and demand, are not commonly analyzed. Therefore, this study evaluates the spatial steam demand of industries in the Aichi Prefecture by using the steam demand unit value per production shipment and the spatial value of manufactured goods shipments. In addition, this study evaluates the steam supply potential from waste treatment plants to industrial sectors based on the estimated spatial steam demand and the transportable distances. The results show that the steam supply from waste treatment plants to the industries has a great physical potential in Aichi Prefecture. From a cost-benefit perspective, plans considering a 1 km range can be profitable, but the profitability decreases as the distance increases and the no-profitability mesh number increases. 
To improve the energy recovery efficiency from waste, the locations of waste treatment plants should be changed, and the efficiency of steam transport on both the supply and demand sides should be increased.Metric regularity of quasidifferentiable mappings and optimality conditions for nonsmooth mathematical programming problemshttps://zbmath.org/1491.901672022-09-13T20:28:31.338867Z"Dolgopolik, M. V."https://zbmath.org/authors/?q=ai:dolgopolik.maxim-vladimirovichSummary: This article is devoted to the analysis of necessary and/or sufficient conditions for metric regularity in terms of Demyanov-Rubinov-Polyakova quasidifferentials. We obtain new necessary and sufficient conditions for the local metric regularity of a multifunction in terms of quasidifferentials of the distance function to this multifunction. We also propose a new MFCQ-type constraint qualification for a parametric system of quasidifferentiable equality and inequality constraints and prove that it ensures the metric regularity of a multifunction associated with this system. As an application, we utilize our constraint qualification to strengthen existing optimality conditions for quasidifferentiable programming problems with equality and inequality constraints. 
We also prove the independence of the optimality conditions from the choice of quasidifferentials and present a simple example in which the optimality conditions in terms of quasidifferentials detect the non-optimality of a given point, while optimality conditions in terms of various subdifferentials fail to disqualify this point as non-optimal.Robustness characterizations for uncertain optimization problems via image space analysishttps://zbmath.org/1491.901682022-09-13T20:28:31.338867Z"Wei, Hong-Zhi"https://zbmath.org/authors/?q=ai:wei.hongzhi"Chen, Chun-Rong"https://zbmath.org/authors/?q=ai:chen.chunrong"Li, Sheng-Jie"https://zbmath.org/authors/?q=ai:li.shengjieSummary: In this paper, by means of linear and nonlinear (regular) weak separation functions, we obtain some characterizations of robust optimality conditions for uncertain optimization problems, especially saddle point sufficient optimality conditions. Additionally, the relationships among three approaches used for robustness analysis: image space analysis, vector optimization and set-valued optimization, are discussed. Finally, an application for finding a shortest path is given to verify the validity of the results derived in this paper.
For relatively pseudomonotone operators, the condition is also necessary for implicit semidefinite complementarity problems. Our results generalize the corresponding results of \textit{G. Isac} et al. [J. Glob. Optim. 10, No. 2, 207--225 (1997; Zbl 0880.90127)] and extend the results of \textit{L. Zhang} [Appl. Math. Comput. 196, No. 1, 86--93 (2008; Zbl 1144.90495)], \textit{Q.-J. Hu} et al. [J. Math. Anal. Appl. 388, No. 1, 519--524 (2012; Zbl 1233.90262)], \textit{N. Huang} and \textit{C. Ma} [Optim. Lett. 8, No. 1, 259--265 (2014; Zbl 1288.90107)] and \textit{V. A. Bulavsky} et al. [Optimization 49, No. 5--6, 405--423 (2001; Zbl 1054.90076)]. Moreover, we also present some theorems related to the structure and strict feasibility of implicit semidefinite complementarity problems. In our analysis, the new concept of exceptional family plays a vital role in the solvability of implicit semidefinite and implicit copositive complementarity problems.A new subclass of \(Q_0\)-matrix in linear complementarity theoryhttps://zbmath.org/1491.901702022-09-13T20:28:31.338867Z"Singh, Gambheer"https://zbmath.org/authors/?q=ai:singh.gambheer"Neogy, S. K."https://zbmath.org/authors/?q=ai:neogy.samir-kumar"Kumar, Promila"https://zbmath.org/authors/?q=ai:kumar.promilaSummary: In this article, we introduce a new matrix class \(\overline{L}(d)\) (a subclass of \(Q_0\)-matrices which are obtained as a limit of a sequence of \(L(d)\)-matrices) such that for any \(A\) in this class, a solution to LCP \((q, A)\) exists if LCP \((q, A)\) is feasible, and Lemke's algorithm will find a solution or demonstrate infeasibility. We present a counterexample to show that an \(\overline{L}(d)\)-matrix need not be an \(L(d)\)-matrix. We also show that if \(A \in \overline{L}(d)\), there is an even number of solutions for any nondegenerate vector \(q\). Matrices that arise from general quadratic programs and polymatrix games belong to this new class. 
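As a small illustration (not the article's construction), whether a candidate \(z\) solves LCP\((q, A)\) can be checked directly from the definition \(w = q + Az \geq 0\), \(z \geq 0\), \(z^\top w = 0\):

```python
# Illustrative check only: verify a candidate solution of LCP(q, A).

def solves_lcp(q, A, z, tol=1e-9):
    """True iff w = q + A z >= 0, z >= 0 and z^T w = 0 (within tol)."""
    w = [qi + sum(a * zj for a, zj in zip(row, z)) for qi, row in zip(q, A)]
    feasible = all(wi >= -tol for wi in w) and all(zi >= -tol for zi in z)
    complementary = abs(sum(zi * wi for zi, wi in zip(z, w))) <= tol
    return feasible and complementary

# Hypothetical data: with A = I and q = (-1, 2), z = (1, 0) gives w = (0, 2).
assert solves_lcp([-1.0, 2.0], [[1.0, 0.0], [0.0, 1.0]], [1.0, 0.0])
assert not solves_lcp([-1.0, 2.0], [[1.0, 0.0], [0.0, 1.0]], [0.0, 0.0])
```

The matrix-class results above concern when such a solution exists at all and when Lemke's pivoting algorithm is guaranteed to find one.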
Finally, we present an example related to the existence of equilibrium in polymatrix games.Wolfe type duality for nonsmooth optimization problems with vanishing constraintshttps://zbmath.org/1491.901712022-09-13T20:28:31.338867Z"Ghobadzadeh, Marjan"https://zbmath.org/authors/?q=ai:ghobadzadeh.marjan"Kanzi, Nader"https://zbmath.org/authors/?q=ai:kanzi.nader"Fallahi, Kamal"https://zbmath.org/authors/?q=ai:fallahi.kamalSummary: In this paper, we formulate and study Wolfe-type duality problems for mathematical programs with vanishing constraints in the nonsmooth case, whereas \textit{S. K. Mishra} et al. [Ann. Oper. Res. 243, No. 1--2, 249--272 (2016; Zbl 1348.90623)] investigated them in the smooth case. We also derive weak, strong, and strict converse duality results for problems with Lipschitzian data utilizing the Clarke subdifferential.The multiple Steiner TSP with order constraints: complexity and optimization algorithmshttps://zbmath.org/1491.901722022-09-13T20:28:31.338867Z"Gabrel, Virginie"https://zbmath.org/authors/?q=ai:gabrel.virginie"Mahjoub, A. Ridha"https://zbmath.org/authors/?q=ai:mahjoub.ali-ridha|ridha-mahjoub.a"Taktak, Raouia"https://zbmath.org/authors/?q=ai:taktak.raouia"Uchoa, Eduardo"https://zbmath.org/authors/?q=ai:uchoa.eduardoSummary: We consider a variant of the Travelling Salesman Problem (TSP), the Multiple Steiner TSP with Order constraints (MSTSPO). Consider a weighted undirected graph and a set of salesmen, where each salesman is associated with a set of compulsory vertices to visit, called terminals. The MSTSPO consists in finding a minimum-cost subgraph containing for each salesman a tour going in a specified order through its terminals. Along with its importance from a theoretical point of view, the problem is also challenging in practice since it has applications in telecommunication networks. We show that the problem is NP-hard even for a single salesman and propose integer programming formulations. 
We then devise both Branch-and-Cut and Branch-and-Price algorithms to solve the problem. Extensive computational results are presented, showing the efficiency of our algorithms.Linearized formulations for failure aware barter exchangehttps://zbmath.org/1491.901732022-09-13T20:28:31.338867Z"Goldberg, Noam"https://zbmath.org/authors/?q=ai:goldberg.noam"Poss, Michael"https://zbmath.org/authors/?q=ai:poss.michaelSummary: Mathematical programming formulations are developed for determining chains of organ-donation exchange pairs in a compatibility graph where pairwise exchanges may fail. The objective is to maximize the expected value where pairs are known to fail with given probabilities. In previous work, namely that of \textit{J. P. Dickerson}, \textit{A. D. Procaccia} and \textit{T. Sandholm} [``Failure-aware kidney exchange'', Manage. Sci. 65, No. 4, 323--340 (2019; \url{doi:10.1287/mnsc.2018.3026})] this NP-hard problem was solved heuristically or exactly only for limited path lengths. Although the problem appears highly nonlinear, we formulate it as a mixed-integer linear program. A computationally tractable layered formulation that approximately solves larger instances is also proposed and a computational study is presented for evaluating the proposed formulations.Shadowed type 2 fuzzy-based Markov model to predict shortest path with optimized waiting timehttps://zbmath.org/1491.901742022-09-13T20:28:31.338867Z"Kumar, Pawan"https://zbmath.org/authors/?q=ai:kumar.pawan"Dudeja, Chanchal"https://zbmath.org/authors/?q=ai:dudeja.chanchalSummary: Recently, traffic networks have reached a critical state due to the rapid rise of urbanization and population growth. This paper suggests a better solution for such traffic issues via a delay-optimized Shortest Path Prediction (SPP) method. Even though Shadowed Type 2 (ST2) fuzzy logic works well for delay optimization with uncertain data, it causes a rise in fuzzy partitioning complexity. 
This motivates the development of the Shadowed Type 2 Fuzzy Markov (ST2FM) scheme for accurate prediction of the shortest path. In ST2FM, waiting-time optimization is performed at rush junctions based on ST2 fuzzy rules. Optimized path details are periodically updated in the Transition Probability Matrix of the Markov model for SPP. Thus, ST2FM helps a node to easily identify the shortest path to reach the destination without waiting at traffic junctions. The absence of fuzzy partitioning and the use of Markov prediction greatly reduce the computational complexity of ST2FM. The Matlab 2016a working environment is utilized for the research implementation, and results are compared with ST2 fuzzy, Interval Type 2 fuzzy and Fuzzy-based Convolution Neural Network approaches. From this analysis, the proposed work demonstrates 96\% prediction accuracy with lower error than existing works.Mixed integer nonlinear optimization models for the Euclidean Steiner tree problem in \(\mathbb{R}^d\)https://zbmath.org/1491.901752022-09-13T20:28:31.338867Z"Ouzia, Hacene"https://zbmath.org/authors/?q=ai:ouzia.hacene"Maculan, Nelson"https://zbmath.org/authors/?q=ai:maculan.nelson-fSummary: New mixed integer nonlinear optimization models for the Euclidean Steiner tree problem in \(d\)-space (with \(d\geq 3)\) will be presented in this work. All models feature a nonsmooth objective function but the continuous relaxations of their sets of feasible solutions are convex. From these models, four convex mixed integer linear and nonlinear relaxations will be considered. Each relaxation has the same set of feasible solutions as the model from which it is derived. 
Finally, preliminary computational results highlighting the main features of the presented relaxations will be discussed.Evolutionary operators for the Hamiltonian completion problemhttps://zbmath.org/1491.901762022-09-13T20:28:31.338867Z"Puljić, Krunoslav"https://zbmath.org/authors/?q=ai:puljic.krunoslav"Manger, Robert"https://zbmath.org/authors/?q=ai:manger.robertSummary: This paper deals with evolutionary algorithms for solving the Hamiltonian completion problem. More precisely, the paper is concerned with a collection of crossover and mutation operators, which mostly originate from the traveling salesman problem, but have further on been modified or customized for Hamiltonian completion. The considered crossovers and mutations are tested on a set of randomly generated problem instances. The obtained experimental results clearly show that the behavior and relative ranking of the operators within the Hamiltonian completion environment are different than within the traveling salesman environment. Moreover, it is shown that our modified or custom-designed operator variants accomplish much better results for Hamiltonian completion than the standard variants.The non-positive circuit weight problem in parametric graphs: a solution based on dioid theoryhttps://zbmath.org/1491.901772022-09-13T20:28:31.338867Z"Zorzenon, Davide"https://zbmath.org/authors/?q=ai:zorzenon.davide"Komenda, Jan"https://zbmath.org/authors/?q=ai:komenda.jan"Raisch, Jörg"https://zbmath.org/authors/?q=ai:raisch.jorgSummary: Let us consider a parametric weighted directed graph in which every arc \((j , i)\) has weight of the form \(w ((j , i)) = \max (P_{i j} + \lambda , I_{i j} - \lambda , C_{i j})\), where \(\lambda\) is a real parameter and \(P\), \(I\) and \(C\) are arbitrary square matrices with elements in \(\mathbb{R} \cup \{-\infty\}\). 
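For a fixed \(\lambda\), the arc weights above can be evaluated directly, and positive-weight circuits detected with a standard negative-cycle test on the negated graph; the following brute-force sketch (with hypothetical data, not the authors' \(\mathcal{O}(n^4)\) algorithm) illustrates this:

```python
# Illustrative brute-force baseline (hypothetical data, not the authors'
# O(n^4) algorithm): for a fixed lam, build w((j,i)) = max(P[i][j] + lam,
# I[i][j] - lam, C[i][j]) and detect a positive-weight circuit as a
# negative cycle of the negated graph (Bellman-Ford relaxations).

NEG_INF = float("-inf")

def has_positive_circuit(P, I, C, lam):
    n = len(P)
    arcs = []
    for i in range(n):
        for j in range(n):
            w = max(P[i][j] + lam, I[i][j] - lam, C[i][j])
            if w > NEG_INF:                 # weight -inf means no arc (j, i)
                arcs.append((j, i, -w))     # negate for the cycle test
    dist = [0.0] * n                        # zero sources: any-cycle search
    for _ in range(n - 1):
        for u, v, w in arcs:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    # a further improving relaxation exists iff there is a negative cycle,
    # i.e. iff the original graph has a positive-weight circuit
    return any(dist[u] + w < dist[v] for u, v, w in arcs)

NI = NEG_INF                                # hypothetical 2-node instance
P = [[NI, 0.0], [0.0, NI]]                  # circuit 0 -> 1 -> 0, weight 2*lam
I = [[NI, NI], [NI, NI]]
C = [[NI, NI], [NI, NI]]
assert has_positive_circuit(P, I, C, lam=1.0)
assert not has_positive_circuit(P, I, C, lam=-1.0)
```

Running such a test for every candidate \(\lambda\) is what the dioid-based algorithm avoids, characterizing the whole feasible set of \(\lambda\) at once.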
In this paper, we design an algorithm that solves the \textit{Non-positive Circuit weight Problem} (NCP) on this class of parametric graphs, which consists in finding all values of \(\lambda\) such that the graph does not contain circuits with positive weight. This problem, which generalizes other instances of the NCP previously investigated in the literature, has applications in the consistency analysis of a class of discrete-event systems called P-time event graphs. The proposed algorithm is based on max-plus algebra and formal languages, and improves the worst-case complexity of other existing approaches, achieving strongly polynomial time complexity \(\mathcal{O} (n^4)\) (where \(n\) is the number of nodes in the graph).Analysis of optimal battery state-of-charge trajectory patterns for blended mode of a parallel plug-in hybrid electric vehicle and a wide range of driving conditionshttps://zbmath.org/1491.901782022-09-13T20:28:31.338867Z"Soldo, Jure"https://zbmath.org/authors/?q=ai:soldo.jure"Škugor, Branimir"https://zbmath.org/authors/?q=ai:skugor.branimir"Deur, Joško"https://zbmath.org/authors/?q=ai:deur.joskoSummary: Plug-in hybrid electric vehicles (PHEVs) typically combine several power sources, which are coordinated by means of an optimal energy management strategy. When considering the so-called blended mode, in which the engine is regularly used over a trip, the shape of the battery state-of-charge (SoC) trajectory over travelled distance is of particular importance for achieving minimum fuel consumption. The paper deals with an in-depth analysis of optimal SoC trajectories obtained by off-line control variable optimization of a PHEV-type city bus in a parallel (P2) powertrain configuration. The optimization is conducted by using a dynamic programming-based optimization algorithm for a wide range of driving cycles and operating scenarios. It is found that, as opposed to the usually assumed linear-like near-optimal shape, the SoC vs.
travelled distance trajectory can take on significantly different optimal shapes for non-zero road grade profiles or driving cycles with relatively long distances. The emphasis is on analyzing root causes for such behavior and its implications for fuel consumption.Mean-field Markov decision processes with common noise and open-loop controlshttps://zbmath.org/1491.901792022-09-13T20:28:31.338867Z"Motte, Médéric"https://zbmath.org/authors/?q=ai:motte.mederic"Pham, Huyên"https://zbmath.org/authors/?q=ai:pham.huyenSummary: We develop an exhaustive study of Markov decision processes (MDPs) under mean field interaction both on states and actions in the presence of common noise, and when optimization is performed over open-loop controls on an infinite horizon. Such a model, called CMKV-MDP for conditional McKean-Vlasov MDP, arises and is obtained here rigorously with a rate of convergence as the asymptotic problem of \(N\) cooperative agents controlled by a social planner/influencer that observes the environment noises but not necessarily the individual states of the agents. We highlight the crucial role of relaxed controls and of the randomization hypothesis for this class of models with respect to classical MDP theory. We prove the correspondence between CMKV-MDP and a general lifted MDP on the space of probability measures, and establish the dynamic programming Bellman fixed point equation satisfied by the value function, as well as the existence of \(\epsilon\)-optimal randomized feedback controls. The arguments of proof involve an original measurable optimal coupling for the Wasserstein distance. This provides a procedure for learning strategies in a large population of interacting collaborative agents.A new Abadie-type constraint qualification for general optimization problemshttps://zbmath.org/1491.901802022-09-13T20:28:31.338867Z"Hejazi, M. 
Alavi"https://zbmath.org/authors/?q=ai:hejazi.mansoureh-alavi"Movahedian, N."https://zbmath.org/authors/?q=ai:movahedian.nooshinSummary: A non-Lipschitz version of the Abadie constraint qualification is introduced for a nonsmooth and nonconvex general optimization problem. The relationship between the new Abadie-type constraint qualification and the local error bound property is clarified. Also, a necessary optimality condition, based on the pseudo-Jacobians, is derived under the Abadie constraint qualification. Moreover, some examples are given to illustrate the obtained results.Optimality conditions for nonsmooth multiobjective bilevel optimization using tangential subdifferentialshttps://zbmath.org/1491.901812022-09-13T20:28:31.338867Z"Jennane, Mohsine"https://zbmath.org/authors/?q=ai:jennane.mohsine"Kalmoun, El Mostafa"https://zbmath.org/authors/?q=ai:kalmoun.el-mostafa"El Fadil, Lhoussain"https://zbmath.org/authors/?q=ai:el-fadil.lhoussainThe aim is to derive necessary optimality conditions for nonsmooth multiobjective bilevel optimization problems. The value function approach is used in combination with suitable constraint qualifications (Cottle, Mangasarian-Fromovitz, Kuhn-Tucker) in order to derive the optimality conditions by means of tangential subdifferentials. Neither convexity nor local Lipschitz properties are assumed for the upper level objectives and constraint functions. An example is also considered for illustration.
Reviewer: Ernö Robert Csetnek (Wien)A unified convergence rate analysis of the accelerated smoothed gap reduction algorithmhttps://zbmath.org/1491.901822022-09-13T20:28:31.338867Z"Tran-Dinh, Quoc"https://zbmath.org/authors/?q=ai:tran-dinh-quoc.Summary: In this paper, we develop a unified convergence analysis framework for the \textit{Accelerated Smoothed GAp ReDuction algorithm} (ASGARD) introduced in [\textit{Q. Tran-Dinh} et al., SIAM J. Optim. 28, No. 1, 96--134 (2018; Zbl 1386.90109)]. Unlike [loc. cit.], the new analysis covers three settings in a single algorithm: general convexity, strong convexity, and strong convexity and smoothness. Moreover, we establish the convergence guarantees on three criteria: (i) gap function, (ii) primal objective residual, and (iii) dual objective residual. Our convergence rates are optimal (up to a constant factor) in all cases. While the convergence rate on the primal objective residual for the general convex case has been established in [loc. cit.], we prove additional convergence rates on the gap function and the dual objective residual. The analysis for the last two cases is completely new. Our results provide a complete picture on the convergence guarantees of ASGARD. 
Finally, we present four different numerical experiments on a representative optimization model to verify our algorithm and compare it with Nesterov's well-known smoothing algorithm.A primal-dual algorithm for nonnegative \(N\)-th order CP tensor decomposition: application to fluorescence spectroscopy data analysishttps://zbmath.org/1491.901832022-09-13T20:28:31.338867Z"EL Qate, Karima"https://zbmath.org/authors/?q=ai:el-qate.karima"El Rhabi, Mohammed"https://zbmath.org/authors/?q=ai:el-rhabi.mohammed"Hakim, Abdelilah"https://zbmath.org/authors/?q=ai:hakim.abdelilah"Moreau, Eric"https://zbmath.org/authors/?q=ai:moreau.eric"Thirion-Moreau, Nadège"https://zbmath.org/authors/?q=ai:thirion-moreau.nadegeSummary: This work concerns the resolution of inverse problems encountered in multidimensional signal processing. Here, we address the problem of tensor decomposition, more precisely the Canonical Polyadic (CP) Decomposition, also known as Parafac. Yet, when the amount of data is very large, this inverse problem may become numerically ill-posed and consequently hard to solve. This requires introducing a priori information about the system, often in the form of penalty functions. The difficulty that might arise is that the considered cost function may be non-differentiable. This is why we develop a Primal-Dual Projected Gradient (PDPG) optimization algorithm to solve variational problems involving such cost functions. Moreover, despite its theoretical importance, most approaches in the literature developed for nonnegative CP decomposition have not actually addressed the convergence issue. Furthermore, most methods with a convergence rate guarantee require restrictive conditions for their theoretical analyses. Hence, a theoretical void still exists for the convergence rate problem under very mild conditions.
Therefore, the two main aims of this work are (i) to design a CP-PDPG algorithm and then (ii) to prove, under mild conditions, its convergence to the set of minimizers at the rate \(O(1/k)\), \(k\) being the iteration number. The effectiveness and the robustness of the proposed approach are illustrated through numerical examples.Zeroth-order regularized optimization (ZORO): approximately sparse gradients and adaptive samplinghttps://zbmath.org/1491.901842022-09-13T20:28:31.338867Z"Cai, HanQin"https://zbmath.org/authors/?q=ai:cai.hanqin"McKenzie, Daniel"https://zbmath.org/authors/?q=ai:mckenzie.daniel"Yin, Wotao"https://zbmath.org/authors/?q=ai:yin.wotao"Zhang, Zhenliang"https://zbmath.org/authors/?q=ai:zhang.zhenliangA novel \(x\)-shaped binary particle swarm optimizationhttps://zbmath.org/1491.901852022-09-13T20:28:31.338867Z"Beheshti, Zahra"https://zbmath.org/authors/?q=ai:beheshti.zahraSummary: Definitive optimization algorithms are not able to solve high-dimensional optimization problems because the search space grows exponentially with the problem size and an exhaustive search will be impractical. Therefore, approximate algorithms are applied to solve them. A category of approximate algorithms are meta-heuristic algorithms. They have shown an acceptable efficiency to solve these problems. Among them, particle swarm optimization (PSO) is one of the well-known swarm intelligence algorithms to optimize continuous problems. A transfer function is applied in this algorithm to convert the continuous search space to the binary one. The role of the transfer function in binary PSO (BPSO) is very important to enhance its performance. Several transfer functions have been proposed for BPSO such as S-shaped, V-shaped, linear and other transfer functions. However, BPSO algorithm can sometimes find local optima or show slow convergence speed in some problems because of using the velocity of PSO and these transfer functions. 
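For background, the S-shaped and V-shaped transfer functions mentioned above can be sketched as follows (a minimal illustration in Python; it does not reproduce the paper's \(x\)-shaped function, whose definition is not given in this summary):

```python
import numpy as np

def s_shaped(v):
    # S-shaped (sigmoid) transfer function: maps a real velocity to a probability
    return 1.0 / (1.0 + np.exp(-v))

def v_shaped(v):
    # V-shaped transfer function: probability grows with |v|, so large velocities
    # in either direction encourage a bit change
    return np.abs(np.tanh(v))

def update_position_s(x, v, rng):
    # Classic S-shaped rule: set each bit to 1 with probability s_shaped(v)
    return (rng.random(x.shape) < s_shaped(v)).astype(int)

def update_position_v(x, v, rng):
    # Classic V-shaped rule: flip each current bit with probability v_shaped(v)
    flip = rng.random(x.shape) < v_shaped(v)
    return np.where(flip, 1 - x, x)

rng = np.random.default_rng(0)
x = np.array([0, 1, 0, 1])
v = np.array([-2.0, 0.0, 2.0, 5.0])
print(update_position_s(x, v, rng), update_position_v(x, v, rng))
```

The S-shaped rule samples the new bit directly from the mapped probability, while the V-shaped rule flips the incumbent bit, which preserves more of the current solution.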
In this study, a novel transfer function called \(x\)-shaped BPSO (XBPSO) is proposed to increase exploration and exploitation of BPSO in the binary search space. The transfer function uses two functions and improved rules to generate a new binary solution. The proposed method has been run on 33 benchmark instances of the 0-1 multidimensional knapsack problem (MKP), two discrete maximization functions and 23 minimization functions. The results have been compared with some well-known BPSO and discrete meta-heuristic algorithms. The results showed that \(x\)-shaped transfer function considerably increased the solution accuracy and convergence speed in BPSO algorithm. The average error of compared algorithms on all 0-1 MKP benchmark instances indicated that XBPSO has the minimum error of 8.9\%. Also, the mean absolute error (MAE) obtained by XBPSO on two discrete maximization functions is 0.45. Moreover, the proposed transfer function provides superior solutions in 18 functions from 23 minimization functions.Weight convergence analysis of DV-hop localization algorithm with GAhttps://zbmath.org/1491.901862022-09-13T20:28:31.338867Z"Cai, Xingjuan"https://zbmath.org/authors/?q=ai:cai.xingjuan"Wang, Penghong"https://zbmath.org/authors/?q=ai:wang.penghong"Cui, Zhihua"https://zbmath.org/authors/?q=ai:cui.zhihua"Zhang, Wensheng"https://zbmath.org/authors/?q=ai:zhang.wensheng"Chen, Jinjun"https://zbmath.org/authors/?q=ai:chen.jinjunSummary: The distance vector-hop (DV-hop) is a typical localization algorithm. It estimates sensor nodes location through detecting the hop count between nodes. To enhance the positional precision, the weight is used to estimate position, and the conventional wisdom is that the more hop counts are, the smaller value of weight will be. However, there has been no clear mathematical model among positioning error, hop count, and weight. This paper constructs a mathematical model between the weights and hops and analyzes the convergence of this model. 
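The weight-hop model itself is not specified in this summary; as context, the classical DV-hop position step converts hop counts into range estimates via an average hop distance and solves a weighted least-squares multilateration, roughly along these lines (all anchor positions, hop counts, and the hop distance below are illustrative assumptions):

```python
import numpy as np

# Hypothetical setup: four anchor nodes, hop counts from an unknown node to
# each anchor, and an average hop distance R estimated from the network
anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
hops = np.array([3, 4, 4, 6])
R = 1.6
d = hops * R          # hop-count-based range estimates
w = 1.0 / hops        # fewer hops -> more trusted range estimate

# Linearize the range equations against the last anchor and solve the
# weighted least-squares system for the node position p
A = 2.0 * (anchors[:-1] - anchors[-1])
b = (np.sum(anchors[:-1] ** 2, axis=1) - np.sum(anchors[-1] ** 2)
     - d[:-1] ** 2 + d[-1] ** 2)
sw = np.sqrt(w[:-1])
p, *_ = np.linalg.lstsq(sw[:, None] * A, sw * b, rcond=None)
print(np.round(p, 2))
```

The weights enter only through the row scaling, so down-weighting long hop paths changes the estimate exactly where their range errors are largest.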
Finally, the genetic algorithm is used to solve this mathematical weighted DV-hop (MW-GADV-hop) positioning model; the simulation results illustrate that the model construction is logical, and the positioning error of the model converges to \(1/4R\).A new QPSO based hybrid algorithm for constrained optimization problems via tournamenting processhttps://zbmath.org/1491.901872022-09-13T20:28:31.338867Z"Kumar, Nirmal"https://zbmath.org/authors/?q=ai:kumar.nirmal"Mahato, Sanat Kumar"https://zbmath.org/authors/?q=ai:mahato.sanat-kumar"Bhunia, Asoke Kumar"https://zbmath.org/authors/?q=ai:bhunia.asoke-kumarSummary: The goal of this paper is to propose a new hybrid algorithm based on an advanced quantum-behaved particle swarm optimization (QPSO) technique and binary tournamenting for solving constrained optimization problems. In binary tournamenting, six different situations/options are considered and accordingly six variants of hybrid algorithms are proposed. Then, to test the efficiency and performance of these algorithms and also to select the best algorithm among these, six benchmark optimization problems are selected and solved. The computational results are compared graphically as well as numerically. Finally, the best-found algorithm is applied to solve three engineering design problems and the computational results are compared with the existing algorithms available in the literature.Real power loss reduction by \textit{Duponchelia fovealis} optimization and enriched squirrel search optimization algorithmshttps://zbmath.org/1491.901882022-09-13T20:28:31.338867Z"Lenin, Kanagasabai"https://zbmath.org/authors/?q=ai:lenin.kanagasabaiSummary: In this work, the \textit{Duponchelia fovealis} optimization (DFO) algorithm and the enriched squirrel search optimization (ESSO) algorithm are designed to solve the optimal reactive power problem. The DFO algorithm is based on the natural progression of the \textit{Duponchelia fovealis}.
In the exploration space, the \textit{Duponchelia fovealis} population acts as the search agents, and the light source is taken to be the best positions of \textit{Duponchelia fovealis} attained so far. Each \textit{Duponchelia fovealis} explores around the light source and its position is updated. Gaussian mutation, chaotic local search, and a kernel extreme learning machine are applied successively in order to improve the performance of the algorithm. Then, the enriched squirrel search optimization (ESSO) algorithm is proposed to solve the problem. The proposed algorithm is based on squirrel foraging behavior. Naturally, squirrels are much less active in winter and consume stored nuts to obtain ample energy. Hickory trees (where hickory nuts are found), oak trees (where acorn nuts are found), and normal trees are the three types of food sources for squirrels. Naturally, the foraging behavior varies with the seasons. The proposed \textit{Duponchelia fovealis} optimization (DFO) algorithm and enriched squirrel search optimization (ESSO) algorithm have been tested on the standard IEEE 30-bus test system. The results show that the proposed DFO and ESSO algorithms reduce the power loss substantially.
Mainly, the proposed \textit{Duponchelia fovealis} optimization (DFO) algorithm and enriched squirrel search optimization (ESSO) algorithm also solve the multi-objective formulation of the problem, and the results have been analyzed with respect to power loss, voltage deviation minimization, and voltage stability enhancement.Grammatically uniform population initialization for grammar-guided genetic programminghttps://zbmath.org/1491.901892022-09-13T20:28:31.338867Z"Ramos Criado, Pablo"https://zbmath.org/authors/?q=ai:ramos-criado.pablo"Barrios Rolanía, D."https://zbmath.org/authors/?q=ai:barrios-rolania.d"Manrique, Daniel"https://zbmath.org/authors/?q=ai:manrique.daniel"Serrano, Emilio"https://zbmath.org/authors/?q=ai:serrano.emilioSummary: The initial population distribution is an essential issue in evolutionary computation performance. Population initialization methods for grammar-guided genetic programming have some difficulties generating a representative sample of the search space, which negatively affects the overall evolutionary process. This paper presents a grammatically uniform population initialization method to address this issue by improving the initial population uniformity: the equiprobability of obtaining any individual of the search space defined by the context-free grammar. The proposed initialization method assigns and updates probabilities dynamically to the production rules of the grammar to pursue uniformity and includes a code bloat control mechanism. We have conducted empirical experiments to compare the proposed algorithm with a standard initialization approach very often used in grammar-guided genetic programming. The results report that the proposed initialization method approximates very well a uniform distribution of the individuals in the search space.
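The idea of attaching probabilities to production rules to pursue uniformity can be illustrated with a minimal sketch (this is not the authors' algorithm, whose dynamic updates and bloat control are not detailed here): if each rule is chosen with probability proportional to the number of derivation trees it can still complete within a depth bound, every tree up to that bound becomes equiprobable.

```python
import random
from functools import lru_cache

# Toy CFG: each rule is a tuple of symbols; symbols not in GRAMMAR are terminals.
GRAMMAR = {
    "E": [("x",), ("y",), ("(", "E", "+", "E", ")")],
}

@lru_cache(maxsize=None)
def count(nt, depth):
    # Number of derivation trees for nonterminal `nt` of depth <= depth
    if depth == 0:
        return 0
    total = 0
    for rule in GRAMMAR[nt]:
        prod = 1
        for sym in rule:
            if sym in GRAMMAR:
                prod *= count(sym, depth - 1)
        total += prod
    return total

def sample_uniform(nt, depth, rng):
    # Choose a rule with probability proportional to the number of trees it can
    # still complete; this makes every tree of depth <= depth equiprobable.
    weights = []
    for rule in GRAMMAR[nt]:
        w = 1
        for sym in rule:
            if sym in GRAMMAR:
                w *= count(sym, depth - 1)
        weights.append(w)
    rule = rng.choices(GRAMMAR[nt], weights=weights)[0]
    out = []
    for sym in rule:
        if sym in GRAMMAR:
            out.extend(sample_uniform(sym, depth - 1, rng))
        else:
            out.append(sym)
    return out

print("".join(sample_uniform("E", 3, random.Random(0))))
```

A fixed rule probability (e.g. uniform over rules) would oversample shallow trees; weighting by remaining tree counts is one way the "probabilities on production rules" idea removes that bias.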
Moreover, the overall evolutionary process that takes place after the population initialization performs better in terms of convergence speed and quality of the final solutions achieved when the proposed method generates the initial population than when the usual approach does. The results also show that these performance differences are more significant when the experiments involve large search spaces.Nonconvex Nash games. Solution concepts and algorithmshttps://zbmath.org/1491.910052022-09-13T20:28:31.338867Z"Nowak, Daniel"https://zbmath.org/authors/?q=ai:nowak.danielSummary: Game theory is a mathematical approach to model competition between several parties, called players. The goal of each player is to choose a strategy, which solves his optimization problem, i.e. minimizes or maximizes his objective function. Due to the competitive setting, this strategy may influence the optimization problems of other players. In the non-cooperative setting each player acts selfish, meaning he does not care about the objective of his opponents. A solution concept for this problem is a Nash equilibrium, which was introduced by John Forbes Nash in his Ph.D. thesis in 1950. Convexity of the optimization problems is a crucial assumption for the existence of Nash equilibria. This work investigates settings, where this convexity assumption fails to hold.
The first part of this thesis extends results of \textit{J.-S. Pang} and \textit{G. Scutari} from their paper [SIAM J. Optim. 21, No. 4, 1491--1522 (2011; Zbl 1246.91022)]. In this publication, a game with possibly nonconvex objective functions and nonconvex individual and shared inequality constraints was investigated. We extend these results twofold. Firstly, we generalize the individual and shared polyhedral constraints to general convex constraints and, secondly, we introduce convex and nonconvex, individual and shared equality constraints. After a detailed comparison of solution concepts for the generalized Nash game and a related Nash game, we show that so-called quasi-Nash equilibria exist under assumptions similar to those in the original work, provided some additional constraint qualification holds. Subsequently, we prove that the existence of Nash equilibria needs additional assumptions on the gradients of the equality constraints. Furthermore, a special case of a multi-leader multi-follower game is investigated. We show the convergence of \(\epsilon\)-quasi-Nash equilibria to \(C\)-stationary points and prove that these are also Clarke-stationary under reasonable assumptions.
In the second part of this thesis, an application in computation off\/loading is investigated. We consider several mobile users that are able to off\/load parts of a computation task to a connected server. However, the server has limited computation capacities which leads to competition among the mobile users. If a user decides to off\/load a part of his computation, he needs to wait for the server to finish before he can assemble the results of his computation. This leads to a vanishing constraint in the optimization problem of the mobile users which is a nonconvex and nonsmooth condition. We show the existence of a unique Nash equilibrium for the computation off\/loading game and provide an efficient algorithm for its computation. Furthermore, we present two extensions to this game, which inherit similar properties and we also show the limitations of these formulations.
The third part investigates a hierarchical constrained Cournot game. In the upper level, several firms decide on capacities which act as constraints for the production variables. In the lower level the same firms engage in a Cournot competition, where they choose production variables to maximize profit. The prior chosen capacities are upper bounds on these production variables. This hierarchical setting induces nonconvexity and nonsmoothness in the upper level objective functions. After a detailed sensitivity analysis of the lower level, we give necessary optimality conditions for the upper level, i.e. for the hierarchical Cournot game. Using these conditions, we construct an algorithm which provably finds all Nash equilibria of the game, provided some assumptions are satisfied. This algorithm is numerically tested on several examples which are motivated by the gas market.Improved two sample revenue guarantees via mixed-integer linear programminghttps://zbmath.org/1491.910802022-09-13T20:28:31.338867Z"Ahunbay, Mete Şeref"https://zbmath.org/authors/?q=ai:ahunbay.mete-seref"Vetta, Adrian"https://zbmath.org/authors/?q=ai:vetta.adrian-rSummary: We study the performance of the empirical revenue maximizing (ERM) mechanism in a single-item, single-seller, single-buyer setting. We assume the buyer's valuation is drawn from a regular distribution \(F\) and that the seller has access to \textit{two} independently drawn samples from \(F\). By solving a family of mixed-integer linear programs (MILPs), the ERM mechanism is proven to guarantee at least .5914 times the optimal revenue in expectation. Using solutions to these MILPs, we also show that the worst-case efficiency of the ERM mechanism is at most .61035 times the optimal revenue. These guarantees improve upon the best known lower and upper bounds of .558 and .642, respectively, of
\textit{C. Daskalakis} and \textit{M. Zampetakis} [``More revenue from two samples via factor revealing SDPs'', in: Proceedings of the 21st ACM conference on economics and computation, EC 2020. New York, NY: Association for Computing Machinery (ACM). 257--272 (2020; \url{doi:10.1145/3391403.3399543})].
For the entire collection see [Zbl 1487.91002].Modelling human active search in optimizing black-box functionshttps://zbmath.org/1491.911002022-09-13T20:28:31.338867Z"Candelieri, Antonio"https://zbmath.org/authors/?q=ai:candelieri.antonio"Perego, Riccardo"https://zbmath.org/authors/?q=ai:perego.riccardo"Giordani, Ilaria"https://zbmath.org/authors/?q=ai:giordani.ilaria"Ponti, Andrea"https://zbmath.org/authors/?q=ai:ponti.andrea"Archetti, Francesco"https://zbmath.org/authors/?q=ai:archetti.francescoSummary: Modelling human function learning has been the subject of intense research in cognitive sciences. The topic is relevant in black-box optimization where information about the objective and/or constraints is not available and must be learned through function evaluations. In this paper, we focus on the relation between the behaviour of humans searching for the maximum and the probabilistic model used in Bayesian optimization. As surrogate models of the unknown function, both Gaussian processes and random forest have been considered: the Bayesian learning paradigm is central in the development of active learning approaches balancing exploration/exploitation in uncertain conditions towards effective generalization in large decision spaces. In this paper, we analyse experimentally how Bayesian optimization compares to humans searching for the maximum of an unknown 2D function. 
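The Bayesian optimization loop referred to above can be sketched in a few lines (a self-contained illustration with a Gaussian-process surrogate and expected-improvement acquisition on an assumed 1D toy objective; the study's 2D functions, kernel choices, and random-forest surrogate are not reproduced):

```python
import numpy as np
from math import erf

def rbf(A, B, ls=0.2):
    # Squared-exponential kernel with lengthscale ls (an illustrative choice)
    d = A[:, None] - B[None, :]
    return np.exp(-0.5 * (d / ls) ** 2)

def gp_posterior(Xt, yt, Xs, noise=1e-6):
    # Standard GP regression equations via a Cholesky factorization
    K = rbf(Xt, Xt) + noise * np.eye(len(Xt))
    Ks = rbf(Xt, Xs)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, yt))
    mu = Ks.T @ alpha
    v = np.linalg.solve(L, Ks)
    var = 1.0 - np.sum(v ** 2, axis=0)   # rbf(x, x) = 1 on the diagonal
    return mu, np.sqrt(np.maximum(var, 1e-12))

def expected_improvement(mu, sigma, best):
    # EI for maximization: E[max(f - best, 0)] under the GP posterior
    z = (mu - best) / sigma
    Phi = 0.5 * (1.0 + np.vectorize(erf)(z / np.sqrt(2.0)))
    phi = np.exp(-0.5 * z ** 2) / np.sqrt(2.0 * np.pi)
    return (mu - best) * Phi + sigma * phi

f = lambda x: -(x - 0.3) ** 2            # stand-in for the unknown objective
Xs = np.linspace(0.0, 1.0, 201)          # candidate grid
Xt = np.array([0.0, 0.5, 1.0])           # initial evaluations
yt = f(Xt)
for _ in range(10):                      # fit the surrogate, then maximize EI
    mu, sigma = gp_posterior(Xt, yt, Xs)
    x_next = Xs[np.argmax(expected_improvement(mu, sigma, yt.max()))]
    Xt, yt = np.append(Xt, x_next), np.append(yt, f(x_next))
print(round(float(Xt[np.argmax(yt)]), 3))
```

The EI acquisition makes the exploration/exploitation balance explicit: the \((\mu - \text{best})\Phi\) term rewards exploiting a high posterior mean, while the \(\sigma\phi\) term rewards exploring where the surrogate is uncertain.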
A set of controlled experiments with 60 subjects, using both surrogate models, confirms that Bayesian optimization provides a general model to represent individual patterns of active learning in humans.Strong stability for multiobjective investment problem with perturbed minimax risks of different types and parameterized optimalityhttps://zbmath.org/1491.911162022-09-13T20:28:31.338867Z"Emelichev, Vladimir A."https://zbmath.org/authors/?q=ai:emelichev.vladimir-a"Nikulin, Yury V."https://zbmath.org/authors/?q=ai:nikulin.yury-vSummary: A multicriteria investment Boolean problem of minimizing lost profits with parameterized efficiency and different types of risks is formulated. The lower and upper bounds on the radius of the strong stability of efficient portfolios are obtained. Several earlier known results regarding strong stability of Pareto efficient and extreme portfolios are confirmed.An analytic solution for multi-period uncertain portfolio selection problemhttps://zbmath.org/1491.911222022-09-13T20:28:31.338867Z"Li, Bo"https://zbmath.org/authors/?q=ai:li.bo.1"Sun, Yufei"https://zbmath.org/authors/?q=ai:sun.yufei"Teo, Kok Lay"https://zbmath.org/authors/?q=ai:teo.kok-laySummary: The return rates of risky assets in financial markets are usually assumed to be random variables or fuzzy variables. For the ever-changing real asset market, this assumption may not always be satisfactory. Thus, it is sometimes more realistic to take the return rates as uncertain variables. However, existing works on multi-period uncertain portfolio selection problems do not find analytic optimal solutions. In this paper, we propose a method for deriving an analytic optimal solution to a multi-period uncertain portfolio selection problem. First, a new uncertain risk measure is defined to model the investment risk. Then, we formulate a bi-criteria optimization model, where the investment return is maximized, while the investment risk is minimized.
On this basis, an equivalent transformation is presented to convert the uncertain bi-criteria optimization problem into an equivalent bi-criteria optimization problem. Then, by applying the dynamic programming method, an analytic optimal solution is obtained. Finally, a numerical simulation is carried out to show that the proposed model is realistic and the method being developed is applicable and effective.A multi-period fuzzy mean-minimax risk portfolio model with investor's risk attitudehttps://zbmath.org/1491.911252022-09-13T20:28:31.338867Z"Yang, Xingyu"https://zbmath.org/authors/?q=ai:yang.xingyu"Liu, Weilong"https://zbmath.org/authors/?q=ai:liu.weilong"Chen, Sidou"https://zbmath.org/authors/?q=ai:chen.sidou"Zhang, Yong"https://zbmath.org/authors/?q=ai:zhang.yong.14|zhang.yong.4|zhang.yong.9|zhang.yong.1|zhang.yong.12|zhang.yong.10|zhang.yong.7|zhang.yong.15|zhang.yong.8|zhang.yong.2|zhang.yong.11|zhang.yong.13|zhang.yong.5|zhang.yongSummary: This paper deals with a multi-period portfolio selection problem considering the investor's risk attitude in a fuzzy environment. We regard the return rate of each risky asset as a fuzzy number and use the expected value and semi-absolute deviation to measure its return and risk, respectively. We adopt an \(l_{\infty}\) downside risk function to measure the portfolio's risk, which is represented by the maximum individual risk. Moreover, we formulate a reasonable diversification constraint for the portfolio involving a risk-free asset. Then, we propose a multi-period portfolio selection model with the objectives of maximizing the final expected wealth and minimizing the final cumulative risk. Furthermore, we design a multiple particle swarm optimization algorithm to solve it.
Finally, we illustrate the effectiveness of the model and algorithm by using a real case.Exact solutions and approximations for optimal investment strategies and indifference priceshttps://zbmath.org/1491.911502022-09-13T20:28:31.338867Z"Vellekoop, Michel"https://zbmath.org/authors/?q=ai:vellekoop.michel-h"Gaudenzi, Marcellino"https://zbmath.org/authors/?q=ai:gaudenzi.marcellinoOnline distributed design for control cost reductionhttps://zbmath.org/1491.930072022-09-13T20:28:31.338867Z"Axelson-Fisk, Magnus"https://zbmath.org/authors/?q=ai:axelson-fisk.magnus"Knorn, Steffi"https://zbmath.org/authors/?q=ai:knorn.steffiSummary: In this paper we consider the problem of reducing control costs in multi-agent systems in a distributed fashion, while maintaining or seeking a certain level of global performance. The problem is formulated to minimize the controller gains locally, while respecting a global bound on the state deviations due to the effects of external disturbances. We show that with the chosen performance measure this can be reformulated as a convex optimization problem. However, due to the limited knowledge of the agents and structural changes in the networks, currently existing algorithms for solving distributed optimization problems cannot be used as is. We therefore derive an algorithm that is capable of solving the problem, while also respecting limitations caused by the aforementioned issues. We end the paper with an example where the problem is solved in a multi-terminal DC-grid.Parametric identification of nonlinear structures using particle swarm optimization based on power flow balance criteriahttps://zbmath.org/1491.930262022-09-13T20:28:31.338867Z"Anish, R."https://zbmath.org/authors/?q=ai:anish.r"Shankar, K."https://zbmath.org/authors/?q=ai:shankar.krishnan|shankar.karthik-h|shankar.k-sSummary: This paper discusses a novel approach for nonlinear parameter identification of structures.
An inverse problem was formulated as an optimization problem using two objective functions in the time domain. The first objective function is the error between the measured acceleration and the acceleration predicted by the model, while the second objective function drives the substructure's instantaneous power flow balance, i.e., the sum of the input power, dissipated power, transmitted power, and time rate of kinetic and strain energy, to zero. Here, a cubic nonlinearity in the spring (Duffing equation) and a quadratic nonlinearity in the damper are used to model the nonlinear system. Numerical simulations were performed on a 10-DOF nonlinear system under harmonic excitation using a particle swarm optimization tool in noise-free and 5\% noisy cases. The identified results are compared, in terms of mean absolute percentage error, with other nonlinear parameter identification methods available in the literature. Simulation results show the accuracy of the proposed method in nonlinear parameter identification even in highly noise-contaminated cases.
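The optimization machinery can be illustrated with a generic global-best PSO minimizing a signal-matching error (a sketch under illustrative assumptions; the paper's 10-DOF model, power-flow objective, and specific tool are not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0.0, 2.0, 100)
true_p = np.array([1.5, 3.0])                 # "unknown" amplitude and frequency
measured = true_p[0] * np.sin(true_p[1] * t)  # stand-in for a measured response

def cost(p):
    # Squared error between the measured and model-predicted signals,
    # analogous to the acceleration-matching objective described above
    return np.sum((measured - p[0] * np.sin(p[1] * t)) ** 2)

# Plain global-best PSO over the 2-D parameter box [0, 5]^2
n, dim, w, c1, c2 = 30, 2, 0.7, 1.5, 1.5
x = rng.uniform(0.0, 5.0, (n, dim))
v = np.zeros((n, dim))
pbest = x.copy()
pbest_cost = np.array([cost(p) for p in x])
gbest = pbest[np.argmin(pbest_cost)]
for _ in range(200):
    r1, r2 = rng.random((n, dim)), rng.random((n, dim))
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)  # velocity update
    x = np.clip(x + v, 0.0, 5.0)                               # stay in the box
    costs = np.array([cost(p) for p in x])
    better = costs < pbest_cost
    pbest[better], pbest_cost[better] = x[better], costs[better]
    gbest = pbest[np.argmin(pbest_cost)]
print(np.round(gbest, 2))
```

A second, power-flow-based objective, as in the paper, would simply replace or augment `cost` with another residual driven to zero.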
For the entire collection see [Zbl 1477.93111].Generalized maximum entropy based identification of graphical ARMA modelshttps://zbmath.org/1491.930302022-09-13T20:28:31.338867Z"You, Junyao"https://zbmath.org/authors/?q=ai:you.junyao"Yu, Chengpu"https://zbmath.org/authors/?q=ai:yu.chengpu"Sun, Jian"https://zbmath.org/authors/?q=ai:sun.jian.4|sun.jian.3|sun.jian.1|sun.jian|sun.jian.2"Chen, Jie"https://zbmath.org/authors/?q=ai:chen.jie.8|chen.jie.1|chen.jie.10|chen.jie|chen.jie.2|chen.jie.7|chen.jie.4|chen.jie.9|chen.jie.5|chen.jie.3|chen.jie.6Summary: This paper focuses on the joint estimation of parameters and topologies of multivariate graphical autoregressive moving-average (ARMA) processes. Since the graphical structure of a model can be characterized by the sparsity pattern of its inverse spectral density matrix, a generalized maximum entropy optimization model is derived by imposing a sparse regularization on the inverse spectrum from which the parameters of the AR part and the graph topology can be simultaneously estimated. Then, the whole graphical ARMA model is identified by alternatingly estimating the graphical AR part by solving the regularized maximum entropy problem and the MA part by applying the moment estimation method. The weight to the sparsity regularization term is chosen according to the information theoretic model selection criterion, and simulation examples are provided to illustrate the effectiveness of the proposed identification approach.Saddle point equilibrium model for uncertain discrete systemshttps://zbmath.org/1491.930732022-09-13T20:28:31.338867Z"Sun, Yun"https://zbmath.org/authors/?q=ai:sun.yun"Yan, Hongyan"https://zbmath.org/authors/?q=ai:yan.hongyan"Zhu, Yuanguo"https://zbmath.org/authors/?q=ai:zhu.yuanguoSummary: Uncertainty theory is a newly founded mathematical tool for modeling subjective indeterminacy. This type of indeterminate events is described as uncertain events and measured by belief degrees. 
In this paper, a saddle point equilibrium game model is studied for an uncertain discrete system, where the system is disturbed by an uncertain event at each stage. Applying Bellman's dynamic programming approach, recurrence equations for this model are presented. The explicit solution of a bang-bang game model for the uncertain discrete system is obtained. Furthermore, for general cases, a hybrid intelligent algorithm is provided to approximate the solution numerically. Finally, a discrete game of duopoly is discussed to show the effectiveness of our results.Inverse-adaptive multilayer T-S fuzzy controller for uncertain nonlinear system optimized by differential evolution algorithmhttps://zbmath.org/1491.930762022-09-13T20:28:31.338867Z"Van Kien, Cao"https://zbmath.org/authors/?q=ai:van-kien.cao"Anh, Ho Pham Huy"https://zbmath.org/authors/?q=ai:anh.ho-pham-huy"Son, Nguyen Ngoc"https://zbmath.org/authors/?q=ai:son.nguyen-ngocSummary: This paper proposes a novel inverse-adaptive multilayer T-S fuzzy controller (IFC + AF), optimized with the differential evolution (DE) soft computing algorithm, for a class of robust control methods applied to uncertain nonlinear SISO and MISO systems. First, a novel multilayer T-S fuzzy model is created by combining multiple simple T-S fuzzy models with a sum function at the output. Then, the parameters of the multilayer T-S fuzzy model are optimally identified offline using the DE algorithm to build the inverse of the nonlinear system with uncertain parameters. Second, an adaptive fuzzy-based sliding-mode surface is designed to guarantee that the closed-loop system is asymptotically stable, using the Lyapunov stability principle.
Moreover, benchmark tests are investigated in the MATLAB/Simulink platform, including the spring-mass-damper (SMD) system and the fluid level of a double tank with uncertain parameters, in order to illustrate the effectiveness and feasibility of the proposed IFC + AF control scheme. The IFC + AF control algorithm is investigated with various control coefficients and is strictly compared with the advanced adaptive fuzzy control and the inverse fuzzy control (IFC) approaches. Simulation and experimental results demonstrate the feasibility and performance of the proposed IFC + AF control method.Fixed-time control under spatiotemporal and input constraints: a quadratic programming based approachhttps://zbmath.org/1491.931072022-09-13T20:28:31.338867Z"Garg, Kunal"https://zbmath.org/authors/?q=ai:garg.kunal"Arabi, Ehsan"https://zbmath.org/authors/?q=ai:arabi.ehsan"Panagou, Dimitra"https://zbmath.org/authors/?q=ai:panagou.dimitraSummary: This paper presents a control synthesis framework for a general class of nonlinear, control-affine systems under spatiotemporal and input constraints. First, we study the problem of fixed-time convergence in the presence of input constraints. The relation between the domain of attraction for fixed-time stability with respect to input constraints and the required time of convergence is established. It is shown that increasing the control authority or the required time of convergence can expand the domain of attraction for fixed-time stability. Then, we consider the problem of finding a control input that confines the closed-loop system trajectories in a safe set and steers them to a goal set within a fixed time. To this end, we present a quadratic programming (QP) formulation to compute the corresponding control input. We use slack variables to guarantee the feasibility of the proposed QP under input constraints.
Furthermore, when strict complementary slackness holds, we show that the solution of the QP is a continuous function of the system states and establish the uniqueness of closed-loop solutions to guarantee forward invariance using Nagumo's theorem. To corroborate our proposed methods, we present two case studies: an adaptive cruise control problem and a two-robot motion planning problem.Conjugate gradient-based FLANN algorithms in nonlinear active noise controlhttps://zbmath.org/1491.940232022-09-13T20:28:31.338867Z"Lu, Lu"https://zbmath.org/authors/?q=ai:lu.lu"Zhu, Guangya"https://zbmath.org/authors/?q=ai:zhu.guangya"Yang, Xiaomin"https://zbmath.org/authors/?q=ai:yang.xiaomin"Zhou, Kai"https://zbmath.org/authors/?q=ai:zhou.kai.3|zhou.kai|zhou.kai.2|zhou.kai.1Summary: The conjugate gradient (CG) method exhibits faster convergence than steepest descent and has therefore received considerable attention. In this work, we propose two CG-based methods for nonlinear active noise control (NLANC). The proposed filtered-s Bessel CG (FsBCG)-I algorithm implements the functional link artificial neural network (FLANN) as a controller, and it is derived from the Matérn kernel to achieve enhanced performance in various environments. On the basis of the FsBCG-I algorithm, we further develop the FsBCG-II algorithm, which utilizes the Bessel function of the first kind to constrain outliers. As an alternative, the FsBCG-II algorithm has reduced computational complexity and similar performance compared to the FsBCG-I algorithm. Moreover, the convergence property of the algorithms is analyzed. The proposed algorithms are compared with some highly cited previous works.
Extensive simulation results demonstrate that the proposed algorithms achieve robust performance when the noise source is impulsive, Gaussian, logistic, or time-varying.Automatic search for bit-based division propertyhttps://zbmath.org/1491.940522022-09-13T20:28:31.338867Z"Ghosh, Shibam"https://zbmath.org/authors/?q=ai:ghosh.shibam"Dunkelman, Orr"https://zbmath.org/authors/?q=ai:dunkelman.orrSummary: Division properties, introduced by \textit{Y. Todo} [Lect. Notes Comput. Sci. 9056, 287--314 (2015; Zbl 1370.94545)], are an extension of the square attack (also called the saturation attack or integral cryptanalysis). Given their importance, a large number of works have offered automatic tools to find division properties, primarily based on MILP or SAT/SMT. This paper studies better modeling techniques for finding division properties using Constraint Programming and SAT/SMT-based automatic tools. We use the fact that the Quine-McCluskey algorithm produces a concise CNF representation corresponding to the division trail table of an S-box. As a result, we can offer significantly more compact models, which allow SAT and Constraint Programming tools to outperform previous results.
To show the strength of our new approach, we look at the NIST lightweight candidates KNOT and Ascon. We show several new distinguishers with a lower data complexity for 17-round KNOT-256, KNOT-384 and 19-round KNOT-512. In addition, for 5-round Ascon, we obtain a distinguisher with lower data complexity than the previous division-property-based results.
Finally, we revisit the method to extend the integral distinguisher by composing linear layers at the input and output. We provide a formulation to find the optimal number of linear combinations that need to be considered. As a result of this new formulation, we prove that 18-round KNOT-256 and KNOT-384 have no integral distinguisher under the conventional division property, and we show this more efficiently than previous methods.
For the entire collection see [Zbl 1487.94007].
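The Quine-McCluskey modeling idea in the last abstract can be illustrated with a small sketch. This is not the authors' tool: the 3-bit "S-box" division trail table below is entirely hypothetical, and a production encoder would additionally solve the prime-implicant covering problem to shrink the clause set further. The sketch merges the invalid input/output division patterns with a bare-bones Quine-McCluskey pass and emits one CNF clause per prime implicant, so the conjunction of clauses accepts exactly the valid trails.

```python
# Toy sketch (hypothetical trail table, not a real S-box): encode the
# complement of a division trail table as a compact CNF via a minimal
# Quine-McCluskey merge, the modeling idea the abstract describes.
from itertools import combinations

NBITS = 6  # variables x0..x5 = input division bits u0..u2, output bits v0..v2

# Hypothetical trail table: each integer packs the bits (u0,u1,u2,v0,v1,v2).
valid_trails = {0b000000, 0b100100, 0b100010, 0b010001, 0b001010, 0b111111}
off_set = [p for p in range(1 << NBITS) if p not in valid_trails]

def prime_implicants(points):
    """Iteratively merge terms that differ in exactly one bit.
    A term is a (value, mask) pair; mask bits are don't-cares."""
    terms = {(p, 0) for p in points}
    primes = set()
    while terms:
        merged, used = set(), set()
        for a, b in combinations(sorted(terms), 2):
            diff = a[0] ^ b[0]
            if a[1] == b[1] and bin(diff).count("1") == 1:
                merged.add((a[0] & b[0], a[1] | diff))
                used.update((a, b))
        primes |= terms - used  # terms that merged with nothing are prime
        terms = merged
    return primes

# One clause per off-set implicant: the clause is falsified exactly on the
# assignments that implicant covers, so the conjunction rules out all
# invalid patterns and nothing else.
clauses = []
for value, mask in prime_implicants(off_set):
    clauses.append([(i, 1 - ((value >> i) & 1))
                    for i in range(NBITS) if not (mask >> i) & 1])

def satisfies(assignment, cnf):
    """assignment: integer whose bit i is the value of variable x_i."""
    return all(any((assignment >> i) & 1 == lit for i, lit in clause)
               for clause in cnf)
```

Handing `clauses` to a SAT or CP solver (one Boolean variable per bit of the division vectors) then models exactly the valid trails; the merge step is what makes this encoding far smaller than the naive one-clause-per-invalid-pattern CNF.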