Recent zbMATH articles in MSC 90
https://zbmath.org/atom/cc/90
2023-01-20T17:58:23.823708Z

On the maxima of Motzkin-Straus programs and cliques of graphs
https://zbmath.org/1500.05026
Tang, Qingsong; Zhang, Xiangde; Zhao, Cheng; Zhao, Peng
Summary: In this paper, we establish a connection between the local maximizers (global maximizers) of a Motzkin-Straus quadratic program and a specific type of regular multipartite cliques. Our work extends a remarkable connection between the maximum cliques and the global maximizers of the Motzkin-Straus quadratic program. This connection and its extensions can be successfully employed in optimization to provide heuristics for the maximal cliques in graphs. We provide two counterexamples to the results from previous work about the global and local maximizers of the Motzkin-Straus quadratic program. We then amend the previous theorems by introducing a new structure. We also answer two questions raised by \textit{M. Pelillo} and \textit{A. Jagota} [``Feasible and infeasible maxima in a quadratic program for maximum clique'', Artif. Neural Networks 2, No. 4, 411--420 (1995)] about the maxima of the Motzkin-Straus quadratic program.

Efficient and non-efficient domination of \(\mathbb{Z}\)-stacked Archimedean lattices
https://zbmath.org/1500.05051
Paskowitz, Lyle; Vallapureddy, Nathan; Wierman, John
Summary: On a graph, a vertex \(v\) dominates vertex \(v'\) if \(v=v'\) or \(v\) is adjacent to \(v'\). A graph has an efficient dominating set if there exists a subset of vertices \(D\) such that every vertex in the graph is dominated by exactly one vertex in \(D\).
We investigate efficient domination on the stacked versions of each of the eleven Archimedean lattices, and determine the existence or non-existence of efficient dominating sets on each lattice through integer programming. The proofs of existence are constructive, and the proofs of non-existence are generated by integer programs. We find efficient dominating sets on seven of the stacked lattices and prove that no such sets exist on the other four stacked lattices.
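The defining condition of an efficient dominating set (every vertex dominated exactly once) is easy to check computationally. Below is a minimal brute-force sketch in pure Python on a small cycle graph; the graph and helper names are illustrative only and are not taken from the paper, which uses integer programming on the stacked Archimedean lattices.

```python
from itertools import combinations

def closed_neighborhood(adj, v):
    """Vertices dominated by v: v itself plus its neighbors."""
    return {v} | adj[v]

def is_efficient_dominating(adj, D):
    """Every vertex must be dominated by exactly one vertex of D."""
    counts = {v: 0 for v in adj}
    for d in D:
        for u in closed_neighborhood(adj, d):
            counts[u] += 1
    return all(c == 1 for c in counts.values())

def find_efficient_dominating_set(adj):
    """Brute-force search over all vertex subsets (tiny graphs only)."""
    vs = list(adj)
    for k in range(len(vs) + 1):
        for D in combinations(vs, k):
            if is_efficient_dominating(adj, set(D)):
                return set(D)
    return None

# 6-cycle: vertices 0..5, each adjacent to its two cyclic neighbors
cycle6 = {v: {(v - 1) % 6, (v + 1) % 6} for v in range(6)}
print(find_efficient_dominating_set(cycle6))  # prints {0, 3}
```

On the 6-cycle the search returns \(\{0, 3\}\): the closed neighborhoods of 0 and 3 partition the vertex set, so every vertex is dominated exactly once.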
For the entire collection see [Zbl 1495.05003].

On equivalent representations and properties of faces of the cone of copositive matrices
https://zbmath.org/1500.15029
Kostyukova, O. I.; Tchemisova, T. V.
Summary: The paper is devoted to the study of the cone \(\mathcal{C}OP^p\) of copositive matrices. Based on the concept of immobile indices known from semi-infinite optimization, we define zero and minimal zero vectors of a subset of the cone \(\mathcal{C}OP^p\) and use them to obtain different representations of the faces of \(\mathcal{C}OP^p\) and the corresponding dual cones. The minimal face of \(\mathcal{C}OP^p\) containing a given convex subset of this cone is described, and some propositions are proved that allow obtaining equivalent descriptions of the feasible sets of copositive problems.

On the equality of Clarke and Greenberg-Pierskalla differentials
https://zbmath.org/1500.26010
Cerreia-Vioglio, Simone; Maccheroni, Fabio; Marinacci, Massimo; Montrucchio, Luigi
Summary: We study, for quasiconcave and continuous functions, the relations between the two important notions of Clarke and Greenberg-Pierskalla differentiability.
As an application, we generalize the classic Roy's identity of consumer theory.

Parameter estimation of fractional uncertain differential equations via Adams method
https://zbmath.org/1500.34012
Wu, Guo-Cheng; Wei, Jia-Li; Luo, Cheng; Huang, Lan-Lan
Summary: Parameter estimation for uncertain differential equations has become popular recently. This paper proposes, for the first time, a method based on fractional uncertain differential equations, which offer more degrees of freedom in their parameters. The Adams numerical method and the Adam algorithm are adopted for the optimization problems. The estimation results are compared to show a better forecast. Finally, the predictor-corrector method is adopted to solve the fractional uncertain differential equations. Numerical solutions are demonstrated with varied \(\alpha\)-paths.

Multidelay differential equations: a Taylor expansion approach
https://zbmath.org/1500.34068
Doldo, Philip; Pender, Jamol
In this paper a neutral delay differential equation (DDE) with only a single constant delay is studied. In order to approximate the change in stability in a multidelay system, the authors derive a critical delay in the neutral DDE via a Taylor expansion. Moreover, a DDE with a single constant delay that has a delayed second-derivative term is presented and its critical delay is obtained. Analysis of the above approximations is developed for a two-delay system. The results presented in the paper may have applications in queueing systems in order to obtain updated waiting time or queue length information.
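For orientation on the entry about fractional uncertain differential equations, the classical (integer-order) Adams predictor-corrector idea can be sketched on the test problem \(y' = -y\). This is a minimal pure-Python illustration; the fractional scheme used in the paper replaces the integer-order quadrature weights and is not reproduced here.

```python
import math

def f(t, y):
    return -y  # test problem y' = -y, exact solution exp(-t)

def adams_pc(y0, t0, t1, n):
    """Two-step Adams-Bashforth predictor + Adams-Moulton corrector."""
    h = (t1 - t0) / n
    ts = [t0 + i * h for i in range(n + 1)]
    ys = [y0]
    # bootstrap the second point with one Heun (improved Euler) step
    k1 = f(ts[0], ys[0])
    k2 = f(ts[0] + h, ys[0] + h * k1)
    ys.append(ys[0] + h * (k1 + k2) / 2)
    for i in range(1, n):
        fp, fc = f(ts[i - 1], ys[i - 1]), f(ts[i], ys[i])
        # predictor: y* = y_i + h/2 * (3 f_i - f_{i-1})
        y_star = ys[i] + h * (3 * fc - fp) / 2
        # corrector: y_{i+1} = y_i + h/2 * (f(t_{i+1}, y*) + f_i)
        ys.append(ys[i] + h * (f(ts[i + 1], y_star) + fc) / 2)
    return ts, ys

ts, ys = adams_pc(1.0, 0.0, 1.0, 100)
print(abs(ys[-1] - math.exp(-1.0)))  # small discretization error
```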
Reviewer: Angela Slavova (Sofia)

Entropic regularization of nongradient systems
https://zbmath.org/1500.35201
Adams, Daniel; Duong, Manh Hong; dos Reis, Gonçalo
The authors consider evolution equations of the form
\[
\partial_t\rho=\operatorname{div}(b\rho)+\operatorname{div}\left(\rho A\nabla\frac{\delta\mathcal{F}}{\delta \rho}\right)
\]
where \(b:\mathbb{R}^d\to\mathbb{R}^d\) is a given vector field, \(A\in \mathbb{R}^{d\times d}\) is a symmetric and possibly degenerate matrix and \(\mathcal{F}:\mathcal{P}(\mathbb{R}^d)\to \mathbb{R}\) is a suitable free energy functional which takes into account the effects of potential and internal energies. The unknown \(\rho\) is a time-dependent probability measure on \(\mathbb{R}^d\). The evolution equation considered here is in general a nongradient system since the drift \(b\) is not necessarily a gradient.
The paper studies an entropic regularized variational approximation scheme for the equation, in the spirit of the celebrated JKO-minimizing movement scheme. A central role in the analysis is played by the minimization problem
\[
\arg\min_{\nu \in \mathcal{P}_2^r(\mathbb{R}^d)} \left\{\frac{1}{2h}\mathcal{W}_{c_h,\varepsilon}(\mu,\nu)+\mathcal{F}(\nu)\right\}
\]
where \(\mu\) is a given probability measure, \(\mathcal{P}_2^r(\mathbb{R}^d)\) is the class of absolutely continuous probability measures with finite second moment, \(h\) corresponds to the time step of the scheme and \(\mathcal{W}_{c_h, \varepsilon}\) is the entropic-transportation cost, which comes from a minimization problem involving the sum of an entropic term with strength \(\varepsilon\) and a transportation term with cost \(c_h\). At this level of generality, \(c_h\) depends on the time step \(h\) and does not necessarily come from a distance.
Under certain conditions, the authors prove the well-posedness of the minimization problem and the convergence of their regularized scheme to a solution of the equation. Concrete applications are provided, together with methods to identify the cost \(c_h\). The paper also includes a numerical implementation of the scheme.
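In the discrete setting, an entropic-transportation cost of the kind \(\mathcal{W}_{c_h,\varepsilon}\) can be computed by Sinkhorn iterations. The following pure-Python sketch, with an illustrative cost matrix and weights (not the paper's \(c_h\)), shows the scaling iterations and the resulting transport plan.

```python
import math

def sinkhorn(mu, nu, C, eps, iters=500):
    """Entropic optimal transport between discrete measures mu, nu
    with cost matrix C and regularization strength eps."""
    K = [[math.exp(-Cij / eps) for Cij in row] for row in C]  # Gibbs kernel
    u = [1.0] * len(mu)
    v = [1.0] * len(nu)
    for _ in range(iters):
        # alternating scalings enforce the two marginal constraints
        u = [mu[i] / sum(K[i][j] * v[j] for j in range(len(nu)))
             for i in range(len(mu))]
        v = [nu[j] / sum(K[i][j] * u[i] for i in range(len(mu)))
             for j in range(len(nu))]
    # transport plan: pi_ij = u_i * K_ij * v_j
    return [[u[i] * K[i][j] * v[j] for j in range(len(nu))]
            for i in range(len(mu))]

mu = [0.5, 0.5]
nu = [0.5, 0.5]
C = [[0.0, 1.0], [1.0, 0.0]]
pi = sinkhorn(mu, nu, C, eps=0.1)
print([sum(row) for row in pi])  # approximately the marginal mu
```

For small \(\varepsilon\) the plan concentrates on the cheap diagonal entries, recovering the unregularized transport in the limit.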
Reviewer: Nicolò De Ponti (Trieste)

Estimate of traffic emissions through multiscale second order models with heterogeneous data
https://zbmath.org/1500.35208
Balzotti, Caterina; Briani, Maya
Summary: In this paper we propose a multiscale traffic model, based on the family of Generic Second Order Models, which integrates multiple trajectory data into the velocity function. This combination of a second order macroscopic model with microscopic information allows us to reproduce significant variations in speed and acceleration that strongly influence traffic emissions. We obtain accurate approximations even with a few trajectory data. The proposed approach is therefore a computationally efficient and highly accurate tool for calculating macroscopic traffic quantities and estimating emissions.

Capra-convexity, convex factorization and variational formulations for the \(\ell_0\) pseudonorm
https://zbmath.org/1500.46056
Chancelier, Jean-Philippe; De Lara, Michel
Summary: The so-called \(\ell_0\) pseudonorm, or cardinality function, counts the number of nonzero components of a vector. In this paper, we analyze the \(\ell_0\) pseudonorm by means of so-called Capra (constant along primal rays) conjugacies, for which the underlying source norm and its dual norm are both orthant-strictly monotonic (a notion that we formally introduce and that encompasses the \(\ell_p\)-norms, except for the extreme ones). We obtain three main results. First, we show that the \(\ell_0\) pseudonorm is equal to its Capra-biconjugate, that is, is a Capra-convex function.
Second, we deduce an unexpected consequence, that we call convex factorization: the \(\ell_0\) pseudonorm coincides, on the unit sphere of the source norm, with a proper convex lower semicontinuous function. Third, we establish a variational formulation for the \(\ell_0\) pseudonorm by means of generalized top-\(k\) dual norms and \(k\)-support dual norms (that we formally introduce).

Strong convergence of gradient projection method for generalized equilibrium problem in a Banach space
https://zbmath.org/1500.47092
Farid, Mohammad; Irfan, Syed Shakaib; Khan, Mohammad Firdosh; Khan, Suhel Ahmad
Summary: In this paper, we propose and analyze a hybrid iterative method for finding a common element of the set of solutions of a generalized equilibrium problem, the set of solutions of a variational inequality problem, and the set of fixed points of a relatively nonexpansive mapping in a real Banach space. Further, we prove the strong convergence of the sequences generated by the iterative scheme. Finally, we derive some consequences from our main result. Our work is an improvement and extension of some previously known results recently obtained by many authors.

Strong convergence of paths for split variational inclusion and fixed point problems
https://zbmath.org/1500.47096
Jung, Jong Soo
The purpose of this paper is to introduce a new path based on the hybrid steepest descent method for finding a common element of the solution set \(\Gamma\) of a split variational inclusion problem and the fixed point set of a continuous pseudocontractive mapping.
Further, strong convergence of the path to a common point of \(\Gamma\) and \(\mathrm{Fix}(T)\) is established, which is a solution of a certain variational inequality. As a direct consequence, the author finds the common unique minimum-norm element of \(\Gamma\) and \(\mathrm{Fix}(T)\).
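As a one-dimensional illustration of the strong-convergence machinery at work here, the classical Halpern iteration for a nonexpansive real map can be sketched as follows. The map \(\cos\) and the anchor \(u=0\) are illustrative choices; the paper's setting (pseudocontractive mappings and split inclusions in Hilbert spaces) is far more general.

```python
import math

def halpern(T, u, x0, n_iters=2000):
    """Halpern iteration x_{k+1} = a_k * u + (1 - a_k) * T(x_k),
    with anchor u and vanishing weights a_k = 1/(k+2)."""
    x = x0
    for k in range(n_iters):
        a = 1.0 / (k + 2)
        x = a * u + (1 - a) * T(x)
    return x

fp = halpern(math.cos, u=0.0, x0=1.0)
print(fp)  # near the fixed point of cos, about 0.739
```

The anchor term vanishes as the iteration proceeds, which is what forces strong (not merely weak) convergence toward the fixed point closest to \(u\).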
Reviewer: Hengyou Lan (Zigong)

On approximation of the combination of variational inequality problem and equilibrium problem for nonlinear mappings
https://zbmath.org/1500.47098
Suwannaut, Sarawut; Kangtunyakarn, Atid
Summary: Using the concept of the combination of equilibrium problem, we introduce the combination of variational inequality problem for a finite family of inverse-strongly monotone mappings. Under some control conditions, we prove the strong convergence theorem for these nonlinear problems and fixed points of nonspreading mapping. Finally, we give numerical examples in two-dimensional real space in order to compare numerical results between the combination of variational inequality problem and the variational inequality problem.

Strongly convergent fixed point algorithm with applications to structured monotone inclusion problems
https://zbmath.org/1500.47111
Matsushita, Shin-ya
Summary: As described in this paper, we study the strong convergence properties of variants of the primal-dual splitting algorithm for structured monotone inclusion problems. For this purpose, we first consider a strongly convergent fixed point algorithm in a real Hilbert space and then provide a convergence analysis under mild assumptions. Then, by making use of primal-dual techniques we can employ the proposed fixed point algorithms when solving monotone inclusion problems involving parallel sums and compositions of maximally monotone operators with linear continuous ones.
We show strong convergence of the iteratively generated sequences to the solution.

Perturbed fixed point iterative methods based on pattern structure
https://zbmath.org/1500.47112
Nikazad, T.; Abbasi, M.
Summary: We introduce a new idea of algorithmic structure, called the assigning algorithm, using a finite collection of a subclass of strictly quasi-nonexpansive operators. This new algorithm allows the iteration vectors to take steps on a pattern which is based on a connected directed acyclic graph. The sequential, simultaneous, and string-averaging methods for solving convex feasibility problems are special cases of the new algorithm, which may be used to reduce idle time of processors in parallel implementations. We give a convergence analysis for such an algorithmic structure with perturbation. Also, we extend some existence results of the split common fixed point problem based on the new algorithm. The performance of the new algorithm is illustrated with numerical examples from computer tomography.

Turnpike phenomenon and symmetric optimization problems
https://zbmath.org/1500.49001
Zaslavski, Alexander J.
Publisher's description: ``Written by a leading expert in the turnpike phenomenon, this book is devoted to the study of symmetric optimization, variational and optimal control problems in infinite dimensional spaces and turnpike properties of their approximate solutions. The book presents a systematic and comprehensive study of general classes of problems in optimization, calculus of variations, and optimal control with symmetric structures from the viewpoint of the turnpike phenomenon. The author establishes generic existence and well-posedness results for optimization problems and individual (not generic) turnpike results for variational and optimal control problems.
The book is rich in impressive theoretical results, and the author presents applications to crystallography and to discrete dispersive dynamical systems which have prototypes in economic growth theory.
This book will be useful for researchers interested in optimal control, the calculus of variations, turnpike theory and their applications, such as mathematicians, mathematical economists, and researchers in crystallography, to name just a few.''
Reviewer: Costică Moroşanu (Iaşi)

Simultaneous distributed-boundary optimal control problems driven by nonlinear complementarity systems
https://zbmath.org/1500.49003
Cen, Jinxia; Haddad, Tahar; Nguyen, Van Thien; Zeng, Shengda
Summary: The primary goal of this paper is to study a nonlinear complementarity system (NCS, for short) with a nonlinear and nonhomogeneous partial differential operator and mixed boundary conditions, together with a simultaneous distributed-boundary optimal control problem governed by (NCS). First, we formulate the weak formulation of (NCS) as a mixed variational inequality with double obstacle constraints (MVI, for short), and prove the existence and uniqueness of the solution to (MVI). Then, a power penalty method is applied to (NCS) to introduce an approximating mixed variational inequality without constraints (AMVI, for short). After that, we establish a convergence result: the unique solution of (MVI) can be approximated by the unique solution of (AMVI) as the penalty parameter tends to infinity. Moreover, we explore the solvability of the simultaneous distributed-boundary optimal control problem described by (MVI), and consider a family of approximating optimal control problems driven by (AMVI).
Finally, we provide a result on the asymptotic behavior of optimal controls, system states and minimal values of the approximating optimal control problems.

Second-order optimality conditions for strict Pareto minima and weak efficiency for nonsmooth constrained vector equilibrium problems
https://zbmath.org/1500.49006
Su, Tran Van; Luu, Do Van
Summary: In this article, some types of lower and upper second-order strict pseudoconvexity are provided for establishing sufficient conditions for the second-order strict local Pareto minima of a nonsmooth vector equilibrium problem with set, inequality and equality constraints. Based on the notion of Gerstewitz mappings, some Kuhn-Tucker-type multiplier rules for the strict local Pareto minima of such a problem are obtained. We also construct second-order constraint qualifications in terms of first- and second-order directional derivatives of the (CQ) and (CQ1) types. Using these constraint qualifications, some second-order primal and dual necessary optimality conditions in terms of second-order upper and lower Dini directional derivatives for such minima are derived. Under suitable assumptions on the lower and upper strict pseudoconvexity of order two of the objective and constraint functions, the second-order necessary optimality conditions become sufficient optimality conditions for such a problem.
Some illustrative examples are also given for our findings.

Sequential convex programming for non-linear stochastic optimal control
https://zbmath.org/1500.49014
Bonalli, Riccardo; Lew, Thomas; Pavone, Marco
Summary: This work introduces a sequential convex programming framework for non-linear, finite-dimensional stochastic optimal control, where uncertainties are modeled by a multidimensional Wiener process. We prove that any accumulation point of the sequence of iterates generated by sequential convex programming is a candidate locally-optimal solution for the original problem in the sense of the stochastic Pontryagin Maximum Principle. Moreover, we provide sufficient conditions for the existence of at least one such accumulation point. We then leverage these properties to design a practical numerical method for solving non-linear stochastic optimal control problems based on a deterministic transcription of stochastic sequential convex programming.

Undiscounted control policy generation for continuous-valued optimal control by approximate dynamic programming
https://zbmath.org/1500.49015
Lock, Jonathan; McKelvey, Tomas
Summary: We present a numerical method for generating the state-feedback control policy associated with general undiscounted, constant-setpoint, infinite-horizon, nonlinear optimal control problems with continuous state variables. The method is based on approximate dynamic programming, and is closely related to approximate policy iteration. Existing methods typically terminate based on the convergence of the control policy and either require a discounted problem formulation or demand the cost function to lie in a specific subclass of functions.
The presented method extends existing termination criteria by requiring both the control policy and the resulting system state to converge, allowing for use with undiscounted cost functions that are bounded and continuous. This paper defines the numerical method, derives the relevant underlying mathematical properties, and validates the numerical method with representative examples. A MATLAB implementation with the shown examples is freely available.

Sequential linear integer programming for integer optimal control with total variation regularization
https://zbmath.org/1500.49016
Leyffer, Sven; Manns, Paul
Summary: We propose a trust-region method that solves a sequence of linear integer programs to tackle integer optimal control problems regularized with a total variation penalty. The total variation penalty implies that the considered integer control problems admit minimizers. We introduce a local optimality concept for the problem, which arises from the infinite-dimensional perspective. In the case of a one-dimensional domain of the control function, we prove convergence of the iterates produced by our algorithm to points that satisfy first-order stationarity conditions for local optimality. We demonstrate the theoretical findings on a computational example.

Lagrange-Hamilton approach in optimization problems with isoperimetric-type constraints
https://zbmath.org/1500.49018
Treanţă, Savin
Author's abstract: This paper derives necessary optimality conditions for a certain class of optimal control problems without linearity or convexity assumptions. The optimal control problem has a general objective function of integral type and a finite number of isoperimetric-type constraints.
For proving the main result derived in this paper, the Lagrange function and the control Hamiltonian are introduced and an adjoint differential equation is stated. In addition, we formulate some examples where the derived necessary optimality conditions are applied.
Reviewer: Costică Moroşanu (Iaşi)

A new hybrid method for shape optimization with application to semiconductor equations
https://zbmath.org/1500.49025
Yazidi, Youness El; Ellabib, Abdellatif
Summary: The aim of this work is to reconstruct the depletion region in a pn junction. Starting from the well-known drift-diffusion model, we establish the simplified equation for the considered semiconductor. We then invoke shape optimization techniques to formulate a minimization problem from the inverse problem at hand. The existence of an optimal solution of the optimization problem is proved. The proposed numerical algorithm combines a domain decomposition method with an efficient hybrid conjugate gradient method guided by a differential evolution heuristic; the finite element method is used to discretize the state equation. Finally, we present several numerical examples that validate the theoretical results using the proposed algorithm; in addition, we show simulations of the depletion-region approximation under two different operating modes.

Tight bounds on the maximal perimeter and the maximal width of convex small polygons
https://zbmath.org/1500.52001
Bingane, Christian
Summary: A small polygon is a polygon of unit diameter. The maximal perimeter and the maximal width of a convex small polygon with \(n=2^s\) vertices are not known when \(s \ge 4\). In this paper, we construct a family of convex small \(n\)-gons, \(n=2^s\) and \(s\ge 3\), and show that the perimeters and the widths obtained cannot be improved for large \(n\) by more than \(a/n^6\) and \(b/n^4\) respectively, for certain positive constants \(a\) and \(b\).
In addition, assuming that a conjecture of Mossinghoff is true, we formulate the maximal perimeter problem as a nonlinear optimization problem involving trigonometric functions and, for \(n=2^s\) with \(3 \le s\le 7\), we provide global optimal solutions.

Approximation to stochastic variance reduced gradient Langevin dynamics by stochastic delay differential equations
https://zbmath.org/1500.60030
Chen, Peng; Lu, Jianya; Xu, Lihu
The authors consider the stochastic variance reduced gradient Langevin dynamics algorithm, and show that its output can be approximated by the solution of a stochastic differential equation with delay. Uniform error bounds are obtained using the Wasserstein distance under assumptions that cover both convex and non-convex minimization problems. The proof arguments rely on the Lindeberg principle and on semigroup gradient bounds obtained by the Malliavin calculus.
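The variance-reduced gradient estimator at the heart of SVRG-type Langevin schemes can be sketched on a toy finite-sum problem. This is the plain (noise-free) SVRG loop with illustrative data, not the Langevin dynamics analyzed in the paper; for this particular quadratic the estimator happens to be exact, while in general it is an unbiased estimate whose variance vanishes at the optimum.

```python
import random

def svrg(data, w0, step=0.1, epochs=30, inner=50, seed=0):
    """SVRG on f(w) = (1/n) * sum_i (w - a_i)^2 / 2; minimizer is mean(a)."""
    rng = random.Random(seed)
    n = len(data)
    grad_i = lambda w, a: w - a  # gradient of the i-th summand
    w = w0
    for _ in range(epochs):
        w_snap = w
        mu = sum(grad_i(w_snap, a) for a in data) / n  # full gradient at snapshot
        for _ in range(inner):
            a = data[rng.randrange(n)]
            # variance-reduced estimator: grad at w minus grad at the
            # snapshot, recentered by the snapshot's full gradient
            g = grad_i(w, a) - grad_i(w_snap, a) + mu
            w -= step * g
    return w

data = [1.0, 2.0, 3.0, 6.0]
w = svrg(data, w0=0.0)
print(w)  # close to the mean, 3.0
```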
Reviewer: Nicolas Privault (Singapore)

Block preconditioners for linear systems in interior point methods for convex constrained optimization
https://zbmath.org/1500.65013
Zilli, Giovanni; Bergamaschi, Luca
Summary: In this paper, we address the preconditioned iterative solution of the saddle-point linear systems arising from the (regularized) Interior Point method applied to linear and quadratic convex programming problems, typically of large scale. Starting from the well-studied Constraint Preconditioner, we review a number of inexact variants with the aim to reduce the computational cost of the preconditioner application within the Krylov subspace solver of choice. In all cases we illustrate a spectral analysis showing the conditions under which a good clustering of the eigenvalues of the preconditioned matrix can be obtained, which foreshadows (at least in case PCG/MINRES Krylov solvers are used) a fast convergence of the iterative method. Results on a set of large size optimization problems confirm that the inexact variants of the Constraint Preconditioner can yield efficient solution methods.

Anderson accelerating the preconditioned modulus approach for linear complementarity problems on second-order cones
https://zbmath.org/1500.65018
Li, Zhizhi; Zhang, Huai; Jin, Yimin; Ou-Yang, Le
Summary: Second-order cone linear complementarity problems (SOCLCPs) have wide applications in the real world, and the latest modulus method has proved to be an efficient solver. Here, inspired by the state-of-the-art modulus method and Anderson acceleration (AA), we construct the Anderson-accelerated preconditioned modulus (AA+PMS) approach.
Theoretically, in the first stage, we utilize the Fréchet differentiability of the absolute value function in Jordan algebra to explore its new properties. On this basis, we establish the convergence theory for the PMS approach, different from the previous analysis, and further discuss the selection strategy of the parameters involved. In the second stage, we demonstrate the strong semi-smoothness of the absolute value function in Jordan algebra and thus establish the local convergence theory for the AA+PMS approach. Finally, we conduct rich numerical experiments with application to some well-structured examples, second-order cone programming, the Signorini problem of the Laplacian and the three-dimensional frictional contact problem to verify the robustness and effectiveness of the AA+PMS approach.

Convergence theorems for solving a system of pseudomonotone variational inequalities using Bregman distance in Banach spaces
https://zbmath.org/1500.65021
Jolaoso, Lateef Olakunle; Aphane, Maggie; Raji, Musiliu Tayo; Osinuga, Idowu Ademola; Olajuwon, Bakai Ishola
Summary: In this paper, we present two new parallel Bregman projection algorithms for finding a common solution of a system of pseudomonotone variational inequality problems in a real reflexive Banach space. The first algorithm combines a parallel Bregman subgradient extragradient method with the Halpern iterative method for approximating a common solution of variational inequalities in reflexive Banach spaces.
The second algorithm involves a parallel Bregman subgradient extragradient method, the Halpern iterative method and a line search procedure which aims to avoid having to find a prior estimate of the Lipschitz constant of each cost operator. Two strong convergence results are proved under standard assumptions imposed on the cost operators and control sequences. Finally, we provide some numerical experiments to illustrate the behaviour of the sequences generated by the proposed algorithms.

Linesearch Newton-CG methods for convex optimization with noise
https://zbmath.org/1500.65022
Bellavia, S.; Fabrizi, E.; Morini, B.
Summary: This paper studies the numerical solution of strictly convex unconstrained optimization problems by linesearch Newton-CG methods. We focus on methods employing inexact evaluations of the objective function and inexact and possibly random gradient and Hessian estimates. The derivative estimates are not required to satisfy suitable accuracy requirements at each iteration, but only with sufficiently high probability. Concerning the evaluation of the objective function, we first assume that the noise in the objective function evaluations is bounded in absolute value. Then, we analyze the case where the error satisfies prescribed dynamic accuracy requirements. We provide for both cases a complexity analysis and derive expected iteration complexity bounds.
We finally focus on the specific case of finite-sum minimization, which is typical of machine learning applications.

On the convergence properties of scaled gradient projection methods with non-monotone Armijo-like line searches
https://zbmath.org/1500.65023
Crisci, Serena; Porta, Federica; Ruggiero, Valeria; Zanni, Luca
Summary: In order to solve constrained optimization problems on convex sets, the class of scaled gradient projection methods is often exploited in combination with non-monotone Armijo-like line search strategies. These techniques are adopted for efficiently selecting the steplength parameter and can be realized by means of two different approaches: either the one along the arc or the one along the feasible directions. In this paper we deeply analyze the convergence properties of the scaled gradient projection methods equipped with the non-monotone version of both these Armijo-like line searches. To the best of our knowledge, not all the convergence results proved for either the non-scaled or the monotone gradient projection algorithm have also been stated for the non-monotone and scaled counterpart. The goal of this paper is to fill this gap of knowledge by detailing which hypotheses are needed to guarantee both the stationarity of the limit points and the convergence of the sequence generated by the non-monotone scaled gradient projection schemes. Moreover, in the case of a polyhedral constraint set, we discuss the identification of the active set at the solution for the sequence generated by the along-the-arc approach.
Several numerical experiments on quadratic and non-quadratic optimization problems have been carried out in order to compare the behaviour of the considered scaled gradient projection methods.An infeasible-start framework for convex quadratic optimization, with application to constraint-reduced interior-point and other methodshttps://zbmath.org/1500.650252023-01-20T17:58:23.823708Z"Laiu, M. Paul"https://zbmath.org/authors/?q=ai:laiu.m-paul"Tits, André L."https://zbmath.org/authors/?q=ai:tits.andre-lSummary: A framework is proposed for solving general convex quadratic programs (CQPs) from an infeasible starting point by invoking an existing \textit{feasible-start} algorithm tailored for \textit{inequality}-constrained CQPs. The central tool is an exact penalty function scheme equipped with a penalty-parameter updating rule. The feasible-start algorithm merely has to satisfy certain general requirements, and so does the updating rule. Under mild assumptions, the framework is proved to converge on CQPs with both inequality and equality constraints and, at a negligible additional cost per iteration, produces an infeasibility certificate, together with a feasible point for an (approximately) \(\ell_1\)-least relaxed feasible problem, when the given problem does not have a feasible solution. The framework is applied to a feasible-start constraint-reduced interior-point algorithm previously proved to be highly performant on problems with many more inequality constraints than variables (``imbalanced''). Numerical comparison with popular codes (OSQP, qpOASES, MOSEK) is reported on both randomly generated problems and support-vector machine classifier training problems. The results show that the former typically outperforms the latter on imbalanced problems.
Finally, application of the proposed infeasible-start framework to other feasible-start algorithms is briefly considered, and is tested on a simplex iteration.Understanding the acceleration phenomenon via high-resolution differential equationshttps://zbmath.org/1500.650262023-01-20T17:58:23.823708Z"Shi, Bin"https://zbmath.org/authors/?q=ai:shi.bin"Du, Simon S."https://zbmath.org/authors/?q=ai:du.simon-s"Jordan, Michael I."https://zbmath.org/authors/?q=ai:jordan.michael-irwin"Su, Weijie J."https://zbmath.org/authors/?q=ai:su.weijie-jSummary: Gradient-based optimization algorithms can be studied from the perspective of limiting ordinary differential equations (ODEs). Motivated by the fact that existing ODEs do not distinguish between two fundamentally different algorithms -- Nesterov's accelerated gradient method for strongly convex functions (NAG-SC) and Polyak's heavy-ball method -- we study an alternative limiting process that yields \textit{high-resolution ODEs}. We show that these ODEs permit a general Lyapunov function framework for the analysis of convergence in both continuous and discrete time. We also show that these ODEs are more accurate surrogates for the underlying algorithms; in particular, they not only distinguish between NAG-SC and Polyak's heavy-ball method, but they allow the identification of a term that we refer to as ``gradient correction'' that is present in NAG-SC but not in the heavy-ball method and is responsible for the qualitative difference in convergence of the two methods. We also use the high-resolution ODE framework to study Nesterov's accelerated gradient method for (non-strongly) convex functions, uncovering a hitherto unknown result -- that NAG-C minimizes the squared gradient norm at an inverse cubic rate. 
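For reference, the two discrete schemes contrasted above differ only in where the gradient is evaluated; the minimal sketch below uses a shared, illustrative momentum coefficient (one standard parameterization, not necessarily the one used in the paper's analysis):

```python
import numpy as np

def run(method, grad, x0, s, beta, iters):
    """Iterate NAG-SC or Polyak's heavy-ball with step size s and a shared
    momentum coefficient beta (illustrative parameterization). The only
    structural difference is the gradient evaluation point: heavy-ball uses
    grad at the current iterate, NAG-SC at the extrapolated point -- the
    source of the ``gradient correction'' term in the high-resolution ODE."""
    x_prev = x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        y = x + beta * (x - x_prev)                    # momentum extrapolation
        g = grad(y) if method == "nag-sc" else grad(x)
        x_prev, x = x, y - s * g
    return x

# Example on f(x) = 0.5 * mu * ||x||^2, so grad(x) = mu * x.
mu, s = 1.0, 0.09
beta = (1 - np.sqrt(mu * s)) / (1 + np.sqrt(mu * s))
x_nag = run("nag-sc", lambda x: mu * x, [1.0, -1.0], s, beta, 200)
x_hb = run("heavy-ball", lambda x: mu * x, [1.0, -1.0], s, beta, 200)
```

Both runs converge on this toy quadratic; the behavioural difference the paper analyzes shows up in the admissible step sizes and in the high-resolution ODE limits, not in this simplest example.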
Finally, by modifying the high-resolution ODE of NAG-C, we obtain a family of new optimization methods that are shown to maintain the accelerated convergence rates of NAG-C for smooth convex functions.Mathematical programs with vanishing constraints involving strongly invex functionshttps://zbmath.org/1500.650282023-01-20T17:58:23.823708Z"Joshi, Bhuwan Chandra"https://zbmath.org/authors/?q=ai:joshi.bhuwan-chandraSummary: In this paper, we study Wolfe and Mond-Weir type dual models for a special class of optimization problems known as the mathematical programs with vanishing constraints (MPVC). We establish the weak, strong, converse, restricted converse and strict converse duality results under the assumptions of generalized higher order invexity and strict invexity between the primal MPVC and the corresponding Wolfe type dual. We also derive the weak, strong, converse, restricted converse and strict converse duality results between the primal MPVC and the corresponding Mond-Weir type dual under the assumptions of generalized higher order pseudoinvex, strict pseudoinvex and quasiinvex functions.Cocoercivity, smoothness and bias in variance-reduced stochastic gradient methodshttps://zbmath.org/1500.650302023-01-20T17:58:23.823708Z"Morin, Martin"https://zbmath.org/authors/?q=ai:morin.martin"Giselsson, Pontus"https://zbmath.org/authors/?q=ai:giselsson.pontusSummary: With the purpose of examining biased updates in variance-reduced stochastic gradient methods, we introduce SVAG, a SAG/SAGA-like method with adjustable bias. SVAG is analyzed in a cocoercive root-finding setting, a setting which yields the same results as in the usual smooth convex optimization setting for the ordinary proximal-gradient method. We show that the same is not true for SVAG when biased updates are used. The step-size requirements for when the operators are gradients are significantly less restrictive compared to when they are not. 
This highlights the need to not rely solely on cocoercivity when analyzing variance-reduced methods meant for optimization. Our analysis either matches or improves on previously known convergence conditions for SAG and SAGA. However, in the biased cases these conditions still do not correspond well with practical experience, and we therefore examine the effect of bias numerically on a set of classification problems. The choice of bias seems to primarily affect the early stages of convergence, and in most cases the differences vanish in the later stages of convergence. However, the effect of the bias choice is still significant in a couple of cases.On the convergence analysis of a penalty algorithm for nonsmooth optimization and its performance for solving hard-sphere problemshttps://zbmath.org/1500.650322023-01-20T17:58:23.823708Z"Prado, Renan W."https://zbmath.org/authors/?q=ai:prado.renan-w"Santos, Sandra A."https://zbmath.org/authors/?q=ai:santos.sandra-augusta"Simões, Lucas E. A."https://zbmath.org/authors/?q=ai:simoes.lucas-eduardo-azevedoSummary: This work builds upon the theoretical and numerical results of the recently proposed Penalized Algorithm for Constrained Nonsmooth Optimization (PACNO). Our contribution is threefold. Instead of resting upon approximate stationary points of the subproblems, approximate local minimizers are assumed to be computed. Consequently, a stronger convergence result is obtained, based on a new sequential optimality condition. Moreover, using a blackbox minimization framework and hard-sphere instances, the intrinsic parameters of PACNO have been adjusted, improving outcomes from the literature for the kissing problem, which consists of determining the maximum number of non-overlapping and equal spheres that can touch simultaneously a given sphere of the same size.
Finally, the so-called double-kissing problem has been modeled: two equal and touching spheres are provided, and one aims at finding the maximum number of non-overlapping spheres, having the same radius as the given pair, which can be arranged so that each of them touches at least one of the stated spheres. A nonsmooth formulation for the double-kissing problem is devised, and the solutions of two-, three-, and four-dimensional instances are successfully achieved.A geometric integration approach to nonsmooth, nonconvex optimisationhttps://zbmath.org/1500.650332023-01-20T17:58:23.823708Z"Riis, Erlend S."https://zbmath.org/authors/?q=ai:riis.erlend-skaldehaug"Ehrhardt, Matthias J."https://zbmath.org/authors/?q=ai:ehrhardt.matthias-joachim"Quispel, G. R. W."https://zbmath.org/authors/?q=ai:quispel.gilles-reinout-willem"Schönlieb, Carola-Bibiane"https://zbmath.org/authors/?q=ai:schonlieb.carola-bibianeSummary: The optimisation of nonsmooth, nonconvex functions without access to gradients is a particularly challenging problem that is frequently encountered, for example in model parameter optimisation problems. Bilevel optimisation of parameters is a standard setting in areas such as variational regularisation problems and supervised machine learning. We present efficient and robust derivative-free methods called randomised Itoh-Abe methods. These are generalisations of the Itoh-Abe discrete gradient method, a well-known scheme from geometric integration, which has previously only been considered in the smooth setting. We demonstrate that the method and its favourable energy dissipation properties are well defined in the nonsmooth setting. Furthermore, we prove that whenever the objective function is locally Lipschitz continuous, the iterates almost surely converge to a connected set of Clarke stationary points. We present an implementation of the methods, and apply it to various test problems.
The numerical results indicate that the randomised Itoh-Abe methods can be superior to state-of-the-art derivative-free optimisation methods in solving nonsmooth problems while still remaining competitive in terms of efficiency.Optimal Runge-Kutta stability polynomials for multidimensional high-order methodshttps://zbmath.org/1500.650612023-01-20T17:58:23.823708Z"Hedayati Nasab, Siavash"https://zbmath.org/authors/?q=ai:hedayati-nasab.siavash"Pereira, Carlos A."https://zbmath.org/authors/?q=ai:pereira.carlos-alberto-de-b"Vermeire, Brian C."https://zbmath.org/authors/?q=ai:vermeire.brian-cSummary: In this paper we generate optimized Runge-Kutta stability polynomials for multidimensional discontinuous Galerkin methods recovered using the flux reconstruction approach. Results from linear stability analysis demonstrate that these stability polynomials can yield significantly larger time-step sizes for triangular, quadrilateral, hexahedral, prismatic, and tetrahedral elements with speedup factors of up to 1.97 relative to classical Runge-Kutta methods. Furthermore, performing optimization for multidimensional elements yields modest performance benefits for the triangular, prismatic, and tetrahedral elements. Results from linear advection demonstrate these schemes obtain their designed order of accuracy. Results from Direct Numerical Simulation (DNS) of a Taylor-Green vortex demonstrate the performance benefit of these schemes for unsteady turbulent flows, with negligible impact on accuracy.Exact and heuristic methods in combinatorial optimization. 
A study on the linear ordering and the maximum diversity problemhttps://zbmath.org/1500.900012023-01-20T17:58:23.823708Z"Martí, Rafael"https://zbmath.org/authors/?q=ai:marti.rafael"Reinelt, Gerhard"https://zbmath.org/authors/?q=ai:reinelt.gerhardThis book gives a precise overview of solution methods in combinatorial optimization by showing how the linear ordering problem and the maximum diversity problem can be solved by Branch-and-Bound or Branch-and-Cut together with several heuristics.
The first version of the book [\textit{R. Martí} and \textit{G. Reinelt}, The linear ordering problem. Exact and heuristic methods in combinatorial optimization. Berlin: Springer (2011; Zbl 1213.90005)] only considers the linear ordering problem as an example. By giving a second example in this edition with quite different characteristics, the authors show that many of their methods still work and can be applied to various combinatorial optimization problems. The book is well suited for readers who want to learn how to solve real-world combinatorial optimization problems, since the methods are well explained and many algorithms are given in pseudo-code. The authors explain all the terms they use, and the book is easy to understand. Some knowledge of linear programming may be useful for some parts of the book.
The book contains the following chapters:
\begin{itemize}
\item Chapter 1 defines the linear ordering and maximum diversity problem. It introduces alternative formulations and variants. Furthermore, benchmark instances for both problems are described.
\item The second chapter gives some simple construction and improvement heuristics for both problems, for example local search heuristics, which can easily be adapted to many other combinatorial optimization problems. All heuristics introduced in this chapter are tested on the benchmark instances of the first chapter.
\item Chapter 3 shows how the simple search heuristics can be improved by using meta-heuristics such as tabu search, variable neighborhood search, or simulated annealing. Tests on the benchmark instances show a performance gain compared to the simple heuristics.
\item The fourth chapter introduces the branch-and-bound method, a method for solving general combinatorial optimization problems, not only the linear ordering or maximum diversity problem. It can be used as an exact method, or as a heuristic if the process is stopped before it finishes.
\item The fifth chapter shows how branch-and-bound can be improved by adding cutting planes, leading to the so-called branch-and-cut method. This method is very powerful in solving real-world combinatorial optimization problems. Again, this method is tested on the benchmark instances described in the first chapter.
\item The sixth chapter investigates the linear ordering polytope by describing several classes of inequalities that are facet defining. The authors also show how to calculate a complete description of the linear ordering polytope in small dimensions. To fully understand this chapter, some knowledge of linear programming is helpful, though the authors define all the terms needed.
\item The last chapter is a summary of further aspects related to the linear ordering or maximum diversity problem, such as approximation algorithms.
\end{itemize}
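The branch-and-bound scheme of the fourth chapter can be illustrated on a small generic example; the sketch below uses the 0-1 knapsack problem (not one of the book's two problems) with an LP-relaxation bound:

```python
def knapsack_branch_and_bound(values, weights, capacity):
    """Tiny branch-and-bound for 0-1 knapsack: branch on item inclusion,
    bound each node by the LP (fractional) relaxation, and prune nodes
    whose bound cannot beat the incumbent.
    (Generic illustration, not the book's problem-specific codes.)"""
    # Sort items by value density so the greedy LP bound is tight.
    order = sorted(range(len(values)), key=lambda i: values[i] / weights[i],
                   reverse=True)
    v = [values[i] for i in order]
    w = [weights[i] for i in order]
    best = 0

    def lp_bound(i, cap, val):
        # Greedy fractional fill from item i onward.
        for j in range(i, len(v)):
            if w[j] <= cap:
                cap -= w[j]
                val += v[j]
            else:
                return val + v[j] * cap / w[j]
        return val

    def branch(i, cap, val):
        nonlocal best
        best = max(best, val)
        if i == len(v) or lp_bound(i, cap, val) <= best:
            return                        # leaf reached or node pruned
        if w[i] <= cap:                   # branch: take item i ...
            branch(i + 1, cap - w[i], val + v[i])
        branch(i + 1, cap, val)           # ... or skip it

    branch(0, capacity, 0)
    return best
```

Stopping the recursion early turns the exact method into a heuristic that reports the incumbent `best`, exactly in the spirit described above for the fourth chapter.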
All in all, this book can be recommended to anyone interested in combinatorial optimization who wants to get an overview of the classical solution approaches in this field.
Reviewer: Isabel Beckenbach (Berlin)Distributionally robust front distribution center inventory optimization with uncertain multi-item ordershttps://zbmath.org/1500.900022023-01-20T17:58:23.823708Z"Zhang, Yuli"https://zbmath.org/authors/?q=ai:zhang.yuli"Han, Lin"https://zbmath.org/authors/?q=ai:han.lin"Zhuang, Xiaotian"https://zbmath.org/authors/?q=ai:zhuang.xiaotianSummary: As a new retail model, the front distribution center (FDC) has been recognized as an effective instrument for timely order delivery. However, the high customer demand uncertainty, multi-item order pattern, and limited inventory capacity pose a challenging task for FDC managers to determine the optimal inventory level. To this end, this paper proposes a two-stage distributionally robust (DR) FDC inventory model and an efficient row-and-column generation (RCG) algorithm. The proposed DR model uses a Wasserstein distance-based distributional set to describe the uncertain demand and utilizes a robust conditional value at risk decision criterion to mitigate the risk of distribution ambiguity. The proposed RCG is able to solve the complex max-min-max DR model exactly by repeatedly solving relaxed master problems and feasibility subproblems. We show that the optimal solution of the non-convex feasibility subproblem can be obtained by solving two linear programming problems. Numerical experiments based on real-world data highlight the superior out-of-sample performance of the proposed DR model in comparison with an existing benchmark approach and validate the computational efficiency of the proposed algorithm.Modification of the Teichmann model of population evacuation in conditions of shuttle transporthttps://zbmath.org/1500.900032023-01-20T17:58:23.823708Z"Šeda, Miloš"https://zbmath.org/authors/?q=ai:seda.milos"Peterek, Kamil"https://zbmath.org/authors/?q=ai:peterek.kamilSummary: In this paper, we deal with the modification of one of D.
Teichmann's models, which he proposed in his habilitation thesis for the evacuation of the population living in an industrial zone with a potential threat of large-scale accidents. The author focused on the emergency planning zone of the Dukovany Nuclear Power Plant, which is specific in requiring immediate evacuation, and thus sufficient means of transport ready for evacuation. Here, we assume that the evacuation can be gradual and use a shuttle service.
For the entire collection see [Zbl 1481.90010].A CVRP model for an in-plant milk run systemhttps://zbmath.org/1500.900042023-01-20T17:58:23.823708Z"Teresa Pereira, M."https://zbmath.org/authors/?q=ai:teresa-pereira.m"Lopes, Cristina"https://zbmath.org/authors/?q=ai:lopes.cristina-videira"Ferreira, Luís Pinto"https://zbmath.org/authors/?q=ai:ferreira.luis-pinto"Oliveira, Silvana"https://zbmath.org/authors/?q=ai:oliveira.silvanaSummary: The automotive industry is very competitive, and efficiency is needed in key processes such as logistics in all companies along the supply chain. Using optimization models to increase the efficiency of inbound logistics, such as the in-plant transportation systems, can bring competitive advantages to these companies. This work presents a Vehicle Routing Problem (VRP) model to support and optimize the in-plant transportation system routing, Milk-Run, at the worldwide manufacturing plant of manual assembly in an automotive company, where the routing was planned manually and too much time and resources were used on this task. Previously, 4 or 5 routes were necessary to supply and collect both raw material and finished items in the plant with 21 workstations. A VRP model was developed into a decision support tool using Integer Programming. The IBM ILOG CPLEX 12.0.8 was used to run the developed model using the production plan of two months of 2018. The results point to an optimal number of 2 milk-runs, a reduction of circa 50\%, allowing the freed resources to be used in other production areas of the plant.
For the entire collection see [Zbl 1480.90002].A genetic algorithm for solving the inventory routing problem with time windowshttps://zbmath.org/1500.900052023-01-20T17:58:23.823708Z"Zapata-Cortes, Julian Andres"https://zbmath.org/authors/?q=ai:zapata-cortes.julian-andres"Arango-Serna, Martin Darío"https://zbmath.org/authors/?q=ai:arango-serna.martin-dario"Serna-Úran, Conrado Augusto"https://zbmath.org/authors/?q=ai:serna-uran.conrado-augusto"Gil-Gómez, Hermenegildo"https://zbmath.org/authors/?q=ai:gil-gomez.hermenegildoSummary: Logistics decisions of inventory allocation and transportation are strongly linked to each other, since one directly affects the other: for example, reducing the inventory at a customer facility may lead to a more intensive transportation process, because the smaller the quantities, the more frequently the vehicles must visit the customer. For this reason, inventory and transportation decisions should not be considered separately but together, by using the inventory routing problem -- IRP. This work presents a genetic algorithm to solve the IRP with time windows, which makes it possible to simultaneously optimize inventory allocation and transport routes to supply a set of customers over a specific time horizon. This model yields a minimum total cost as a result of a better combination of the inventories at customers' facilities and the transportation required to supply them. The proposed model and the algorithm developed for its solution achieved significant savings compared to optimizing the routing to supply all customers in each period using the vehicle routing problem model with time windows, which optimizes customers' inventory and minimizes transport costs in each period. However, when this solution is compared with the total distribution cost throughout the time horizon, it generates higher costs than the solution generated by the IRP with time windows presented in this work.
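The overall structure of such a genetic algorithm can be sketched generically; the encoding, crossover, and mutation operators below are toy placeholders, not the paper's IRP-specific operators:

```python
import random

def genetic_algorithm(fitness, random_individual, crossover, mutate,
                      pop_size=30, generations=100, elite=2, p_mut=0.2):
    """Generic GA skeleton of the kind applied to the IRP with time windows:
    selection, crossover, mutation, and elitism, minimizing `fitness`.
    (Illustrative template; the paper's route/inventory encoding differs.)"""
    pop = [random_individual() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)                       # best individuals first
        nxt = pop[:elite]                           # elitism: keep the best
        while len(nxt) < pop_size:
            p1, p2 = random.sample(pop[:pop_size // 2], 2)  # truncation selection
            child = crossover(p1, p2)
            if random.random() < p_mut:
                child = mutate(child)
            nxt.append(child)
        pop = nxt
    return min(pop, key=fitness)

# Toy usage: minimize the number of ones in a 12-bit vector.
random.seed(0)
best = genetic_algorithm(
    fitness=sum,
    random_individual=lambda: [random.randint(0, 1) for _ in range(12)],
    crossover=lambda a, b: a[:6] + b[6:],
    mutate=lambda c: [bit ^ (random.random() < 0.1) for bit in c],
)
```

In an IRP setting, `fitness` would evaluate the combined inventory and routing cost of a candidate delivery plan, with infeasible time-window violations penalized.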
For the entire collection see [Zbl 1492.90011].Optimization and coordination in a service-constrained supply chain with the bidirectional option contract under conditional value-at-riskhttps://zbmath.org/1500.900062023-01-20T17:58:23.823708Z"Zhao, Han"https://zbmath.org/authors/?q=ai:zhao.han"Sun, Bangdong"https://zbmath.org/authors/?q=ai:sun.bangdong"Wang, Hui"https://zbmath.org/authors/?q=ai:wang.hui.4|wang.hui.10|wang.hui.8|wang.hui.13|wang.hui.18|wang.hui.17|wang.hui.20|wang.hui.7|wang.hui.6|wang.hui.15|wang.hui.12|wang.hui.40|wang.hui.11|wang.hui.9|wang.hui.16|wang.hui|wang.hui.14|wang.hui.22"Song, Shiji"https://zbmath.org/authors/?q=ai:song.shiji"Zhang, Yuli"https://zbmath.org/authors/?q=ai:zhang.yuli"Wang, Liejun"https://zbmath.org/authors/?q=ai:wang.liejunSummary: This paper investigates the optimal operational decisions for the risk-neutral supplier and the risk-averse retailer in the supply chain with a service requirement under the conditional value-at-risk. Specifically, the optimal order and production policies with and without the bidirectional option contract are derived. Further, this paper shows that the optimal conditional value-at-risk of the retailer is non-increasing in the service requirement, while the optimal expected profit of the supplier is non-decreasing in the service requirement. When the service requirement is binding, the optimal conditional value-at-risk of the retailer is increasing in the risk aversion, while the optimal expected profit of the supplier is decreasing in the risk aversion. In addition, it is shown that with the bidirectional option contract, the service level provided by the retailer is equivalent to (higher than) that without them when the service requirement is (not) binding. Finally, this paper demonstrates that the bidirectional option contract can mitigate the effect of risk aversion on the retailer's order quantity, benefit both the retailer and supplier, and improve the performance of the supply chain. 
Numerical experiments are conducted to further confirm our results.On the approximability of robust network designhttps://zbmath.org/1500.900072023-01-20T17:58:23.823708Z"Al-Najjar, Yacine"https://zbmath.org/authors/?q=ai:al-najjar.yacine"Ben-Ameur, Walid"https://zbmath.org/authors/?q=ai:ben-ameur.walid"Leguay, Jérémie"https://zbmath.org/authors/?q=ai:leguay.jeremieSummary: Given the dynamic nature of traffic, we investigate the variant of robust network design where we have to determine the capacity to reserve on each link so that each demand vector belonging to a polyhedral set can be routed. The objective is either to minimize congestion or a linear cost. Routing is assumed to be fractional and dynamic (i.e., dependent on the current traffic vector). We first prove that the robust network design problem with minimum congestion cannot be approximated within any constant factor. Then, using the ETH conjecture, we get a \(\Omega(\frac{\log n}{\log\log n})\) lower bound for the approximability of this problem. This implies that the well-known \(O(\log n)\) approximation ratio established by \textit{H. Räcke} [in: Proceedings of the 40th annual ACM symposium on theory of computing, STOC 2008. Victoria, Canada, May 17--20, 2008. New York, NY: Association for Computing Machinery (ACM). 255--264 (2008; Zbl 1231.68051)] in 2008 is tight. Using Lagrange relaxation, we obtain a new proof of the \(O(\log n)\) approximation. An important consequence of the Lagrange-based reduction and our inapproximability results is that the robust network design problem with linear reservation cost cannot be approximated within any constant ratio. This answers a long-standing open question of \textit{C. Chekuri} [``Routing and network design with robustness to changing or uncertain traffic demands'', ACM SIGACT News 38, No. 3, 106--129 (2007; \url{doi:10.1145/1324215.1324236})]. We also give another proof of the result of \textit{N. Goyal} et al. [Lect. Notes Comput. Sci. 
5757, 277--288 (2009; Zbl 1256.90046)] stating that the optimal linear cost under static routing can be \(\Omega(\log n)\) more expensive than the cost obtained under dynamic routing. Finally, we show that even if only two given paths are allowed for each commodity, the robust network design problem with minimum congestion or linear cost is hard to approximate within some constant factor.
In this paper, we study how different methods can be used to control different resource constraints based on the network structure of each instance. Specifically, we propose a heuristic that ``bifurcates'' the resources of a given NRM instance into a structured part resembling a \textit{matroid} and an unstructured part. We then derive an NRM policy with an approximation ratio of \(1 / ( 2 ( 1 + L^\prime ) )\), where \(L^\prime\) denotes the maximum number of distinct unstructured resources required by a customer. Our approach improves theoretical and empirical performance both in applications where the structured constraints arise naturally and in randomly generated NRM instances where the structured constraints must be found by our heuristic. Along the way, our paper also provides a unified framework for analyzing NRM problems, which simplifies existing results and allows for the resources to be \textit{reusable}.Optimisation of electrical network configuration: complexity and algorithms for ring topologieshttps://zbmath.org/1500.900102023-01-20T17:58:23.823708Z"Barth, Dominique"https://zbmath.org/authors/?q=ai:barth.dominique"Mautor, Thierry"https://zbmath.org/authors/?q=ai:mautor.thierry"de Moissac, Arnaud"https://zbmath.org/authors/?q=ai:de-moissac.arnaud"Watel, Dimitri"https://zbmath.org/authors/?q=ai:watel.dimitri"Weisser, Marc-Antoine"https://zbmath.org/authors/?q=ai:weisser.marc-antoineSummary: We consider power distribution networks containing source nodes producing electricity and nodes representing electricity consumers. These sources and these consumers are interconnected by a switched network. Configuring this network consists in deciding which switches are activated and the orientation of the links between these switches, so as to obtain a directed acyclic graph (DAG) from the producer nodes to the consumer nodes. 
This DAG is valid if the electric flow it induces satisfies the demand of each consumer without exceeding the production capacity of each source and the flow capacity of each switch. We show that the problem of deciding if such a valid DAG exists is NP-complete. In the case where such a valid DAG exists, we study the problem of determining a valid DAG that balances the ratio between the amount of electricity produced and the maximum production capacity for each source. We show that this minimization problem is also NP-complete in the general case but that it becomes polynomial in the case of ring network topologies.Data-driven control of hydraulic servo actuator based on adaptive dynamic programminghttps://zbmath.org/1500.900112023-01-20T17:58:23.823708Z"Djordjevic, Vladimir"https://zbmath.org/authors/?q=ai:djordjevic.vladimir"Stojanovic, Vladimir"https://zbmath.org/authors/?q=ai:stojanovic.vladimir-marko"Tao, Hongfeng"https://zbmath.org/authors/?q=ai:tao.hongfeng"Song, Xiaona"https://zbmath.org/authors/?q=ai:song.xiaona"He, Shuping"https://zbmath.org/authors/?q=ai:he.shuping"Gao, Weinan"https://zbmath.org/authors/?q=ai:gao.weinanSummary: The hydraulic servo actuators (HSA) are often used in the industry in tasks that request great powers, high accuracy and dynamic motion. It is well known that HSA is a highly complex nonlinear system, and that the system parameters cannot be accurately determined due to various uncertainties, inability to measure some parameters, and disturbances. This paper considers control problem of the HSA with unknown dynamics, based on adaptive dynamic programming via output feedback. Due to increasing practical application of the control algorithm, a linear discrete model of HSA is considered and an online learning data-driven controller is used, which is based on measured input and output data instead of unmeasurable states and unknown system parameters. 
Hence, the ADP-based data-driven controller in this paper requires neither knowledge of the HSA dynamics nor of the exosystem dynamics. The convergence of the ADP-based control algorithm is also shown theoretically. Simulation results verify the feasibility and effectiveness of the proposed approach in solving the optimal control problem of HSA.Data aggregation in mobile wireless sensor networks represented as stationary edge-Markovian evolving graphshttps://zbmath.org/1500.900122023-01-20T17:58:23.823708Z"Kenyeres, Martin"https://zbmath.org/authors/?q=ai:kenyeres.martin"Kenyeres, Jozef"https://zbmath.org/authors/?q=ai:kenyeres.jozefSummary: Over the past years, mobile wireless sensor networks have attracted significant attention from both industry and academia, as they outperform their static variant in many respects. This paper addresses the average consensus algorithm, a data aggregation mechanism, over mobile wireless sensor networks modeled as stationary edge-Markovian evolving graphs with a varied birth-rate and death-rate of the edges. We evaluate the performance of four different initial configurations of this algorithm by applying the mean square error in several scenarios, in order to find the best-performing initial configuration of the average consensus algorithm as well as to show for which graph parameters the algorithm achieves the highest performance.
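A minimal sketch of the average consensus update on a fixed graph is given below (illustrative only; the paper's networks are edge-Markovian, so the graph Laplacian would change at every step):

```python
import numpy as np

def average_consensus(x0, edges, eps, iters):
    """Plain average consensus: each node repeatedly moves toward its
    neighbours, x_i <- x_i + eps * sum_j (x_j - x_i), written compactly
    with the graph Laplacian L = D - A. For a connected graph and small
    enough eps, all states converge to the mean of the initial values.
    (Fixed-graph sketch with illustrative parameters.)"""
    x = np.asarray(x0, dtype=float)
    n = len(x)
    L = np.zeros((n, n))
    for i, j in edges:
        L[i, i] += 1
        L[j, j] += 1
        L[i, j] -= 1
        L[j, i] -= 1
    for _ in range(iters):
        x = x - eps * (L @ x)
    return x

# Path graph 0-1-2: all nodes converge to the mean of [1, 5, 9], i.e. 5.
x = average_consensus([1.0, 5.0, 9.0], edges=[(0, 1), (1, 2)], eps=0.25, iters=200)
```

The mean square error the authors track is simply the squared deviation of such a state vector from the true average, here `np.mean(x0)`.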
For the entire collection see [Zbl 1481.90010].A stochastic model for the patient-bed assignment problem with random arrivals and departureshttps://zbmath.org/1500.900132023-01-20T17:58:23.823708Z"Heydar, Mojtaba"https://zbmath.org/authors/?q=ai:heydar.mojtaba"O'Reilly, Małgorzata M."https://zbmath.org/authors/?q=ai:oreilly.malgorzata-m"Trainer, Erin"https://zbmath.org/authors/?q=ai:trainer.erin"Fackrell, Mark"https://zbmath.org/authors/?q=ai:fackrell.mark"Taylor, Peter G."https://zbmath.org/authors/?q=ai:taylor.peter-g"Tirdad, Ali"https://zbmath.org/authors/?q=ai:tirdad.aliSummary: We consider the patient-to-bed assignment problem that arises in hospitals. Both emergency patients who require hospital admission and elective patients who have had surgery need to be found a bed in the most appropriate ward. The patient-to-bed assignment problem arises when a bed request is made, but a bed in the most appropriate ward is unavailable. In this case, the next-best decision out of a many alternatives has to be made, according to some suitable decision making algorithm. We construct a Markov chain to model this problem in which we consider the effect on the length of stay of a patient whose treatment and recovery consists of several stages, and can be affected by stays in or transfers to less suitable wards. We formulate a dynamic program recursion to optimise an objective function and calculate the optimal decision variables, and discuss simulation techniques that are useful when the size of the problem is too large. 
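The dynamic program recursion mentioned above has the familiar backward-induction shape, \(V_t(s) = \max_a [r(s,a) + V_{t+1}(s')]\); a deterministic skeleton with a hypothetical interface is sketched below (the paper's recursion additionally takes expectations over random arrivals and departures):

```python
def finite_horizon_dp(T, states, actions, reward, transition):
    """Backward dynamic programming over a finite horizon T with
    deterministic transitions: V_t(s) = max_a [reward(s, a) + V_{t+1}(next)].
    Returns the horizon-T values and a per-period policy.
    (Illustrative skeleton; the stochastic version replaces the lookup
    V[transition(s, a)] by an expectation over successor states.)"""
    V = {s: 0.0 for s in states}               # terminal values
    policy = {}
    for t in reversed(range(T)):
        V_new, pi_t = {}, {}
        for s in states:
            best_a = max(actions(s),
                         key=lambda a: reward(s, a) + V[transition(s, a)])
            V_new[s] = reward(s, best_a) + V[transition(s, best_a)]
            pi_t[s] = best_a
        V = V_new
        policy[t] = pi_t
    return V, policy
```

When the state space (ward occupancies, patient stages) grows too large to enumerate, this exact recursion is exactly what the simulation techniques mentioned above are meant to approximate.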
We illustrate the theory with some numerical examples.A fluid-diffusion-hybrid limiting approximation for priority systems with fast and slow customershttps://zbmath.org/1500.900142023-01-20T17:58:23.823708Z"Yu, Lun"https://zbmath.org/authors/?q=ai:yu.lun"Iravani, Seyed"https://zbmath.org/authors/?q=ai:iravani.seyed-m-r"Perry, Ohad"https://zbmath.org/authors/?q=ai:perry.ohadSummary: We consider a large service system with two customer classes that are distinguished by their urgency and service requirements. In particular, one of the customer classes is considered urgent, and is therefore prioritized over the other class; further, the average service time of customers from the urgent class is significantly larger than that of the nonurgent class. We therefore refer to the urgent class as ``slow,'' and to the nonurgent class as ``fast.'' Due to the complexity and intractability of the system's dynamics, our goal is to develop and analyze an asymptotic approximation, which captures the prevalent fact that, in practice, customers from both classes are likely to experience delays before entering service. However, under existing many-server limiting regimes, only two of the following options can be captured in the limit: (i) either the customers from the prioritized (slow) customer class do not wait at all, or (ii) the fast-class customers do not receive any service. We therefore propose a novel \textit{Fluid-Diffusion Hybrid} (FDH) many-server asymptotic regime, under which the queue of the slow class behaves like a diffusion limit, whereas the queue of the fast class evolves as a (random) fluid limit that is driven by the diffusion process. That FDH limit is achieved by assuming that the service rate of the fast class scales with the system's size, whereas the service rate of the slow class is kept fixed. Numerical examples demonstrate that our FDH limit is accurate when the difference between the service rates of the two classes is sufficiently large. 
We then employ the FDH approximation to study the costs and benefits of de-pooling the service pool, by reserving a small number of servers for the fast class. We prove that, in some cases, a two-pool structure is the asymptotically optimal system design.Optimal scale sizes in input-output allocative data envelopment analysis modelshttps://zbmath.org/1500.900152023-01-20T17:58:23.823708Z"Haghighatpisheh, Hajar"https://zbmath.org/authors/?q=ai:haghighatpisheh.hajar"Kordrostami, Sohrab"https://zbmath.org/authors/?q=ai:kordrostami.sohrab"Amirteimoori, Alireza"https://zbmath.org/authors/?q=ai:amirteimoori.alireza"Lotfi, Farhad Hosseinzadeh"https://zbmath.org/authors/?q=ai:hosseinzadeh-lotfi.farhadSummary: In production theory, industrial units do business in such a way that they use minimum amount of resources to produce maximum amount of products. So, inefficient units decrease their inputs level and increase their outputs level to meet the efficient frontier. By changing inputs and outputs, achieving an optimal scale size (OSS) in industrial units is one of the most important attempts and has attracted considerable attention among researchers. In this paper, an optimal scale size in input-output allocative DEA model is defined to each production firm in which the costs of inputs and the revenues of outputs are considered. We first rearrange the average-revenue efficiency measure that combines scale and output allocative efficiencies. Next, we simultaneously consider both of inputs and outputs in a new average-cost/revenue efficiency measure (ACRE). It has been shown that the proposed ACRE measure is the ratio of the profitability efficiency to ray average productivity. A numerical heuristic procedure is proposed to calculate a relatively good approximation of the new OSS in a convex and continuous technology set. 
To illustrate the real applicability of the proposed approach, we use a real case on 39 electricity distribution companies.Scheduling jobs of equal length: complexity, facets and computational resultshttps://zbmath.org/1500.900162023-01-20T17:58:23.823708Z"Crama, Yves"https://zbmath.org/authors/?q=ai:crama.yves"Spieksma, Frits C. R."https://zbmath.org/authors/?q=ai:spieksma.frits-c-rSummary: Given are \(n\) jobs, which have to be performed on a single machine within a fixed timespan \(\{1,2, \dots T\}\). The processing time (or length) of each job equals \(p\), \(p \in \mathbb{N}\). The processing cost of each job is an arbitrary function of its start-time. The problem is to schedule all jobs so as to minimize the sum of the processing costs.
This problem is proved to be NP-hard, already for \(p=2\) and \(0-1\) processing costs. On the other hand, when \(T= np+c\), with \(c\) constant, the problem can be solved in polynomial time. A partial polyhedral description of the set of feasible solutions is presented. In particular, two classes of facet-defining inequalities are described, for which the separation problem is polynomially solvable. Also, we exhibit a class of objective functions for which the inequalities in the LP-relaxation guarantee integral solutions.
Finally, we present a simple cutting plane algorithm and report on its performance on randomly generated problem instances.
For the entire collection see [Zbl 0875.90002].Formulating a scheduling problem with almost identical jobs by using positional completion timeshttps://zbmath.org/1500.900172023-01-20T17:58:23.823708Z"Hoogeveen, Han"https://zbmath.org/authors/?q=ai:hoogeveen.johannes-adzer"van de Velde, Steef"https://zbmath.org/authors/?q=ai:van-de-velde.steef-lSummary: We present a new type of formulation for scheduling problems where jobs are identical in all aspects but one. This type of formulation is particularly useful for deriving Lagrangian lower and upper bounds for flowshop problems, where the jobs only differ in their processing times. We illustrate the effectiveness of this type of formulation for a two-machine flowshop problem where the first machine is a batching machine.
For the entire collection see [Zbl 0875.90002].Scheduling unit jobs with compatible release dates on parallel machines with nonstationary speedshttps://zbmath.org/1500.900182023-01-20T17:58:23.823708Z"Queyranne, Maurice"https://zbmath.org/authors/?q=ai:queyranne.maurice"Schulz, Andreas S."https://zbmath.org/authors/?q=ai:schulz.andreas-sSummary: We consider the problem of nonpreemptively scheduling a set of jobs with identical processing requirements (\textit{unit jobs}) on parallel machines with nonstationary speeds. In addition to the case of uniform machines, this allows for such predictable effects as operator learning and tool wear and tear, as well as such planned activities as machine upgrades, maintenance and the preassignment of other operations, all of which may affect the available processing speed of the machine at different points in time. We also allow release dates that satisfy a certain \textit{compatibility} property. We show that the convex hull of feasible completion time vectors is a supermodular polyhedron. For nonidentical but compatible release dates, the supermodular function defining this polyhedron is the Dilworth truncation of a (non supermodular) function defined in a natural way from the release dates. This supermodularity result implies that the total weighted flow time can be minimized by a greedy algorithm. Supermodular polyhedra thus provide a general framework for several unit job, parallel machine scheduling problems and for their solution methods.
For the entire collection see [Zbl 0875.90002].Solving the quadratic assignment problemhttps://zbmath.org/1500.900192023-01-20T17:58:23.823708Z"Sergienko, I. V."https://zbmath.org/authors/?q=ai:sergienko.ivan-v"Shylo, V. P."https://zbmath.org/authors/?q=ai:shylo.vladimir-p"Chupov, S. V."https://zbmath.org/authors/?q=ai:chupov.sergey-v"Shylo, P. V."https://zbmath.org/authors/?q=ai:shylo.p-vSummary: Two modifications of the repeated iterated tabu algorithm for solving the quadratic assignment problem (with and without kernel allocation technology) are proposed. These modifications are analyzed and compared with the best modern algorithms for solving this problem. The efficiency of the developed algorithms is shown, in particular, for large-scale problems for which new records were found with their help.An analysis of multiple contaminant warning system design objectives for sensor placement optimization in water distribution networkshttps://zbmath.org/1500.900202023-01-20T17:58:23.823708Z"Watson, Jean-Paul"https://zbmath.org/authors/?q=ai:watson.jean-paul"Hart, William E."https://zbmath.org/authors/?q=ai:hart.william-e"Greenberg, Harvey J."https://zbmath.org/authors/?q=ai:greenberg.harvey-j"Phillips, Cynthia A."https://zbmath.org/authors/?q=ai:phillips.cynthia-aSummary: A key strategy for protecting municipal water supplies is the use of sensors to detect the presence of contaminants in associated water distribution systems. Deploying a contamination warning system involves the placement of a limited number of sensors -- placed in order to maximize the level of protection afforded. Researchers have proposed several models and algorithms for generating such placements, each optimizing with respect to a different design objective. The use of disparate design objectives raises several questions: (1) What is the relationship between optimal sensor placements for different design objectives? and (2) Is there any risk in focusing on specific design objectives? 
We model the sensor placement problem via a mixed-integer programming formulation of the well-known \(p\)-median problem from facility location theory to answer these questions. Our model can express a broad range of design objectives. Using three large test networks, we show that optimal solutions with respect to one design objective are often highly sub-optimal with respect to other design objectives. However, it is sometimes possible to construct solutions that are simultaneously near-optimal with respect to a range of design objectives. The design of contamination warning systems thus requires careful and simultaneous consideration of multiple, disparate design objectives.
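To make the \(p\)-median connection concrete, here is an illustrative brute-force version on an invented 5-node distance matrix (the paper instead solves a mixed-integer programming formulation on large water distribution networks, with impact measures in place of distances):

```python
import itertools

# Hypothetical symmetric "impact" matrix between 5 network nodes.
dist = [
    [0, 2, 4, 7, 9],
    [2, 0, 2, 5, 7],
    [4, 2, 0, 3, 5],
    [7, 5, 3, 0, 2],
    [9, 7, 5, 2, 0],
]
n, p = 5, 2  # place p sensors among n candidate locations

def cost(facilities):
    # p-median objective: each node is assigned to its closest chosen sensor.
    return sum(min(dist[i][j] for j in facilities) for i in range(n))

# With n and p this small, exhaustive enumeration is feasible.
best = min(itertools.combinations(range(n), p), key=cost)
```

Swapping the distance matrix for objective-specific impact coefficients is what lets a single \(p\)-median model express the different design objectives compared in the paper.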
For the entire collection see [Zbl 1490.90002].Information sharing in a collectors-led closed-loop supply chainhttps://zbmath.org/1500.900212023-01-20T17:58:23.823708Z"Cai, Keyuan"https://zbmath.org/authors/?q=ai:cai.keyuan"Zhang, Yiwen"https://zbmath.org/authors/?q=ai:zhang.yiwen"Lou, Yaqi"https://zbmath.org/authors/?q=ai:lou.yaqi"He, Shuguang"https://zbmath.org/authors/?q=ai:he.shuguangSummary: This paper considers a closed-loop supply chain (CLSC) in which two collectors provide used products to a manufacturer for remanufacturing. The collectors act as the channel leader, while the manufacturer is the follower and possesses private demand forecast information. We aim to investigate the manufacturer's information sharing strategy and the effect of different information sharing strategies on the participants in the CLSC. We find that the manufacturer has an incentive to share its demand forecast information with the collectors. When the collectors' investment cost-efficiency is high, the manufacturer prefers to share its information with only one collector. Under this scenario, the collector obtains the highest expected profit in all the information sharing cases. In addition, when the investment cost-efficiency is low, the manufacturer is willing to share its information with both collectors.Pivot rules for circuit-augmentation algorithms in linear optimizationhttps://zbmath.org/1500.900222023-01-20T17:58:23.823708Z"De Loera, Jesús A."https://zbmath.org/authors/?q=ai:de-loera.jesus-a"Kafer, Sean"https://zbmath.org/authors/?q=ai:kafer.sean"Sanità, Laura"https://zbmath.org/authors/?q=ai:sanita.lauraSummary: Circuit-augmentation algorithms are generalizations of the simplex method, where in each step one is allowed to move along a fixed set of directions, called circuits, that is a superset of the edges of a polytope. We show that in the circuit-augmentation framework the greatest-improvement and Dantzig pivot rules are NP-hard, already for 0/1-LPs. 
In contrast, the steepest-descent pivot rule can be carried out in polynomial time in the 0/1 setting, and the number of circuit augmentations required to reach an optimal solution according to this rule is strongly polynomial for 0/1-LPs. The number of circuit augmentations has been of interest as a proxy for the number of steps in the simplex method, and the circuit-diameter of polyhedra has been studied as a lower bound to the combinatorial diameter of polyhedra. Extending prior results, we show that for any polyhedron \(P\) the circuit-diameter is bounded by a polynomial in the input bit-size of \(P\). This is in contrast with the best bounds for the combinatorial diameter of polyhedra. Interestingly, we show that the circuit-augmentation framework can be exploited to draw novel conclusions about the classical simplex method itself: In particular, as a byproduct of our circuit results, we prove that (i) computing the shortest (monotone) path to an optimal solution on the 1-skeleton of a polytope is NP-hard, and hard to approximate within a factor better than 2, and (ii) for \(0/1\) polytopes, a monotone path of strongly polynomial length can be constructed using steepest improving edges.
We introduce the notion of inverse and forward stability in inverse LP and propose a novel inverse LP method that determines a set of objective functions that are stable under data imperfection and generate forward solutions close to the relevant subset of the data. We formulate the inverse model as a large-scale mixed-integer program (MIP) and elucidate its connection to biclique problems, which we exploit to develop efficient algorithms that solve much smaller MIPs instead to construct a solution to the original problem. We numerically evaluate the stability of the proposed method and demonstrate its use in the diet recommendation and transshipment applications.A two-stage structure with undesirable outputs: slacks-based and additive slacks-based measures DEA modelshttps://zbmath.org/1500.900242023-01-20T17:58:23.823708Z"Asanimoghadam, Keyvan"https://zbmath.org/authors/?q=ai:asanimoghadam.keyvan"Salahi, Maziar"https://zbmath.org/authors/?q=ai:salahi.maziar"Jamalian, Ali"https://zbmath.org/authors/?q=ai:jamalian.ali"Shakouri, Rita"https://zbmath.org/authors/?q=ai:shakouri.ritaSummary: The slacks-based measure (SBM) and additive SBM (ASBM) models are two widely used DEA models acting based on inputs and outputs slacks and giving efficiency scores between zero and unity. In this paper, we use both models with the application of the weak disposability axiom for outputs to evaluate efficiency in a two-stage structure in the presence of undesirable outputs. In the external evaluation, the SBM model is reformulated as a linear program and the ASBM model is reformulated as a second-order cone program (SOCP) that is a convex programming problem. In the internal evaluation, the SBM model for a specific choice of weights is linearized while the ASBM model is presented as an SOCP for arbitrary choice of weights. 
Finally, the proposed models are applied to a real dataset for which efficiency comparison and Pearson correlation coefficient analysis show advantages of the ASBM model over the SBM model.Centralized resource allocation to create new most productive scale size (MPSS) DMUshttps://zbmath.org/1500.900252023-01-20T17:58:23.823708Z"Nojoumi, Kamyar"https://zbmath.org/authors/?q=ai:nojoumi.kamyar"Saati, Saber"https://zbmath.org/authors/?q=ai:saati.saber"Khoshandam, Leila"https://zbmath.org/authors/?q=ai:khoshandam.leilaSummary: Data envelopment analysis (DEA) is a mathematical programming-based technique to evaluate the performance of a homogeneous group of decision-making units (DMUs) with multiple inputs and outputs. One of the DEA applications involves aggregating input resources and reallocating them to create efficient DMUs. The present study employs the centralized resource allocation (CRA) approach to develop a model for creating new DMUs. These new DMUs operate at the most productive scale size (MPSS), and all new DMUs lie on a strong supporting hyperplane. In this case, a dual model is used to obtain the strong supporting hyperplane on which all new DMUs lie. This hyperplane is derived by solving the dual model and generating a common set of weights. 
Then, it is shown that all new DMUs lie on a strong supporting hyperplane, and an MPSS facet is the intersection of this hyperplane with the production possibility set (PPS).Drainage area maximization in unconventional hydrocarbon fields with integer linear programming techniqueshttps://zbmath.org/1500.900262023-01-20T17:58:23.823708Z"Aliaga, Fernando"https://zbmath.org/authors/?q=ai:aliaga.fernando"Delle Donne, Diego"https://zbmath.org/authors/?q=ai:donne.diego-delle"Durán, Guillermo"https://zbmath.org/authors/?q=ai:duran.guillermo-alfredo"Marenco, Javier"https://zbmath.org/authors/?q=ai:marenco.javier-lSummary: The drainage area maximization problem for an unconventional hydrocarbon field is addressed with the objective of designing a development plan that optimizes total production while satisfying environmental and operating constraints. The various characteristics of the problem are presented and a solution approach is developed around an integer linear programming model applied to a discretisation of the field's geographical area. Computational experiments show that the approach provides a practical response to the problem, generating solutions that comply with all of the constraints. The algorithm implemented under this approach has been incorporated into a software tool for planning and managing unconventional hydrocarbon operations and has been used since 2018 by two leading petroleum companies in Argentina to improve unconventional development plans for the country's ``Vaca Muerta'' geological formation.Polyhedra and optimization in connection with a weak majorization orderinghttps://zbmath.org/1500.900272023-01-20T17:58:23.823708Z"Dahl, Geir"https://zbmath.org/authors/?q=ai:dahl.geirSummary: We introduce the concept of weak \(k\)-majorization extending the classical notion of weak sub-majorization. 
For integers \(k\) and \(n\) with \(k \leq n\), a vector \(x \in \mathbb{R}^n\) is weakly \(k\)-majorized by a vector \(q \in \mathbb{R}^k\) if the sum of the \(r\) largest components of \(x\) does not exceed the sum of the \(r\) largest components of \(q\), for \(r=1,\dots, k\). For a given \(q\), the set of vectors weakly \(k\)-majorized by \(q\) defines a polyhedron \(P(q; k)\), and we determine all its vertices. We also determine the vertices and a complete and nonredundant linear description of the integer hull of \(P(q; k)\). The results are used to give simple and efficient (polynomial time) algorithms for associated linear and integer linear programming problems.
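The definition above translates directly into a membership test for \(P(q; k)\); a sketch of the definition only, not of the paper's polyhedral results:

```python
def weakly_k_majorized(x, q):
    """True iff the sum of the r largest entries of x is at most the sum of
    the r largest entries of q, for r = 1, ..., k = len(q)."""
    xs = sorted(x, reverse=True)
    qs = sorted(q, reverse=True)
    return all(sum(xs[:r]) <= sum(qs[:r]) for r in range(1, len(q) + 1))

# x = (1,1,1,1) is weakly 2-majorized by q = (3,1): 1 <= 3 and 1+1 <= 3+1.
in_polyhedron = weakly_k_majorized([1, 1, 1, 1], [3, 1])
# x = (3,2,0,0) is not: the two largest entries sum to 5 > 3+1.
outside = weakly_k_majorized([3, 2, 0, 0], [3, 1])
```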
For the entire collection see [Zbl 0875.90002].Integrated lot-sizing and one-dimensional cutting stock problem with usable leftovershttps://zbmath.org/1500.900282023-01-20T17:58:23.823708Z"do Nascimento, D. N."https://zbmath.org/authors/?q=ai:do-nascimento.d-n"de Araujo, S. A."https://zbmath.org/authors/?q=ai:de-araujo.silvio-alexandre"Cherri, A. C."https://zbmath.org/authors/?q=ai:cherri.adriana-cristinaSummary: This paper addresses the integration of the lot-sizing problem and the one-dimensional cutting stock problem with usable leftovers (LSP-CSPUL). This integration aims to minimize the cost of cutting items from objects available in stock, allowing the bringing forward production of items that have known demands in a future planning horizon. The generation of leftovers, that will be used to cut future items, is also allowed and these leftovers are not considered waste in the current period. Inventory costs for items and leftovers are also considered. A mathematical model for the LSP-CSPUL is proposed to represent this problem and an approach, using the simplex method with column generation, is proposed to solve the linear relaxation of this model. A heuristic procedure, based on a \textit{relax-and-fix} strategy, was also proposed to find integer solutions. Computational tests were performed and the results show the contributions of the proposed mathematical model, as well as, the quality of the solutions obtained using the proposed method.An MIP-CP based approach for two- and three-dimensional cutting problems with staged guillotine cutshttps://zbmath.org/1500.900292023-01-20T17:58:23.823708Z"do Nascimento, Oliviana Xavier"https://zbmath.org/authors/?q=ai:nascimento.oliviana-xavier-do"de Queiroz, Thiago Alves"https://zbmath.org/authors/?q=ai:alves-de-queiroz.thiago"Junqueira, Leonardo"https://zbmath.org/authors/?q=ai:junqueira.leonardoSummary: This work presents guillotine constraints for two- and three-dimensional cutting problems. 
These problems look for a subset of rectangular items of maximum value that can be cut from a single rectangular container. Guillotine constraints seek to ensure that items are arranged in such a way that cuts from one edge of the container to the opposite edge completely separate them. In particular, we consider the possibility of 2, 3, and 4 cutting stages in a predefined sequence. These constraints are considered within a two-level iterative approach that combines the resolution of integer linear programming and constraint programming models. Experiments with instances from the literature are carried out, and the results show that the proposed approach can solve approximately 60\% and 50\% of the instances for the two- and three-dimensional cases, respectively, in less than 500 s. For the two-dimensional case, in comparison with the recent literature, it was possible to improve the upper bound for 16\% of the instances.
For the entire collection see [Zbl 0875.90002].The Hilbert basis of the cut cone over the complete graph \(K_6\)https://zbmath.org/1500.900312023-01-20T17:58:23.823708Z"Laburthe, François"https://zbmath.org/authors/?q=ai:laburthe.francoisSummary: Given a polyhedral cone \(C\), generated by the integer vectors \(x_1,\dots, x_n\), the set of integer vectors of \(C\) is an additive semi-group, whose minimal set of generators (for linear combinations with coefficients in \(\mathbb{Z}^+\)) is called the \textit{Hilbert basis} of \(C\). Integer points of \(C\) which are not in the integer cone \(\mathbb{Z}^+(x_1,\dots, x_n)\) are called \textit{quasi}-\(h\) \textit{points}.
The set of cuts on a graph forms a Hilbert basis for all graphs strictly smaller than \(K_6\), but not for \(K_6\).
Two results are proven here:
\begin{itemize}
\item The Hilbert basis for the cut cone over \(K_6\) is explicitly given: it consists of the 31 non-zero cuts and of the 15 vectors \(d^e\) defined for each edge \(e\) by: \(d_f^e = 2\) for \(f \neq e\) and \(d_e^e = 4\).
\item The quasi-\(h\) points for \(K_6\) are characterized: they are exactly the vectors \(d^e + n \delta(v)\) for \(v\) not adjacent to \(e\) and \(n \in \mathbb{Z}^+\).
\end{itemize}
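The counts in the first item above can be checked directly; a small sketch enumerating the cuts of \(K_6\) and the vectors \(d^e\):

```python
from itertools import combinations

V = range(6)
E = list(combinations(V, 2))  # the 15 edges of K6

def cut(S):
    # Incidence vector of delta(S): 1 on edges with exactly one endpoint in S.
    return tuple(int((u in S) != (v in S)) for u, v in E)

# S and its complement yield the same cut, so there are (2^6 - 2)/2 = 31
# distinct non-zero cuts.
cuts = {cut(set(S)) for r in range(1, 6) for S in combinations(V, r)}

# d^e: value 4 on edge e and 2 on every other edge, one vector per edge.
d = [tuple(4 if f == e else 2 for f in E) for e in E]
```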
For the entire collection see [Zbl 0875.90002].Optimal sales and operations planning for integrated steel industrieshttps://zbmath.org/1500.900322023-01-20T17:58:23.823708Z"Almeida, J. F. F."https://zbmath.org/authors/?q=ai:almeida.j-f-f"Conceição, S. V."https://zbmath.org/authors/?q=ai:conceicao.samuel-vieira"Pinto, L. R."https://zbmath.org/authors/?q=ai:pinto.l-r"Oliveira, B. R. P."https://zbmath.org/authors/?q=ai:oliveira.b-r-p"Rodrigues, L. F."https://zbmath.org/authors/?q=ai:rodrigues.lucas-f-m|rodrigues.lasara-fabriciaSummary: This paper aims at inspiring a change in the traditional approach of sales and operations planning (S\&OP) of integrated steel industries that have reached a maturity planning level allowing the use of sophisticated technologies. Therefore, we propose a two-stage stochastic programming model suitable for reproducing the rolling horizon framework of tactical planning. Simulations of one year of operations in an integrated steel industry showed a potential improvement of about 15\% on overall supply chain profit in comparison with a plan produced traditionally. Besides, we evaluated two scenarios to analyze the effects of a flexible plan when facing a tax raise and a capacity reduction. As a practical implication, the model provides a consensus view of the S\&OP stages at once, supporting top managers to make better decisions. 
The two-stage modeling approach also reinforces the need for interaction of the sales team with procurement, production, and distribution teams.Stochastic approximation methods for the two-stage stochastic linear complementarity problemhttps://zbmath.org/1500.900332023-01-20T17:58:23.823708Z"Chen, Lin"https://zbmath.org/authors/?q=ai:chen.lin.7"Liu, Yongchao"https://zbmath.org/authors/?q=ai:liu.yongchao"Yang, Xinmin"https://zbmath.org/authors/?q=ai:yang.xinmin"Zhang, Jin"https://zbmath.org/authors/?q=ai:zhang.jin.2Summary: The two-stage stochastic linear complementarity problem (TSLCP), which can be regarded as a special and important reformulation of two-stage stochastic linear programming, has arisen in various fields, such as stochastic programming, game theory, traffic equilibrium, and theoretical economics. Considerable effort has been devoted to designing numerical methods for solving TSLCPs. A popular approach is to integrate the progressive hedging algorithm (PHA) as a sub-algorithm into a discretization framework. In this paper, aiming to solve large-scale TSLCPs, we propose two kinds of stochastic methods: the stochastic approximation method based on projection (SAP) and the dynamic sampling SAP (DS-SAP), both of which offering more direct and improved control of the computational costs of the involved subproblems, especially compared with the PHA. In particular, the linear complementarity subproblems are solved inexactly during each iteration, and the convergence analysis of both SAP and DS-SAP with an inexactness criterion is presented. Moreover, numerical implementations and practical applications demonstrate the efficiency of our proposed methods.Vaidya's method for convex stochastic optimization problems in small dimensionhttps://zbmath.org/1500.900342023-01-20T17:58:23.823708Z"Gladin, E. L."https://zbmath.org/authors/?q=ai:gladin.e-l"Gasnikov, A. V."https://zbmath.org/authors/?q=ai:gasnikov.alexander-v"Ermakova, E. 
S."https://zbmath.org/authors/?q=ai:ermakova.e-sSummary: The paper deals with a general problem of convex stochastic optimization in a space of small dimension (for example, 100 variables). It is known that for deterministic problems of convex optimization in small dimensions, the methods of centers of gravity type (for example, Vaidya's method) provide the best convergence. For stochastic optimization problems, the question of the possibility of applying Vaidya's method can be reduced to the question of how it accumulates inaccuracies in the subgradient. A recent result of the authors stating that there is no accumulation of inaccuracies at the iterations of Vaidya's method allows the authors to propose its analog for solving stochastic optimization problems. The main technique is to replace the subgradient in Vaidya's method by its batched analogue (the arithmetic mean of stochastic subgradients). In the present paper, this plan is implemented, which results in an efficient method (under conditions of the possibility of parallel calculations with batching) for solving problems of convex stochastic optimization in spaces of small dimensions. The work of the algorithm is illustrated by a numerical experiment.A primal-dual algorithm for risk minimizationhttps://zbmath.org/1500.900352023-01-20T17:58:23.823708Z"Kouri, Drew P."https://zbmath.org/authors/?q=ai:kouri.drew-p"Surowiec, Thomas M."https://zbmath.org/authors/?q=ai:surowiec.thomasSummary: In this paper, we develop an algorithm to efficiently solve risk-averse optimization problems posed in reflexive Banach space. Such problems often arise in many practical applications as, e.g., optimization problems constrained by partial differential equations with uncertain inputs. Unfortunately, for many popular risk models including the coherent risk measures, the resulting risk-averse objective function is nonsmooth. 
This lack of differentiability complicates the numerical approximation of the objective function as well as the numerical solution of the optimization problem. To address these challenges, we propose a primal-dual algorithm for solving large-scale nonsmooth risk-averse optimization problems. This algorithm is motivated by the classical method of multipliers and by epigraphical regularization of risk measures. As a result, the algorithm solves a sequence of smooth optimization problems using derivative-based methods. We prove convergence of the algorithm even when the subproblems are solved inexactly and conclude with numerical examples demonstrating the efficiency of our method.Asset liability management for the bank of Uganda defined benefits scheme by stochastic programminghttps://zbmath.org/1500.900362023-01-20T17:58:23.823708Z"Mukalazi, Herbert"https://zbmath.org/authors/?q=ai:mukalazi.herbert"Larsson, Torbjörn"https://zbmath.org/authors/?q=ai:larsson.torbjorn"Kasozi, Juma"https://zbmath.org/authors/?q=ai:kasozi.juma"Mayambala, Fred"https://zbmath.org/authors/?q=ai:mayambala.fredSummary: We develop a model for asset liability management of pension funds, which is solved by stochastic programming techniques. Using data provided by the Bank of Uganda Defined Benefits Scheme, which is closed to new members, we obtain the optimal investment policies. Randomly sampled scenario trees using the mean and covariance structure of the return distribution are used for generating the coefficients of the stochastic program. Liabilities are modelled by remaining years of life expectancy and guaranteed period for monthly pension. 
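A standard example of the nonsmoothness discussed above (an illustration, not necessarily the specific risk model treated in the paper) is the conditional value-at-risk, which admits the well-known variational form
\[
\mathrm{CVaR}_\alpha(X) \;=\; \inf_{t \in \mathbb{R}} \Big\{ t + \tfrac{1}{1-\alpha}\, \mathbb{E}\big[(X - t)_+\big] \Big\},
\]
where the positive-part function \((\cdot)_+\) is the source of nondifferentiability; epigraphical reformulations of this kind are the natural starting point for the smoothing that such primal-dual methods exploit.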
We obtain the funding situation of the scheme at each stage, and the terminal cash injection by the sponsor required to meet all future benefit payments, in the absence of contributing members.Risk-averse stochastic programming: time consistency and optimal stoppinghttps://zbmath.org/1500.900372023-01-20T17:58:23.823708Z"Pichler, Alois"https://zbmath.org/authors/?q=ai:pichler.alois"Liu, Rui Peng"https://zbmath.org/authors/?q=ai:liu.ruipeng"Shapiro, Alexander"https://zbmath.org/authors/?q=ai:shapiro.alexanderSummary: This paper addresses time consistency of risk-averse optimal stopping in stochastic optimization. It is demonstrated that time-consistent optimal stopping entails a specific structure of the functionals describing the transition between consecutive stages. The stopping risk measures capture this structural behavior and allow natural dynamic equations for risk-averse decision making over time. Consequently, associated optimal policies satisfy Bellman's principle of optimality, which characterizes optimal policies for optimization by stating that a decision maker should not reconsider previous decisions retrospectively. We also discuss numerical approaches to solving such problems.
For each kind of resource, multiple types are available. For example, registered nurses (RNs) are more expensive to employ than licensed practical nurses (LPNs); however, RNs can serve a greater range of patients than LPNs. Thus, the long-term capacity decision for this firm is the number of each type of resource to employ full time. A full-time resource may, at times, become unavailable (i.e., be absent); this absenteeism is stochastic. When the need for resources cannot be met from the pool of full-time employees, the firm has access to on-demand, part-time resources, who are paid a higher hourly rate than an equivalent full-time resource. On the demand side, the firm receives \textit{referrals} -- requests to commit service to patients over a time window (whose duration is stochastic), which is referred to as an \textit{episode} -- in real time. The referral arrival process is stochastic. A referral is characterized by the revenue it provides to the firm, the resources required to serve that patient, the frequency with which each of these resources is required, and the distribution of the episode duration. The decision to accept or reject a referral has to be instantaneous; if accepted, the service episode starts immediately. We develop a simple solution to the optimization problem, derive a worst-case guarantee on its optimality gap, and demonstrate that this gap vanishes in a meaningful asymptotic regime. 
We also illustrate the impressive performance of our solution numerically on a testbed of problem instances whose input parameters are drawn using publicly available healthcare data.An energy system optimization model accounting for the interrelations of multiple stochastic energy priceshttps://zbmath.org/1500.900392023-01-20T17:58:23.823708Z"Ren, Hongtao"https://zbmath.org/authors/?q=ai:ren.hongtao"Zhou, Wenji"https://zbmath.org/authors/?q=ai:zhou.wenji"Wang, Hangzhou"https://zbmath.org/authors/?q=ai:wang.hangzhou"Zhang, Bo"https://zbmath.org/authors/?q=ai:zhang.bo.10"Ma, Tieju"https://zbmath.org/authors/?q=ai:ma.tiejuSummary: The variation of and the interrelation between different energy markets significantly affect the competitiveness of various energy technologies, and therefore complicate the decision-making problem for a complex energy system consisting of multiple competing technologies, especially in a long-term time frame. The interrelations between these markets have not been accounted for in the existing energy system modelling efforts, leading to a distorted understanding of the market impact on the technological choices and operations in the real world. This study investigates the strategic and operational decision-making problem for such an energy system characterized by three competing technologies from crude oil, natural gas, and coal. A stochastic programming model is constructed by incorporating multiple volatile energy prices interrelated with each other. Oil price is modelled by the mean-reverting Ornstein-Uhlenbeck process and serves as the exogenous variable in the ARIMAX models for natural gas and downstream plastic prices. The \(K\)-means clustering method is employed to extract a handful of distinctive patterns from a large number of simulated price projections to enhance the computing efficiency without losing critical information and insights from the price co-movement.
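The \(K\)-means step just described reduces many simulated price paths to a few representative patterns. A minimal one-dimensional sketch of Lloyd's algorithm (illustrative prices, deterministic initialization at the extremes of the range; not the paper's implementation):

```python
def kmeans_1d(points, k, iters=50):
    """Lloyd's algorithm on scalars; centers start spread over the range."""
    lo, hi = min(points), max(points)
    centers = [lo + i * (hi - lo) / (k - 1) for i in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda j: abs(p - centers[j]))
            clusters[nearest].append(p)
        # Recompute each center as its cluster mean (keep old center if empty).
        centers = [sum(c) / len(c) if c else centers[j]
                   for j, c in enumerate(clusters)]
    return centers

# Two obvious price regimes, around 20 and around 100.
prices = [19, 20, 21, 99, 100, 101]
centers = kmeans_1d(prices, k=2)
```

The cluster centers then serve as the distinctive price patterns fed to the stochastic program.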
The model results suggest that high volatility in an energy market weakens the case for selecting the corresponding technology. The oil-based route, for example, gradually loses its market share to the coal approach, owing to the more volatile oil market. The proposed method is applicable to other problems of the same kind with high-dimensional stochastic variables.Stochastic programming model for production planning with stochastic aggregate demand and spreadsheet-based solution heuristicshttps://zbmath.org/1500.900402023-01-20T17:58:23.823708Z"Saadouli, Nasreddine"https://zbmath.org/authors/?q=ai:saadouli.nasreddineSummary: By discretising the stochastic demand, a deterministic nonlinear programming formulation is developed. Then, a hybrid simulation-optimisation heuristic that capitalises on the nature of the problem is designed. The outcome is an evaluation problem that is efficiently solved using a spreadsheet model. The main contribution of the paper is providing production managers with a tractable formulation of the production planning problem in a stochastic environment and an efficient solution scheme. A key benefit of this approach is that it provides quick near-optimal solutions without requiring in-depth knowledge or significant investments in optimisation techniques and software.Solving geometric programming problems with triangular and trapezoidal uncertainty distributionshttps://zbmath.org/1500.900412023-01-20T17:58:23.823708Z"Mondal, Tapas"https://zbmath.org/authors/?q=ai:mondal.tapas-kumar"Ojha, Akshay Kumar"https://zbmath.org/authors/?q=ai:ojha.akshay-kumar"Pani, Sabyasachi"https://zbmath.org/authors/?q=ai:pani.sabyasachiSummary: The geometric programming problem is an important optimization technique that is often used to solve different nonlinear optimization problems and engineering problems. The geometric programming models that are commonly used are generally based on deterministic and accurate parameters.
However, it is observed that in real-world geometric programming problems, the parameters are frequently inaccurate and ambiguous. In this paper, we consider chance-constrained geometric programming problems with uncertain coefficients, and treat them with geometric programming techniques in an uncertainty-based framework. We show that the associated chance-constrained uncertain geometric programming problem can be converted into a crisp geometric programming problem by using triangular and trapezoidal uncertainty distributions for the uncertain variables. The main aim of this paper is to provide the solution procedures for geometric programming problems under triangular and trapezoidal uncertainty distributions. To show how well the procedures and algorithms work, two numerical examples and an application in the inventory model are given.\(N-1-1\) contingency-constrained unit commitment with renewable integration and corrective actionshttps://zbmath.org/1500.900422023-01-20T17:58:23.823708Z"Zuniga Vazquez, Daniel A."https://zbmath.org/authors/?q=ai:zuniga-vazquez.daniel-a"Ruiz Duarte, Jose L."https://zbmath.org/authors/?q=ai:ruiz-duarte.jose-luis"Fan, Neng"https://zbmath.org/authors/?q=ai:fan.neng"Qiu, Feng"https://zbmath.org/authors/?q=ai:qiu.fengSummary: Meeting the customer's power demands is crucial for energy companies, even when unexpected and consecutive failures are present. This task has become considerably more complex due to the high integration of renewable energies and their intermittent behaviors. Therefore, it is important to achieve reliable power supply based on a criterion closer to real-life system operations and capable of addressing consecutive failures. The \(N-1-1\) contingency involves the loss of a single transmission line or generation unit, followed by system adjustments. Afterward, the power system experiences a subsequent loss of an additional generation unit or transmission line.
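Under the \(N-1-1\) criterion just described, the system must survive a first outage, readjust, and then survive a second outage; the contingency states to check are therefore the ordered pairs of distinct components. A small illustrative enumeration (component names are hypothetical):

```python
from itertools import permutations

def n_1_1_contingencies(components):
    """All ordered (first outage, second outage) pairs of distinct
    components; order matters because the system readjusts in between."""
    return list(permutations(components, 2))

states = n_1_1_contingencies(["line A", "line B", "generator G1"])
```

For \(n\) components this yields \(n(n-1)\) states, which is one reason the resulting robust unit-commitment models become extremely large.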
This paper presents a power system unit commitment problem considering the \(N-1-1\) reliability criterion with operations compliance check on economic dispatch and power flows under contingency states and renewable energy integration. Corrective actions are also included to determine the time that the failed components are restored. To address the complexity caused by renewable energy integration, the reliable unit commitment is achieved under the worst-case renewable output. The formulation results in an extremely large-scale adaptive robust mixed-integer linear programming model. For an efficient solution, a variation of the nested column-and-constraint generation algorithm is designed. Besides using the susceptance and phase angles to model the power flow, the linear sensitivity factors are also applied for improving the computational performance. The proposed models and algorithms are evaluated on modified IEEE 6-bus, 14-bus, and 118-bus test systems to confirm their effectiveness.Limited memory BFGS method for least squares semidefinite programming with banded structurehttps://zbmath.org/1500.900432023-01-20T17:58:23.823708Z"Xue, Wenjuan"https://zbmath.org/authors/?q=ai:xue.wenjuan"Shen, Chungen"https://zbmath.org/authors/?q=ai:shen.chungen"Yu, Zhensheng"https://zbmath.org/authors/?q=ai:yu.zhenshengSummary: This work is intended to solve the least squares semidefinite program with a banded structure. A limited memory BFGS method is presented to solve this structured program of high dimension. In the algorithm, the inverse power iteration and orthogonal iteration are employed to calculate partial eigenvectors instead of full decomposition of \(n\times n\) matrices. One key feature of the algorithm is that it is proved to be globally convergent under inexact gradient information. 
Preliminary numerical results indicate that the proposed algorithm is comparable with the inexact smoothing Newton method on some large instances of the structured problem.A hierarchy of standard polynomial programming formulations for the maximum clique problemhttps://zbmath.org/1500.900442023-01-20T17:58:23.823708Z"Butenko, Sergiy"https://zbmath.org/authors/?q=ai:butenko.sergiy-i"Makovenko, Mykyta"https://zbmath.org/authors/?q=ai:makovenko.mykyta"Pardalos, Miltiades"https://zbmath.org/authors/?q=ai:pardalos.miltiades-pSummary: The maximum clique problem is a classical optimization problem which finds many important applications. Some of the most theoretically interesting and computationally successful approaches to this problem are based on the Motzkin-Straus formulation, expressing the clique number in terms of the maximal value of a standard quadratic program. This intriguing relationship has also motivated significant developments in quadratic optimization, copositive programming, and complexity in nonlinear optimization. Inspired by such developments, this paper establishes a hierarchy of polynomial programming formulations \((\mathbf{P}^k)\), \(k\in\{2,\dots,\omega\}\) for the maximum clique problem, relating the clique number \(\omega\) of a given graph to the global maximal value of a multilinear polynomial function \(f_k(x)\) of degree \(k\) over the standard simplex in \(\mathbb{R}^{|V|}\). The case of \(k=2\) corresponds to the Motzkin-Straus formulation, and the hierarchical feature is in the fact that the set of local maxima of \((\mathbf{P}^{k+1})\) is a subset of the set of local maxima of \((\mathbf{P}^k)\), \(k\in\{2,\dots, \omega-1\}\).
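The base case of this hierarchy is the Motzkin-Straus program: maximize \(x^\top A x\) over the standard simplex, whose optimal value is \(1-1/\omega\) for a graph with adjacency matrix \(A\) and clique number \(\omega\). A minimal sketch using replicator dynamics as the local-search heuristic (a standard device in this literature, not the paper's own algorithm) on a toy graph with \(\omega=3\):

```python
def replicator_motzkin_straus(adj, iters=300):
    """Iterate x_i <- x_i * (A x)_i / (x^T A x) from the simplex barycenter.
    Fixed points are KKT points of max x^T A x over the standard simplex."""
    n = len(adj)
    x = [1.0 / n] * n
    for _ in range(iters):
        ax = [sum(adj[i][j] * x[j] for j in range(n)) for i in range(n)]
        f = sum(x[i] * ax[i] for i in range(n))
        x = [x[i] * ax[i] / f for i in range(n)]
    return x, f

# Triangle {0,1,2} plus a pendant vertex 3 attached to 0; clique number 3.
A = [[0, 1, 1, 1],
     [1, 0, 1, 0],
     [1, 1, 0, 0],
     [1, 0, 0, 0]]
x, f = replicator_motzkin_straus(A)
omega_estimate = round(1 / (1 - f))   # Motzkin-Straus: f* = 1 - 1/omega
```

Here the iterates concentrate on the triangle's support, the objective approaches \(2/3\), and the clique number is recovered from the attained value.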
In particular, every local maximum of \((\mathbf{P}^{\omega})\) is global.SONC optimization and exact nonnegativity certificates via second-order cone programminghttps://zbmath.org/1500.900452023-01-20T17:58:23.823708Z"Magron, Victor"https://zbmath.org/authors/?q=ai:magron.victor"Wang, Jie"https://zbmath.org/authors/?q=ai:wang.jie.2Summary: The second-order cone (SOC) is a class of simple convex cones, and optimizing over them can be done more efficiently than with semidefinite programming. It is interesting both in theory and in practice to investigate which convex cones admit a representation using SOCs, given that they have a strong expressive ability. In this paper, we prove constructively that the cone of sums of nonnegative circuits (SONC) admits a SOC representation. Based on this, we give a new algorithm for unconstrained polynomial optimization via SOC programming. We also provide a hybrid numeric-symbolic scheme which combines the numerical procedure with a rounding-projection algorithm to obtain exact nonnegativity certificates. Numerical experiments demonstrate the efficiency of our algorithm for polynomials with fairly large degree and number of variables.Differential stability of discrete optimal control problems with possibly nondifferentiable costshttps://zbmath.org/1500.900462023-01-20T17:58:23.823708Z"An, D. T. V."https://zbmath.org/authors/?q=ai:an.d-t-v"Huong, V. T."https://zbmath.org/authors/?q=ai:huong.vu-thi"Xu, H. K."https://zbmath.org/authors/?q=ai:xu.haikun|xu.hong-kun|xu.hongkui|xu.huakang|xu.haokun|xu.huaying-karenSummary: In this paper, a family of discrete optimal control problems that depend on parameters is considered. The control problems are reformulated as parametric optimization problems.
By establishing/exploiting abstract results on subdifferentials of optimal value functions of parametric optimization problems, we derive formulas for estimating/computing subdifferentials of optimal value functions of parametric discrete optimal control problems in both nonconvex and convex cases. Namely, for control problems with nonconvex costs, upper-evaluations on the regular subdifferential and the limiting (Mordukhovich) subdifferential of the optimal value function are obtained without using the (strict) differentiability of the costs. Meanwhile, for control problems with convex costs, besides results on estimating/computing the subdifferential (in the sense of convex analysis) of the optimal value function, it is worth pointing out that some properties of the optimal value function are first discussed in this paper.Convex optimization without convexity of constraints on non-necessarily convex sets and its applications in customer satisfaction in automotive industryhttps://zbmath.org/1500.900472023-01-20T17:58:23.823708Z"Jalilian, Kamran"https://zbmath.org/authors/?q=ai:jalilian.kamran"Pirbazari, Kameleh Nasiri"https://zbmath.org/authors/?q=ai:pirbazari.kameleh-nasiriSummary: In the present paper, some necessary and sufficient optimality conditions for a convex optimization problem over inequality constraints are presented which are not necessarily convex and are based on convex intersection of non-necessarily convex sets. The oriented distance function and a characterization of the normal cone of the feasible set are used to obtain the optimality conditions. In the second part of the paper, a non-linear smooth optimization model for customer satisfaction in automotive industry is introduced. 
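The oriented distance function used in the optimality conditions above is the signed distance \(\Delta_\Omega(x)=d(x,\Omega)-d(x,\mathbb{R}^n\setminus\Omega)\), negative inside the set and positive outside. A one-dimensional sketch for an interval \(\Omega=[a,b]\) (illustrative, not the paper's construction):

```python
def oriented_distance_interval(x, a, b):
    """Oriented distance to [a, b] on the real line:
    d(x, [a, b]) - d(x, complement of [a, b])."""
    d_set = max(a - x, x - b, 0.0)           # distance to the interval
    d_comp = max(min(x - a, b - x), 0.0)     # distance to the complement
    return d_set - d_comp

# Outside the interval the value is positive, on the boundary zero,
# and in the interior it is minus the distance to the nearest endpoint.
```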
The results of the first part are applied to solve this problem theoretically.A primal-dual flow for affine constrained convex optimizationhttps://zbmath.org/1500.900482023-01-20T17:58:23.823708Z"Luo, Hao"https://zbmath.org/authors/?q=ai:luo.haoSummary: We introduce a novel primal-dual flow for affine constrained convex optimization problems. As a modification of the standard saddle-point system, our flow model is proved to possess the exponential decay property, in terms of a tailored Lyapunov function. Then two primal-dual methods are obtained from numerical discretizations of the continuous problem, and global nonergodic linear convergence rate is established \textit{via} a discrete Lyapunov function. Instead of solving the subproblem of the primal variable, we apply the semi-smooth Newton iteration to the inner problem with respect to the multiplier, provided that there are some additional properties such as semi-smoothness and sparsity. Finally, numerical tests on the linearly constrained \(l_1-l_2\) minimization and the total-variation based image denoising model have been provided.Discrete Fenchel duality for a pair of integrally convex and separable convex functionshttps://zbmath.org/1500.900492023-01-20T17:58:23.823708Z"Murota, Kazuo"https://zbmath.org/authors/?q=ai:murota.kazuo"Tamura, Akihisa"https://zbmath.org/authors/?q=ai:tamura.akihisaSummary: Discrete Fenchel duality is one of the central issues in discrete convex analysis. The Fenchel-type min-max theorem for a pair of integer-valued \(M^{\natural}\)-convex functions generalizes the min-max formulas for polymatroid intersection and valuated matroid intersection. In this paper we establish a Fenchel-type min-max formula for a pair of integer-valued integrally convex and separable convex functions.
Integrally convex functions constitute a fundamental function class in discrete convex analysis, including both \(M^{\natural}\)-convex functions and \(L^{\natural}\)-convex functions, whereas separable convex functions are characterized as those functions which are both \(M^{\natural}\)-convex and \(L^{\natural}\)-convex. The theorem is proved by revealing a kind of box integrality of subgradients of an integer-valued integrally convex function. The proof is based on the Fourier-Motzkin elimination.Preconditioned three-operator splitting algorithm with applications to image restorationhttps://zbmath.org/1500.900502023-01-20T17:58:23.823708Z"Tang, Yuchao"https://zbmath.org/authors/?q=ai:tang.yuchao"Wen, Meng"https://zbmath.org/authors/?q=ai:wen.meng"Zeng, Tieyong"https://zbmath.org/authors/?q=ai:zeng.tie-yongSummary: In this paper, we present primal-dual splitting algorithms for the convex minimization problem involving smooth functions with Lipschitzian gradient, finite sum of nonsmooth proximable functions, and linear composite functions. Many total variation-based image processing problems are special cases of such problems. The obtained primal-dual splitting algorithms are derived from a preconditioned three-operator splitting algorithm applied to primal-dual optimality conditions in a proper product space. The convergence of the proposed algorithms under appropriate assumptions on the parameters has been proved. 
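The "proximable" nonsmooth terms in such splitting schemes are handled through their proximal maps, which have closed forms for common regularizers. The classic example is soft-thresholding, the proximal map of \(\lambda\|\cdot\|_1\) used in \(l_1\) and total-variation models (a generic sketch, not the paper's preconditioned scheme):

```python
def soft_threshold(v, lam):
    """prox of lam * ||.||_1: componentwise shrink toward zero by lam."""
    return [max(abs(x) - lam, 0.0) * (1 if x > 0 else -1 if x < 0 else 0)
            for x in v]

shrunk = soft_threshold([3.0, -0.5, 1.2], lam=1.0)
```

Entries with magnitude below the threshold are set exactly to zero, which is what produces sparse or piecewise-constant reconstructions.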
Numerical experiments on a novel image restoration problem are presented to demonstrate the efficiency and effectiveness of the proposed algorithms.The projection onto the crosshttps://zbmath.org/1500.900512023-01-20T17:58:23.823708Z"Bauschke, Heinz H."https://zbmath.org/authors/?q=ai:bauschke.heinz-h"Lal, Manish Krishan"https://zbmath.org/authors/?q=ai:krishan-lal.manish"Wang, Xianfu"https://zbmath.org/authors/?q=ai:wang.xianfuSummary: We consider the set of pairs of orthogonal vectors in Hilbert space, which is also called the cross because it is the union of the horizontal and vertical axes in the Euclidean plane when the underlying space is the real line. Crosses, which are nonconvex sets, play a significant role in various branches of nonsmooth analysis such as feasibility problems and optimization problems. In this work, we study crosses and show that in infinite-dimensional settings, they are never weakly (sequentially) closed. Nonetheless, crosses do turn out to be proximinal (i.e., they always admit projections) and we provide explicit formulas for the projection onto the cross in all cases.Strong Condorcet criterion for the linear ordering problemhttps://zbmath.org/1500.900522023-01-20T17:58:23.823708Z"Ando, Kazutoshi"https://zbmath.org/authors/?q=ai:ando.kazutoshi"Sukegawa, Noriyoshi"https://zbmath.org/authors/?q=ai:sukegawa.noriyoshi"Takagi, Shota"https://zbmath.org/authors/?q=ai:takagi.shotaSummary: The linear ordering problem is, given a nonnegative square matrix, to find a simultaneous permutation of the row and column indices such that the sum of the elements below the main diagonal of the permuted matrix is minimized. The linear ordering problem is an NP-hard optimization problem with a wide variety of applications, including triangulation of input-output tables and aggregation of individual preferences. In the context of the preference aggregation, \textit{M. Truchon} [An extension of the Condorcet criterion and Kemeny orders, Techn. 
Rep., cahier 98--15 du Centre de Recherche en Économie et Finance Appliquées, Université Laval, Québec, Canada (1998)] showed that every optimal solution of a linear ordering problem satisfies a property called the extended Condorcet criterion, which states that there exists an ordered partition of the row and column index set of the given matrix such that the relative order of every pair of indices belonging to different components of the partition is fixed for all optimal solutions. Such fixing is desirable for investigating the whole set of the optimal solutions since we can reduce the computational time for enumerating them. In this study, we show that the optimal solutions of the linear ordering problem have an even stronger property than the extended Condorcet criterion, which enables us to have more pairs of indices whose relative orders are fixed for all optimal solutions.Dynamic stochastic matching under limited timehttps://zbmath.org/1500.900532023-01-20T17:58:23.823708Z"Aouad, Ali"https://zbmath.org/authors/?q=ai:aouad.ali"Sarıtaç, Ömer"https://zbmath.org/authors/?q=ai:saritac.omerSummary: In centralized matching markets such as carpooling platforms and kidney exchange schemes, new participants constantly enter the market and remain available for potential matches during a limited period of time. To reach an efficient allocation, the ``timing'' of the matching decisions is a critical aspect of the platform's operations. There is a fundamental tradeoff between increasing market thickness and mitigating the risk that participants abandon the market. Nonetheless, the dynamic properties of matching markets have been mostly overlooked in the algorithmic literature. In this paper, we introduce a general dynamic matching model over edge-weighted graphs, where the agents' arrivals and abandonments are stochastic and heterogeneous. 
Our main contribution is to design simple matching algorithms that admit strong worst-case performance guarantees for a broad class of graphs. In contrast, we show that the performance of widely used batching algorithms can be arbitrarily bad on certain graph-theoretic structures motivated by carpooling services. Our approach involves the development of a host of new techniques, including linear programming benchmarks, value function approximations, and proxies for continuous-time Markov chains, which may be of broader interest. In extensive experiments, we simulate the matching operations of a car-pooling platform using real-world taxi demand data. The newly developed algorithms can significantly improve cost efficiency against batching algorithms.On the rectangular knapsack problemhttps://zbmath.org/1500.900542023-01-20T17:58:23.823708Z"Bökler, Fritz"https://zbmath.org/authors/?q=ai:bokler.fritz"Chimani, Markus"https://zbmath.org/authors/?q=ai:chimani.markus"Wagner, Mirko H."https://zbmath.org/authors/?q=ai:wagner.mirko-hSummary: A recent paper by \textit{B. Schulze} et al. [Math. Methods Oper. Res. 92, No. 1, 107--132 (2020; Zbl 1454.90078)] presented the Rectangular Knapsack Problem (RKP) as a crucial subproblem in the study on the Cardinality-constrained Bi-objective Knapsack Problem (CBKP). To this end, they started an investigation into its complexity and approximability. The key results are an NP-hardness proof for a more general scenario than RKP, and a 4.5-approximation for RKP, raising the question of improvements for either result. 
In this note we settle both questions conclusively: we show that (a) RKP is indeed NP-hard in the considered setting (and even in more restricted settings), and (b) there exists both a pseudopolynomial algorithm and a fully-polynomial time approximation scheme (i.e., efficient approximability within any desired ratio \(\alpha >1\)) for RKP.The effective BRKGA algorithm for the \(k\)-medoids clustering problemhttps://zbmath.org/1500.900552023-01-20T17:58:23.823708Z"Brito, Jose Andre"https://zbmath.org/authors/?q=ai:brito.jose-andre-m"Semaan, Gustavo"https://zbmath.org/authors/?q=ai:semaan.gustavo-silva"Fadel, Augusto"https://zbmath.org/authors/?q=ai:fadel.augusto-cesarSummary: This paper presents a biased random-key genetic algorithm for the \(k\)-medoids clustering problem. A novel heuristic operator was implemented and combined with a parallelized local search procedure. Experiments carried out on fifty literature data sets of small, medium, and large sizes, considering several numbers of clusters, showed that the proposed algorithm outperformed eight other algorithms, for example, the classic PAM and CLARA algorithms. Furthermore, with the results of a linear integer programming formulation, we found that our algorithm obtained the global optimal solutions for most cases and, despite its stochastic nature, presented stability in terms of quality of the solutions obtained and the number of generations required to produce such solutions.
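The \(k\)-medoids objective behind PAM and the BRKGA above selects \(k\) data points as centers so as to minimize the total dissimilarity to the nearest medoid. For tiny instances the optimum can be checked by exhaustive search (illustrative one-dimensional data; real instances require the heuristics discussed):

```python
from itertools import combinations

def kmedoids_exhaustive(points, k):
    """Brute-force k-medoids: try every k-subset of points as medoids
    and keep the subset minimizing total distance to the nearest medoid."""
    def cost(medoids):
        return sum(min(abs(p - m) for m in medoids) for p in points)
    return min(combinations(points, k), key=cost)

best = kmedoids_exhaustive([0, 1, 2, 10, 11, 12], k=2)
```

Unlike \(k\)-means centers, medoids are always actual data points, which is why the problem is combinatorial.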
In addition, considering the solutions (clusterings) produced by the algorithms, a relative validation index (average silhouette) was applied, where, again, it was observed that our method performed well, producing clusters with a good structure.Open-end bin packing: new and old analysis approacheshttps://zbmath.org/1500.900562023-01-20T17:58:23.823708Z"Epstein, Leah"https://zbmath.org/authors/?q=ai:epstein.leahSummary: We analyze a recently introduced concept, called the price of clustering, for variants of bin packing called open-end bin packing problems (OEBP). Input items have sizes, and they also belong to a certain number of types. The new concept of price of clustering deals with the comparison of optimal solutions for the cases where items of distinct types can and cannot be packed together, respectively. The problem is related to greedy bin packing algorithms and to batched bin packing, and we discuss some of those concepts as well. We analyze max-OEBP, where a packed bin is valid if, excluding its largest item, the total size of its items is below 1. For this variant, we study the case of general item sizes, and the parametric case with bounded item sizes, which shows the effect of small items.
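The max-OEBP validity rule just stated is easy to encode: a bin is feasible when its total size minus its largest item is below 1, so the largest item may "overhang" the bin. A minimal check (item sizes are illustrative):

```python
def max_oebp_valid(items):
    """A bin is feasible for max-OEBP if, excluding its largest item,
    the remaining total size is below 1."""
    return sum(items) - max(items) < 1

ok = max_oebp_valid([0.9, 0.4, 0.3])    # total 1.6, minus 0.9 leaves 0.7 < 1
bad = max_oebp_valid([0.9, 0.6, 0.6])   # total 2.1, minus 0.9 leaves 1.2
```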
Finally, we briefly discuss min-OEBP, where a bin is valid if the total size of its items excluding the smallest item is strictly below 1, which is known to be an entirely different problem.Minimizing submodular functions on diamonds via generalized fractional matroid matchingshttps://zbmath.org/1500.900572023-01-20T17:58:23.823708Z"Fujishige, Satoru"https://zbmath.org/authors/?q=ai:fujishige.satoru"Király, Tamás"https://zbmath.org/authors/?q=ai:kiraly.tamas"Makino, Kazuhisa"https://zbmath.org/authors/?q=ai:makino.kazuhisa"Takazawa, Kenjiro"https://zbmath.org/authors/?q=ai:takazawa.kenjiro"Tanigawa, Shin-ichi"https://zbmath.org/authors/?q=ai:tanigawa.shin-ichiSummary: In this paper we show the first polynomial-time algorithm for the problem of minimizing submodular functions on the product of diamonds of finite size. This submodular function minimization problem is reduced to the membership problem for an associated polyhedron, which is equivalent to the optimization problem over the polyhedron, based on the ellipsoid method. The latter optimization problem is a generalization of the weighted fractional matroid matching problem. We give a combinatorial polynomial-time algorithm for this optimization problem by extending the result by \textit{D. Gijswijt} and \textit{G. Pap} [J. Comb. Theory, Ser. B 103, No. 4, 509--520 (2013; Zbl 1301.05057)].Packing algorithms for arborescences (and spanning trees) in capacitated graphshttps://zbmath.org/1500.900582023-01-20T17:58:23.823708Z"Gabow, Harold N."https://zbmath.org/authors/?q=ai:gabow.harold-n"Manu, K. S."https://zbmath.org/authors/?q=ai:manu.k-sSummary: In a digraph with real-valued edge capacities, we pack the greatest number of arborescences in time \(O(n^3m \log (n^2/m))\); the packing uses at most \(m\) distinct arborescences. Here \(n\) and \(m\) denote the number of vertices and edges respectively. 
Similar results hold for integral packing: we pack the greatest number of arborescences in time \(O(\min \{n, \log (nN)\} n^2m \log (n^2/m))\), using at most \(m+n -2\) distinct arborescences; here \(N\) denotes the largest capacity. These results improve all previous strong- and weak-polynomial bounds for capacitated digraphs. The algorithm extends to related problems, including packing spanning trees in an undirected capacitated graph, and covering such graphs by forests. The algorithm provides a new proof of Edmonds' theorem for arborescence packing, for both integral and real capacities. The algorithm works by maintaining a certain laminar family of sets.
For the entire collection see [Zbl 0875.90002].Real time read-frequency optimization for railway monitoring systemhttps://zbmath.org/1500.900592023-01-20T17:58:23.823708Z"Jemmali, Mahdi"https://zbmath.org/authors/?q=ai:jemmali.mahdi"Melhim, Loai Kayed B."https://zbmath.org/authors/?q=ai:melhim.loai-kayed-b"al Fayez, Fayez"https://zbmath.org/authors/?q=ai:al-fayez.fayezSummary: Trains have a key role in transporting people and goods with the option of moving from source to destinations by passing through several stations, with time-based features like date scheduling and known arrival times, which makes time a critical factor. The main challenge here is to ensure that the train trip or train schedules are not affected or delayed in any way during the whole train trip, by giving the control unit in the railway system the required time to process requests regarding all collected data. This is an NP-hard problem, whose optimal solution handles all collected data and all service requests by the control unit of the railway system. Operational research is used to solve this problem by developing many heuristics for the tasks of real-time systems, producing a significant time optimization in railway systems. To solve this problem, the proposed approach employs optimization by adapting 22 heuristics based on two categories of algorithms, the separated blocks category algorithm and the blocks interference category algorithm. The proposed approach receives data from many different sources at the same time, then collects the received data and saves it to a database in the railway system control unit.
Experimental results showed the effectiveness of the developed heuristics; moreover, the proposed approach minimized the maximum completion time elapsed in handling the received requests.Technical note -- Online hypergraph matching with delayshttps://zbmath.org/1500.900602023-01-20T17:58:23.823708Z"Pavone, Marco"https://zbmath.org/authors/?q=ai:pavone.marco.1|pavone.marco"Saberi, Amin"https://zbmath.org/authors/?q=ai:saberi.amin"Schiffer, Maximilian"https://zbmath.org/authors/?q=ai:schiffer.maximilian"Tsao, Matt Wu"https://zbmath.org/authors/?q=ai:tsao.matt-wuSummary: We study an online hypergraph matching problem inspired by ridesharing and delivery applications. The vertices of a hypergraph are revealed sequentially and must be matched shortly thereafter, otherwise they will leave the system in favor of an outside option. Hyperedges are revealed to the algorithm once all of their vertices have arrived, and can only be included into the matching before any of their vertices leave the system. The cardinalities of hyperedges are bounded by a small constant which represents the capacity of service vehicles. We study utility maximization and cost minimization objectives in this model. In the utility maximization setting, we present a polynomial time algorithm which provably achieves the optimal competitive ratio provided that the vehicle capacity is at least 3. For the cost minimization setting, we assume costs are monotone, which is a natural assumption in ridesharing and delivery problems. When the vehicle capacity is 2, we present a polynomial-time thresholding algorithm and prove that it has the optimal competitive ratio among deterministic algorithms. For higher vehicle capacities, we show that the performance of a randomized batching algorithm is within a small constant of the optimal competitive ratio achievable in polynomial time.Fast algorithm for the quadratic knapsack problemhttps://zbmath.org/1500.900612023-01-20T17:58:23.823708Z"Plotkin, A. V."https://zbmath.org/authors/?q=ai:plotkin.a-vSummary: The paper considers the quadratic programming problem with a strictly convex separable objective function, a single linear constraint, and two-sided constraints on variables. This problem is commonly called the convex separable quadratic knapsack problem (CQKnP). We are interested in an algorithm for solving the CQKnP with linear complexity. The works devoted to this topic contain inaccuracies in the description of algorithms and inefficient implementations. In this work, the existing difficulties are overcome.A tabu search algorithm to solve a green logistics bi-objective bi-level problemhttps://zbmath.org/1500.900622023-01-20T17:58:23.823708Z"Camacho-Vallejo, José-Fernando"https://zbmath.org/authors/?q=ai:camacho-vallejo.jose-fernando"López-Vera, Lilian"https://zbmath.org/authors/?q=ai:lopez-vera.lilian"Smith, Alice E."https://zbmath.org/authors/?q=ai:smith.alice-e"González-Velarde, José-Luis"https://zbmath.org/authors/?q=ai:gonzalez-velarde.jose-luisSummary: This paper addresses a supply chain situation, in which a company distributes commodities over a selected subset of customers while a manufacturer produces the commodities demanded by the customers. The distributor company has two objectives: the maximization of the profit gained by the distribution process and the minimization of \(CO_2\) emissions. The latter is important due to the regulations imposed by the government. A compromise between the two objectives exists: profit maximization alone will attempt to include as many customers as possible, but longer routes will then be needed, causing more \(CO_2\) emissions. The manufacturer aims to minimize its manufacturing and shipping costs. Since a predefined hierarchy between both companies exists in the supply chain, a bi-level programming approach is employed. This problem is modelled as a bi-level programming problem with two objectives in the upper level and a single objective in the lower level.
The upper level is associated with the distributor, while the lower level is associated with the manufacturer. Due to the inherent complexity of solving this problem optimally, a heuristic scheme is proposed. A nested bi-objective tabu search algorithm is designed to obtain non-dominated bi-level feasible solutions regarding the upper level. Considering both objectives of the distributor simultaneously allows us to focus on the minimization of \(CO_2\) emissions caused by the supply chain while bearing in mind the distributor's profit. Numerical experimentation shows that the Pareto frontiers obtained by the proposed algorithm provide good alternatives for the decision-making process, and some managerial insights are also given.Solving fuzzy multi objective linear programming problems: an \(\alpha\)-cut approachhttps://zbmath.org/1500.900632023-01-20T17:58:23.823708Z"Fayyaz Rouhbakhsh, Fatemeh"https://zbmath.org/authors/?q=ai:fayyaz-rouhbakhsh.fatemeh"Hassanpour, Hassan"https://zbmath.org/authors/?q=ai:hassanpour.hassan"Effati, Sohrab"https://zbmath.org/authors/?q=ai:effati.sohrabSummary: In this paper, as an extension of Pareto optimality concepts for multi objective programming problems to fuzzy multi objective linear programming (FMOLP) problems, different types of Pareto optimal solutions (POSs), namely, weakly, strictly, and properly POSs are defined on the basis of \(\alpha\)-cuts of fuzzy numbers. Then a method for solving FMOLP problems is proposed to obtain them. It is shown that they can be obtained by solving some non-fuzzy multi objective linear programming problems.
A numerical example is solved to illustrate the method.An algorithm for quadratically constrained multi-objective quadratic fractional programming with pentagonal fuzzy numbershttps://zbmath.org/1500.900642023-01-20T17:58:23.823708Z"Goyal, Vandana"https://zbmath.org/authors/?q=ai:goyal.vandana"Rani, Namrata"https://zbmath.org/authors/?q=ai:rani.namrata"Gupta, Deepak"https://zbmath.org/authors/?q=ai:gupta.deepak-kumarSummary: This study proposes a methodology to obtain an efficient solution for a multi-objective quadratic fractional programming model with pentagonal fuzzy numbers as coefficients in all the objective functions and constraints. The proposed approach consists of three stages. In the first stage, defuzzification of the coefficients is carried out using the mean method of \(\alpha\)-cut. Then, in the second stage, a crisp multi-objective quadratic fractional programming model (MOQFP) is constructed to obtain a non-fractional model based on an iterative parametric approach. In the final stage, this multi-objective non-fractional model is transformed to obtain a model with a single objective by applying the \(\varepsilon\)-constraint method. This final model is then solved to get the desired solution. In addition, an algorithm and flowchart expressing the methodology are provided to present a clear picture of the approach. Finally, a numerical example is given to illustrate the complete approach.The set of all the possible compromises of a multi-level multi-objective linear programming problemhttps://zbmath.org/1500.900652023-01-20T17:58:23.823708Z"Kaci, Mustapha"https://zbmath.org/authors/?q=ai:kaci.mustapha"Radjef, Sonia"https://zbmath.org/authors/?q=ai:radjef.sonia(no abstract)Approximations for Pareto and proper Pareto solutions and their KKT conditionshttps://zbmath.org/1500.900662023-01-20T17:58:23.823708Z"Kesarwani, P."https://zbmath.org/authors/?q=ai:kesarwani.poonam"Shukla, P.
K."https://zbmath.org/authors/?q=ai:shukla.pradyumn-kumar"Dutta, J."https://zbmath.org/authors/?q=ai:dutta.joydeep"Deb, K."https://zbmath.org/authors/?q=ai:deb.kalyanmoySummary: In this article, we view the Pareto and weak Pareto solutions of multiobjective optimization by using an approximate version of KKT type conditions. In one of our main results, Ekeland's variational principle for vector-valued maps plays a key role. We also focus on an improved version of Geoffrion proper Pareto solutions and its approximation, and characterize them through saddle point and KKT type conditions.Application of goal programming in the textile apparel industry to resolve production planning problems a meta-goal programming technique using weightshttps://zbmath.org/1500.900672023-01-20T17:58:23.823708Z"Malik, Zahid Amin"https://zbmath.org/authors/?q=ai:malik.zahid-amin"Kumar, Rakesh"https://zbmath.org/authors/?q=ai:kumar.rakesh.5|kumar.rakesh.3|kumar.rakesh.4|kumar.rakesh.1|kumar.rakesh.6|kumar.rakesh.2"Pathak, Govind"https://zbmath.org/authors/?q=ai:pathak.govind"Roy, Haridas"https://zbmath.org/authors/?q=ai:roy.haridasSummary: In the present business environment, rapidly developing technology and the competitive world market pose challenges to the available assets of industries. Hence, industries need to allocate and use available assets at the optimum level. Thus, industrialists must create a good decision plan to guide their performance in the production sector. As a result, the present study applies the Meta-Goal Programming technique to attain several objectives simultaneously in the textile production sector. The importance of this study lies in pursuing different objectives simultaneously, which has been almost ignored till now. The production scheduling problem in a textile firm is used to illustrate the practicability and mathematical validity of the suggested approach.
Analysis of the results obtained demonstrates that the solution met all three meta-goals, with some original goals met partially. An analysis of the sensitivity of the approach to the weights of the preferences was conducted.A mixed finite differences scheme for gradient approximationhttps://zbmath.org/1500.900682023-01-20T17:58:23.823708Z"Boresta, Marco"https://zbmath.org/authors/?q=ai:boresta.marco"Colombo, Tommaso"https://zbmath.org/authors/?q=ai:colombo.tommaso"De Santis, Alberto"https://zbmath.org/authors/?q=ai:de-santis.alberto"Lucidi, Stefano"https://zbmath.org/authors/?q=ai:lucidi.stefanoThe authors focus on linear functionals defining an approximate version of the gradient of a function. These functionals are often used when dealing with optimization problems where the computation of the gradient of the objective function is costly or the objective function values are affected by some noise. These functionals have been recently considered to estimate the gradient of the objective function by the expected value of the function variations in the space of directions. The expected value is then approximated by a sample average over a proper (random) choice of sample directions in the domain of integration. In this way, the approximation error is characterized by statistical properties of the sample average estimate, typically its variance. Therefore, while useful and attractive bounds for the error variance can be expressed in terms of the number of function evaluations, nothing can be said about the error of a single experiment, which could be quite large. This work instead is aimed at deriving an approximation scheme for linear functionals approximating the gradient, whose approximation error can be characterized from a deterministic point of view in the case of noise-free data.
The previously mentioned linear functionals are no longer considered as expected values over the space of directions, but rather as the filtered derivative of the objective function by a Gaussian kernel. By using this new approach, a gradient estimation based on a suitable linear combination of central finite differences at different step sizes is proposed, and deterministic bounds that do not depend on the particular sample of points considered are computed. In the noisy setting, on the other hand, the variance of the estimation error of the proposed method is shown to be strictly lower than that of the central finite difference scheme. Numerical experiments on a significant benchmark of test functions are encouraging, showing good performance compared to methods commonly used in the literature, also in the noisy setting.
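The idea of combining central finite differences at different step sizes can be illustrated with a small sketch. The Richardson-type weights used below are an illustrative choice to cancel the leading \(O(h^2)\) error term, not the kernel-derived coefficients of the paper:

```python
import numpy as np

def central_diff_grad(f, x, h):
    """Standard central finite-difference gradient estimate with step h."""
    g = np.zeros_like(x, dtype=float)
    for i in range(len(x)):
        e = np.zeros_like(x, dtype=float)
        e[i] = h
        g[i] = (f(x + e) - f(x - e)) / (2.0 * h)
    return g

def mixed_diff_grad(f, x, steps, weights):
    """Weighted combination of central differences at several step sizes.
    With steps (h, 2h) and weights (4/3, -1/3) the O(h^2) term cancels;
    these weights are an illustration, not the authors' derived scheme."""
    return sum(w * central_diff_grad(f, x, h) for w, h in zip(weights, steps))
```

For a quartic such as \(f(x)=\sum_i x_i^4\), the single central difference carries an error \(4x_i h^2\) per coordinate, while the two-step combination above is exact up to round-off.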
Reviewer: Paulo Mbunga (Kiel)A limited-memory Riemannian symmetric rank-one trust-region method with a restart strategyhttps://zbmath.org/1500.900692023-01-20T17:58:23.823708Z"Huang, Wen"https://zbmath.org/authors/?q=ai:huang.wen"Gallivan, Kyle A."https://zbmath.org/authors/?q=ai:gallivan.kyle-aSummary: Limited-memory versions of quasi-Newton methods are efficient approaches to solving large-scale optimization problems in a Euclidean space. In particular, a quasi-Newton symmetric rank-one update used in a trust-region setting has proven to be an effective method. In this paper, a limited-memory Riemannian symmetric rank-one trust-region method with a restart strategy is proposed by combining the intrinsic representation of tangent vectors with a recently proposed efficient solver for the Euclidean limited-memory symmetric rank-one trust-region subproblem. The global convergence is established under the assumptions that the vector transport is isometric and the cost function is Lipschitz continuously differentiable. The computational and spatial complexity are analyzed, and detailed implementations are described. The performance of the proposed method is compared to limited-memory Riemannian BFGS method and other state-of-the-art methods using problems from matrix completion, independent component analysis, phase retrieval, and blind deconvolution. 
The proposed method is novel for problems on Riemannian and Euclidean spaces.A benchmark study on steepest descent and conjugate gradient methods-line search conditions combinations in unconstrained optimizationhttps://zbmath.org/1500.900702023-01-20T17:58:23.823708Z"Kiran, Kadir"https://zbmath.org/authors/?q=ai:kiran.kadir(no abstract)B-subdifferential of the projection onto the generalized spectraplexhttps://zbmath.org/1500.900712023-01-20T17:58:23.823708Z"Lin, Youyicun"https://zbmath.org/authors/?q=ai:lin.youyicun"Hu, Shenglong"https://zbmath.org/authors/?q=ai:hu.shenglongSummary: In this paper, a complete characterization of the B-subdifferential with explicit formula for the projection mapping onto the generalized spectraplex (aka generalized matrix simplex) is derived. The derivation is based on complete characterizations of the B-subdifferential as well as the Han-Sun Jacobian of the projection mapping onto the generalized simplex. The formula provides tools for further computations and nonsmooth analysis involving this projection.An efficient gradient method with approximately optimal stepsizes based on regularization models for unconstrained optimizationhttps://zbmath.org/1500.900722023-01-20T17:58:23.823708Z"Liu, Zexian"https://zbmath.org/authors/?q=ai:liu.zexian"Chu, Wangli"https://zbmath.org/authors/?q=ai:chu.wangli"Liu, Hongwei"https://zbmath.org/authors/?q=ai:liu.hongwei|liu.hongwei.1Summary: It is widely accepted that the stepsize is of great significance to gradient methods. An efficient gradient method with approximately optimal stepsizes mainly based on regularization models is proposed for unconstrained optimization. More specifically, if the objective function is not close to a quadratic function on the line segment between the current and latest iterates, a regularization model is carefully exploited to generate an approximately optimal stepsize. Otherwise, a quadratic approximation model is used.
In addition, when the curvature is non-positive, a special regularization model is developed. The convergence of the proposed method is established under some weak conditions. Extensive numerical experiments indicate that the proposed method is very promising. Due to the surprising efficiency, we believe that gradient methods with approximately optimal stepsizes can become strong candidates for large-scale unconstrained optimization.Robust grey wolf optimizer for multimodal optimizations: a cross-dimensional coordination approachhttps://zbmath.org/1500.900732023-01-20T17:58:23.823708Z"Wang, Bingkun"https://zbmath.org/authors/?q=ai:wang.bingkun"Liu, Lei"https://zbmath.org/authors/?q=ai:liu.lei|liu.lei.2|liu.lei.3|liu.lei.1"Li, Yuchong"https://zbmath.org/authors/?q=ai:li.yuchong"Khishe, Mohammad"https://zbmath.org/authors/?q=ai:khishe.mohammadSummary: This paper proposes a variant of the Gray Wolf Optimizer (GWO) called the Cross-Dimensional Coordination Gray Wolf Optimizer (CDCGWO), which utilizes a novel learning technique in which all prior best knowledge gained by candidate solutions (wolves) is used to update the best solution (prey positions). This method maintains the wolves' diversity, preventing premature convergence in multimodal optimization tasks. In addition, CDCGWO provides a unique constraint management approach for real-world constrained engineering optimization problems. The CDCGWO's performance on fifteen widely used multimodal numerical test functions, ten complex IEEE CEC06-2019 suite tests, a randomly generated landscape, and twelve constrained real-world optimization problems in a variety of engineering fields, including industrial chemical producer, power system, process design, and synthesis, mechanical design, power-electronic, and livestock feed ration was evaluated.
For all 25 numerical functions and 12 engineering problems, the CDCGWO beats all benchmarks and sixteen out of eighteen state-of-the-art algorithms by an average Friedman test rank higher than 78 percent, while exceeding jDE100 and DISHchain1e+12 by 21\% and 39\%, respectively. For all numerical functions and engineering problems, the Bonferroni-Dunn and Holm's tests indicated that CDCGWO is statistically superior to all benchmark and state-of-the-art algorithms, while its performance is statistically equivalent to jDE100 and DISHchain1e+12. The proposed CDCGWO might be utilized to solve challenges involving multimodal search spaces. In addition, compared to rival benchmarks, CDCGWO is suitable for a broader range of engineering applications.Dual-density-based reweighted \(\ell_1\)-algorithms for a class of \(\ell_0\)-minimization problemshttps://zbmath.org/1500.900742023-01-20T17:58:23.823708Z"Xu, Jialiang"https://zbmath.org/authors/?q=ai:xu.jialiang"Zhao, Yun-Bin"https://zbmath.org/authors/?q=ai:zhao.yunbinLet \(A\in \mathbb{R}^{m\times n}\) (with \(m\ll n\)), \(B\in \mathbb{R}^{l\times n}\) (with \(l\leq n\)), \(y\in \mathbb{R}^{m}\), \(b\in \mathbb{R}^{l}\), and \(\epsilon \geq 0\). The \(l_{0}\)-minimization problem considered in this paper consists in minimizing the number of nonzero components of \(x\) subject to \(\left\Vert y-Ax\right\Vert _{2}\leq \epsilon \) and \(Bx\leq b\). To this problem, the authors associate the weighted \(l_{1}\)-minimization problem consisting in minimizing \(w^{T}\left\vert x\right\vert \) subject to the same constraints, \(w\in \mathbb{R}_{+}^{n}\) being a weight vector. Under suitable assumptions, the authors formulate a bilevel optimization problem for obtaining an optimal weight \(w\), propose several convex relaxations of this bilevel problem, present so-called dual-density-based algorithms for solving them, and report the outcomes of some numerical experiments aimed at comparing such algorithms.
Reviewer: Juan-Enrique Martínez-Legaz (Barcelona)A valuable remark on Lipschitz in the first variable definition and system of nonlinear variational inequalitieshttps://zbmath.org/1500.900752023-01-20T17:58:23.823708Z"Benhadid, Ayache"https://zbmath.org/authors/?q=ai:benhadid.ayache(no abstract)Adaptive discretization-based algorithms for semi-infinite programs with unbounded variableshttps://zbmath.org/1500.900762023-01-20T17:58:23.823708Z"Jungen, Daniel"https://zbmath.org/authors/?q=ai:jungen.daniel"Djelassi, Hatim"https://zbmath.org/authors/?q=ai:djelassi.hatim"Mitsos, Alexander"https://zbmath.org/authors/?q=ai:mitsos.alexanderSummary: The proof of convergence of adaptive discretization-based algorithms for semi-infinite programs (SIPs) usually relies on compact host sets for the upper- and lower-level variables. This assumption is violated in some applications, and we show that indeed convergence problems can arise when discretization-based algorithms are applied to SIPs with unbounded variables. To mitigate these convergence problems, we first examine the underlying assumptions of adaptive discretization-based algorithms. We do this paradigmatically using the lower-bounding procedure of \textit{A. Mitsos} [Optimization 60, No. 10--12, 1291--1308 (2011; Zbl 1231.90361)], which uses the algorithm proposed by \textit{J. W. Blankenship} and \textit{J. E. Falk} [J. Optim. Theory Appl. 19, 261--281 (1976; Zbl 0307.90071)]. It is noteworthy that the considered procedure and assumptions are essentially the same in the broad class of adaptive discretization-based algorithms. We give sharper, slightly relaxed, assumptions with which we achieve the same convergence guarantees. We show that the convergence guarantees also hold for certain SIPs with unbounded variables based on these sharpened assumptions. However, these sharpened assumptions may be difficult to prove a priori. 
For these cases, we propose additional, stricter assumptions which might be easier to prove and which imply the sharpened assumptions. Using these additional assumptions, we present numerical case studies with unbounded variables. Finally, we review which applications are tractable with the proposed additional assumptions.Solving continuous set covering problems by means of semi-infinite optimization. With an application in product portfolio optimizationhttps://zbmath.org/1500.900772023-01-20T17:58:23.823708Z"Krieg, Helene"https://zbmath.org/authors/?q=ai:krieg.helene"Seidel, Tobias"https://zbmath.org/authors/?q=ai:seidel.tobias"Schwientek, Jan"https://zbmath.org/authors/?q=ai:schwientek.jan"Küfer, Karl-Heinz"https://zbmath.org/authors/?q=ai:kufer.karl-heinzSummary: This article introduces the new class of continuous set covering problems. These optimization problems result, among others, from product portfolio design tasks with products depending continuously on design parameters and the requirement that the product portfolio satisfies customer specifications that are provided as a compact set. We show that the problem can be formulated as a semi-infinite optimization problem (SIP). Yet, the inherent non-smoothness of the semi-infinite constraint function hinders the straightforward application of standard methods from semi-infinite programming. We suggest an algorithm combining adaptive discretization of the infinite index set and replacement of the non-smooth constraint function by a two-parametric smoothing function. Under mild requirements, the algorithm converges, and the distance of a current iterate can be bounded in terms of the discretization and smoothing errors.
By means of a numerical example from product portfolio optimization, we demonstrate that the proposed algorithm only needs relatively few discretization points and thus keeps the problem dimensions small.The facets of the spanning trees polytopehttps://zbmath.org/1500.900782023-01-20T17:58:23.823708Z"Chaourar, Brahim"https://zbmath.org/authors/?q=ai:chaourar.brahimSummary: Let \(G=(V, E)\) be an undirected graph. The spanning trees polytope \(P(G)\) is the convex hull of the characteristic vectors of all spanning trees of \(G\). In this paper, we describe all facets of \(P(G)\) as a consequence of the facets of the bases polytope \(P(M)\) of a matroid \(M\), i.e., the convex hull of the characteristic vectors of all bases of \(M\).Minimum cost dynamic flows: the series-parallel casehttps://zbmath.org/1500.900792023-01-20T17:58:23.823708Z"Klinz, Bettina"https://zbmath.org/authors/?q=ai:klinz.bettina"Woeginger, Gerhard J."https://zbmath.org/authors/?q=ai:woeginger.gerhard-jSummary: A dynamic network consists of a directed graph with capacities, costs and integral transit times on the arcs. In the \textit{minimum cost dynamic flow problem} MCDFP, the goal is to compute for a given dynamic network with source \(s\), sink \(t\) and two integers \(\upsilon\) and \(T\), a feasible dynamic flow from \(s\) to \(t\) of value \(\upsilon\), obeying the time bound \(T\), and having minimum total cost. MCDFP contains as subproblems the \textit{minimum cost maximum dynamic flow problem} (fix \(\upsilon\) to the maximum amount of flow that can be sent from \(s\) to \(t\) within time \(T)\), and the \textit{minimum cost quickest flow problem} (fix \(T\) to the minimum time needed for sending \(\upsilon\) units of flow from \(s\) to \(t)\). We first prove that both subproblems are NP-hard even on two-terminal series-parallel graphs with unit capacities. 
As our main result, we formulate a greedy algorithm for MCDFP and provide a full characterization, via forbidden subgraphs, of the class \(\mathcal{G}\) of graphs for which this greedy algorithm always yields an optimum solution (for arbitrary choices of problem parameters). \(\mathcal{G}\) is a subclass of the class of two-terminal series-parallel graphs. It is shown that the greedy algorithm solves MCDFP restricted to graphs in \(\mathcal{G}\) in polynomial time.
For the entire collection see [Zbl 0875.90002].Finding all minimum cost flows and a faster algorithm for the \(K\) best flow problemhttps://zbmath.org/1500.900802023-01-20T17:58:23.823708Z"Könen, David"https://zbmath.org/authors/?q=ai:konen.david"Schmidt, Daniel"https://zbmath.org/authors/?q=ai:schmidt.daniel-f|schmidt.daniel-d|schmidt.daniel-r"Spisla, Christiane"https://zbmath.org/authors/?q=ai:spisla.christianeSummary: This paper addresses the problem of determining all optimal integer solutions of a linear integer network flow problem, which we call the \textit{all optimal integer flow (AOF)} problem. We derive an \(\mathcal{O} (F (m + n) + m n + M)\) time algorithm to determine all \(F\) many optimal integer flows in a directed network with \(n\) nodes and \(m\) arcs, where \(M\) is the best time needed to find one minimum cost flow. We remark that stopping Hamacher's well-known method for the determination of the \(K\) best integer flows [\textit{H. W. Hamacher}, Ann. Oper. Res. 57, 65--72 (1995; Zbl 0838.90033)] at the first sub-optimal flow results in an algorithm with a running time of \(\mathcal{O} (F m (n \log n + m) + M)\) for solving the AOF problem. Our improvement is essentially made possible by replacing the shortest path sub-problem with a more efficient way to determine a so-called \textit{proper zero cost cycle} using a modified depth-first search technique. As a byproduct, our analysis yields an enhanced algorithm to determine the \(K\) best integer flows that runs in \(\mathcal{O} (K n^3 + M)\). Besides, we give lower and upper bounds for the number of all optimal integer and feasible integer solutions. 
Our bounds are based on the fact that any optimal solution can be obtained by an initial optimal \textit{tree solution} plus a conical combination of incidence vectors of all \textit{induced cycles} with bounded coefficients.Technical note -- On matrix exponential differentiation with application to weighted sum distributionshttps://zbmath.org/1500.900812023-01-20T17:58:23.823708Z"Das, Milan Kumar"https://zbmath.org/authors/?q=ai:das.milan-kumar"Tsai, Henghsiu"https://zbmath.org/authors/?q=ai:tsai.henghsiu"Kyriakou, Ioannis"https://zbmath.org/authors/?q=ai:kyriakou.ioannis"Fusai, Gianluca"https://zbmath.org/authors/?q=ai:fusai.gianlucaSummary: In this note, we revisit the innovative transform approach introduced by Cai, Song, and Kou [\textit{N. Cai} et al., Oper. Res. 63, No. 3, 540--554 (2015; Zbl 1377.91156)] for accurately approximating the probability distribution of a weighted stochastic sum or time integral under general one-dimensional Markov processes. Since then, Song, Cai, and Kou [\textit{Y. Song} et al., INFORMS J. Comput. 30, No. 4, 634--645 (2018; Zbl 07281464)] and Cui, Lee, and Liu [\textit{Z. Cui} et al., Eur. J. Oper. Res. 266, No. 3, 1134--1139 (2018; Zbl 1403.91336)] have achieved an efficient reduction of the original double to a single-transform approach. We move one step further by approaching the problem from a new angle and, by dealing with the main obstacle relating to the differentiation of the exponential of a matrix, we bypass the transform inversion. 
We highlight the benefit from the new result by means of some numerical examples.Optimal treatment of chronic kidney disease with uncertainty in obtaining a transplantable kidney: an MDP based approachhttps://zbmath.org/1500.900822023-01-20T17:58:23.823708Z"Fan, Wenjuan"https://zbmath.org/authors/?q=ai:fan.wenjuan"Zong, Yang"https://zbmath.org/authors/?q=ai:zong.yang"Kumar, Subodha"https://zbmath.org/authors/?q=ai:kumar.subodhaSummary: Chronic kidney disease (CKD) is one of the most serious and prevalent health issues all over the world. The evolution of CKD can last for many years until the death of patients, and the method of treatment mainly includes medication, dialysis, and transplantation with the evolution of the disease. It has been validated by many empirical studies that for severe CKD patients, the optimal treatment is transplantation if a suitable kidney is available, otherwise the patients should initiate dialysis at a suitable time. It has also been validated that the initiation time of dialysis significantly impacts not only the direct treatment results, but also the success of a future possible kidney transplantation. Motivated by this consideration, we investigate the decision-making problem of the optimal treatment approach to maximize the patient's total reward including pre-transplant reward and post-transplant reward (if applicable), considering the possibility of having a suitable kidney transplantation in the future. A Markov decision process model is established in which the status of the process is described by the patient health status. We present some structural properties of the decision-making problem, which are used to choose the optimal treatment approach in different health status of patients. 
We collect the clinical data in the simulation experiments to obtain the fitted curves of the evolution process of different CKD patients, and compare the simulation results with the actual clinical data to demonstrate the advantage of our model.Learning Markov models via low-rank optimizationhttps://zbmath.org/1500.900832023-01-20T17:58:23.823708Z"Zhu, Ziwei"https://zbmath.org/authors/?q=ai:zhu.ziwei"Li, Xudong"https://zbmath.org/authors/?q=ai:li.xudong"Wang, Mengdi"https://zbmath.org/authors/?q=ai:wang.mengdi"Zhang, Anru"https://zbmath.org/authors/?q=ai:zhang.anruSummary: Modeling unknown systems from data is a precursor of system optimization and sequential decision making. In this paper, we focus on learning a Markov model from a single trajectory of states. Suppose that the transition model has a small rank despite having a large state space, meaning that the system admits a low-dimensional latent structure. We show that one can estimate the full transition model accurately using a trajectory of length that is proportional to the total number of states. We propose two maximum-likelihood estimation methods: a convex approach with nuclear norm regularization and a nonconvex approach with rank constraint. We explicitly derive the statistical rates of both estimators in terms of the Kullback-Leiber divergence and the \(\ell_2\) error and also establish a minimax lower bound to assess the tightness of these rates. For computing the nonconvex estimator, we develop a novel DC (difference of convex function) programming algorithm that starts with the convex M-estimator and then successively refines the solution till convergence. 
Empirical experiments demonstrate consistent superiority of the nonconvex estimator over the convex one.The random linear bottleneck assignment problemhttps://zbmath.org/1500.900842023-01-20T17:58:23.823708Z"Pferschy, Ulrich"https://zbmath.org/authors/?q=ai:pferschy.ulrichSummary: In this contribution asymptotic properties of the linear bottleneck assignment problem \textit{LBAP} are investigated. It is shown that the expected value of the optimal solution of an \(n \times n\) \textit{LBAP} with independently and identically distributed costs tends towards the infimum of the cost range as \(n\) tends to infinity. Furthermore, explicit upper and lower bounds for the uniform cost distribution are given as functions in \(n\).
Exploiting results from evolutionary random graph theory, an algorithm with \(O(n^2)\) expected running time is presented.
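The stated asymptotic behavior can be checked empirically. The sketch below is not the paper's algorithm: it computes the exact LBAP value by a threshold search over a bipartite matching feasibility test, so that the optimum over random uniform cost matrices can be averaged for growing \(n\):

```python
import random

def has_perfect_matching(adj, n):
    """Kuhn's augmenting-path algorithm; adj[r] lists columns allowed for row r."""
    match = [-1] * n  # match[c] = row currently assigned to column c
    def try_row(r, seen):
        for c in adj[r]:
            if c not in seen:
                seen.add(c)
                if match[c] == -1 or try_row(match[c], seen):
                    match[c] = r
                    return True
        return False
    return all(try_row(r, set()) for r in range(n))

def lbap_value(cost):
    """Optimal bottleneck value: the smallest entry c such that the bipartite
    graph of entries <= c admits a perfect matching (binary threshold search)."""
    n = len(cost)
    vals = sorted({x for row in cost for x in row})
    lo, hi = 0, len(vals) - 1
    while lo < hi:
        mid = (lo + hi) // 2
        adj = [[c for c in range(n) if cost[r][c] <= vals[mid]] for r in range(n)]
        if has_perfect_matching(adj, n):
            hi = mid
        else:
            lo = mid + 1
    return vals[lo]
```

Averaging `lbap_value` over i.i.d. uniform \([0,1]\) cost matrices illustrates the decay of the expected optimum toward 0, the infimum of the cost range, as \(n\) grows.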
For the entire collection see [Zbl 0875.90002].Some results for the minimal optimal solution of min-max programming problem with addition-min fuzzy relational inequalitieshttps://zbmath.org/1500.900852023-01-20T17:58:23.823708Z"Wu, Yan-Kuen"https://zbmath.org/authors/?q=ai:wu.yan-kuen"Wen, Ching-Feng"https://zbmath.org/authors/?q=ai:wen.chingfeng"Hsu, Yuan-Teng"https://zbmath.org/authors/?q=ai:hsu.yuan-teng"Wang, Ming-Xian"https://zbmath.org/authors/?q=ai:wang.mingxianSummary: In this study, a BitTorrent-like peer-to-peer (BT-P2P) file-sharing system is reduced into a system of fuzzy relational inequalities (FRI) with addition-min composition. To study the stability of data transmission and network congestion, a min-max programming problem subject to addition-min FRI is proposed. From a cost-saving perspective, the optimal solution to the min-max programming problem may not be the minimal optimal solution. Furthermore, while the ``optimal'' solution provides better cost performance, the ``minimal'' solution provides the least congestion of the file-sharing system. In this paper, we propose adopting a binding variable approach based on certain new theoretical properties to find a minimal optimal solution for the min-max programming problem. Owing to these new properties, the minimal optimal solution obtained via the binding variable approach minimizes the maximum transmission level; further, the amounts of data downloaded in the optimal solution are as balanced as possible.
Some numerical examples are provided after each of the new properties to illustrate the advantages of our approach.Fast global convergence of natural policy gradient methods with entropy regularizationhttps://zbmath.org/1500.900862023-01-20T17:58:23.823708Z"Cen, Shicong"https://zbmath.org/authors/?q=ai:cen.shicong"Cheng, Chen"https://zbmath.org/authors/?q=ai:cheng.chen"Chen, Yuxin"https://zbmath.org/authors/?q=ai:chen.yuxin"Wei, Yuting"https://zbmath.org/authors/?q=ai:wei.yuting"Chi, Yuejie"https://zbmath.org/authors/?q=ai:chi.yuejieSummary: Natural policy gradient (NPG) methods are among the most widely used policy optimization algorithms in contemporary reinforcement learning. This class of methods is often applied in conjunction with entropy regularization -- an algorithmic scheme that encourages exploration -- and is closely related to soft policy iteration and trust region policy optimization. Despite the empirical success, the theoretical underpinnings for NPG methods remain limited even for the tabular setting. This paper develops \textit{nonasymptotic} convergence guarantees for entropy-regularized NPG methods under softmax parameterization, focusing on discounted Markov decision processes (MDPs). Assuming access to exact policy evaluation, we demonstrate that the algorithm converges linearly -- even quadratically, once it enters a local region around the optimal policy -- when computing optimal value functions of the regularized MDP. Moreover, the algorithm is provably stable vis-à-vis inexactness of policy evaluation. 
Our convergence results accommodate a wide range of learning rates and shed light upon the role of entropy regularization in enabling fast convergence.Combining and strengthening Gomory cutshttps://zbmath.org/1500.900872023-01-20T17:58:23.823708Z"Ceria, Sebastián"https://zbmath.org/authors/?q=ai:ceria.sebastian"Cornuéjols, Gérard"https://zbmath.org/authors/?q=ai:cornuejols.gerard-p"Dawande, Milind"https://zbmath.org/authors/?q=ai:dawande.milind-wSummary: In this paper, we show how to generate Gomory cuts using more than one row of the tableau at a time. We generate a strong cutting plane in this family by solving a sequence of single Diophantine equations. We report computational experience on several instances of pure \(0-1\) programs.
For the entire collection see [Zbl 0875.90002].Zero-inertia limit: from particle swarm optimization to consensus-based optimizationhttps://zbmath.org/1500.900882023-01-20T17:58:23.823708Z"Cipriani, Cristina"https://zbmath.org/authors/?q=ai:cipriani.cristina"Huang, Hui"https://zbmath.org/authors/?q=ai:huang.hui"Qiu, Jinniao"https://zbmath.org/authors/?q=ai:qiu.jinniaoSummary: Recently a continuous description of particle swarm optimization (PSO) based on a system of stochastic differential equations was proposed by \textit{S. Grassi} and \textit{L. Pareschi} [Math. Models Methods Appl. Sci. 31, No. 8, 1625--1657 (2021; Zbl 1473.35570)] where the authors formally showed the link between PSO and the consensus-based optimization (CBO) through the zero-inertia limit. This paper is devoted to solving this theoretical open problem proposed in [loc. cit.] by providing a rigorous derivation of CBO from PSO through the limit of zero inertia, and a quantified convergence rate is obtained as well. The proofs are based on a probabilistic approach by investigating both the weak and strong convergence of the corresponding stochastic differential equations of Mckean type in the continuous path space and the results are illustrated with some numerical examples.A new fitness-based selection operator for genetic algorithms to maintain the equilibrium of selection pressure and population diversityhttps://zbmath.org/1500.900892023-01-20T17:58:23.823708Z"Naqvi, Fakhra Batool"https://zbmath.org/authors/?q=ai:naqvi.fakhra-batool"Shad, Muhammad Yousaf"https://zbmath.org/authors/?q=ai:shad.muhammad-yousaf(no abstract)A modified comprehensive learning particle swarm optimizer and its application in cylindricity error evaluation problemhttps://zbmath.org/1500.900902023-01-20T17:58:23.823708Z"Wu, Qing"https://zbmath.org/authors/?q=ai:wu.qing"Zhang, Chunjiang"https://zbmath.org/authors/?q=ai:zhang.chunjiang"Zhang, Mengya"https://zbmath.org/authors/?q=ai:zhang.mengya"Yang, 
Fajun"https://zbmath.org/authors/?q=ai:yang.fajun"Gao, Liang"https://zbmath.org/authors/?q=ai:gao.liangThe Latin hypercube sampling method is used as a local search method for the global best particle in the comprehensive learning particle swarm optimizer. Eight benchmark functions with 10 and 30 dimensions in [19] are used to demonstrate the efficiency of the hybrid algorithm. The application of the hybrid algorithm in a cylindricity error evaluation problem is presented.
Reviewer: Hang Lau (Montréal)Introduction to optimization-based decision-makinghttps://zbmath.org/1500.910022023-01-20T17:58:23.823708Z"de Miranda, Joao Luis"https://zbmath.org/authors/?q=ai:de-miranda.joao-luisThe book is divided into 10 chapters. The first chapter uses small real-life numerical examples to illustrate the process of successively finding the optimal decision. Chapter 2 is devoted to a basic introduction to linear algebra, especially methods for solving systems of linear algebraic equations (Cramer's rule, Gauss elimination). In the third chapter the reader obtains basic knowledge of linear programming. The main features of linear programming are presented using small numerical examples accompanied by graphical illustrations with two structural variables. A similar approach is used in Chapter 4, where the concept of duality in linear programming is explained, and in Chapter 5, which contains the concepts and mathematical tools necessary for the minimization or maximization of nonlinear functions. The explanation is confined mostly to continuous functions of one and two variables, with a brief outline of possible generalizations to more variables. In addition, the concept of a saddle point is briefly mentioned. Sensitivity and post-optimality analysis of linear programming problems are studied in Chapter 6. This chapter also analyzes how introducing new variables influences the optimal solution of the problems and provides basic information about linear parametric optimization. Solution methods for integer linear programming problems, i.e. problems in which either all or some of the structural variables must be integer, are presented in Chapter 7. The principles of branch-and-bound methods are explained, with special attention devoted to binary integer programming problems. Chapter 8 contains an introduction to game theory, with emphasis on zero-sum games and their solution via linear programming.
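The reduction of zero-sum games to linear programming mentioned for Chapter 8 can be sketched as follows (a minimal editorial illustration assuming SciPy is available; the matching-pennies payoff matrix is a made-up example, not taken from the book):

```python
import numpy as np
from scipy.optimize import linprog

def solve_zero_sum(A):
    """Value and optimal mixed strategy for the row player of payoff matrix A
    (row player maximizes). Variables (x_1..x_m, v); maximize v subject to
    A^T x >= v * 1, sum(x) = 1, x >= 0."""
    A = np.asarray(A, dtype=float)
    m, n = A.shape
    c = np.zeros(m + 1)
    c[-1] = -1.0                                  # linprog minimizes, so minimize -v
    A_ub = np.hstack([-A.T, np.ones((n, 1))])     # v - (A^T x)_j <= 0 for each column j
    b_ub = np.zeros(n)
    A_eq = np.hstack([np.ones((1, m)), np.zeros((1, 1))])  # probabilities sum to 1
    b_eq = np.ones(1)
    bounds = [(0, None)] * m + [(None, None)]     # x >= 0, v free
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    return res.x[:m], res.x[-1]

# Matching pennies: the game value is 0 at the mixed strategy (1/2, 1/2).
x, v = solve_zero_sum([[1, -1], [-1, 1]])
```

The column player's optimal strategy is obtained from the dual of the same LP, which is the duality connection the book develops in Chapter 4.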
Practical examples explaining the principles and aims of multi-criteria decision making are presented in Chapter 9. Chapter 10 develops various approaches to problems involving uncertainty. Stochastic optimization and the construction of an adequate deterministic model for a given stochastic problem are explained.
The book is an elementary and self-contained textbook introduction to optimization-based decision-making, with minimal prerequisites required of the reader.
Reviewer: Karel Zimmermann (Praha)Should I stay or should I go? Congestion pricing and equilibrium selection in a transportation networkhttps://zbmath.org/1500.910452023-01-20T17:58:23.823708Z"Carbone, Enrica"https://zbmath.org/authors/?q=ai:carbone.enrica"Dixit, Vinayak V."https://zbmath.org/authors/?q=ai:dixit.vinayak-v"Rutstrom, E. Elisabet"https://zbmath.org/authors/?q=ai:rutstrom.e-elisabetSummary: When imposing traffic congestion pricing around downtown commercial centers, there is a concern that commercial activities will have to consider relocating due to reduced demand, at a cost to merchants. Concerns like these were important in the debates before the introduction of congestion charges in both London and Stockholm and influenced the final policy design choices. This study introduces a sequential experimental game to study reactions to congestion pricing in the commercial sector. In the game, merchants first make location choices. Consumers, who drive to do their shopping, subsequently choose where to shop. Initial responses to the introduction of congestion pricing and equilibrium selection adjustments over time are observed. These observations are compared to responses and adjustments in a condition where congestion pricing is reduced from an initially high level. Payoffs are non-linear and non-transparent, making it less than obvious that the efficient equilibrium will be selected, and introducing the possibility that participants need to discover their preferences and anchor on past experiences. We find that initial responses reflect standard inverse price-demand relations, and that adjustments over time rely on signaling by consumers leading to the efficient equilibrium. There is also evidence that priming from initial experiences influences play somewhat.
We confirm that commercial activities relocate following the introduction of congestion pricing and that the adjustment process is costly to merchants.Decision-based scenario clustering for decision-making under uncertaintyhttps://zbmath.org/1500.910542023-01-20T17:58:23.823708Z"Hewitt, Mike"https://zbmath.org/authors/?q=ai:hewitt.mike"Ortmann, Janosch"https://zbmath.org/authors/?q=ai:ortmann.janosch"Rei, Walter"https://zbmath.org/authors/?q=ai:rei.walterSummary: In order to make sense of future uncertainty, managers have long resorted to creating scenarios that are then used to evaluate how uncertainty affects decision-making. The large number of scenarios that are required to faithfully represent several sources of uncertainty leads to major computational challenges in using these scenarios in a decision-support context. Moreover, the complexity induced by the large number of scenarios can stop decision makers from reasoning about the interplay between the uncertainty modelled by the data and the decision-making processes (i.e., how uncertainty affects the decisions to be made). To meet this challenge, we propose a new approach to group scenarios based on the decisions associated to them. We introduce a graph structure on the scenarios based on the opportunity cost of predicting the wrong scenario by the decision maker. This allows us to apply graph clustering methods and to obtain groups of scenarios with mutually acceptable decisions (i.e., decisions that remain efficient for all scenarios within the group). In the present paper, we test our approach by applying it in the context of stochastic optimization. Specifically, we use it as a means to derive both lower and upper bounds for stochastic network design models and fleet planning problems under uncertainty. 
Our numerical results indicate that our approach is particularly effective in deriving high-quality bounds when dealing with complex problems under time limitations.Choice of mixed strategy in matrix game with nature by Hurwitz criterionhttps://zbmath.org/1500.910572023-01-20T17:58:23.823708Z"Ponomarëv, Stepan Yu."https://zbmath.org/authors/?q=ai:ponomarev.stepan-yu"Khutoretskiĭ, Aleksandr B."https://zbmath.org/authors/?q=ai:khutoretskii.aleksandr-bSummary: The article solves the problem of choosing an optimal, by the Hurwitz criterion, mixed strategy for an arbitrary matrix game against nature. We reduce the problem to solving \(n\) linear programming problems (where \(n\) is the number of scenarios). As far as we know, this is a new result. It can be used to make decisions in uncertain environments, if the game situation is repeated many times or a physical mixture of pure strategies is realizable.Technical note -- Revenue volatility under uncertain network effectshttps://zbmath.org/1500.910712023-01-20T17:58:23.823708Z"Baron, Opher"https://zbmath.org/authors/?q=ai:baron.opher"Hu, Ming"https://zbmath.org/authors/?q=ai:hu.ming"Malekian, Azarakhsh"https://zbmath.org/authors/?q=ai:malekian.azarakhshSummary: We study the revenue volatility of a monopolist selling a divisible good to consumers in the presence of local network externalities among consumers. The utility of consumers depends on their consumption level as well as those of their neighbors in a network through network externalities. In the eyes of the seller, there exist uncertainties in the network externalities, which may be the result of unanticipated shocks or a lack of exact knowledge of the externalities. However, the seller has to commit to prices ex ante. We quantify the magnitude of revenue volatility under the optimal pricing in the presence of those random externalities.
We consider both a given uncertainty set (from a robust optimization perspective) and a known uncertainty distribution (from a stochastic optimization perspective) and carry out the analyses separately. For a given uncertainty set, we show that the worst case of revenue fluctuation is determined by the largest eigenvalue of the matrix that represents the underlying network. Our results indicate that in networks with a smaller largest eigenvalue, the monopolist has a less volatile revenue. For the known uncertainty, we model the random noise in the form of a Wigner matrix and investigate large networks such as social networks. For such networks, we establish that the expected revenue is the sum of the revenue associated with the underlying expected network externalities and a term that depends on the noise variance and the weighted sum of all walks of different lengths in the expected network. We demonstrate that in a less connected network, the revenue is less volatile to uncertainties, and perhaps counterintuitively, the expected revenue increases with the level of uncertainty in the network. We show that a seller in the two settings favors the opposite type of network. 
In particular, if the underlying network is such that all the edge weights equal 1, the seller in the robust optimization setting prefers more asymmetry and the seller in the stochastic optimization setting prefers less asymmetry in the underlying network; by contrast, if the underlying network is such that the sum of all the edge weights is fixed, the seller in the robust optimization setting prefers less symmetry and the seller in the stochastic optimization setting prefers more asymmetry.Online resource allocation with personalized learninghttps://zbmath.org/1500.910832023-01-20T17:58:23.823708Z"Zhalechian, Mohammad"https://zbmath.org/authors/?q=ai:zhalechian.mohammad"Keyvanshokooh, Esmaeil"https://zbmath.org/authors/?q=ai:keyvanshokooh.esmaeil"Shi, Cong"https://zbmath.org/authors/?q=ai:shi.cong"Van Oyen, Mark P."https://zbmath.org/authors/?q=ai:van-oyen.mark-pSummary: Joint online learning and resource allocation is a fundamental problem inherent in many applications. In a general setting, heterogeneous customers arrive sequentially, each of which can be allocated to a resource in an online fashion. Customers stochastically consume the resources, allocations yield stochastic rewards, and the system receives feedback outcomes with delay. We introduce a generic framework that judiciously synergizes online learning with a broad class of online resource allocation mechanisms, where the sequence of customer contexts is adversarial, and the customer reward and the resource consumption are stochastic and unknown. First, we propose an online algorithm for a general resource allocation problem, called personalized resource allocation while learning with delay, which strikes a three-way balance between exploration, exploitation, and hedging against adversarial arrival sequence. We provide a performance guarantee for this online algorithm in terms of Bayesian regret. 
Next, we develop our second online algorithm for an advance scheduling problem, called personalized advance scheduling while learning with delay (PAS-LD), and evaluate its theoretical performance. The PAS-LD algorithm has a more delicate structure and offers multiday scheduling while accounting for the no-show behavior of customers. We demonstrate the practicality and efficacy of our PAS-LD algorithm using clinical data from a partner health system. Our results show that the proposed algorithm provides promising results compared with several benchmark policies.Negligence rules coping with hidden precautionhttps://zbmath.org/1500.910882023-01-20T17:58:23.823708Z"Schweizer, Urs"https://zbmath.org/authors/?q=ai:schweizer.ursSummary: This paper investigates the implementation of negligence rules when the negligent act constitutes a hidden action in the sense of principal-agent theory and where the available evidence is generated by a signal. Any liability rule exclusively based on the available evidence comes with an incentive threshold. Agents with cost savings from the negligent act above this threshold commit the negligent act whereas the others do not. In a first step, liability rules are examined that implement a given incentive threshold at minimum error costs. As this is a linear programming problem, the present paper makes heavy use of duality theory. The multiplier of the incentive constraint if understood as shadow price allows for an intuitive explanation of the results. As a second step, a comparative statics analysis with respect to the incentive threshold is provided. 
Surprisingly enough, the relation between the threshold and minimum error costs need not be monotonic.Seeding with costly network informationhttps://zbmath.org/1500.911012023-01-20T17:58:23.823708Z"Eckles, Dean"https://zbmath.org/authors/?q=ai:eckles.dean"Esfandiari, Hossein"https://zbmath.org/authors/?q=ai:esfandiari.hossein"Mossel, Elchanan"https://zbmath.org/authors/?q=ai:mossel.elchanan"Rahimian, M. Amin"https://zbmath.org/authors/?q=ai:rahimian.mohammad-aminSummary: We study the task of selecting \(k\) nodes, in a social network of size \(n\), to seed a diffusion with maximum expected spread size, under the independent cascade model with cascade probability \(p\). Most of the previous work on this problem (known as influence maximization) focuses on efficient algorithms to approximate the optimal seed set with provable guarantees given knowledge of the entire network; however, obtaining full knowledge of the network is often very costly in practice. Here we develop algorithms and guarantees for approximating the optimal seed set while bounding how much network information is collected. First, we study the achievable guarantees using a sublinear influence sample size. We provide an almost tight approximation algorithm with an additive \(\varepsilon n\) loss and show that the squared dependence of sample size on \(k\) is asymptotically optimal when \(\varepsilon\) is small. We then propose a probing algorithm that queries edges from the graph and uses them to find a seed set with the same almost tight approximation guarantee. We also provide a matching (up to logarithmic factors) lower-bound on the required number of edges. This algorithm is implementable in field surveys or in crawling online networks. Our probing takes \(p\) as an input which may not be known in advance, and we show how to down-sample the probed edges to match the best estimate of \(p\) if they are collected with a higher probability.
Finally, we test our algorithms on an empirical network to quantify the tradeoff between the cost of obtaining more refined network information and the benefit of the added information for guiding improved seeding strategies.Profit maximization for competitive social advertisinghttps://zbmath.org/1500.911042023-01-20T17:58:23.823708Z"Shi, Qihao"https://zbmath.org/authors/?q=ai:shi.qihao"Wang, Can"https://zbmath.org/authors/?q=ai:wang.can"Ye, Deshi"https://zbmath.org/authors/?q=ai:ye.deshi"Chen, Jiawei"https://zbmath.org/authors/?q=ai:chen.jiawei"Zhou, Sheng"https://zbmath.org/authors/?q=ai:zhou.sheng"Feng, Yan"https://zbmath.org/authors/?q=ai:feng.yan"Chen, Chun"https://zbmath.org/authors/?q=ai:chen.chun"Huang, Yanhao"https://zbmath.org/authors/?q=ai:huang.yanhaoSummary: In social advertising, the social platform host may run marketing campaigns for multiple competing clients simultaneously. In this case, each client comes up with a budget and an influence spread requirement. The host runs campaigns by allocating a set of seed nodes for each client. If the influence spread triggered by a seed set meets the requirement, the host can earn the budget from the corresponding client. In this paper, we study the problem of profit maximization, considering that different seeds incur different costs. Given all the clients' requirements met, we aim to find the optimal seed allocation with minimum cost. Under the competitive K-LT propagation model, we show the profit maximization problem is NP-hard and NP-hard to approximate with any factor. To find a feasible solution, we propose an effective algorithm that iteratively selects a candidate set and obtains an approximate allocation. 
The experimental results over a real-world dataset validate the effectiveness of the proposed methods.Optimal long-term Tier 1 employee pension management with an application to Chinese urban areashttps://zbmath.org/1500.911142023-01-20T17:58:23.823708Z"Ji, Bingbing"https://zbmath.org/authors/?q=ai:ji.bingbing"Chen, Zhiping"https://zbmath.org/authors/?q=ai:chen.zhiping"Consigli, Giorgio"https://zbmath.org/authors/?q=ai:consigli.giorgio"Yan, Zhe"https://zbmath.org/authors/?q=ai:yan.zheSummary: We formulate a stochastic optimization problem from the perspective of an investment committee responsible for Tier 1 social security pension policies and whose decisions are bound to have relevant economic and social consequences. The adopted modelling approach combines canonical \textit{multistage stochastic programming} (MSP) with \textit{dynamic stochastic control} (DSC): the first applies to the short-medium term, the second to the long-term. Through the combined framework, we are able to span a long planning horizon without jeopardizing the accuracy of scenario tree based medium-term planning. We apply this methodology to the Chinese pension system, which relies on two large reference areas for rural and urban populations. In this article, we concentrate on the ever-growing \textit{urban public pension} system, which is facing significant challenges due to a declining workforce and a rapidly ageing population. This welfare area, originally conceived as a \textit{pay-as-you-go} (PAYG) system, has undergone several recent reforms to enhance its long-term sustainability and reduce the interventions of the central government required to improve its funding condition. Among those relevant in our setting, is the reduction of policy constraints that until 2015 severely limited the possibility to invest in assets other than traditional, locally traded, long-term fixed income securities. 
We propose an optimization model in which the decisions of the investment management aim at significantly reducing central government interventions as a last resort liquidity provider and progressively improving the system funding condition. A rich set of computational and economic evidence is presented to validate the methodology and clarify its potential benefits to pension system efficiency.Portfolio optimization with tri-objective for index fund managementhttps://zbmath.org/1500.911202023-01-20T17:58:23.823708Z"Chen, Yao-Tsung"https://zbmath.org/authors/?q=ai:chen.yao-tsung"Sheng, Yang"https://zbmath.org/authors/?q=ai:sheng.yang(no abstract)Comparing integer linear programming to SAT-solving for hard problems in computational and systems biologyhttps://zbmath.org/1500.920302023-01-20T17:58:23.823708Z"Brown, Hannah"https://zbmath.org/authors/?q=ai:brown.hannah"Zuo, Lei"https://zbmath.org/authors/?q=ai:zuo.lei"Gusfield, Dan"https://zbmath.org/authors/?q=ai:gusfield.danSummary: It is useful to have general-purpose solution methods that can be applied to a wide range of problems, rather than relying on the development of clever, intricate algorithms for each specific problem. Integer linear programming is the most widely-used such general-purpose solution method. It is successful in a wide range of problems. However, there are some problems in computational biology where integer linear programming has had only limited success. In this paper, we explore an alternate, general-purpose solution method: SAT-solving, i.e., constructing Boolean formulas in conjunctive normal form (CNF) that encode a problem instance, and using a SAT-solver to determine if the CNF formula is satisfiable or not. In three hard problems examined, we were very surprised to find the SAT-solving approach was dramatically better than the ILP approach in two problems; and a little slower, but more robust, in the third problem. 
We also re-examined and confirmed an earlier result on a fourth problem, using current ILP and SAT-solvers. These results should encourage further efforts to exploit SAT-solving in computational biology.
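The CNF-encoding approach can be illustrated on a toy instance (an editorial sketch: a brute-force satisfiability check stands in here for the real SAT-solvers the authors use, purely for exposition):

```python
from itertools import product

def satisfiable(cnf, n_vars):
    """Brute-force check of a CNF formula given as a list of clauses;
    a literal +i means variable i is true, -i means variable i is false."""
    for bits in product([False, True], repeat=n_vars):
        # A clause is satisfied if any of its literals agrees with the assignment.
        if all(any(bits[abs(lit) - 1] == (lit > 0) for lit in clause)
               for clause in cnf):
            return True
    return False

# Toy encoding of "exactly one of x1, x2 is true":
cnf = [[1, 2], [-1, -2]]        # at least one, and not both
assert satisfiable(cnf, 2)
assert not satisfiable(cnf + [[1], [2]], 2)  # forcing both true is UNSAT
```

Real SAT-solvers accept the same clause lists (in DIMACS format) but replace the exponential enumeration with conflict-driven clause learning, which is what makes the encodings competitive with ILP on the problems studied here.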
For the entire collection see [Zbl 1496.92004].Approximation algorithms for a genetic diagnostics problemhttps://zbmath.org/1500.920552023-01-20T17:58:23.823708Z"Kosaraju, S. Rao"https://zbmath.org/authors/?q=ai:kosaraju.s-rao"Schäffer, Alejandro A."https://zbmath.org/authors/?q=ai:schaffer.alejandro-a"Biesecker, Leslie G."https://zbmath.org/authors/?q=ai:biesecker.leslie-gSummary: We define and study a combinatorial problem called `weighted diagnostic cover' (WDC) that models the use of a laboratory technique called \textit{genotyping} in the diagnosis of an important class of chromosomal aberrations. An optimal solution to WDC would enable us to define a genetic assay that maximizes the diagnostic power for a specified cost of laboratory work. We develop approximation algorithms for WDC by making use of the well-known problem SET COVER, for which the \textit{greedy} heuristic has been extensively studied. We prove worst-case performance bounds on the \textit{greedy} heuristic for WDC and for another heuristic we call \textit{directional greedy}. We implemented both heuristics. We also implemented a local search heuristic that takes the solutions obtained by \textit{greedy} and \textit{dir-greedy} and applies swaps until they are locally optimal. We report their performance on a real data set that is representative of the options that a clinical geneticist faces for the real diagnostic problem. Many open problems related to WDC remain, both of theoretical interest and practical importance.
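The classic greedy heuristic for weighted SET COVER that this analysis builds on can be sketched as follows (a generic textbook version with made-up data, not the WDC-specific \textit{directional greedy} variant):

```python
def greedy_set_cover(universe, sets, cost):
    """At each step pick the set minimizing cost per newly covered element.
    Assumes the sets jointly cover the universe."""
    uncovered = set(universe)
    chosen = []
    while uncovered:
        best = min((s for s in sets if uncovered & sets[s]),
                   key=lambda s: cost[s] / len(uncovered & sets[s]))
        chosen.append(best)
        uncovered -= sets[best]
    return chosen

# Hypothetical instance: three candidate assays covering five diagnostic targets.
sets = {'A': {1, 2, 3}, 'B': {2, 4}, 'C': {3, 4, 5}}
cost = {'A': 1.0, 'B': 1.0, 'C': 1.0}
cover = greedy_set_cover({1, 2, 3, 4, 5}, sets, cost)
```

This greedy rule achieves the well-known logarithmic approximation guarantee for SET COVER; the paper's contribution is bounding how the same idea degrades on the richer WDC objective.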
For the entire collection see [Zbl 1492.68014].Tractable model discrimination for safety-critical systems with disjunctive and coupled constraintshttps://zbmath.org/1500.930222023-01-20T17:58:23.823708Z"Shen, Qiang"https://zbmath.org/authors/?q=ai:shen.qiang"Niu, Ruochen"https://zbmath.org/authors/?q=ai:niu.ruochen"Yong, Sze Zheng"https://zbmath.org/authors/?q=ai:yong.sze-zhengSummary: This paper considers the model discrimination problem among a finite number of models in safety-critical systems that are subjected to constraints that can be disjunctive and where state and input constraints can be coupled with each other. In particular, we consider both the optimal input design problem for active model discrimination that is solved offline as well as the online passive model discrimination problem via a model invalidation framework. To overcome the issues associated with non-convex and generalized semi-infinite constraints due to the disjunctive and coupled constraints, we propose some techniques for reformulating these constraints in a computationally tractable manner by leveraging the Karush-Kuhn-Tucker (KKT) conditions and introducing binary variables, thus recasting the active and passive model discrimination problems into tractable mixed-integer linear/quadratic programming (MILP/MIQP) problems. When compared with existing approaches, our method is able to obtain the optimal solution and is observed in simulations to also result in less computation time. 
Finally, we demonstrate the effectiveness of the proposed active model discrimination approach for estimating driver intention with disjunctive safety constraints and state-input coupled curvature constraints, as well as for fault identification.Beamforming optimization for information enhancement transmission in MISO SWIPT with NOMA systemhttps://zbmath.org/1500.930422023-01-20T17:58:23.823708Z"Miao, Qing"https://zbmath.org/authors/?q=ai:miao.qing"Zhou, Longtao"https://zbmath.org/authors/?q=ai:zhou.longtao"Wang, Jiongqi"https://zbmath.org/authors/?q=ai:wang.jiongqi(no abstract)Learning-based adaptive optimal output regulation of linear and nonlinear systems: an overviewhttps://zbmath.org/1500.930532023-01-20T17:58:23.823708Z"Gao, Weinan"https://zbmath.org/authors/?q=ai:gao.weinan"Jiang, Zhong-Ping"https://zbmath.org/authors/?q=ai:jiang.zhong-ping(no abstract)Stability analysis of switched ARX models and application to learning with guaranteeshttps://zbmath.org/1500.930882023-01-20T17:58:23.823708Z"De Iuliis, Vittorio"https://zbmath.org/authors/?q=ai:de-iuliis.vittorio"Smarra, Francesco"https://zbmath.org/authors/?q=ai:smarra.francesco"Manes, Costanzo"https://zbmath.org/authors/?q=ai:manes.costanzo"D'Innocenzo, Alessandro"https://zbmath.org/authors/?q=ai:dinnocenzo.alessandroSummary: The main subject of this work is the stability analysis of switched auto-regressive models with exogenous inputs (SARX), which constitute a reference class for switched and hybrid system identification. The work introduces novel conditions for the arbitrary switching stability of multiple-input multiple-output SARX models which exploit the peculiar structure of their state-space realization. The analysis relies on the properties of block companion matrices, and partly leverages results from the theory of non-negative matrices, without nevertheless asking for an input-output positive behavior of the model. 
The novel stability conditions have a simple formulation in terms of linear co-positive common Lyapunov functions, and come at a remarkably low computational cost, being solvable by linear programming. The low computational burden is particularly attractive in an identification context, as it allows to efficiently constrain learning procedures in order to obtain SARX models with stability guarantees. The latter is itself a contribution of the work, as it fills a gap in the literature on the estimation of SARX models. The results are validated on a particular learning technique based on regression trees -- a well known machine learning algorithm -- which has shown remarkable accuracy in experimental environments.Bellman's principle of optimality and deep reinforcement learning for time-varying taskshttps://zbmath.org/1500.931442023-01-20T17:58:23.823708Z"Giuseppi, Alessandro"https://zbmath.org/authors/?q=ai:giuseppi.alessandro"Pietrabissa, Antonio"https://zbmath.org/authors/?q=ai:pietrabissa.antonioSummary: This paper presents the first framework (up to the authors' knowledge) to address time-varying objectives in finite-horizon deep reinforcement learning (DeepRL), based on a switching control solution developed on the ground of Bellman's principle of optimality. By augmenting the state space of the system with information on its visit time, the DeepRL agent is able to solve problems in which its task dynamically changes within the same episode. To address the scalability problems caused by the state space augmentation, we propose a procedure to partition the episode length to define separate sub-problems that are then solved by specialised DeepRL agents. Contrary to standard solutions, with the proposed approach the DeepRL agents correctly estimate the value function at each time-step and are hence able to solve time-varying tasks. 
Numerical simulations validate the approach in a classic RL environment.Multicast triangular semilattice networkhttps://zbmath.org/1500.940092023-01-20T17:58:23.823708Z"Grosso, Angelina"https://zbmath.org/authors/?q=ai:grosso.angelina"Manganiello, Felice"https://zbmath.org/authors/?q=ai:manganiello.felice"Varal, Shiwani"https://zbmath.org/authors/?q=ai:varal.shiwani"Zhu, Emily"https://zbmath.org/authors/?q=ai:zhu.emilySummary: We investigate the structure of the code graph of a multicast network that has a characteristic shape of an inverted equilateral triangle. We provide a criterion that determines the validity of a receiver placement within the code graph, present invariance properties of the determinants corresponding to receiver placements under symmetries, and provide a complete study of these networks' receivers and required field sizes up to a network of four sources. We also improve on various definitions related to code graphs.