Recent zbMATH articles in MSC 49K
https://zbmath.org/atom/cc/49K (retrieved 2022-07-25)

Challenges in optimization with complex PDE-systems. Abstracts from the workshop held February 14--20, 2021 (hybrid meeting)
https://zbmath.org/1487.00039
Summary: The workshop concentrated on various aspects of optimization problems with systems of nonlinear partial differential equations (PDEs) or variational inequalities (VIs) as constraints. In particular, it promoted discussions around several keynote presentations in the areas of: optimal control of nonlinear or non-smooth systems; optimization problems with functional and discrete or switching variables, leading to mixed-integer nonlinear PDE-constrained optimization; shape and topology optimization; feedback control and stabilization; multi-criteria problems and multiple optimization problems with equilibrium constraints, as well as versions of these problems under uncertainty or stochastic influences; and the associated numerical analysis together with the design and analysis of solution algorithms. Moreover, aspects of optimal control of data-driven PDE constraints (e.g. related to machine learning) were addressed.

On some non-linear systems of PDEs related to inverse problems in 3-d conductivity
https://zbmath.org/1487.35191
Author: Pedregal, Pablo
Summary: We focus on certain non-linear, non-convex, non-coercive systems of PDEs in three dimensions that are directly motivated by inverse problems in conductivity for the three-dimensional case. It turns out that such systems are variational, as they formally are the Euler-Lagrange systems associated with an explicit first-order functional, and thus we exploit both its variational structure and its connection to inverse problems.
In particular, boundary conditions play a central role.

Constrained optimization problems governed by PDE models of grain boundary motions
https://zbmath.org/1487.35218
Authors: Antil, Harbir; Kubota, Shodai; Shirakawa, Ken; Yamazaki, Noriaki
Summary: In this article, we consider a class of optimal control problems governed by state equations of Kobayashi-Warren-Carter type. The control is given by the physical temperature. The focus is on problems in dimensions less than or equal to 4. The results are divided into four Main Theorems, concerned with: solvability and parameter dependence of the state equations and optimal control problems; and the first-order necessary optimality conditions for regularized versions of these optimal control problems. Subsequently, we derive the limiting systems and optimality conditions and study their well-posedness.

Nonlinear dynamic analysis and optimum control of reaction-diffusion rumor propagation models in both homogeneous and heterogeneous networks
https://zbmath.org/1487.35220
Authors: Zhu, Linhe; Zhou, Mengtian; Liu, Ying; Zhang, Zhengdi
Summary: The spread of rumors can strongly influence people's lives. This paper aims to study the dynamic behavior of rumor propagation. We consider the effects of transmission rates between different groups and establish 2ISR rumor propagation models with diffusion and time delay in both homogeneous and heterogeneous networks. In the homogeneous network model, we obtain the basic reproduction number via the next-generation matrix.
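The next-generation-matrix computation mentioned in the summary above can be illustrated with a small, self-contained sketch. All rates and matrices below are hypothetical, chosen only for illustration (they are not taken from the paper): for a two-compartment model, the basic reproduction number is the spectral radius of F·V⁻¹, where F collects the new-infection rates and V the transition (exit) rates.

```python
import math

def next_generation_r0(F, V):
    """Basic reproduction number R0 = spectral radius of F V^{-1}
    for a two-compartment model. F holds new-infection rates, V the
    transition/exit rates; both are 2x2 and V is invertible.
    All numbers used with this function here are hypothetical."""
    # Invert V explicitly (2x2 cofactor formula)
    det_V = V[0][0] * V[1][1] - V[0][1] * V[1][0]
    Vinv = [[ V[1][1] / det_V, -V[0][1] / det_V],
            [-V[1][0] / det_V,  V[0][0] / det_V]]
    # M = F @ Vinv
    M = [[sum(F[i][k] * Vinv[k][j] for k in range(2)) for j in range(2)]
         for i in range(2)]
    # Spectral radius of a 2x2 matrix from its trace and determinant
    tr = M[0][0] + M[1][1]
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    disc = tr * tr - 4.0 * det
    if disc >= 0:
        roots = [(tr + math.sqrt(disc)) / 2.0, (tr - math.sqrt(disc)) / 2.0]
        return max(abs(r) for r in roots)
    return math.sqrt(det)  # complex pair: |lambda| = sqrt(det)

# Hypothetical rates: two spreader classes with contact rates 0.4, 0.3
# and exit rates 0.2, 0.25 (diagonal V: no transfer between classes).
F = [[0.4, 0.0], [0.0, 0.3]]
V = [[0.2, 0.0], [0.0, 0.25]]
r0 = next_generation_r0(F, V)  # max(0.4/0.2, 0.3/0.25) = 2.0
```

In the diagonal example the spreader classes decouple, so R0 reduces to the largest ratio of infection rate to exit rate.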
The existence of two equilibrium points is discussed according to the basic reproduction number. Moreover, based on LaSalle's invariance principle, we obtain the global stability of the two equilibrium points. In the heterogeneous network model, we further discuss the global stability of the rumor-free equilibrium point. In addition, a set of useful rumor control strategies is put forward for each of the two systems. Finally, numerical simulations are presented, which also confirm the effectiveness of the theoretical results.

Topological asymptotic analysis for tumor identification problem
https://zbmath.org/1487.35375
Authors: Chorfi, Nejmeddine; Ghezaiel, Emna; Hassine, Maatoug
Summary: This work is concerned with the problem of identifying the shape, size and location of a small embedded tumor from the temperature measured on the skin surface. The temperature distribution in the biological tissue is governed by the Pennes model equation. The proposed approach is based on the Kohn-Vogelius formulation and the topological sensitivity analysis method. The ill-posed geometric inverse problem is reformulated as a topology optimization problem. The temperature field perturbation caused by the presence of a small anomaly is analyzed and estimated. A topological asymptotic formula, describing the variation of the considered Kohn-Vogelius type functional with respect to the presence of a small anomaly, is derived.

Optimal control theory. Applications to management science and economics
https://zbmath.org/1487.49001
Author: Sethi, Suresh P.
The author of this book is an expert in operations management, finance and economics, marketing, optimization, optimal control, etc. The first two editions of the book [Optimal control theory. Applications to management science.
Boston/The Hague/London: Martinus Nijhoff Publishing (1981; Zbl 0495.49001); Optimal control theory. Applications to management science and economics. 2nd ed. Dordrecht: Kluwer Academic Publishers (2000; Zbl 0998.49002); Optimal control theory. Applications to management science and economics. 3rd edition. Cham: Springer (2019; Zbl 1412.49001)] were published by the same author in cooperation with \textit{G. L. Thompson}, and naturally the present fourth edition includes the contributions of both experts. We also point out that the material of this book has been discussed with students, as its level allows one to do so. Obviously, the book is primarily addressed to students and researchers in management science, operations research, and economics.
The book comprises 13 chapters, 5 appendices, a long reference list, and an index (plus lists of figures and tables).
Chapter 1 (entitled ``What is Optimal Control Theory?'') comprises 5 sections: 1.1 Basic Concepts and Definitions; 1.2 Formulation of Simple Control Models; 1.3 History of Optimal Control Theory; 1.4 Notation and Concepts Used; 1.5 Plan of the Book. In addition, at the end of Chapter 1 there are appropriate exercises and a specific reference list.
Chapter 2 (``The Maximum Principle: Continuous Time'') introduces the maximum principle as a necessary condition that must be satisfied by any optimal control for the basic problem formulated in Section 2.1 of this chapter. Specifically, the state equation considered here is an ODE in the Euclidean space \(E^n\), \[ \dot{x}(t)=f(x(t),u(t),t), \ t\in [0,T], \ \ x(0)=x_0, \] where \(u(t) \in E^m\) is the control variable. The admissible controls are assumed to be piecewise continuous functions \(u\) satisfying a constraint of the form \(u(t)\in \Omega (t), \ t\in [0,T]\). The optimal control is defined to be an admissible control which maximizes a so-called objective function \[ J=\int_0^TF(x(t),u(t),t)\, dt + S(x(T),T), \] where \(F\) and \(S\) are here assumed to be continuously differentiable. So the \emph{optimal control problem} (OCP) is to find an admissible control \(u^*\) which maximizes \(J\) over all admissible controls \(u\) and all \(x\) satisfying the state equation above and the initial condition \(x(0)=x_0\). This OCP is said to be in \emph{Bolza form}. If \(S=0\) then the OCP is in \emph{Lagrange form}, while if \(F=0\) it is said to be in \emph{Mayer form}. An OCP in Mayer form with \(S= cx(T)\), where \(c\) is a given row vector, \(c=(c_1,c_2,\dots,c_n)\), is said to be in \emph{linear Mayer form}. In fact, any Bolza problem can be reduced to a linear Mayer problem, at the expense of introducing an additional scalar state equation. Then, in Section 2.2, the maximum principle is derived by using the dynamic programming approach, and an economic interpretation is provided. Some examples (with solutions) are discussed in Section 2.3. In Section 2.4 a result on sufficiency conditions is stated, and fixed-end-point problems are addressed. Section 2.5 is devoted to solving a two-point boundary value problem by using Excel. Finally, at the end of the chapter, the author proposes numerous exercises and provides an adequate reference list.
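Two-point boundary value problems of the kind Section 2.5 solves in Excel can also be handled by a simple shooting method. The sketch below is illustrative and not taken from the book: for the problem of minimizing \(\int_0^1 \frac{1}{2}(x^2+u^2)\,dt\) subject to \(\dot{x}=u\), \(x(0)=1\), the necessary conditions give \(u=-\lambda\), \(\dot{x}=-\lambda\), \(\dot{\lambda}=-x\), \(\lambda(1)=0\), and we bisect on the unknown initial costate \(\lambda(0)\).

```python
import math

def simulate(lam0, n=1000):
    """Integrate x' = -lam, lam' = -x on [0, 1] with classical RK4.

    These are the state-costate equations for minimizing
    int_0^1 (x^2 + u^2)/2 dt with x' = u, x(0) = 1, where the
    stationarity condition H_u = u + lam = 0 gives u = -lam.
    """
    x, lam = 1.0, lam0
    h = 1.0 / n
    f = lambda x_, l_: (-l_, -x_)
    for _ in range(n):
        k1 = f(x, lam)
        k2 = f(x + h / 2 * k1[0], lam + h / 2 * k1[1])
        k3 = f(x + h / 2 * k2[0], lam + h / 2 * k2[1])
        k4 = f(x + h * k3[0], lam + h * k3[1])
        x += h / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
        lam += h / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
    return x, lam  # (x(1), lam(1))

def shoot(lo=0.0, hi=2.0, tol=1e-12):
    """Bisect on the unknown lam(0) until the terminal condition
    lam(1) = 0 holds; lam(1) is increasing in lam(0) for this system."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if simulate(mid)[1] > 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

lam0 = shoot()
```

The exact solution is \(x(t)=\cosh(1-t)/\cosh(1)\) with \(\lambda(0)=\tanh(1)\), which the bisection recovers to high accuracy.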
In Chapter 3 (``The Maximum Principle: Mixed Inequality Constraints'') the author presents the case of inequality constraints involving control and possibly state variables. In Section 3.1 a Lagrangian form of the maximum principle is discussed for models in which some constraints involve only control variables, and others involve both state and control variables. In Section 3.2 the author states conditions under which the Lagrangian maximum principle is also sufficient for optimality. In Section 3.3 the author considers the special case \(F(x,u,t)=\phi (x,u)e^{-pt}\), \(S(x,T)=\psi (x)e^{-pT}\), which occurs in most management science and economic problems. The maximum principle is stated and a specific example is analyzed. The next sections of Chapter 3 are the following: Section 3.4 (``Transversality Conditions: Special Cases''); Section 3.5 (``Free Terminal Time Problems''); Section 3.6 (``Infinite Horizon and Stationarity''); Section 3.7 (``Model Types''). As usual, the chapter ends with many specific exercises and references.
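The discounted structure of Section 3.3 is commonly treated via the current-value Hamiltonian; the following standard formulation (a sketch consistent with, but not quoted from, the book) makes the role of the discount rate \(p\) explicit:

```latex
% Substituting F = \phi(x,u) e^{-pt}, S = \psi(x) e^{-pT} and the
% current-value multiplier m(t) = e^{pt} \lambda(t):
\begin{align*}
  H^{c}(x,u,m,t) &= \phi(x,u) + m\, f(x,u,t), \\
  \dot{m} &= p\, m - \frac{\partial H^{c}}{\partial x}, \\
  m(T) &= \psi_{x}\bigl(x(T)\bigr),
\end{align*}
% and the optimal control maximizes H^{c} over u \in \Omega(t) at each t.
```

The advantage is that the discount factor disappears from the adjoint equation, leaving autonomous conditions when \(f\) does not depend on \(t\).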
Chapter 4 (``The Maximum Principle: Pure State and Mixed Inequality Constraints''). As the author says, intuitively, pure state constraints are difficult to deal with because only the control variables are under the direct influence of the decision maker. The case of pure state constraints can be handled by a direct method (by associating a multiplier with each constraint, appending it to the Hamiltonian to form the Lagrangian, and then proceeding in a similar way as in the previous chapter in the case of mixed constraints), or by an indirect method (when a pure constraint is active, one can constrain the value of its time derivative, which will involve time derivatives of the state variables; so the restrictions on the time derivatives of the pure state constraints become mixed constraints, these are appended to the Hamiltonian to form the Lagrangian, and so on). The direct and indirect maximum principles are derived, and finally the author adds specific exercises and references.
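For a pure state constraint of order one, the indirect adjoining described above can be sketched as follows (a standard formulation from the literature, hedged rather than quoted from the book): with constraint \(h(x,t)\ge 0\), one differentiates in time until the control appears and adjoins the result on boundary intervals.

```latex
% Pure state constraint h(x,t) >= 0; its first total time derivative
%   h^{1}(x,u,t) = h_x(x,t)\, f(x,u,t) + h_t(x,t)
% involves the control, so on intervals where h = 0 it is treated as a
% mixed constraint and adjoined to the Hamiltonian:
\begin{align*}
  L &= H + \mu\, h^{1}(x,u,t), \\
  \mu &\ge 0, \qquad \mu\, h(x,t) = 0 \quad \text{(complementary slackness)}.
\end{align*}
```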
Chapters 5--7 are devoted to applications (in finance, production and inventory, marketing), which are very important for students and researchers oriented to management and economics.
Chapter 8 (``The Maximum Principle: Discrete Time'') is focused on the case where time is represented by a discrete variable \(k=0, 1, \dots, T\). The optimal control problem is reduced to a nonlinear programming problem, and the necessary conditions for its solution are stated by using the Kuhn-Tucker theorem. This procedure requires some simplifications, and so only a restricted form of the discrete maximum principle is obtained. A more general discrete maximum principle is stated (without proof) in Section 8.3. Again, several illustrative examples, many exercises and references are included.
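The reduction to a nonlinear program can be made concrete on a small example (illustrative only; the scalar model and the numbers are hypothetical, not from the book). For the linear-quadratic problem of minimizing \(\sum_{k=0}^{T-1}(qx_k^2+ru_k^2)+qx_T^2\) subject to \(x_{k+1}=ax_k+bu_k\), the backward Riccati recursion yields the optimal controls, and since no constraints are active, the Kuhn-Tucker conditions reduce to stationarity of the reduced cost.

```python
def lqr_controls(a, b, q, r, x0, T):
    """Backward Riccati recursion for the scalar discrete LQR problem,
    then forward simulation of the optimal feedback u_k = -K_k x_k."""
    P = q  # terminal value P_T
    K = [0.0] * T
    for k in range(T - 1, -1, -1):
        K[k] = a * b * P / (r + b * b * P)
        P = q + a * a * P - (a * b * P) ** 2 / (r + b * b * P)
    x, u = x0, []
    for k in range(T):
        u.append(-K[k] * x)
        x = a * x + b * u[-1]
    return u

def cost(u, a, b, q, r, x0):
    """Reduced cost J(u): the states are eliminated via the dynamics,
    which is exactly the nonlinear-programming viewpoint of Chapter 8."""
    x, J = x0, 0.0
    for uk in u:
        J += q * x * x + r * uk * uk
        x = a * x + b * uk
    return J + q * x * x

a, b, q, r, x0, T = 0.9, 0.5, 1.0, 1.0, 1.0, 10
u_opt = lqr_controls(a, b, q, r, x0, T)
J_opt = cost(u_opt, a, b, q, r, x0)
```

A quick numerical check confirms that perturbing any \(u_k\) does not decrease the reduced cost, i.e., the Kuhn-Tucker stationarity condition holds at the Riccati solution.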
In Chapter 9 (``Maintenance and Replacement'') the author presents some maintenance and replacement models, with solutions obtained by the maximum principle, and specific numerical examples. In addition, exercises and references are included.
Chapters 10 and 11 are devoted to applications of optimal control theory to natural resources and to economics. Certainly, formulation of the corresponding models and their solutions are of interest for a large audience. Again, specific exercises are proposed and reference lists are added at the end of the two chapters.
In Chapter 12 (``Stochastic Optimal Control'') the author deals with the case where the state equation is perturbed by a Wiener process (Brownian motion), so that the state becomes a Markov diffusion process. In Section 12.1, the author formulates an OCP governed by stochastic differential equations involving a Wiener process, also known as Itô equations. The goal is ``to synthesize optimal feedback controls for systems subject to Ito equations in a way that maximizes the expected value of a given objective function.'' Some practical stochastic models are analyzed, including an advertising model named after the author (see page 354). As usual, some exercises and a reference list are added at the end of the chapter.
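In the scalar case, the synthesis described above rests on the stochastic Hamilton-Jacobi-Bellman equation; the following is the standard formulation (a sketch, not a quotation from Chapter 12):

```latex
% Ito state dynamics and the HJB equation for the value function
% V(x,t) = maximal expected objective-to-go:
\begin{align*}
  dx &= f(x,u,t)\, dt + \sigma(x,u,t)\, dW_t, \\
  0 &= V_t(x,t) + \max_{u \in \Omega(t)}
       \Bigl[ F(x,u,t) + V_x(x,t)\, f(x,u,t)
              + \tfrac{1}{2}\, \sigma^{2}(x,u,t)\, V_{xx}(x,t) \Bigr], \\
  V(x,T) &= S(x,T).
\end{align*}
```

The maximizer in the bracket, expressed as a function of \(x\) and \(t\), is precisely the optimal feedback control.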
Chapter 13 (``Differential Games'') is focused on the situation where there are several decision makers, each with their own objective function to be maximized, subject to a set of differential equations. The extension of optimal control theory to such situations is called the theory of differential games, which is far more complex than classic optimal control theory. We confine ourselves to mentioning the titles of the sections of this chapter: Two-person zero-sum differential games; Nash differential games; A feedback Nash stochastic differential game in advertising; A feedback Stackelberg stochastic differential game of cooperative advertising. Again, exercises and references are added.
Some appendices are included to complete the exposition: A. Solutions of Linear Differential Equations; B. Calculus of Variations and Optimal Control Theory; C. An Alternative Derivation of the Maximum Principle; D. Special Topics in Optimal Control; E. Answers to Selected Exercises.
Certainly, the book contains enough mathematical tools to solve plenty of practical problems in management science and economics.
Reviewer: Gheorghe Moroşanu (Cluj-Napoca)

Optimal control problems governed by two dimensional convective Brinkman-Forchheimer equations
https://zbmath.org/1487.49006
Author: Mohan, Manil T.
Summary: The convective Brinkman-Forchheimer (CBF) equations describe the motion of incompressible viscous fluids through a rigid, homogeneous, isotropic, porous medium and are given by \[\partial_t{\boldsymbol{u}}-\mu \Delta{\boldsymbol{u}}+({\boldsymbol{u}}\cdot\nabla){\boldsymbol{u}}+\alpha{\boldsymbol{u}}+\beta|{\boldsymbol{u}}|^{r-1}{\boldsymbol{u}}+\nabla p = {\boldsymbol{f}}, \qquad \nabla\cdot{\boldsymbol{u}} = 0.\] In this work, we consider some distributed optimal control problems, like total energy minimization, minimization of enstrophy, etc., governed by the two dimensional CBF equations with the absorption exponent \(r=1,2\) and \(3\). We show the existence of an optimal solution and the first order necessary conditions of optimality for such optimal control problems in terms of the Euler-Lagrange system. Furthermore, for the case \(r = 3\), we show the second order necessary and sufficient conditions of optimality. We also investigate another control problem, similar to the data assimilation problems in meteorology, of obtaining unknown initial data when the system under consideration is governed by the 2D CBF equations, using optimal control techniques.

Necessary second-order conditions for a local infimum in an optimal control
https://zbmath.org/1487.49008
Authors: Avakov, E. R.; Magaril-Il'yaev, G. G.
The authors in the present paper provide some results on optimal control problems. Their assumptions rely on finite-dimensional dynamical systems.
Reviewer: Christos E. Kountzakis (Karlovassi)

Hölder regularity in bang-bang type affine optimal control problems
https://zbmath.org/1487.49010
Authors: Corella, Alberto Domínguez; Veliov, Vladimir M.
Summary: This paper revisits the issue of Hölder Strong Metric sub-Regularity (HSMs-R) of the optimality system associated with ODE optimal control problems that are affine with respect to the control. The main contributions are as follows. First, the metric in the control space, introduced in this paper, differs from the ones used so far in the literature in that it allows one to take into consideration the bang-bang structure of the optimal control functions. This is especially important in the analysis of Model Predictive Control algorithms. Second, the obtained sufficient conditions for HSMs-R extend the known ones in a way which makes them applicable to some problems which are non-linear in the state variable and for which the Hölder exponent is smaller than one (that is, the regularity is not Lipschitz).
For the entire collection see [Zbl 1484.65002].

Anisotropic surface tensions for phase transitions in periodic media
https://zbmath.org/1487.49018
Authors: Choksi, Rustum; Fonseca, Irene; Lin, Jessica; Venkatraman, Raghavendra
The authors consider the Allen-Cahn energy functional \(\mathcal{F}_{\varepsilon }:H^{1}(\Omega )\rightarrow \lbrack 0,\infty ]\) defined as \(\mathcal{F}_{\varepsilon }(u)=\int_{\Omega }[\frac{1}{\varepsilon }a(\frac{x}{\varepsilon })W(u)+\frac{\varepsilon }{2}\left\vert \nabla u\right\vert ^{2}]dx\), where \(\Omega \subseteq \mathbb{R}^{N}\), \(N\geq 2\), is a Lipschitz domain, \(a:\mathbb{R}^{N}\rightarrow \mathbb{R}\) is a continuous, strictly positive and \(\mathbb{T}^{N}\)-periodic function, \(\mathbb{T}^{N}\) being the standard \(N\)-dimensional torus, which satisfies \(0<\theta \leq a(x)\leq \Theta \) for all \(x\in \mathbb{R}^{N}\), and \(W\) is the double-well potential \(W(u)=(1-u^{2})^{2}\). The authors recall some homogenization results concerning this functional. They introduce the anisotropic surface energy \(\sigma :\mathbb{S}^{N-1}\rightarrow \lbrack 0,\infty )\) through the cell formula \(\sigma (\nu )=\lim_{T\rightarrow \infty }\frac{1}{T^{N-1}}\inf \{\int_{TQ_{\nu }}[a(y)W(u)+\frac{1}{2}\left\vert \nabla u\right\vert ^{2}]dy:u\in C(TQ_{\nu })\}\), where \(C(TQ_{\nu })=\{u\in H^{1}(TQ_{\nu }):u=\rho \ast u_{0,\nu }\text{ on }\partial (TQ_{\nu })\}\), with \(u_{0,\nu }(y)=-1\) if \(y\cdot \nu \leq 0\) and \(u_{0,\nu }(y)=1\) if \(y\cdot \nu >0\), and \(\rho \in C_{c}^{\infty }(B(0,1))\) with \(0\leq \rho \leq 1\) and \(\int_{\mathbb{R}^{N}}\rho (x)dx=1\).
They also introduce the function \(q:\mathbb{R}\rightarrow \mathbb{R}\) defined by \(q(z)=\tanh (\sqrt{2}z)\), \(z\in \mathbb{R}\), and for \(\nu \in \mathbb{S}^{N-1}\) they define \(\underline{\lambda }(\nu )=\liminf_{T\rightarrow \infty }\frac{1}{T^{N}}\int_{TQ_{\nu }}[a(y)W(q\circ h_{\nu })+\frac{1}{2}\left\vert \nabla (q\circ h_{\nu })\right\vert ^{2}]dy\) and \(\overline{\lambda }(\nu )=\limsup_{T\rightarrow \infty }\frac{1}{T^{N}}\int_{TQ_{\nu }}[a(y)W(q\circ h_{\nu })+\frac{1}{2}\left\vert \nabla (q\circ h_{\nu })\right\vert ^{2}]dy\), where \(h_{\nu }(y)=\operatorname{sign}(y\cdot \nu )\inf_{z\in \Sigma _{\nu }}d_{\sqrt{a}}(y,z)\), \(\Sigma _{\nu }=\{x\in \mathbb{R}^{N}:x\cdot \nu =0\}\). The first main result proves the existence of a universal constant \(\Lambda _{0}>0\) and of \(\lambda _{0}:\mathbb{S}^{N-1}\rightarrow \lbrack 0,\Lambda _{0}]\) such that \(\overline{\lambda }(\nu )-\lambda _{0}(\nu )\leq \sigma (\nu )\leq \underline{\lambda }(\nu )\). The second main result proves that for each \(\nu \in \mathbb{S}^{N-1}\), there exists a unique \(c(\nu )\in \lbrack \sqrt{\theta },\sqrt{\Theta }]\) such that for every compact \(K\subseteq \mathbb{R}^{N}\), \(\lim_{T\rightarrow \infty }\sup_{x\in K}\left\vert \frac{1}{T}h_{\nu }(Tx)-c(\nu )(x\cdot \nu )\right\vert =0\), and \(c(\nu )=c(-\nu )\). The authors finally also prove a similar result if \(a:\mathbb{R}^{N}\rightarrow \mathbb{R}\) is a Bohr almost periodic function. For the proof of the first main result, the authors use the standard De Giorgi slicing technique, which they recall in an appendix, to prove the upper bound.
For the proof of the lower bound, they choose \(\phi (z)=\sqrt{2}\int_{0}^{z}\sqrt{W(s)}ds\) and prove properties of the function \(h_{\nu }\) in connection with the weighted distance \(d_{\sqrt{a}}(y_{1},y_{2})=\inf_{\gamma (0)=y_{1},\gamma (1)=y_{2}}\int_{0}^{1}\sqrt{a(\gamma (t))}\left\vert \dot{\gamma}(t)\right\vert dt\), where the infimum is taken over Lipschitz curves \(\gamma :[0,1]\rightarrow \mathbb{R}^{N}\). For the proofs of the second and third main results, the authors establish further properties of the function \(h_{\nu }\) and, for the third, of Bohr almost periodic functions.
Reviewer: Alain Brillard (Riedisheim)

Optimization of switchable systems' trajectories
https://zbmath.org/1487.49019
Authors: Bortakovsky, A. S.; Uryupin, I. V.
Summary: We consider the problem of trajectory optimization for a switchable system whose continuous motion is described by differential equations and whose discrete state changes (switchings) are described by recurrent inclusions. Its motion is continuously controlled by choosing the state of the discrete part of the system. The number of switchings and the switching times are not predefined. The quality of the trajectory is characterized by a functional that takes into account the costs of each switch. Together with the problem of optimizing the trajectories of motion, the problem of finding the minimum number of switchings for which the value of the quality functional does not exceed a given value is solved.

Optimal sampled-data controls with running inequality state constraints: Pontryagin maximum principle and bouncing trajectory phenomenon
https://zbmath.org/1487.49020
Authors: Bourdin, Loïc; Dhar, Gaurav
Summary: In the present paper we derive a Pontryagin maximum principle for general nonlinear optimal sampled-data control problems in the presence of running inequality state constraints. We obtain, in particular, a nonpositive averaged Hamiltonian gradient condition associated with an adjoint vector being a function of bounded variation. As is well known, theoretical and numerical difficulties may arise due to the possible pathological behavior of the adjoint vector (jumps and a singular part lying on parts of the optimal trajectory in contact with the boundary of the restricted state space).
However, in our case with sampled-data controls, we prove that, under certain general hypotheses, the optimal trajectory activates the running inequality state constraints at most at the sampling times. Due to this so-called bouncing trajectory phenomenon, the adjoint vector experiences jumps at most at the sampling times (and thus in a finite number and at precise instants) and its singular part vanishes. Taking advantage of this information, we are able to implement an indirect numerical method which we use to solve three simple examples.

On singularities of minimum time control-affine systems
https://zbmath.org/1487.49021
Authors: Caillau, Jean-Baptiste; Féjoz, Jacques; Orieux, Michaël; Roussarie, Robert

Hidden invariant convexity for global and conic-intersection optimality guarantees in discrete-time optimal control
https://zbmath.org/1487.49022
Authors: Baayen, Jorn H.; Postek, Krzysztof
Summary: Non-convex discrete-time optimal control problems in, \textit{e.g.}, water or power systems, typically involve a large number of variables related through nonlinear equality constraints. The ideal goal is to find a globally optimal solution, and numerical experience indicates that algorithms aiming for Karush-Kuhn-Tucker points often find solutions that are indistinguishable from global optima. In our paper, we provide a theoretical underpinning for this phenomenon, showing that on a broad class of problems the objective can be shown to be an \textit{invariant convex} function (\textit{invex} function) of the control decision variables when state variables are eliminated using implicit function theory.
In this way, optimality guarantees can be obtained, the exact nature of which depends on the position of the solution within the feasible set. In a numerical example, we show how high-quality solutions are obtained with local search for a river control problem where invexity holds.

Optimal control problems for complex heat transfer equations with Fresnel matching conditions
https://zbmath.org/1487.49023
Author: Chebotarev, A. Yu.
Summary: A class of optimal control problems for a system of nonlinear elliptic equations simulating radiative heat transfer with Fresnel matching conditions on the surfaces of discontinuity of the refractive index is considered. Based on estimates for the solution of the boundary value problem, the solvability of the optimal control problems is proved. The existence and uniqueness of the solution of a linearized problem with the matching conditions is analyzed, and the nondegeneracy of the optimality conditions is proved. As an example, a control problem with boundary observation is considered and the relay-like character of the optimal control is shown.

A priori error estimate of perturbation method for optimal control problem governed by elliptic PDEs with small uncertainties
https://zbmath.org/1487.49024
Authors: Feng, Mengya; Sun, Tongjun
Summary: In this paper, we investigate the first-order and second-order perturbation approximation schemes for an optimal control problem governed by elliptic PDEs with small uncertainties. The optimal control minimizes the expectation of a cost functional with a deterministic constrained control. First, using a perturbation method, we expand the state and co-state variables up to a certain order with respect to a parameter that controls the magnitude of uncertainty in the input.
We then substitute the expansions into the known deterministic parametric optimality system to derive the first-order and second-order optimality systems, which are both deterministic problems. After that, the two systems are discretized directly by the finite element method. The strong and weak error estimates are derived for the state, co-state and control variables, respectively. Finally, we illustrate the theoretical results by two numerical examples.

An optimal control problem for equations with p-structure and its finite element discretization
https://zbmath.org/1487.49025
Authors: Hirn, Adrian; Wollner, Winnifried
Summary: We analyze a finite element approximation of an optimal control problem that involves an elliptic equation with \(p\)-structure (e.g., the \(p\)-Laplace equation) as a constraint. As the nonlinear operator related to the \(p\)-Laplace equation, mapping the space \(W_0^{1,p}(\Omega)\) to its dual \((W_0^{1,p}(\Omega))^\ast\), is not Gâteaux differentiable, first-order optimality conditions cannot be formulated in a standard way. Without using adjoint information, we derive novel a priori error estimates for the convergence of the cost functional for both variational discretization and piecewise constant controls.
For the entire collection see [Zbl 07438181].

Continuous differentiability of the value function of semilinear parabolic infinite time horizon optimal control problems on \(L^2(\Omega)\) under control constraints
https://zbmath.org/1487.49026
Authors: Kunisch, Karl; Priyasad, Buddhika
Let \(\Omega \) be an open, connected and bounded subset of \(\mathbb{R}^{d}\) with Lipschitz continuous boundary \(\Gamma \), \(Y=L^{2}(\Omega )\), \(V=H_{0}^{1}(\Omega )\), \(U=L^{2}(0,\infty ;\mathcal{U})\), where \(\mathcal{U}\) is a Hilbert space which will be identified with its dual, \(W(0,T)=\{y\in L^{2}(0,T;V):\frac{dy}{dt}\in L^{2}(0,T;V^{\ast })\}\), \(W_{\infty }=W(0,\infty )\), and let the set of admissible controls be \(U_{ad}\subset \{u\in U:\left\Vert u(t)\right\Vert _{\mathcal{U}}\leq \eta \text{ for a.e. }t>0\}\), where \(\eta \) is a positive constant. The main part of the paper is devoted to the analysis of the stabilization problem for an abstract semilinear parabolic equation, formulated as an infinite horizon optimal control problem under control constraints: \(\mathcal{V}(y_{0})=\min_{(y,u)\in W_{\infty }\times U_{ad}}J(y,u)\), where \(J(y,u)=\frac{1}{2}\int_{0}^{\infty }\left\Vert y(t)\right\Vert _{Y}^{2}dt+\frac{\alpha }{2}\int_{0}^{\infty }\left\Vert u(t)\right\Vert _{\mathcal{U}}^{2}dt\), subject to the semilinear parabolic equation \(y_{t}=\mathcal{A}y+\mathcal{F}(y)+Bu\) in \(L^{2}(I;V^{\ast })\), \(y(0)=y_{0}\) in \(Y\).
The authors assume that the operator \(\mathcal{A}\) with domain \(D(\mathcal{A})\subset Y\) and range in \(Y\) generates a strongly continuous analytic semigroup \(e^{\mathcal{A}t}\) on \(Y\) and can be extended to \(\mathcal{A}\in \mathcal{L}(V,V^{\ast })\), that \(B\in \mathcal{L}(\mathcal{U},Y)\), and that there exists a stabilizing feedback operator \(K\in \mathcal{L}(Y,\mathcal{U})\) such that the semigroup \(e^{(\mathcal{A}-BK)t}\) is exponentially stable on \(Y\); moreover, the nonlinearity \(\mathcal{F}:W_{\infty }\rightarrow L^{2}(I;V^{\ast })\) is twice continuously Fréchet differentiable, with second Fréchet derivative \(\mathcal{F}^{\prime \prime }\) bounded on bounded subsets of \(W_{\infty }\), \(\mathcal{F}(0)=0\), \(\mathcal{F}:W(0,T)\rightarrow L^{1}(0,T;\mathcal{H}^{\ast })\) is weak-to-weak continuous for every \(T>0\), where \(\mathcal{H}\) is a Hilbert space which embeds densely in \(V\), and finally \(\mathcal{F}^{\prime }(\overline{y})\in \mathcal{L}(L^{2}(I;V),L^{2}(I;V^{\ast }))\). The authors prove that, associated to each local solution \((\overline{y}(y_{0}),\overline{u}(y_{0}))\) of this problem, there exists a neighborhood \(U(y_{0})\) of \(y_{0}\) such that the local value function \(\mathcal{V}:U(y_{0})\subset Y\rightarrow \mathbb{R}\) is continuously differentiable, provided \(y_{0}\) is sufficiently close to the origin in \(Y\). In a second theorem, the authors prove that if \((\overline{y}(y_{0}),\overline{u}(y_{0}))\) denotes a global solution to this problem for \(y_{0}\in D(\mathcal{A})\) with sufficiently small norm in \(Y\), and if there exists \(T_{y_{0}}>0\) such that \(\mathcal{F}(\overline{y})\in C([0,T_{y_{0}});Y)\), then the following Hamilton-Jacobi-Bellman equation holds at \(y=y_{0}\): \(\mathcal{V}^{\prime }(y)(\mathcal{A}y+\mathcal{F}(y))+\frac{1}{2}\left\Vert y\right\Vert _{Y}^{2}+\frac{\alpha }{2}\left\Vert \mathbb{P}_{\mathcal{U}_{ad}}(-\frac{1}{\alpha }B^{\ast }\mathcal{V}^{\prime }(y))\right\Vert _{\mathcal{U}}^{2}+\left\langle B^{\ast }\mathcal{V}^{\prime }(y),\mathbb{P}_{\mathcal{U}_{ad}}(-\frac{1}{\alpha }B^{\ast }\mathcal{V}^{\prime }(y))\right\rangle _{\mathcal{U}}=0\), where \(\mathbb{P}_{\mathcal{U}_{ad}}\) is the projection operator onto \(\mathcal{U}_{ad}=\{v\in \mathcal{U}:\left\Vert v\right\Vert _{\mathcal{U}}\leq \eta \}\). The optimal feedback law is given by \(\overline{u}(0)=\mathbb{P}_{\mathcal{U}_{ad}}(-\frac{1}{\alpha }B^{\ast }\mathcal{V}^{\prime }(y_{0}))\).
For the proofs, the authors mainly use properties of analytic semigroups. They also use properties of general minimization problems. Consider the minimization problem \(\min f(x)\), \(e(x,q)=0\), \(x\in C\), where \(C\) is a closed and convex subset of a real Hilbert space \(X\), \( f:X\rightarrow \mathbb{R}^{+}\) is twice continuously differentiable in a neighborhood of the local solution \(x_{0}\) to the preceding problem associated to a nominal reference parameter \(q_{0}\in P\), \(P\) being a normed linear space, and \(e:X\times P\rightarrow W\) is continuous and twice continuously differentiable with respect to \(x\), with first and second derivatives which are Lipschitz continuous in a neighborhood of \((x_{0},q_{0})\). The authors introduce the Lagrangian \(\mathcal{L}:X\times P\times W^{\ast }\rightarrow \mathbb{R}\) associated to the above problem and defined as \(\mathcal{L} (x,q,\lambda )=f(x)+\left\langle \lambda ,e(x,q)\right\rangle _{W^{\ast },W}\), and they assume that \(0\in \operatorname{int}e^{\prime }(x_{0},q_{0})(C-x_{0})\), where \(\operatorname{int}\) denotes the interior in the \(W\) topology and the derivative is taken with respect to \(x\). This implies the existence of a Lagrange multiplier \(\lambda _{0}\in W^{\ast }\) such that \(\left\langle \mathcal{L}^{\prime }(x_{0},q_{0},\lambda _{0}),c-x_{0}\right\rangle _{X^{\ast },X}\geq 0\), \(\forall c\in C\), and \(e(x_{0},q_{0})=0\). Finally, defining the operator representation \(A\in L(X,X^{\ast })\) of \(\mathcal{L}^{\prime \prime }(x_{0},q_{0},\lambda _{0})\) through \(\left\langle Ax_{1},x_{2}\right\rangle _{X^{\ast },X}=\mathcal{L}^{\prime \prime }(x_{0},q_{0},\lambda _{0})(x_{1},x_{2})\), and setting \(E=e^{\prime }(x_{0},q_{0})\in \mathcal{L}(X,W)\), the authors assume the existence of \(\kappa >0\) such that \(\left\langle Ax,x\right\rangle _{X^{\ast },X}\geq \kappa \left\Vert x\right\Vert _{X}^{2}\), \(\forall x\in \ker E\).
Under further hypotheses, the authors prove a stability result, that is, the existence of a neighborhood \(U=U(x_{0},\lambda _{0})\subset X\times W^{\ast }\), of a neighborhood \(N=N(q_{0})\subset P\), and of a constant \(\mu \) such that for all \(q\in N\) there exists a unique \((x(q),\lambda (q))\in U\) satisfying \(0\in \mathcal{L}^{\prime }(x(q),q,\lambda (q))+\partial I_{C}(x(q))\) in \(X^{\ast }\), \(0=e(x(q),q)\) in \(W\), and \(\left\Vert (x(q_{1}),\lambda (q_{1}))-(x(q_{2}),\lambda (q_{2}))\right\Vert _{X\times W^{\ast }}\leq \mu \left\Vert q_{1}-q_{2}\right\Vert _{P}\), \(\forall q_{1},q_{2}\in N\). For the proof, the authors use the implicit function theorem of Dontchev for generalized equations and an existence result for a minimization problem involving generic operators.
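To make the coercivity assumption concrete, the following finite-dimensional sketch (not from the paper under review; the matrices \(A\) and \(E\) are illustrative) checks \(\left\langle Ax,x\right\rangle \geq \kappa \left\Vert x\right\Vert ^{2}\) on \(\ker E\) by computing the smallest eigenvalue of \(A\) restricted to an orthonormal basis of the kernel:

```python
import numpy as np

# Toy finite-dimensional check of the coercivity condition
# <A x, x> >= kappa ||x||^2 for all x in ker E,
# mirroring the assumption on L''(x0, q0, lambda0) above.
# The matrices A and E below are illustrative, not from the paper.

def coercivity_on_kernel(A, E):
    """Return the smallest eigenvalue of A restricted to ker E.

    A positive value is a valid kappa; a nonpositive value means
    the coercivity assumption fails for this (A, E) pair.
    """
    # Orthonormal basis Z of ker E via the SVD of E.
    _, s, Vt = np.linalg.svd(E)
    rank = int(np.sum(s > 1e-12))
    Z = Vt[rank:].T                      # columns span ker E
    if Z.shape[1] == 0:
        return np.inf                    # trivial kernel: condition is vacuous
    # Symmetrize the reduced operator before taking eigenvalues.
    M = Z.T @ A @ Z
    return float(np.min(np.linalg.eigvalsh(0.5 * (M + M.T))))

A = np.diag([3.0, 2.0, -1.0])            # indefinite on all of R^3 ...
E = np.array([[0.0, 0.0, 1.0]])          # ... but ker E = span{e1, e2}
kappa = coercivity_on_kernel(A, E)       # A is coercive there with kappa = 2
```

The restriction step is the essential point: coercivity is only required on the kernel of the constraint derivative, so an operator indefinite on the whole space can still satisfy the assumption.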
Reviewer: Alain Brillard (Riedisheim)Continuous and discrete Noether's fractional conserved quantities for restricted calculus of variationshttps://zbmath.org/1487.490272022-07-25T18:03:43.254055Z"Cresson, Jacky"https://zbmath.org/authors/?q=ai:cresson.jacky"Jiménez, Fernando"https://zbmath.org/authors/?q=ai:jimenez.fernando"Ober-Blöbaum, Sina"https://zbmath.org/authors/?q=ai:ober-blobaum.sinaSummary: We prove a Noether theorem of the first kind for the so-called \textit{restricted fractional Euler-Lagrange equations} and their discrete counterpart, introduced in [\textit{F. Jiménez} and \textit{S. Ober-Blöbaum}, ``A fractional variational approach for modelling dissipative mechanical systems: continuous and discrete settings'', IFAC-PapersOnLine 51, No. 3, 50--55 (2018; \url{doi:10.1016/j.ifacol.2018.06.013}); J. Nonlinear Sci. 31, No. 2, Paper No. 46, 43 p. (2021; Zbl 1477.70031)], based on previous results [\textit{L. Bourdin} et al., Commun. Nonlinear Sci. Numer. Simul. 18, No. 4, 878--887 (2013; Zbl 1328.70013); \textit{F. Riewe}, ``Nonconservative Lagrangian and Hamiltonian mechanics'', Phys. Rev. E (3) 53, 1890--1899 (1996; \url{doi:10.1103/PhysRevE.53.1890})]. Beforehand, we compare the restricted fractional calculus of variations to the \textit{asymmetric fractional calculus of variations}, introduced in [\textit{J. Cresson} and \textit{P. Inizan}, J. Math. Anal. Appl. 385, No. 2, 975--997 (2012; Zbl 1250.49024)], and formulate the restricted calculus of variations using the \textit{discrete embedding} approach [\textit{L. Bourdin} et al., Appl. Numer. Math. 71, 14--23 (2013; Zbl 1284.65183); \textit{J. Cresson} and \textit{F. Pierret}, ``Continuous versus discrete structures I: discrete embeddings and ordinary differential equations'', Preprint, \url{arXiv:1411.7117}]. The two theories are designed to provide a variational formulation of dissipative systems, and are based on modeling irreversibility by means of fractional derivatives.
We make explicit the role of time-reversed solutions and causality in the restricted fractional calculus of variations and we propose an alternative formulation. Finally, we implement our results for a particular example and provide simulations, showing the constancy in time of the discrete conserved quantities arising from the Noether theorems.Asymptotic expansion of the solution of a singularly perturbed optimal control problem with elliptical control constraintshttps://zbmath.org/1487.490282022-07-25T18:03:43.254055Z"Danilin, A. R."https://zbmath.org/authors/?q=ai:danilin.aleksei-rufimovich"Shaburov, A. A."https://zbmath.org/authors/?q=ai:shaburov.aleksandr-aleksandrovichSummary: The main distinction of the present paper from our previous publications is that the integral part of the performance functional has a more general form and the control is subjected to elliptical rather than spherical constraints. We prove that, in the case of finitely many control type switching points, one can construct the asymptotics of the initial costate vector \(l_\varepsilon\) determining the form of the optimal control. The asymptotics is shown to be of power-law character.Discrete-continuous systems with parameters: method for improving control and parametershttps://zbmath.org/1487.490292022-07-25T18:03:43.254055Z"Rasina, Irina Viktorovna"https://zbmath.org/authors/?q=ai:rasina.irina-viktorovna"Guseva, Irina Sergeevna"https://zbmath.org/authors/?q=ai:guseva.irina-sergeevnaSummary: The paper presents one of the classes of controlled systems capable of changing their structure over time. The general name for such systems is hybrid. The article discusses so-called discrete-continuous systems containing parameters. This is a two-level hierarchical model: the upper level is represented by a discrete system, while at the lower level continuous controlled systems operate in turn. All these systems contain parameters and are linked by the functional.
In recent decades hybrid systems have been the subject of active research, both of the systems themselves and of a wide range of problems posed for them, using various methods that reflect the views of different scientific schools and directions; a most diverse mathematical apparatus appears in this research. In the present case a generalization of Krotov's sufficient optimality conditions is used. Their advantage is the possibility of preserving the classical assumptions about the properties of the objects that appear in the formulation of the optimal control problem. For the optimal control problem considered in this paper for discrete-continuous systems with parameters, an analogue of Krotov's sufficient optimality conditions is proposed. Two theorems are formulated. On their basis an easy-to-implement algorithm for improving control and parameters is built, and a theorem on its functional convergence is given. The algorithm contains a vector system of linear equations for the conjugate variables, which always has a solution; this guarantees a solution to the original problem. The algorithm is tested on an illustrative example, and calculations and graphs are presented.Second-order Lagrange multiplier rules in multiobjective optimal control of infinite dimensional systems under state constraints and mixed pointwise constraintshttps://zbmath.org/1487.490302022-07-25T18:03:43.254055Z"Nguyen Dinh, Tuan"https://zbmath.org/authors/?q=ai:nguyen-dinh.tuanSummary: We investigate a multiobjective optimal control problem, governed by a strongly continuous semigroup operator in an infinite dimensional separable Banach space, and with final-state constraints, pointwise pure state constraints and a mixed pointwise control-state constraint. Based on necessary optimality conditions obtained for an abstract multiobjective optimization framework, we establish a second-order Lagrange multiplier rule, of Fritz-John type, for local weak Pareto solutions of the problem under study.
As a consequence of the main result, we also derive a multiplier rule for a multiobjective optimal control model driven by a bilinear system, affine-linear in the control, with an objective function given by a continuous quadratic form.Method for solving bang-bang and singular optimal control problems using adaptive Radau collocationhttps://zbmath.org/1487.490312022-07-25T18:03:43.254055Z"Pager, Elisha R."https://zbmath.org/authors/?q=ai:pager.elisha-r"Rao, Anil V."https://zbmath.org/authors/?q=ai:rao.anil-vSummary: A method is developed for solving bang-bang and singular optimal control problems using adaptive Legendre-Gauss-Radau collocation. The method is divided into several parts. First, a structure detection method is developed that identifies switch times in the control and analyzes the corresponding switching function for segments where the solution is either bang-bang or singular. Second, after the structure has been detected, the domain is decomposed into multiple domains such that the multiple-domain formulation includes additional decision variables that represent the switch times in the optimal control. In domains classified as bang-bang, the control is set to either its upper or lower limit. In domains identified as singular, the objective function is augmented with a regularization term to avoid the singular arc. An iterative procedure is then developed for singular domains to obtain a control that lies in close proximity to the singular control. The method is demonstrated on four examples, three of which have a bang-bang and/or singular optimal control while the fourth has a smooth and nonsingular optimal control.
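The structure-detection step described above — locating bang-bang switch times at sign changes of the switching function and flagging near-zero stretches as singular-arc candidates — can be sketched as follows. This is a generic illustration, not the authors' adaptive Radau implementation; the sampled switching function and the tolerance are assumptions:

```python
import numpy as np

# Generic sketch of structure detection: locate switch times from sign
# changes of a sampled switching function phi(t), and flag intervals
# where phi stays near zero as candidate singular arcs. The threshold
# and the example switching function are illustrative only.

def detect_structure(t, phi, tol=1e-3):
    switches, singular = [], []
    for k in range(len(t) - 1):
        if abs(phi[k]) < tol and abs(phi[k + 1]) < tol:
            singular.append((t[k], t[k + 1]))   # phi ~ 0: singular candidate
        elif phi[k] * phi[k + 1] < 0:
            # Linear interpolation of the zero crossing: bang-bang switch.
            tk = t[k] - phi[k] * (t[k + 1] - t[k]) / (phi[k + 1] - phi[k])
            switches.append(tk)
    return switches, singular

t = np.linspace(0.0, 2.0, 200)
phi = 1.0 - t                  # single sign change at t = 1
sw, sing = detect_structure(t, phi)
```

In the paper's framework the detected switch times then become decision variables of the multiple-domain formulation; here they are simply returned as interpolated zero crossings.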
The results demonstrate that the method of this paper provides accurate solutions to problems whose solutions are either bang-bang or singular, in comparison with previously developed mesh refinement methods that are not tailored to nonsmooth and/or singular optimal control problems, and produces results equivalent to those obtained using such mesh refinement methods on optimal control problems whose solutions are smooth.Relationship between maximum principle and dynamic programming in presence of intermediate and final state constraintshttps://zbmath.org/1487.490322022-07-25T18:03:43.254055Z"Bokanowski, Olivier"https://zbmath.org/authors/?q=ai:bokanowski.olivier"Désilles, Anya"https://zbmath.org/authors/?q=ai:desilles.anya"Zidani, Hasnaa"https://zbmath.org/authors/?q=ai:zidani.hasnaaOptimal control problems with endpoint and intermediate state constraints are considered. The analysis of the sensitivity relations satisfied by the co-state arc of the Pontryagin maximum principle and the value function that associates the optimal value of the control problem to the initial time and state is provided. New sensitivity relations between the Pontryagin maximum principle and the dynamic programming principle for such a class of control problems are derived without any controllability assumptions, and hence without Lipschitz regularity of the value function. Instead of imposing the regularity assumptions of [\textit{F. H. Clarke} and \textit{R. Vinter}, in: Fermat days 85: Mathematics for optimization, Toulouse/France 1985, North-Holland Math. Stud. 129, 77--102 (1986; Zbl 0602.49019); SIAM J. Control Optim. 25, 1291--1311 (1987; Zbl 0642.49014); \textit{R. B. Vinter}, Math. Control Signals Syst. 1, No. 1, 97--105 (1988; Zbl 0656.49008)], following an idea introduced in [\textit{A. Altarovici} et al., ESAIM, Control Optim. Calc. Var. 19, No.
2, 337--357 (2013; Zbl 1273.35089)], an auxiliary control problem without state constraints is considered. The value function of this auxiliary control problem is an interesting tool that can be used to obtain the value function of the original problem and to establish the link between the Pontryagin maximum principle and the Hamilton-Jacobi-Bellman approach along an optimal path. The sensitivity relations hold for normal optimal trajectories and for abnormal trajectories as well. This provides a way to simplify the arguments in [\textit{F. H. Clarke} and \textit{R. B. Vinter}, SIAM J. Control Optim. 25, 1291--1311 (1987; Zbl 0642.49014); \textit{R. B. Vinter}, Math. Control Signals Syst. 1, No. 1, 97--105 (1988; Zbl 0656.49008)], and both relations can be obtained by considering the same perturbed control problem, without adding complex approximations of the dynamic equation by impulse systems. Moreover, an important step in the proof of the main results establishes the sensitivity relations for a control problem with intermediate costs but without any constraint.
Reviewer: Lisa Morhaim (Paris)A maximum principle for a stochastic control problem with multiple random terminal timeshttps://zbmath.org/1487.490332022-07-25T18:03:43.254055Z"Cordoni, Francesco"https://zbmath.org/authors/?q=ai:cordoni.francesco-giuseppe"Di Persio, Luca"https://zbmath.org/authors/?q=ai:di-persio.lucaThe authors consider the stochastic differential system \(dX^{i;0}(t)=\mu ^{i;0}(t,X^{i;0}(t),\alpha ^{i;0}(t))dt+\sigma ^{i;0}(t,X^{i;0}(t),\alpha ^{i;0}(t))dW^{i}(t)\), \(i=1,\ldots ,n\), where \(W^{i}(t)\) is a standard Brownian motion, \(\mu ^{i;0},\sigma ^{i;0}:[0,T]\times \mathbb{R}\times A\rightarrow \mathbb{R}\) are Lipschitz continuous coefficients with at most linear growth, \(\alpha ^{i;0}\) is a control which belongs to \(\mathcal{A} ^{i}=\{\alpha ^{i;0}\in L_{ad}^{2}([0,T];\mathbb{R}):\alpha ^{i;0}(t)\in A^{i}\) a.e. \(t\in \lbrack 0,T]\}\), \(A^{i}\) being a convex and closed subset of \(\mathbb{R}\), and \(L_{ad}^{2}([0,T];\mathbb{R})\) being the space of \(( \mathcal{F}_{t})_{t\in \lbrack 0,T]}\)-adapted processes such that \(\mathbb{E} \int_{0}^{T}\left\vert \alpha ^{i;0}(t)\right\vert ^{2}dt<+\infty \). The authors write this system in vectorial form as \(dX^{0}(t)=B^{0}(t,X^{0}(t),\alpha ^{0}(t))dt+\Sigma ^{0}(t,X^{0}(t),\alpha ^{0}(t))dW(t)\) and they add the initial condition \(X^{0}(0)=x_{0}^{0}\).
They further introduce the cost functional \(J(x,\alpha )=\mathbb{E}\left[ \int_{0}^{\widehat{\tau } ^{1}}L^{0}(t,X^{0}(t),\alpha ^{0}(t))dt+G^{0}(\widehat{\tau }^{1},X^{0}( \widehat{\tau }^{1}))\right] \), where \(L^{0}:[0,T]\times \mathbb{R}^{n}\times A^{0}\rightarrow \mathbb{R}\) and \(G^{0}:[0,T]\times \mathbb{R} ^{n}\rightarrow \mathbb{R}\) are measurable and continuous functions for which there exist two positive constants \(K,k\) such that, for any \(t\in \lbrack 0,T] \), \(x\in \mathbb{R}^{n}\) and \(a\in A^{0}\), it holds that \(\left\vert L^{0}(t,x,a)\right\vert \leq K(1+\left\vert x\right\vert ^{k}+\left\vert a\right\vert ^{k})\) and \(\left\vert G^{0}(t,x)\right\vert \leq K(1+\left\vert x\right\vert ^{k})\), \(\widehat{\tau }^{1}\) being a vector of stopping times. The authors prove a necessary and a sufficient maximum principle for such systems. Finally, considering the system \(dX(t)=B(t,X(t),\alpha (t))dt+\Sigma (t,X(t))dW(t)\), where both the drift and the volatility coefficients are supposed to be linear, with running and terminal cost functionals which are suitable quadratic weighted averages of the distance from the stopping boundaries, the authors prove the existence of an optimal control for which they give an explicit formula.
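A minimal Euler-Maruyama sketch of one controlled component stopped at a first exit time, standing in for the random terminal time, may clarify the setting; the dynamics, feedback and stopping band below are illustrative choices, not the paper's model:

```python
import numpy as np

# Euler-Maruyama sketch of one controlled component
# dX(t) = mu(t, X, a) dt + sigma(t, X, a) dW(t), stopped at the first
# exit from a band -- a stand-in for the random terminal time tau^1.
# Dynamics, control and band are illustrative, not from the paper.

def simulate(mu, sigma, alpha, x0, T, n, band, rng):
    dt = T / n
    x, t = x0, 0.0
    for _ in range(n):
        a = alpha(t, x)
        x += mu(t, x, a) * dt + sigma(t, x, a) * np.sqrt(dt) * rng.standard_normal()
        t += dt
        if abs(x) >= band:        # stopping time: first exit from (-band, band)
            break
    return t, x

rng = np.random.default_rng(0)
tau, x_tau = simulate(
    mu=lambda t, x, a: -x + a,    # mean-reverting drift steered by the control
    sigma=lambda t, x, a: 0.2,
    alpha=lambda t, x: -0.5 * x,  # simple linear feedback
    x0=0.1, T=1.0, n=1000, band=5.0, rng=rng,
)
```

With these mean-reverting dynamics the band is not reached, so the simulation runs to the horizon; shrinking the band makes the stopping time genuinely random, which is the situation the maximum principle above addresses.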
Reviewer: Alain Brillard (Riedisheim)Pseudospectral optimal train controlhttps://zbmath.org/1487.490482022-07-25T18:03:43.254055Z"Goverde, Rob M. P."https://zbmath.org/authors/?q=ai:goverde.rob-m-p"Scheepmaker, Gerben M."https://zbmath.org/authors/?q=ai:scheepmaker.gerben-m"Wang, Pengling"https://zbmath.org/authors/?q=ai:wang.penglingSummary: In the last decade, pseudospectral methods have become popular for solving optimal control problems. Pseudospectral methods do not need prior knowledge about the optimal control structure and are thus very flexible for problems with complex path constraints, which are common in optimal train control, or train trajectory optimization. Practical optimal train control problems are nonsmooth with discontinuities in the dynamic equations and path constraints corresponding to gradients and speed limits varying along the track. Moreover, optimal train control problems typically include singular solutions with a vanishing Hessian of the associated Hamiltonian. These characteristics make these problems hard to solve and also lead to convergence issues in pseudospectral methods. We propose a computational framework that connects pseudospectral methods with Pontryagin's Maximum Principle allowing flexible computations, verification and validation of the numerical approximations, and improvements of the continuous solution accuracy. We apply the framework to two basic problems in optimal train control: minimum-time train control and energy-efficient train control, and consider cases with short-distance regional trains and long-distance intercity trains for various scenarios including varying gradients, speed limits, and scheduled running time supplements. 
The framework confirms the flexibility of the pseudospectral method with regard to state, control and mixed algebraic inequality path constraints, and is able to identify conditions that lead to inconsistencies between the necessary optimality conditions and the numerical approximations of the states, costates, and controls. A new approach is proposed to correct the discrete approximations by incorporating implicit equations from the optimality conditions. In particular, the issue of oscillations in the singular solution for energy-efficient driving as computed by the pseudospectral method has been solved.Optimal orientation control of a space vehicle with restrictions on the control and phase variableshttps://zbmath.org/1487.490582022-07-25T18:03:43.254055Z"Levskii, M. V."https://zbmath.org/authors/?q=ai:levskii.m-vSummary: The problem of the optimal control of the reorientation of a spacecraft (SC) from an arbitrary initial angular position to the given final angular position is solved analytically in the presence of ellipsoidal constraints on the phase variables and control functions (the angular velocity and force moment are limited). The turning time is minimized. The case when the maximal allowable kinetic energy of rotation is a significant limitation is considered. The construction of the optimal control of a turn is based on quaternion variables and models. It is shown that during the optimal turn, the moment of forces is parallel to a straight line that is stationary in inertial space, and when the SC rotates, the direction of the kinetic momentum is constant relative to the inertial coordinate system. Analytical equations and relations for finding the optimal control program are written out. The calculation formulas are given for determining the time characteristics of the maneuver and calculating the duration of acceleration and deceleration.
For an axisymmetric SC, the posed optimal control problem is solved completely: the control variables are obtained as explicit functions of time, together with relations for calculating the key parameters of the control law. A numerical example and the results of mathematical modeling of the motion of an SC under the optimal control are given, demonstrating the practical feasibility of the developed method for controlling the orientation of an SC.On Hamilton's principle for discrete and continuous systems: a convolved action principlehttps://zbmath.org/1487.700792022-07-25T18:03:43.254055Z"Kalpakides, Vassilios K."https://zbmath.org/authors/?q=ai:kalpakides.vassilios-k"Charalambopoulos, Antonios"https://zbmath.org/authors/?q=ai:charalambopoulos.antoniosSummary: In an attempt to generalize Hamilton's principle, an action functional is proposed which, unlike the standard version of the principle, accounts properly for all initial data and the possible presence of dissipation. To this end, the convolution is used instead of the \(L^2\) inner product so as to eliminate the undesirable end temporal condition of Hamilton's principle. Also, fractional derivatives are used to account for dissipation, and the Dirac delta function is exploited so that the initial velocity can be inherently set into the variational setting. The proposed approach applies to both finite- and infinite-dimensional systems.Optimality conditions in terms of contingent epiderivatives for strict local Pareto minima in vector optimization problems with constraintshttps://zbmath.org/1487.905972022-07-25T18:03:43.254055Z"Van Su, Tran"https://zbmath.org/authors/?q=ai:su.tran-van"Hang, Dinh Dieu"https://zbmath.org/authors/?q=ai:hang.dinh-dieuThe paper addresses primal and dual necessary and sufficient optimality conditions for strict local Pareto minima (also known as isolated local Pareto minima) of a nonsmooth constrained vector optimization problem.
Specifically, the feasible set of the problem is defined by set, cone and equality constraints, and the objective and constraint functions are assumed to be steady at the nominal point. The necessary optimality conditions are formulated in terms of contingent derivatives, contingent epiderivatives and contingent hypoderivatives of the involved functions. The sufficient ones are derived from them in the finite-dimensional setting by considering stable functions and other suitable assumptions. Some illustrative examples and comparisons with related results from the literature are also provided.
Reviewer: César Gutiérrez (Valladolid)Mathematical modeling of the adaptive immune responses in the early stage of the HBV infectionhttps://zbmath.org/1487.920082022-07-25T18:03:43.254055Z"Allali, Karam"https://zbmath.org/authors/?q=ai:allali.karam"Meskaf, Adil"https://zbmath.org/authors/?q=ai:meskaf.adil"Tridane, Abdessamad"https://zbmath.org/authors/?q=ai:tridane.abdessamadSummary: The aim of this paper is to study the early stage of HBV infection and the impact of delay in the infection process on the adaptive immune response, which includes cytotoxic T-lymphocytes and antibodies. In this stage, the growth of the healthy hepatocyte cells is logistic, while the growth of the infected ones is linear. To investigate the role of treatment at this stage, we also consider two types of treatment: interferon-\(\alpha\) (IFN) and nucleoside analogues (NAs). To find the best strategy for using this treatment, an optimal control approach is developed to explore the possibility of achieving a functional cure for HBV.Displacement field construction based on a discrete model in image processing problemshttps://zbmath.org/1487.940242022-07-25T18:03:43.254055Z"Kotina, Elena Dmitrievna"https://zbmath.org/authors/?q=ai:kotina.elena-dmitrievna"Leonova, Ekaterina Borisovna"https://zbmath.org/authors/?q=ai:leonova.ekaterina-borisovna"Ploskikh, Viktor Aleksandrovich"https://zbmath.org/authors/?q=ai:ploskikh.viktor-aleksandrovichSummary: The problem of a displacement field calculation for an image sequence based on a discrete model is solved. Algorithms for velocity field (displacement field) construction are in demand in various image processing tasks. These methods are used in motion detection, object movement tracking, analysis of complex images, movement correction of medical diagnostic images in nuclear medicine, radiology, etc. An optimization approach to the displacement field construction based on a discrete model is developed in the paper.
The approach explores the possibility of taking into account the brightness change along the trajectories of the system. A linear model is considered. Directed optimization methods based on the analytical representation of the functional gradient are constructed to search for unknown parameters. The algorithm for displacement field construction with image partitioning into regions (neighborhoods) is proposed. This algorithm can be used to process a variety of image sequences. The results of the algorithm operation on test radionuclide images are presented.
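As a generic illustration of displacement estimation on a region (not the authors' algorithm; the brute-force integer-shift search and the synthetic frames are assumptions), one can pick the shift minimizing the sum of squared brightness differences between a patch of one frame and shifted patches of the next:

```python
import numpy as np

# Least-squares sketch of displacement estimation on a local region:
# choose the integer shift (u, v) minimizing the sum of squared
# brightness differences between a patch of frame 1 and shifted
# patches of frame 2. Brute-force search; synthetic frames only.

def displacement(f1, f2, max_shift=3):
    best, best_uv = np.inf, (0, 0)
    h, w = f1.shape
    m = max_shift
    for u in range(-m, m + 1):
        for v in range(-m, m + 1):
            # Compare the interior region so every candidate shift is valid.
            a = f1[m:h - m, m:w - m]
            b = f2[m + u:h - m + u, m + v:w - m + v]
            err = float(np.sum((a - b) ** 2))
            if err < best:
                best, best_uv = err, (u, v)
    return best_uv

# Frame 2 is frame 1 shifted by (1, 2); the estimator should recover it.
rng = np.random.default_rng(1)
f1 = rng.random((32, 32))
f2 = np.roll(np.roll(f1, 1, axis=0), 2, axis=1)
uv = displacement(f1, f2)
```

Applying such an estimator per region (neighborhood) yields a piecewise-constant displacement field, in the spirit of the image-partitioning algorithm described above; gradient-based methods replace the brute-force search when subpixel accuracy is needed.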