Recent zbMATH articles in MSC 91https://zbmath.org/atom/cc/912023-01-20T17:58:23.823708ZWerkzeugFrom harmony to eHarmony: Charles Fourier, social science, and the management of lovehttps://zbmath.org/1500.010062023-01-20T17:58:23.823708Z"Hsiung, Hansun"https://zbmath.org/authors/?q=ai:hsiung.hansunThe paper provides a survey of some principles on which the social theories of Charles Fourier (1772--1837) were based. A main focus was on the management of human passions, especially of love. To this end, Fourier derived a rudimentary mathematical procedure for matching characteristic properties of different persons. As an example, he demonstrated how height and humor, both quantified by appropriate linear scales, could best be matched between a man and a woman, under the assumption that considerable contrasts (small and large height, good-humored and serious character) would lead to a strong attraction.
Reviewer: Hans Fischer (Eichstätt)From truth degree comparison games to sequents-of-relations calculi for Gödel logichttps://zbmath.org/1500.030102023-01-20T17:58:23.823708Z"Fermüller, Christian"https://zbmath.org/authors/?q=ai:fermuller.christian-g"Lang, Timo"https://zbmath.org/authors/?q=ai:lang.timo"Pavlova, Alexandra"https://zbmath.org/authors/?q=ai:pavlova.aleksandra-mikhailovnaGödel logic is studied from a game-semantic point of view. Among the infinitely many fuzzy logics, i.e., logics where logical connectives are interpreted in the real unit interval, Gödel logic is the only one in which the comparison of the truth values of two propositions ultimately reduces to the order of these values. A truth degree comparison game is introduced. The first player looks for support for the claim that the truth value of proposition \(F\) is less than or equal to that of proposition \(G\), and the second player attempts to disprove this claim. This game is lifted from individual truth values to a more general level of validity: to comparison claims that hold under every interpretation. The most important new concept is that of a disjunctive state; exploiting it leads to a disjunctive winning strategy. Disjunctive winning strategies, in turn, are shown to correspond to proofs in an analytic proof system called the sequents-of-relations calculus, introduced by \textit{M. Baaz} and \textit{C. G. Fermüller} [Lect. Notes Comput. Sci. 1617, 36--50 (1999; Zbl 0931.03066)].
Reviewer: Esko Turunen (Tampere)Gini index on generalized \(r\)-partitionshttps://zbmath.org/1500.050102023-01-20T17:58:23.823708Z"Mansour, Toufik"https://zbmath.org/authors/?q=ai:mansour.toufik"Schork, Matthias"https://zbmath.org/authors/?q=ai:schork.matthias"Shattuck, Mark"https://zbmath.org/authors/?q=ai:shattuck.mark-a"Wagner, Stephan"https://zbmath.org/authors/?q=ai:wagner.stephan-gSummary: The Gini index of a set partition \(\pi\) of size \(n\) is defined as \(1-\frac{\delta(\pi)}{n^2}\), where \(\delta(\pi)\) is the sum of the squares of the block cardinalities of \(\pi\). In this paper, we study the distribution of the \(\delta\) statistic on various kinds of set partitions in which the first \(r\) elements are required to lie in distinct blocks. In particular, we derive the generating function for the distribution of \(\delta\) on a generalized class of \(r\)-partitions wherein contents-ordered blocks are allowed and elements meeting certain restrictions may be colored. As a consequence, we obtain simple explicit formulas for the average \(\delta\) value, equivalently for the average Gini index, in all \(r\)-partitions, \(r\)-permutations and \(r\)-Lah distributions of a given size. Finally, combinatorial proofs can be found for these formulas in the case \(r=0\) corresponding to the Gini index on classical set partitions, permutations and Lah distributions.Role coloring bipartite graphshttps://zbmath.org/1500.050202023-01-20T17:58:23.823708Z"Pandey, Sukanya"https://zbmath.org/authors/?q=ai:pandey.sukanya"Sahlot, Vibha"https://zbmath.org/authors/?q=ai:sahlot.vibhaThe \(k\)-role colouring problem has as input an undirected graph \(G\) and a positive integer \(k\). The question is then whether there is a surjective function \(\alpha: V(G)\to\{1,2,\ldots,k\}\) such that if \(\alpha(u)=\alpha(v)\) then, letting \(N(w)\) denote the neighbourhood of a vertex \(w\) as usual, we have \(\alpha(N(u))=\alpha(N(v))\).
This notion apparently has applications in social science. Previous work by various groups of authors had shown that for general graphs this problem is polynomial-time soluble when \(k=1\) and is NP-complete for \(k\geq 2\).
The aim of the paper under review is to study what happens for the class of bipartite graphs. The \(k\)-role colouring problem is trivial for \(k\leq 2\) for this class. The main contribution of the paper is to show that the \(k\)-role colouring problem is NP-complete on bipartite graphs for fixed \(k\geq 3\). The paper also gives a characterisation of those so-called bipartite chain graphs which are 3-role-colourable and shows that 2-role colouring is NP-complete on the class of ``almost bipartite graphs''.
Reviewer: David B. Penman (Colchester)Examples of edge critical graphs in peg solitairehttps://zbmath.org/1500.050412023-01-20T17:58:23.823708Z"Beeler, Robert A."https://zbmath.org/authors/?q=ai:beeler.robert-a"Gray, Aaron D."https://zbmath.org/authors/?q=ai:gray.aaron-dSummary: Peg solitaire is a game in which pegs are placed in every hole but one and the player jumps over pegs along rows or columns to remove them. Usually, the goal is to remove all but one peg. In [\textit{R. A. Beeler} and \textit{D. P. Hoilman}, Discrete Math. 311, No. 20, 2198--2202 (2011; Zbl 1230.05211)], this game is generalized to graphs. In this paper, we examine graphs in which any single edge addition changes solvability. In order to do this, we introduce a family of graphs and provide necessary and sufficient conditions for the solvability for this family. We show that infinite subsets of this family are edge critical. We also determine the maximum number of pegs that can be left on this family with the condition that a jump is made whenever possible. Finally, we give a list of graphs on eight vertices that are edge critical.
For the entire collection see [Zbl 1495.05003].An exact bound on the number of chips of parallel chip-firing games that stabilizehttps://zbmath.org/1500.050422023-01-20T17:58:23.823708Z"Bu, Alan"https://zbmath.org/authors/?q=ai:bu.alan"Choi, Yunseo"https://zbmath.org/authors/?q=ai:choi.yunseo"Xu, Max"https://zbmath.org/authors/?q=ai:xu.max-wenqiangSummary: \textit{P. M. Kominers} and \textit{S. D. Kominers} [ibid. 95, No. 1, 9--13 (2010; Zbl 1230.05212)] showed that any parallel chip-firing game on \(G(V, E)\) with at least \(4|E|-|V|\) chips stabilizes with an eventual period of length 1. We make this bound exact: we prove that any parallel chip-firing game with more than \(3|E|-|V|\) or fewer than \(|E|\) chips must stabilize, and that for any number of chips within the range \([|E|, 3|E|-|V|]\), there exists some parallel chip-firing game with that many chips that does not stabilize. In addition, as do Kominers and Kominers [loc. cit.], we provide an upper bound on the number of rounds before the game stabilizes.Burning spidershttps://zbmath.org/1500.050432023-01-20T17:58:23.823708Z"Das, Sandip"https://zbmath.org/authors/?q=ai:das.sandip-kr|das.sandip-kumar"Dev, Subhadeep Ranjan"https://zbmath.org/authors/?q=ai:dev.subhadeep-ranjan"Sadhukhan, Arpan"https://zbmath.org/authors/?q=ai:sadhukhan.arpan"Sahoo, Uma kant"https://zbmath.org/authors/?q=ai:sahoo.uma-kant"Sen, Sagnik"https://zbmath.org/authors/?q=ai:sen.sagnikSummary: Graph burning is a graph process modeling the spread of social contagion. Initially all the vertices of a graph \(G\) are unburned. At each step an unburned vertex is put on fire and the fire from burned vertices of the previous step spreads to their adjacent unburned vertices. This process continues until all vertices are burned. The burning number \(b(G)\) of the graph is the minimum number of steps required to burn all the vertices in the graph. The burning number conjecture by \textit{A. Bonato} et al. [Internet Math. 12, No.
1--2, 85--100 (2016; Zbl 1461.05193)] states that for a connected graph \(G\) of order \(n\), its burning number satisfies \(b(G)\leq\lceil\sqrt{n}\rceil\). It is easy to observe that in order to burn a graph it is enough to burn a spanning tree of it. Hence it suffices to prove that for any tree \(T\) of order \(n\), its burning number satisfies \(b(T)\leq\lceil\sqrt{n}\rceil\). A spider \(S\) is a tree with one vertex of degree at least 3 and all other vertices of degree at most 2. Here we prove that for any spider \(S\) of order \(n\), its burning number satisfies \(b(S)\leq\lceil\sqrt{n}\rceil\).
For the entire collection see [Zbl 1382.68013].Local regularity estimates for general discrete dynamic programming equationshttps://zbmath.org/1500.350642023-01-20T17:58:23.823708Z"Arroyo, Ángel"https://zbmath.org/authors/?q=ai:arroyo.angel"Blanc, Pablo"https://zbmath.org/authors/?q=ai:blanc.pablo"Parviainen, Mikko"https://zbmath.org/authors/?q=ai:parviainen.mikkoSummary: We obtain an analytic proof for an asymptotic Hölder estimate and Harnack's inequality for solutions to a discrete dynamic programming equation. The results also generalize to functions satisfying Pucci-type inequalities for discrete extremal operators. Thus the results cover a quite general class of equations.Power mixture forward performance processeshttps://zbmath.org/1500.352802023-01-20T17:58:23.823708Z"Avanesyan, Levon"https://zbmath.org/authors/?q=ai:avanesyan.levon"Sircar, Ronnie"https://zbmath.org/authors/?q=ai:sircar.ronnieSummary: We consider the forward investment problem in market models where the stock prices are continuous semimartingales adapted to a Brownian filtration. We construct a broad class of forward performance processes with initial conditions of power mixture type, \(u(x) = \int_{\mathbb{I}} \frac{x^{1-\gamma}}{1-\gamma}\nu(d\gamma)\). We proceed to define and fully characterize two-power mixture forward performance processes with constant risk aversion coefficients in the interval \((0, 1)\), and derive properties of two-power mixture forward performance processes when the risk aversion coefficients are continuous stochastic processes. Finally, we discuss the problem of managing an investment pool of two investors, whose respective preferences evolve as power forward performance processes.Exploring the gender gap in a closed market niche.
Explicit solutions of an ODE modelhttps://zbmath.org/1500.370502023-01-20T17:58:23.823708Z"Sifuentes, David"https://zbmath.org/authors/?q=ai:sifuentes.david"Téllez, Iván"https://zbmath.org/authors/?q=ai:tellez.ivan"Zazueta, Jorge"https://zbmath.org/authors/?q=ai:zazueta.jorgeThe paper builds on the work by \textit{E. Accinelli} and \textit{J. Zazueta} [Exploring the gender gap in the labor market: A sex-disaggregated view, The Social Science Journal, (2021) \url{https://doi.org/10.1080/03623319.2021.1905398}]
by simplifying the model therein and offering analytical solutions to the system of ordinary differential equations describing the growth of the numbers of males and females employed in the labor market. The model may be interpreted as a two-dimensional version of the logistic growth model where the growth term for the employee population of a given sex is a linear combination of the numbers of both men and women employed (the coefficients of the terms represent hiring biases towards their own sex). The overall logistic growth rate of the total employed population equals one, which is an unnecessary model constraint and may easily be dropped by assuming a constant non-unity growth rate. The analytical results for the total employment (in Section 3) follow directly from the logistic growth model and might have been introduced without an elaborate proof of their own.
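The structure just described can be illustrated with a minimal numerical sketch. The explicit functional form, the coefficient names, the capacity \(k\), and the Euler scheme below are all assumptions chosen for illustration; they are not the authors' Equation (1).

```python
# Illustrative sketch (NOT the authors' exact Equation (1)): a two-sex
# logistic model in which each sex's growth term is a linear combination
# of the currently employed men and women, damped by a shared logistic
# factor.  All parameter names and the explicit Euler scheme are assumed.
def simulate(f0, m0, a_ff, a_fm, a_mf, a_mm, k=1.0, dt=0.001, steps=20000):
    """Euler integration of
         f' = (a_ff*f + a_fm*m) * (1 - (f + m)/k)
         m' = (a_mf*f + a_mm*m) * (1 - (f + m)/k)
    and return the final employment levels (f, m)."""
    f, m = f0, m0
    for _ in range(steps):
        damp = 1.0 - (f + m) / k
        f, m = (f + dt * (a_ff * f + a_fm * m) * damp,
                m + dt * (a_mf * f + a_mm * m) * damp)
    return f, m
```

With column sums of the bias coefficients equal to one, the total \(f+m\) follows an ordinary logistic curve with unit growth rate, matching the constraint noted above, while the eventual split between the sexes depends on the bias coefficients and the initial conditions.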
The added value of the work lies in the analytical solution of the model, which also allows offering an analytical expression for the gender gap in employment. Based on the proposed expressions, the authors investigate conditions for the gender gap to close asymptotically (the ``equality'' condition (15)). However, their main result (Theorem 5.3) is not technically correct, as there exists an indefinite number of paths towards equality. The uniqueness of such solutions may only be established in the sense that any solution curve leading to equality must, at some point in time, cross a combination of levels of men and women employed similar to that established in Theorem 5.2. That point in time, however, is arbitrary; hence, an indefinite number of solution curves satisfy the equality condition. Technically, the problem appears in the last line of the proof of Theorem 5.3. The unfortunate formulation of Theorem 5.3, however, does not undermine the substantive conclusion: only a specific set of initial conditions may lead to equality under any given set of model parameters. In other words, with model parameters and initial conditions set arbitrarily, the model is unlikely to lead to equality in the labor market.
Here, we come to another limitation of the model regarding its relevance to real-life employment processes. In reality, employment sectors go through numerous growth and shrinkage phases, and those processes involve stochasticity. That might completely change the view of the equality implications of the model parameters. Assuming, for example, an indefinite number of economic swings and that layoffs happen independently of sex, one may note that the long-term composition of the market will be determined by the first multipliers in Equation (1) and not by the logistic constraint terms. On the other hand, the authors might have deepened their analysis by allowing for `overemployment' solutions with \(f+m>k\), i.e., describing the labor-shrinking phases within their own model. This missing part of the work may be considered in future work.
As a side note, it is worth pointing out the possible usefulness of the model proposed in the paper beyond the modeling of employment dynamics. One may, for example, consider a logistic model for a population composed of two or more traits that may reproduce one another through mutations.
Reviewer: Dalkhat M. Ediev (Cherkessk)Turnpike phenomenon and symmetric optimization problemshttps://zbmath.org/1500.490012023-01-20T17:58:23.823708Z"Zaslavski, Alexander J."https://zbmath.org/authors/?q=ai:zaslavski.alexander-jPublisher's description: ``Written by a leading expert in turnpike phenomenon, this book is devoted to the study of symmetric optimization, variational and optimal control problems in infinite dimensional spaces and turnpike properties of their approximate solutions. The book presents a systematic and comprehensive study of general classes of problems in optimization, calculus of variations, and optimal control with symmetric structures from the viewpoint of the turnpike phenomenon. The author establishes generic existence and well-posedness results for optimization problems and individual (not generic) turnpike results for variational and optimal control problems. Rich in impressive theoretical results, the author presents applications to crystallography and discrete dispersive dynamical systems which have prototypes in economic growth theory.
This book will be useful for researchers interested in optimal control, calculus of variations, turnpike theory and their applications, such as mathematicians, mathematical economists, and researchers in crystallography, to name just a few.''
Reviewer: Costică Moroşanu (Iaşi)The gambler's ruin problem and quantum measurementhttps://zbmath.org/1500.810032023-01-20T17:58:23.823708Z"Debbasch, Fabrice"https://zbmath.org/authors/?q=ai:debbasch.fabriceSummary: The dynamics of a single microscopic or mesoscopic non-quantum system interacting with a macroscopic environment is generally stochastic. In the same way, the reduced density operator of a single quantum system interacting with a macroscopic environment is \textit{a priori} a stochastic variable, and decoherence describes only the average dynamics of this variable, not its fluctuations. It is shown that a general unbiased quantum measurement can be reformulated as a gambler's ruin problem where the game is a martingale. Born's rule then appears as a direct consequence of the optional stopping theorem for martingales. Explicit computations are worked out in detail on a specific simple example.
For the entire collection see [Zbl 1466.81003].Quantum cognitive triad: semantic geometry of context representationhttps://zbmath.org/1500.810202023-01-20T17:58:23.823708Z"Surov, Ilya A."https://zbmath.org/authors/?q=ai:surov.ilya-aSummary: The paper describes an algorithm for semantic representation of behavioral contexts relative to a dichotomic decision alternative. The contexts are represented as quantum qubit states in two-dimensional Hilbert space visualized as points on the Bloch sphere. The azimuthal coordinate of this sphere functions as a one-dimensional semantic space in which the contexts are accommodated according to their subjective relevance to the considered uncertainty. The contexts are processed in triples defined by knowledge of a subject about a binary situational factor. The obtained triads of context representations function as a stable cognitive structure, at the same time allowing a subject to model probabilistically-variative behavior. The developed algorithm illustrates an approach for quantitative subjectively-semantic modeling of behavior based on the conceptual and mathematical apparatus of quantum theory.Continuous variable quantum steganography protocol based on quantum identityhttps://zbmath.org/1500.810242023-01-20T17:58:23.823708Z"Qu, Zhiguo"https://zbmath.org/authors/?q=ai:qu.zhiguo"Jiang, Leiming"https://zbmath.org/authors/?q=ai:jiang.leiming"Sun, Le"https://zbmath.org/authors/?q=ai:sun.le"Wang, Mingming"https://zbmath.org/authors/?q=ai:wang.mingming"Wang, Xiaojun"https://zbmath.org/authors/?q=ai:wang.xiaojun.2(no abstract)Optimization and coordination in a service-constrained supply chain with the bidirectional option contract under conditional value-at-riskhttps://zbmath.org/1500.900062023-01-20T17:58:23.823708Z"Zhao, Han"https://zbmath.org/authors/?q=ai:zhao.han"Sun, Bangdong"https://zbmath.org/authors/?q=ai:sun.bangdong"Wang,
Hui"https://zbmath.org/authors/?q=ai:wang.hui.4|wang.hui.10|wang.hui.8|wang.hui.13|wang.hui.18|wang.hui.17|wang.hui.20|wang.hui.7|wang.hui.6|wang.hui.15|wang.hui.12|wang.hui.40|wang.hui.11|wang.hui.9|wang.hui.16|wang.hui|wang.hui.14|wang.hui.22"Song, Shiji"https://zbmath.org/authors/?q=ai:song.shiji"Zhang, Yuli"https://zbmath.org/authors/?q=ai:zhang.yuli"Wang, Liejun"https://zbmath.org/authors/?q=ai:wang.liejunSummary: This paper investigates the optimal operational decisions for the risk-neutral supplier and the risk-averse retailer in the supply chain with a service requirement under the conditional value-at-risk. Specifically, the optimal order and production policies with and without the bidirectional option contract are derived. Further, this paper shows that the optimal conditional value-at-risk of the retailer is non-increasing in the service requirement, while the optimal expected profit of the supplier is non-decreasing in the service requirement. When the service requirement is binding, the optimal conditional value-at-risk of the retailer is increasing in the risk aversion, while the optimal expected profit of the supplier is decreasing in the risk aversion. In addition, it is shown that with the bidirectional option contract, the service level provided by the retailer is equivalent to (higher than) that without them when the service requirement is (not) binding. Finally, this paper demonstrates that the bidirectional option contract can mitigate the effect of risk aversion on the retailer's order quantity, benefit both the retailer and supplier, and improve the performance of the supply chain. 
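For reference, the risk measure used as the retailer's criterion has a standard empirical counterpart: the conditional value-at-risk at level \(\alpha\) of a loss sample is the average of the worst \(1-\alpha\) fraction of the losses. The sketch below is this generic estimator, not the paper's supply-chain model.

```python
import math

def cvar(losses, alpha):
    """Empirical conditional value-at-risk at level alpha: the average of
    the worst (1 - alpha) fraction of a loss sample.  A generic estimator
    for illustration, not the optimization model of the paper."""
    n = len(losses)
    k = max(1, math.ceil((1 - alpha) * n))   # size of the tail to average
    tail = sorted(losses, reverse=True)[:k]  # worst k losses
    return sum(tail) / k
```

Since it averages only the worst outcomes, the estimator is never below the plain sample mean, which is what makes it a conservative criterion for a risk-averse decision maker.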
Numerical experiments are conducted to further confirm our results.Dynamic Katz and related network measureshttps://zbmath.org/1500.900082023-01-20T17:58:23.823708Z"Arrigo, Francesca"https://zbmath.org/authors/?q=ai:arrigo.francesca"Higham, Desmond J."https://zbmath.org/authors/?q=ai:higham.desmond-j"Noferini, Vanni"https://zbmath.org/authors/?q=ai:noferini.vanni"Wood, Ryan"https://zbmath.org/authors/?q=ai:wood.ryanSummary: We study walk-based centrality measures for time-ordered network sequences. For the case of standard dynamic walk-counting, we show how to derive and compute centrality measures induced by analytic functions. We also prove that dynamic Katz centrality, based on the resolvent function, has the unique advantage of allowing computations to be performed entirely at the node level. We then consider two distinct types of backtracking and develop a framework for capturing dynamic walk combinatorics when either or both is disallowed.Asset liability management for the bank of Uganda defined benefits scheme by stochastic programminghttps://zbmath.org/1500.900362023-01-20T17:58:23.823708Z"Mukalazi, Herbert"https://zbmath.org/authors/?q=ai:mukalazi.herbert"Larsson, Torbjörn"https://zbmath.org/authors/?q=ai:larsson.torbjorn"Kasozi, Juma"https://zbmath.org/authors/?q=ai:kasozi.juma"Mayambala, Fred"https://zbmath.org/authors/?q=ai:mayambala.fredSummary: We develop a model for asset liability management of pension funds, which is solved by stochastic programming techniques. Using data provided by the Bank of Uganda Defined Benefits Scheme, which is closed to new members, we obtain the optimal investment policies. Randomly sampled scenario trees using the mean and covariance structure of the return distribution are used for generating the coefficients of the stochastic program. Liabilities are modelled by remaining years of life expectancy and guaranteed period for monthly pension. 
We obtain the funding situation of the scheme at each stage, and the terminal cash injection by the sponsor required to meet all future benefit payments, in the absence of contributing members.Solving continuous set covering problems by means of semi-infinite optimization. With an application in product portfolio optimizationhttps://zbmath.org/1500.900772023-01-20T17:58:23.823708Z"Krieg, Helene"https://zbmath.org/authors/?q=ai:krieg.helene"Seidel, Tobias"https://zbmath.org/authors/?q=ai:seidel.tobias"Schwientek, Jan"https://zbmath.org/authors/?q=ai:schwientek.jan"Küfer, Karl-Heinz"https://zbmath.org/authors/?q=ai:kufer.karl-heinzSummary: This article introduces the new class of continuous set covering problems. These optimization problems result, among others, from product portfolio design tasks with products depending continuously on design parameters and the requirement that the product portfolio satisfies customer specifications that are provided as a compact set. We show that the problem can be formulated as a semi-infinite optimization problem (SIP). Yet, the inherent non-smoothness of the semi-infinite constraint function hinders the straightforward application of standard methods from semi-infinite programming. We suggest an algorithm combining adaptive discretization of the infinite index set and replacement of the non-smooth constraint function by a two-parametric smoothing function. Under few requirements, the algorithm converges, and the distance of a current iterate can be bounded in terms of the discretization and smoothing error.
By means of a numerical example from product portfolio optimization, we demonstrate that the proposed algorithm only needs relatively few discretization points and thus keeps the problem dimensions small.An invitation to pursuit-evasion games and graph theoryhttps://zbmath.org/1500.910012023-01-20T17:58:23.823708Z"Bonato, Anthony"https://zbmath.org/authors/?q=ai:bonato.anthonyIn this textbook, the author provides a thorough introduction to pursuit-evasion games, a class of dynamic interactions that take place on a graph. In a pursuit-evasion game, there is a set of vertices and a set of edges linking them. Time is discrete and measured in rounds. One player, the evader, can move across vertices of the graph according to fixed rules. One or more other players, the pursuers, can move according to their own rules. The goal of the pursuers is to reach the same location as the evader, or to surround or locate the evader. A classic example is Cops and Robbers, where the evader and pursuers move in alternating rounds, and each may move along a single edge to a neighboring vertex.
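The Cops and Robbers example lends itself to a small computational illustration: by a classical theorem of Nowakowski and Winkler, and independently Quilliot, a finite graph is cop-win (a single cop has a winning strategy) exactly when it is dismantlable, i.e., when corners can be deleted one at a time until a single vertex remains. The sketch below, with its adjacency-dictionary representation, is an illustration of that characterization rather than material from the book.

```python
def is_cop_win(adj):
    """Decide whether a finite graph is cop-win via dismantlability:
    repeatedly delete a corner, i.e. a vertex u whose closed neighbourhood
    is contained in the closed neighbourhood of some other vertex v.
    `adj` maps each vertex to the set of its neighbours."""
    verts = set(adj)
    closed = {v: adj[v] | {v} for v in verts}  # closed neighbourhoods
    while len(verts) > 1:
        corner = next(
            (u for u in verts for v in verts
             if u != v and (closed[u] & verts) <= (closed[v] & verts)),
            None)
        if corner is None:
            return False       # no corner left: not dismantlable
        verts.remove(corner)   # delete the corner and continue
    return True
```

On a path or any tree the check succeeds, while on the 4-cycle no vertex is ever a corner, reflecting the fact that a robber on a long cycle can evade a single cop forever.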
The book begins with a concise, seven-page introduction or refresher on graph theory and then covers several classes of pursuit-evasion games in more detail. By focusing on pursuit-evasion games specifically, the author is able to go into substantial depth across many of the important topics in the field. Two areas that the book does not devote much attention to are algorithmic approaches and stochastic modeling.
The text is primarily intended to support a one-semester course for graduate students or outstanding undergraduates who have already taken a class on graph theory. It could also be a useful entry point for mathematicians who want an overview of pursuit-evasion games. The author provides many exercises, as well as several suggestions for more ambitious research projects.
Overall, the book is reader friendly and engaging, with many helpful figures and illustrations. The author writes in the preface that the book aims to be ``self-contained, understandable, and accessible to a broad mathematical audience'', and it achieves that goal.
Reviewer: Thomas Wiseman (Austin)Introduction to optimization-based decision-makinghttps://zbmath.org/1500.910022023-01-20T17:58:23.823708Z"de Miranda, Joao Luis"https://zbmath.org/authors/?q=ai:de-miranda.joao-luisThe book is divided into 10 chapters. The first chapter uses small real-life numerical examples to show the process of successively finding an optimal decision. Chapter 2 is devoted to a basic introduction to linear algebra, especially methods of solving systems of linear algebraic equations (Cramer's rule, Gauss elimination). In the third chapter the reader obtains basic knowledge of linear programming. The main features of linear programming are presented using small numerical examples accompanied by graphical illustrations with two structural variables. A similar approach is used in Chapter 4, where the concept of duality in linear programming is explained, and in Chapter 5, which contains the necessary concepts and mathematical tools for the minimization or maximization of nonlinear functions. The explanation is confined mostly to continuous functions of one and two variables, with a brief outline of possible further generalization to more variables. In addition, the concept of a saddle point is briefly mentioned. Sensitivity and post-optimality analysis of linear programming problems are studied in Chapter 6. This chapter also analyzes how introducing new variables influences the optimal solution of the problems and provides basic information about linear parametric optimization. Solution methods for integer linear programming problems, i.e., problems in which either all or a part of the structural variables must be integer, are presented in Chapter 7. Principles of branch-and-bound methods are explained, and special attention is devoted to binary integer programming problems. Chapter 8 contains an introduction to game theory, with emphasis on zero-sum games and their solution via linear programming.
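As a taste of the Chapter 2 material, Gauss elimination with partial pivoting fits in a few lines. This is a generic textbook implementation written for this review, not code from the book.

```python
def gauss_solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting.
    `A` is a list of rows, `b` the right-hand side; a minimal sketch of
    the method covered in Chapter 2."""
    n = len(A)
    # Augmented matrix, so that row operations act on b as well.
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        # Partial pivoting: bring the largest entry in this column up.
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]
        # Eliminate the column below the pivot.
        for r in range(col + 1, n):
            factor = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= factor * M[col][c]
    # Back substitution on the resulting upper-triangular system.
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x
```

For instance, the system \(2x+y=5\), \(x+3y=10\) yields \(x=1\), \(y=3\).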
Practical examples explaining the principles and aims of multi-criteria decision making are presented in Chapter 9. Chapter 10 develops various approaches to problems involving uncertainty. Stochastic optimization and the construction of an adequate deterministic model for a given stochastic problem are explained.
The book is an elementary and self-contained textbook introducing decision-making based on optimization, with minimal prerequisites required of the reader.
Reviewer: Karel Zimmermann (Praha)Risk measures and insurance solvency benchmarks. Fixed-probability levels in renewal risk modelshttps://zbmath.org/1500.910032023-01-20T17:58:23.823708Z"Malinovskii, Vsevolod K."https://zbmath.org/authors/?q=ai:malinovskii.vsevolod-kThis monograph provides an excellent account of risk measures and their link with actuarial risk theory. It integrates both the practical and theoretical aspects of the capital requirements in regulatory frameworks such as the Solvency II and Swiss Solvency Test regulatory systems. Two types of annual solvency provisions, namely the non-loss and non-ruin capitals, are considered. The non-loss capital is the initial capital ensuring the non-negativity of the capital at the end of the year. The non-ruin capital ensures that ruin does not occur during the year. The risk measure, value-at-risk (VaR), is used as a key tool in the analysis. Technically speaking, the evaluations of both the non-loss and non-ruin capitals involve solving inverse problems. However, the evaluation of the non-ruin capital is more delicate from the mathematical perspective, which involves finding an implicit function by inverting a finite-time ruin probability at a given risk tolerance level. To address this technically challenging problem, this monograph aims to relate two seemingly rather contrasting concepts, namely VaR and risk measures in risk theory. Specifically, the evaluation of the non-ruin capital is formulated into a mathematical inverse level-crossing problem for compound renewal processes. The author obtains nice analytical results for non-ruin capitals relating to the inverse Gaussian distributions. These results look interesting. The Monte Carlo simulation method is used for practical implementation of the model.
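The inverse level-crossing problem at the heart of the non-ruin capital can be sketched by a crude Monte Carlo computation for a compound Poisson model: estimate the finite-time ruin probability as a function of the initial capital, then invert it at the tolerance level. Every numerical parameter below (premium rate, claim rate, exponential claims, horizon, grid) is an illustrative assumption, not a value from the monograph.

```python
import random

def ruin_prob(u, c=1.2, lam=1.0, mean_claim=1.0, horizon=10.0,
              n_sim=5000, seed=0):
    """Monte Carlo estimate of the finite-time ruin probability for a
    compound Poisson surplus process  u + c*t - S(t)  with premium rate c,
    Poisson claim arrivals of rate lam and exponential claim sizes.
    All numerical values are illustrative."""
    rng = random.Random(seed)
    ruined = 0
    for _ in range(n_sim):
        t, claims = 0.0, 0.0
        while True:
            t += rng.expovariate(lam)          # next claim arrival
            if t > horizon:
                break                          # survived the period
            claims += rng.expovariate(1.0 / mean_claim)
            if u + c * t - claims < 0.0:
                ruined += 1                    # surplus dropped below zero
                break
    return ruined / n_sim

def non_ruin_capital(eps, **kwargs):
    """Invert u -> ruin_prob(u) by bisection: a capital level (up to grid
    accuracy) whose estimated ruin probability is at most eps."""
    lo, hi = 0.0, 50.0
    for _ in range(20):
        mid = 0.5 * (lo + hi)
        if ruin_prob(mid, **kwargs) <= eps:
            hi = mid
        else:
            lo = mid
    return hi
```

The estimated ruin probability decreases in the initial capital, so the bisection returns an approximate fixed-probability level in the sense used by the author, albeit with simulation noise that the monograph's analytical approximations avoid.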
There are seven chapters in this monograph. The concepts, ideas and mathematics are well presented. There is an end-of-chapter problem set for each chapter. This is a nice feature from the pedagogical point of view. Chapter 1 is devoted to an introduction to risk measures and their applications to finance, Solvency II and risk theory. Chapter 2 is concerned with deriving a solution to the inverse level-crossing problem in a diffusion model, which is called a fixed-probability level. Chapter 3 considers the level-crossing problem for compound renewal processes, which is related to a collective risk model in actuarial risk theory. Chapter 4 studies an implicit function relating to the inverse Gaussian approximation. Chapter 5 is the main chapter of the monograph and considers a general compound renewal process. In this case, an approximation based on Kendall's identity is used to analyse the structure of the fixed-probability level. Chapter 6 presents a case study for numerical computation of a fixed-probability level. Chapter 7 conducts the expansion, revenue and solvency (ERS) analysis. Both exponentially distributed and generally distributed claim sizes are considered. Furthermore, both a homogeneous compound Poisson model and an inhomogeneous model are studied.
It is anticipated that the results presented in this monograph will benefit academic researchers, market practitioners and other stakeholders in risk management, insurance, finance and related fields.
Reviewer: Tak Kuen Siu (Sydney)Classification by decomposition: a novel approach to classification of symmetric \(2\times 2\) gameshttps://zbmath.org/1500.910042023-01-20T17:58:23.823708Z"Böörs, Mikael"https://zbmath.org/authors/?q=ai:boors.mikael"Wängberg, Tobias"https://zbmath.org/authors/?q=ai:wangberg.tobias"Everitt, Tom"https://zbmath.org/authors/?q=ai:everitt.tom"Hutter, Marcus"https://zbmath.org/authors/?q=ai:hutter.marcusSummary: In this paper, we provide a detailed review of previous classifications of \(2\times 2\) games and suggest a mathematically simple way to classify the symmetric \(2\times 2\) games based on a decomposition of the payoff matrix into a cooperative and a zero-sum part. We argue that differences in the interaction between the parts is what makes games interesting in different ways. Our claim is supported by evolutionary computer experiments and findings in previous literature. In addition, we provide a method for using a stereographic projection to create a compact 2-d representation of the game space.(In)existence of equilibria for 2-player, 2-value games with semistrictly quasiconcave cost functionshttps://zbmath.org/1500.910052023-01-20T17:58:23.823708Z"Georgiou, Chryssis"https://zbmath.org/authors/?q=ai:georgiou.chryssis"Mavronicolas, Marios"https://zbmath.org/authors/?q=ai:mavronicolas.marios"Monien, Burkhard"https://zbmath.org/authors/?q=ai:monien.burkhardSummary: We consider \(2\)-\textit{player}, \(2\)-\textit{value} cost minimization games where the players' \textit{costs} take on two values, \(a,b\), with \(a < b\). The players play mixed strategies and their costs are evaluated by \textit{semistrictly quasiconcave cost functions} representable by \textit{strictly quasiconcave,} one-parameter functions \(\mathsf{F}: [0, 1] \rightarrow \mathbb{R} \). Our main result is an impossibility result stating that:
\begin{itemize}
\item[-]
If the maximum of \(\mathsf{F}\) is attained in \((0,1)\) and \(\mathsf{F}\left(\frac{1}{2}\right)\ne b\), then there exists a 2-player, 2-value game without \(\mathsf{F}\)-equilibrium.
\end{itemize}
The game used as a counterexample for the impossibility result belongs to a new class of very sparse 2-player, 2-value bimatrix games, which we call \textit{simple games}. In an attempt to investigate the remaining case \(\mathsf{F}\left(\frac{1}{2}\right) = b\), we show that:
\begin{itemize}
\item[-] Every simple, \(n\)-strategy game has an \(\mathsf{F}\)-equilibrium when \(\mathsf{F} \left (\frac{1}{2}\right ) = b\). We present a linear time algorithm for computing such an equilibrium.
\item[-] For 2-player, 2-value, 3-strategy games: if \(\mathsf{F}\left(\frac{1}{2}\right) \le b\), then every such game has an \(\mathsf{F}\)-equilibrium; if \(\mathsf{F}\left(\frac{1}{2}\right) > b\), then there exists a simple 2-player, 2-value, 3-strategy game without \(\mathsf{F}\)-equilibrium.
\end{itemize}
To the best of our knowledge, this work is the first to provide an (almost complete) answer to the question of whether there is, for a given function \(\mathsf{F}\), a counterexample game without \(\mathsf{F}\)-equilibrium.Pareto Nash equilibrium seeking for switched differential gameshttps://zbmath.org/1500.910062023-01-20T17:58:23.823708Z"Huang, Yabing"https://zbmath.org/authors/?q=ai:huang.yabing"Zhao, Jun"https://zbmath.org/authors/?q=ai:zhao.jun.1Summary: We study differential games consisting of multiple players subject to switched dynamics. Firstly, the definition of the switched Nash equilibrium (NE) is given. Then, conditions are proposed under which the switched NE can be constructed. Furthermore, in order to improve the efficiency of the switched NE, we focus on seeking the Pareto Nash equilibrium (PNE) of switched differential games. A sufficient condition is provided such that the PNE can be achieved by designing a Pareto optimal switching strategy orchestrating the activation among subsystems. Finally, a conceptual algorithm is developed to construct the Pareto optimal switching strategy and the corresponding PNE of switched differential games.
For the entire collection see [Zbl 1495.93004].On the grey obligation ruleshttps://zbmath.org/1500.910072023-01-20T17:58:23.823708Z"Palancı, O."https://zbmath.org/authors/?q=ai:palanci.osman"Alparslan Gök, S. Z."https://zbmath.org/authors/?q=ai:alparslan-gok.sirma-zeynep"Weber, Gerhard-Wilhelm"https://zbmath.org/authors/?q=ai:weber.gerhard-wilhelmSummary: In this paper, we extend obligation rules by using grey calculus. We introduce grey obligation rules for minimum grey cost spanning tree (mgcst) situations. It turns out that the grey obligation rule and the grey Bird rule are equal under suitable conditions. Further, we show that such rules are grey cost monotonic and induce population monotonic grey allocation schemes (pmgas). Moreover, if the game is concave, its (extended) grey Shapley value is a pmgas. Some examples of pmgas, grey obligation rules and grey Shapley value for mgcst situations are also given.
For the entire collection see [Zbl 1481.92004].Shapley value for TU-games with multiple memberships and externalitieshttps://zbmath.org/1500.910082023-01-20T17:58:23.823708Z"Sokolov, Denis"https://zbmath.org/authors/?q=ai:sokolov.denisSummary: In this paper, we introduce a new form, the \textit{clique function form} (CQFF), of TU-games that allows for multiple memberships and explicit externalities. The new notion is based on a graphical representation of the connections between agents in a game. We treat as coalitions only fully connected sub-graphs (i.e., maximal cliques). Following \textit{R. B. Myerson} [Int. J. Game Theory 6, 23--31 (1977; Zbl 0373.90091)], we adapt the well-known efficiency, symmetry, and linearity axioms to the new setting and obtain a unique value for superadditive CQFF games.Optimization implementation of solution concepts for cooperative games with stochastic payoffshttps://zbmath.org/1500.910092023-01-20T17:58:23.823708Z"Sun, Panfei"https://zbmath.org/authors/?q=ai:sun.panfei"Hou, Dongshuang"https://zbmath.org/authors/?q=ai:hou.dongshuang"Sun, Hao"https://zbmath.org/authors/?q=ai:sun.hao.1Summary: In this paper, we study solution concepts for cooperative games with stochastic payoffs. We define four kinds of solution concepts, namely the most coalitional (marginal) stable solution and the fairest coalitional (marginal) solution, by minimizing the total variance of excesses of coalitions (individual players). All four concepts are optimal solutions of corresponding optimization problems under the least squares criterion. It turns out that the fairest coalitional (marginal) solution belongs to the set of the most coalitional (marginal) stable solutions. Inspired by the definition of the nucleolus, we propose various extended nucleoli based on a lexicographic criterion.
Furthermore, axiomatizations of the proposed solutions are exhibited through the linkage between the stochastic and deterministic models.Cooperative differential games with the utility function switched at a random time momenthttps://zbmath.org/1500.910102023-01-20T17:58:23.823708Z"Zaremba, Anastasiya P."https://zbmath.org/authors/?q=ai:zaremba.anastasiya-pSummary: This paper describes a differential game of \(n\) persons in which the utility functions of the players have a hybrid form, namely, they are changed at a random moment in time. With the help of integration by parts, the form of the payoff functional is simplified. For the cooperative scenario the problem of time-consistency of the optimality principle chosen by the players is studied and a solution is proposed in the form of an adapted imputation distribution procedure. The differential investment game is considered as an example.A survey on decomposition of finite strategic-form gameshttps://zbmath.org/1500.910112023-01-20T17:58:23.823708Z"Hao, Yaqi"https://zbmath.org/authors/?q=ai:hao.yaqi"Zhang, Ji-Feng"https://zbmath.org/authors/?q=ai:zhang.jifengSummary: Several decomposition methods for finite strategic-form games are systematically reviewed. In each method, viewing the set of all games as a vector space, a canonical direct sum decomposition of an arbitrary game into several components is presented, and natural classes of games that are induced by each decomposition are analyzed. In particular, we focus on their structural and equilibrium properties. By exploiting the properties of the decomposition framework, explicit expressions for the projections of games onto subspaces of particular games are given, such as potential games, harmonic games, symmetric games and zero-sum games. This extends the static and dynamic properties of these games to ``nearby'' games.
For the entire collection see [Zbl 1495.93004].Stackelberg stochastic differential game with asymmetric noisy observationshttps://zbmath.org/1500.910122023-01-20T17:58:23.823708Z"Zheng, Yueyang"https://zbmath.org/authors/?q=ai:zheng.yueyang"Shi, Jingtao"https://zbmath.org/authors/?q=ai:shi.jingtaoSummary: This paper is concerned with a Stackelberg stochastic differential game with asymmetric noisy observation. In our model, the follower cannot observe the state process directly, but could observe a noisy observation process, while the leader can completely observe the state process. Open-loop Stackelberg equilibrium is considered. The follower first solves a stochastic optimal control problem with partial observation, and the maximum principle and verification theorem are obtained. Then the leader turns to solve an optimal control problem for a conditional mean-field forward-backward stochastic differential equation, and both maximum principle and verification theorem are proved. A linear-quadratic Stackelberg stochastic differential game with asymmetric noisy observation is discussed to illustrate the theoretical results in this paper. With the aid of some new Riccati equations, the open-loop Stackelberg equilibrium admits its state estimate feedback representation. Finally, an application to the resource allocation and its numerical simulation are given to show the effectiveness of the proposed results.Approximating Nash equilibrium for optimal consumption in stochastic growth model with jumpshttps://zbmath.org/1500.910132023-01-20T17:58:23.823708Z"Bo, Li Jun"https://zbmath.org/authors/?q=ai:bo.lijun"Li, Tong Qing"https://zbmath.org/authors/?q=ai:li.tongqingSummary: In this paper, we study a class of dynamic games consisting of finitely many agents under a stochastic growth model with jumps. The jump process in the dynamics of each agent's capital stock models announcements regarding that agent, which occur at Poisson-distributed random times.
The aim of each agent is to maximize her objective functional with mean-field interactions by choosing an optimal consumption strategy. We prove the existence of a fixed point related to the so-called consistency condition as the number of agents grows large. Building upon the fixed point, we establish an optimal feedback consumption strategy for all agents which is in fact an approximating Nash equilibrium: no agent has any incentive to unilaterally change her strategy.Mean field games with common noises and conditional distribution dependent FBSDEshttps://zbmath.org/1500.910142023-01-20T17:58:23.823708Z"Huang, Ziyu"https://zbmath.org/authors/?q=ai:huang.ziyu"Tang, Shanjian"https://zbmath.org/authors/?q=ai:tang.shanjianSummary: In this paper, the authors consider the mean field game with a common noise and allow the state coefficients to vary with the conditional distribution in a nonlinear way. They assume that the cost function satisfies a convexity and a weak monotonicity property. They use the sufficient Pontryagin principle for optimality to transform the mean field control problem into the existence and uniqueness of a solution of a conditional distribution dependent forward-backward stochastic differential equation (FBSDE for short). They prove the existence and uniqueness of a solution of the conditional distribution dependent FBSDE when the dependence of the state on the conditional distribution is sufficiently small, or when the convexity parameter of the running cost on the control is sufficiently large. Two different methods are developed. The first method is based on a continuation of the coefficients, which was developed for FBSDEs by \textit{Y. Hu} and \textit{S. Peng} [Probab. Theory Relat. Fields 103, No. 2, 273--283 (1995; Zbl 0831.60065)]. They apply the method to the conditional distribution dependent FBSDE.
The second method is to show the existence result on a small time interval by the Banach fixed point theorem and then extend the local solution to the whole time interval.Incentive-based fault tolerant control of evolutionary matrix gameshttps://zbmath.org/1500.910152023-01-20T17:58:23.823708Z"Yang, Hao"https://zbmath.org/authors/?q=ai:yang.hao"Ni, Yuan"https://zbmath.org/authors/?q=ai:ni.yuan"Jiang, Bin"https://zbmath.org/authors/?q=ai:jiang.binSummary: This chapter considers the fault tolerant control problem of evolutionary matrix games modeled by replicator dynamics. Under the evolutionary matrix game framework, the faults can maliciously provide additional payoffs to particular strategies for all players. The local asymptotic stability of the equilibria as well as the domains of attraction are analyzed in both healthy and faulty cases, based on which an incentive-based fault tolerant control method is proposed. The results are further extended to a single population with switching payoff matrices, a single population with mutation and multiple populations on a network.
For the entire collection see [Zbl 1495.93004].Bounded rationality in differential games: a reinforcement learning-based approachhttps://zbmath.org/1500.910162023-01-20T17:58:23.823708Z"Kokolakis, Nick-Marios T."https://zbmath.org/authors/?q=ai:kokolakis.nick-marios-t"Kanellopoulos, Aris"https://zbmath.org/authors/?q=ai:kanellopoulos.aris"Vamvoudakis, Kyriakos G."https://zbmath.org/authors/?q=ai:vamvoudakis.kyriakos-gSummary: This chapter presents a unified framework of bounded rationality for control systems, as this can be employed in a coordinated unmanned aerial vehicle tracking problem. By considering limitations in the decision-making mechanisms of the vehicles and utilizing learning algorithms that capture their level of cognition, we are able to design appropriate countermeasures. Via game-theoretic concepts, we derive the Nash equilibrium of the interaction between the evader and the pursuing team, with the first being the maximizing player and the team being the minimizing ones. We derive optimal pursuing and evading policies while taking into account the physical constraints imposed by Dubins vehicles. Subsequently, the infinite rationality assumption underlying the equilibrium is relaxed, and level-k thinking policies are computed through reinforcement-learning architectures. Convergence to the Nash equilibrium is shown as the levels of intelligence increase. Finally, simulation results verify the efficacy of our approach.
For the entire collection see [Zbl 1492.49001].Matrix resolving functions in the linear group pursuit problem with fractional derivativeshttps://zbmath.org/1500.910172023-01-20T17:58:23.823708Z"Machtakova, Alena I."https://zbmath.org/authors/?q=ai:machtakova.alena-igorevna"Petrov, Nikolai N."https://zbmath.org/authors/?q=ai:petrov.nikolai-nikandrovichSummary: In finite-dimensional Euclidean space, we analyze the problem of pursuit of a single evader by a group of pursuers, which is described by a system of differential equations with Caputo fractional derivatives of order \(\alpha\). The goal of the group of pursuers is the capture of the evader by at least \(m\) different pursuers (the instants of capture may or may not coincide). As a mathematical basis, we use matrix resolving functions that are generalizations of scalar resolving functions. We obtain sufficient conditions for multiple capture of a single evader in the class of quasi-strategies. We give examples illustrating the results obtained.Unawareness of decision criteria in multicriteria gameshttps://zbmath.org/1500.910182023-01-20T17:58:23.823708Z"Sasaki, Yasuo"https://zbmath.org/authors/?q=ai:sasaki.yasuoSummary: The present paper incorporates unawareness into multicriteria games, where agents can have multiple decision criteria and their preferences are represented by vector-valued utility functions. Focusing only on unawareness of decision criteria, we define (weak and strong) Pareto rationalizability as a solution concept for multicriteria games with unawareness, which involves iterated eliminations of never-Pareto optimal actions, and examine its properties. We show that any form of unawareness can decrease the set of weakly Pareto rationalizable actions but will never increase it, while this property does not hold for strong Pareto rationalizability. We also extend the property on the rationalizability concept for weighted games in the standard case to our setting.
We apply our framework to a bicriteria Cournot game.Customization of J. Bather UCB strategy for a Gaussian multi-armed bandithttps://zbmath.org/1500.910192023-01-20T17:58:23.823708Z"Garbar', Sergeĭ V."https://zbmath.org/authors/?q=ai:garbar.sergei-v"Kolnogorov, Alexander V."https://zbmath.org/authors/?q=ai:kolnogorov.alexander-vSummary: We consider the customization of the UCB strategy, which was first proposed by J. Bather for the Bernoulli two-armed bandit, to the case of a Gaussian multi-armed bandit describing batch data processing. This optimal control problem has a classical interpretation as a game with nature, in which the payment function of the player is the expected loss of total income caused by incomplete information. The goal is stated in a minimax setting. For the considered game with nature, we present an invariant description of the control with a horizon equal to one, which allows us to perform computations in two ways: using Monte Carlo simulations and analytically by the dynamic programming technique. For various configurations of the considered game with nature, we have found saddle points, which characterize the optimal control and the worst-case distribution of the parameters of the multi-armed bandit.The game of flipping coinshttps://zbmath.org/1500.910202023-01-20T17:58:23.823708Z"Bonato, Anthony"https://zbmath.org/authors/?q=ai:bonato.anthony"Huggan, Melissa A."https://zbmath.org/authors/?q=ai:huggan.melissa-a"Nowakowski, Richard J."https://zbmath.org/authors/?q=ai:nowakowski.richard-jSummary: We consider \textsf{flipping coins}, a partizan version of the impartial game turning turtles, played on lines of coins. We show that the values of this game are numbers, and these are found by first applying a reduction, then decomposing the position into an iterated ordinal sum. This is unusual since moves in the middle of the line do not eliminate the rest of the line. Moreover, if \(G\) is decomposed into lines \(H\) and \(K\), then \(G= (H:K)\).
This is in contrast to hackenbush strings, where \(G=(H:K)\) holds because a move in \(H\) eliminates \(K\).
For the entire collection see [Zbl 1495.91007].The game of blocking pebbleshttps://zbmath.org/1500.910212023-01-20T17:58:23.823708Z"Burke, Kyle"https://zbmath.org/authors/?q=ai:burke.kyle-w"Ferland, Matthew"https://zbmath.org/authors/?q=ai:ferland.matthew"Fisher, Michael"https://zbmath.org/authors/?q=ai:fisher.michael-j|fisher.michael-e|fisher.michael-w"Gledel, Valentin"https://zbmath.org/authors/?q=ai:gledel.valentin"Tennenhouse, Craig"https://zbmath.org/authors/?q=ai:tennenhouse.craig-mSummary: Graph pebbling is a well-studied single-player game on graphs. We introduce the game of \textsf{blocking pebbles}, which adapts graph pebbling into a two-player strategy game to examine it within the context of combinatorial game theory. Positions with game values matching all integers, all nimbers, and many infinitesimals and switches are found. This game joins the ranks of other combinatorial games on graphs, games with discovered moves, and partizan games with impartial movement options. The computational complexity of the general case is shown to be PSPACE-hard.
For the entire collection see [Zbl 1495.91007].Transverse wave: an impartial color-propagation game inspired by social influence and quantum nimhttps://zbmath.org/1500.910222023-01-20T17:58:23.823708Z"Burke, Kyle"https://zbmath.org/authors/?q=ai:burke.kyle-w"Ferland, Matthew"https://zbmath.org/authors/?q=ai:ferland.matthew"Teng, Shang-Hua"https://zbmath.org/authors/?q=ai:teng.shang-hua.1Summary: In this paper, we study \textsf{Transverse Wave}, a colorful, impartial combinatorial game played on a two-dimensional grid. We are drawn to this game because of its apparent simplicity, contrasting intractability, and intrinsic connection to two other combinatorial games, one about social influences and another inspired by quantum superpositions. More precisely, we show that \textsf{Transverse Wave} is at the intersection of two other games, the social-influence-derived \textsf{Friend Circle} and the superposition-based \textsf{Demi-Quantum Nim}. \textsf{Transverse Wave} is also connected with Schaefer's logic game \textsf{Avoid True} from the 1970s. In addition to analyzing the mathematical structures and computational complexity of \textsf{Transverse Wave}, we provide a web-based version of the game. Furthermore, we formulate a basic network-influence game, called \textsf{Demographic Influence}, which simultaneously generalizes \textsf{Node-Kayles} and \textsf{Demi-Quantum Nim} (which in turn contains \textsf{Nim}, \textsf{Avoid True}, and \textsf{Transverse Wave} as particular cases). These connections illuminate a \textit{lattice order} of games, induced by special-case/generalization relationships, fundamental to both the design and comparative analysis of combinatorial games.
For the entire collection see [Zbl 1495.91007].A note on numbershttps://zbmath.org/1500.910232023-01-20T17:58:23.823708Z"Carvalho, Alda"https://zbmath.org/authors/?q=ai:carvalho.alda"Huggan, Melissa A."https://zbmath.org/authors/?q=ai:huggan.melissa-a"Nowakowski, Richard J."https://zbmath.org/authors/?q=ai:nowakowski.richard-j"Dos Santos, Carlos Pereira"https://zbmath.org/authors/?q=ai:pereira-dos-santos.carlosSummary: When are all positions of a game numbers? We show that two properties are necessary and sufficient. These properties are consequences of the fact that, in a number, it is not an advantage to be the first player. One of these properties implies the other. However, checking for one or the other can often be accomplished by looking only at the positions on the ``board''. If the stronger property holds for all positions, then the values are integers.
For the entire collection see [Zbl 1495.91007].Ordinal sums, clockwise hackenbush, and domino shavehttps://zbmath.org/1500.910242023-01-20T17:58:23.823708Z"Carvalho, Alda"https://zbmath.org/authors/?q=ai:carvalho.alda"Huggan, Melissa A."https://zbmath.org/authors/?q=ai:huggan.melissa-a"Nowakowski, Richard J."https://zbmath.org/authors/?q=ai:nowakowski.richard-j"Dos Santos, Carlos Pereira"https://zbmath.org/authors/?q=ai:pereira-dos-santos.carlosSummary: We present two rulesets, domino shave and clockwise hackenbush. The first is somehow natural and has, as special cases, stirling shave and Hetyei's Bernoulli game. Clockwise hackenbush seems artificial, yet it is equivalent to domino shave. From the pictorial form of the game and a knowledge of hackenbush, the decomposition into ordinal sums is immediate. The values of clockwise blue-red hackenbush are numbers, and we provide an explicit formula for the ordinal sum of numbers where the literal form of the base is \(\{x|\}\) or \(\{|x\}\), and \(x\) is a number. That formula generalizes van Roode's signed binary number method for blue-red hackenbush.
For the entire collection see [Zbl 1495.91007].Advances in finding ideal play on poset gameshttps://zbmath.org/1500.910252023-01-20T17:58:23.823708Z"Clow, Alexander"https://zbmath.org/authors/?q=ai:clow.alexander"Finbow, Stephen"https://zbmath.org/authors/?q=ai:finbow.stephenSummary: Poset games are a class of combinatorial games that remain unsolved. Soltys and Wilson proved that computing winning strategies is in \textbf{PSPACE}, and aside from particular cases such as nim and N-free games, \textbf{P}-time algorithms for finding ideal play are unknown. In this paper, we present methods to calculate the nimber of poset games, allowing for the classification of winning or losing positions. The results establish an equivalence between ideal strategies on seemingly unrelated posets.
For the entire collection see [Zbl 1495.91007].Strings-and-coins and nimstring are PSPACE-completehttps://zbmath.org/1500.910262023-01-20T17:58:23.823708Z"Demaine, Erik D."https://zbmath.org/authors/?q=ai:demaine.erik-d"Diomidov, Yevhenii"https://zbmath.org/authors/?q=ai:diomidov.yevheniiSummary: We prove that strings-and-coins, the combinatorial two-player game generalizing the dual of dots-and-boxes, is strongly PSPACE-complete on multigraphs. This result improves the best previous result, NP-hardness, argued in Winning Ways. Our result also applies to the nimstring variant, where the winner is determined by normal play; indeed, one step in our reduction is the standard reduction (also from Winning Ways) from nimstring to strings-and-coins.
For the entire collection see [Zbl 1495.91007].Partizan subtraction gameshttps://zbmath.org/1500.910272023-01-20T17:58:23.823708Z"Duchêne, Eric"https://zbmath.org/authors/?q=ai:duchene.eric"Heinrich, Marc"https://zbmath.org/authors/?q=ai:heinrich.marc"Nowakowski, Richard"https://zbmath.org/authors/?q=ai:nowakowski.richard-j"Parreau, Aline"https://zbmath.org/authors/?q=ai:parreau.alineSummary: Partizan subtraction games are combinatorial games where two players, say Left and Right, alternately remove a number \(n\) of tokens from a heap of tokens, with \(n\in S_\mathcal{L}\) (resp., \(n\in S_\mathcal{R})\) when it is Left's (resp., Right's) turn. The first player unable to move loses. These games were introduced by \textit{A. S. Fraenkel} and \textit{A. Kotzig} [Int. J. Game Theory 16, 145--154 (1987; Zbl 0662.90095)], who defined the notion of dominance, i.e., an asymptotic behavior of the outcome sequence where Left always wins if the heap is sufficiently large. In the current paper, we investigate other kinds of behaviors for the outcome sequence. In addition to dominance, three other disjoint behaviors are defined: \textit{weak dominance, fairness}, and \textit{ultimate impartiality}. We consider the problem of computing this behavior with respect to \(S_\mathcal{L}\) and \(S_\mathcal{R}\), which is connected to the well-known Frobenius coin problem. General results are given, together with arithmetic and geometric characterizations when the sets \(S_\mathcal{L}\) and \(S_\mathcal{R}\) have size at most \(2\).
For the entire collection see [Zbl 1495.91007].Circular nim games CN\((7,4)\)https://zbmath.org/1500.910282023-01-20T17:58:23.823708Z"Dufour, Matthieu"https://zbmath.org/authors/?q=ai:dufour.matthieu"Heubach, Silvia"https://zbmath.org/authors/?q=ai:heubach.silvia"Vo, Anh"https://zbmath.org/authors/?q=ai:vo.anh-khoa|vo.anh-ducSummary: Circular Nim is a two-player impartial combinatorial game consisting of \(n\) stacks of tokens placed in a circle. A move consists of choosing \(k\) consecutive stacks and taking at least one token from one or more of the stacks. The last player able to make a move wins. The question of interest is: Who can win from a given position if both players play optimally? This question is answered by determining the set of \(\mathcal{P}\)-positions from which the next player is bound to lose, no matter what moves the player makes. We will completely characterize the set of \(\mathcal{P}\)-positions for \(n=7\) and \(k=4\), adding to the known results for other games in this family. The interesting feature of the set of \(\mathcal{P}\)-positions of this game is that it splits into different subsets, unlike the structures for the previously solved games in this family.
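The rules above fully determine the game tree, so for small instances \(\mathrm{CN}(n,k)\) the \(\mathcal{P}\)-positions can be enumerated by a brute-force memoized search. The sketch below is an illustration, not part of the paper, and is only feasible for small stack sizes; with \(k=1\) the game reduces to ordinary Nim, whose \(\mathcal{P}\)-positions are exactly those with zero nim-sum, which gives a sanity check.

```python
from functools import lru_cache
from itertools import product

def moves(pos, k):
    """All positions reachable in one move of Circular Nim CN(n, k):
    choose k consecutive stacks (circularly) and remove at least one
    token in total from them."""
    n = len(pos)
    reachable = set()
    for start in range(n):
        idx = [(start + j) % n for j in range(k)]
        for new_vals in product(*(range(pos[i] + 1) for i in idx)):
            if sum(new_vals) < sum(pos[i] for i in idx):  # removed >= 1 token
                new_pos = list(pos)
                for i, v in zip(idx, new_vals):
                    new_pos[i] = v
                reachable.add(tuple(new_pos))
    return reachable

@lru_cache(maxsize=None)
def is_p_position(pos, k):
    """A position is a P-position iff every move leads to an N-position;
    the terminal all-zero position is vacuously P (the next player loses
    under the last-player-wins convention)."""
    return all(not is_p_position(nxt, k) for nxt in moves(pos, k))
```

For example, `is_p_position((1, 2, 3), 1)` is `True`, matching the Nim criterion \(1\oplus 2\oplus 3 = 0\).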
For the entire collection see [Zbl 1495.91007].Misère domineering on \(2\times n\) boardshttps://zbmath.org/1500.910292023-01-20T17:58:23.823708Z"Dwyer, Aaron"https://zbmath.org/authors/?q=ai:dwyer.aaron"Milley, Rebecca"https://zbmath.org/authors/?q=ai:milley.rebecca"Willette, Michael"https://zbmath.org/authors/?q=ai:willette.michaelSummary: Domineering is a well-studied tiling game, in which one player places vertical dominoes, and a second places horizontal dominoes, alternating turns until someone cannot place on their turn. Previous research has found game outcomes and values for certain rectangular boards under \textit{normal play} (last move wins); however, nothing has been published about domineering under \textit{misère play} (last move loses). We find optimal-play outcomes for all \(2\times n\) boards under misère play: these games are Right-win for \(n\geqslant 12\). We also present algebraic results including sums, inverses, and comparisons in misère domineering.
For the entire collection see [Zbl 1495.91007].The arithmetic-periodicity of \textsc{cut} for \(\mathcal{C} = \{1, 2c\}\)https://zbmath.org/1500.910302023-01-20T17:58:23.823708Z"Ellis, Paul"https://zbmath.org/authors/?q=ai:ellis.paul"Thanatipanonda, Thotsaporn Aek"https://zbmath.org/authors/?q=ai:thanatipanonda.thotsaporn-aekSummary: \textsc{cut} is a class of partition games played on a finite number of finite piles of tokens. Each version of \textsc{cut} is specified by a cut-set \(\mathcal{C} \subseteq \mathbb{N}\). A legal move consists of selecting one of the piles and partitioning it into \(d + 1\) nonempty piles, where \(d \in \mathcal{C}\). No tokens are removed from the game. It turns out that the nim-set for any \(\mathcal{C} = \{1, 2c\}\) with \(c \geq 2\) is arithmetic-periodic, which answers an open question of \textit{A. Dailly} et al. [Discrete Appl. Math. 285, 509--525 (2020; Zbl 1452.91056)]. The key step is to show that there is a correspondence between the nim-sets of \textsc{cut} for \(\mathcal{C} = \{1, 6\}\) and the nim-sets of \textsc{cut} for \(\mathcal{C} = \{1, 2c\}\), \(c \geq 4\). The result easily extends to the case of \(\mathcal{C} = \{1, 2c_1, 2c_2, 2c_3, \ldots\}\), where \(c_1, c_2, \ldots \geq 2\).Relator games on groupshttps://zbmath.org/1500.910312023-01-20T17:58:23.823708Z"Gates, Zachary"https://zbmath.org/authors/?q=ai:gates.zachary"Kelvey, Robert"https://zbmath.org/authors/?q=ai:kelvey.robertSummary: We define two impartial games, the \textit{relator achievement game} \textsf{REL} and the \textit{relator avoidance game} \textsf{RAV}. Given a finite group \(G\) and generating set \(S\), both games begin with the empty word. Two players form a word in \(S\) by alternately appending an element from \(S\cup S^{-1}\) at each turn. The first player to form a word equivalent in \(G\) to a previous word wins the game \textsf{REL} but loses the game \textsf{RAV}.
Alternatively, we can think of \textsf{REL} and \textsf{RAV} as ``make a cycle'' and ``avoid a cycle'' games on the Cayley graph \(\Gamma (G,S)\). We determine winning strategies for several families of finite groups, including dihedral, dicyclic, and products of cyclic groups.
For the entire collection see [Zbl 1495.91007].Playing Bynum's game cautiouslyhttps://zbmath.org/1500.910322023-01-20T17:58:23.823708Z"Haff, L. R."https://zbmath.org/authors/?q=ai:haff.l-rSummary: Several sequences of infinitesimals are introduced for the purpose of analyzing a restricted form of Bynum's game or ``eatcake''. Two of these have terms with uptimal values (à la Conway and Ryba in the 1970s). The other eight are specified by ``uptimal+ forms'', i.e., standard uptimals plus a fractional uptimal. The game itself is played on an \(n\times m\) grid of unit squares, and here we describe all followers (submatrices) of the \(12\times 12\) grid. Positional values of larger grids become intractable. However, an examination of \(n\times n\) squares, \(2\leq n\leq 21\), reveals that all but three of them are equal to \(\ast\), the exceptions being the \(10\times 10\), \(14\times 14\), and \(18\times 18\) cases. Nonetheless, the exceptional cases have ``star-like'' characteristics: they are of the form \(\pm (G)\), confused with both zero and up, and less than double-up.
For the entire collection see [Zbl 1495.91007].Genetically modified gameshttps://zbmath.org/1500.910332023-01-20T17:58:23.823708Z"Huggan, Melissa A."https://zbmath.org/authors/?q=ai:huggan.melissa-a"Tennenhouse, Craig"https://zbmath.org/authors/?q=ai:tennenhouse.craig-mSummary: Genetic programming is the practice of evolving formulas using crossover and mutation of genes representing functional operations. Motivated by genetic evolution, we introduce and solve two combinatorial games, and we demonstrate some advantages and pitfalls of using genetic programming to investigate Grundy values. We conclude by investigating a combinatorial game whose ruleset and starting positions are inspired by genetic structures.
For the entire collection see [Zbl 1495.91007].Game values of arithmetic functionshttps://zbmath.org/1500.910342023-01-20T17:58:23.823708Z"Iannucci, Douglas E."https://zbmath.org/authors/?q=ai:iannucci.douglas-e"Larsson, Urban"https://zbmath.org/authors/?q=ai:larsson.urbanSummary: Arithmetic functions in number theory meet the Sprague-Grundy function from combinatorial game theory. We study a variety of two-player games induced by standard arithmetic functions, such as Euclidean division, divisors, remainders and relatively prime numbers, and their negations.
For the entire collection see [Zbl 1495.91007].A base-\(p\) Sprague-Grundy-type theorem for \(p\)-calm subtraction games: Welter's game and representations of generalized symmetric groupshttps://zbmath.org/1500.910352023-01-20T17:58:23.823708Z"Irie, Yuki"https://zbmath.org/authors/?q=ai:irie.yukiSummary: For impartial games \(\Gamma\) and \(\Gamma'\), the Sprague-Grundy function of the disjunctive sum \(\Gamma+\Gamma'\) is equal to the Nim-sum of their Sprague-Grundy functions. In this paper, we introduce \(p\)-calm subtraction games and show that for \(p\)-calm subtraction games \(\Gamma\) and \(\Gamma'\), the Sprague-Grundy function of a \(p\)-saturation of \(\Gamma+\Gamma'\) is equal to the \(p\)-Nim-sum of the Sprague-Grundy functions of their \(p\)-saturations. Here a \(p\)-Nim-sum is the result of addition without carrying in base \(p\), and a \(p\)-saturation of \(\Gamma\) is an impartial game obtained from \(\Gamma\) by adding some moves. It will turn out that Nim and Welter's game are \(p\)-calm. Further, using the \(p\)-calmness of Welter's game, we generalize a relation between Welter's game and representations of symmetric groups to disjunctive sums of Welter's games and representations of generalized symmetric groups; this result is described combinatorially in terms of Young diagrams.
For the entire collection see [Zbl 1495.91007].Recursive comparison tests for dicot and dead-ending games under misère playhttps://zbmath.org/1500.910362023-01-20T17:58:23.823708Z"Larsson, Urban"https://zbmath.org/authors/?q=ai:larsson.urban"Milley, Rebecca"https://zbmath.org/authors/?q=ai:milley.rebecca"Nowakowski, Richard"https://zbmath.org/authors/?q=ai:nowakowski.richard-j"Renault, Gabriel"https://zbmath.org/authors/?q=ai:renault.gabriel"Santos, Carlos"https://zbmath.org/authors/?q=ai:santos.carlos-pSummary: In partizan games, where players Left and Right may have different options, there is a partial order defined as preference by Left: \(G\geqslant H\) if Left wins \(G+X\) whenever she wins \(H+X\) for any game position \(X\). In normal play, there is an easy test for comparison: \(G\geqslant H\) if and only if Left wins \(G-H\) playing second. In misère play, where the last player to move loses, the same test does not apply (for one thing, there are no additive inverses), and very few games are comparable. If we restrict the arbitrary game \(X\) to a subset of games \(U\), then we may have \(G\geqslant H\) ``modulo \(U\)''; but without the easy test from normal play, we must give a general argument about the outcomes of \(G+X\) and \(H+X\) for all \(X\in U\). In this paper, we use the novel theory of absolute combinatorial games to develop recursive comparison tests for the well-studied universes of dicots and dead-ending games. This is the first constructive test for comparison of dead-ending games under misère play, using a new family of end-games called perfect murders.
For the entire collection see [Zbl 1495.91007].Impartial games with entailing moveshttps://zbmath.org/1500.910372023-01-20T17:58:23.823708Z"Larsson, Urban"https://zbmath.org/authors/?q=ai:larsson.urban"Nowakowski, Richard J."https://zbmath.org/authors/?q=ai:nowakowski.richard-j"Santos, Carlos P."https://zbmath.org/authors/?q=ai:santos.carlos-pSummary: Combinatorial game theory has also been called ``additive game theory'' whenever the analysis involves sums of independent game components. Such disjunctive sums invoke comparison between games, which allows abstract values to be assigned to them. However, there are rulesets with entailing moves that break the alternating play axiom and/or restrict the other player's options within the disjunctive sum components. These situations are exemplified in the literature by a ruleset such as nimstring, a normal play variation of the classical children's game dots \& boxes, and top entails, an elegant ruleset introduced in the classical work Winning Ways by \textit{E. R. Berlekamp} et al. [Winning ways for your mathematical plays. Vol. 1: Games in general. Vol. 2: Games in particular. pbk: \textsterling 10 (1982; Zbl 0485.00025)]. Such rulesets fall outside the scope of the established normal play theory. Here we axiomatize normal play via two new terminating games, \(\infty\) (Left wins) and \(\overline\infty\) (Right wins), and achieve a more general theory. We define affine impartial, which extends classical impartial games, and we analyze their algebra by extending the established Sprague-Grundy theory with an accompanying minimum excluded rule. Solutions of nimstring and top entails are given to illustrate the theory.
For the entire collection see [Zbl 1495.91007].Extended Sprague-Grundy theory for locally finite games, and applications to random game-treeshttps://zbmath.org/1500.910382023-01-20T17:58:23.823708Z"Martin, James B."https://zbmath.org/authors/?q=ai:martin.james-b.1|martin.james-bSummary: The Sprague-Grundy theory for finite games without cycles was extended to general finite games by \textit{C. A. B. Smith} [J. Comb. Theory 1, 51--81 (1966; Zbl 0141.36101)] and by \textit{A. S. Fraenkel} and \textit{Y. Yesha} [J. Comb. Theory, Ser. A 43, 165--177 (1986; Zbl 0622.05030)]. We observe that the same framework used to classify finite games also covers the case of locally finite games (that is, games where any position has only finitely many options). In particular, any locally finite game is equivalent to some finite game. We then study cases where the directed graph of a game is chosen randomly and is given by the tree of a Galton-Watson branching process. Natural families of offspring distributions display a surprisingly wide range of behavior. The setting shows a nice interplay between ideas from combinatorial game theory and ideas from probability.
For the entire collection see [Zbl 1495.91007].Grundy numbers of impartial three-dimensional chocolate-bar gameshttps://zbmath.org/1500.910392023-01-20T17:58:23.823708Z"Miyadera, Ryohei"https://zbmath.org/authors/?q=ai:miyadera.ryohei"Nakaya, Yushi"https://zbmath.org/authors/?q=ai:nakaya.yushiSummary: Chocolate-bar games are variants of the Chomp game. Let \(Z_{\geq 0}\) be the set of nonnegative integers, and let \(x,y,z\in Z_{\geq 0}\). A three-dimensional chocolate bar comprises a set of \(1\times 1\times 1\) cubes with a ``bitter'' or ``poison'' cube at the bottom of the column at position \((0,0)\). For \(u,w\in Z_{\geq 0}\) such that \(u\leq x\) and \(w\leq z\), the height of the column at position \((u,w)\) is \(\min (F(u,w),y)+1\), where \(F\) is an increasing function. We denote such a chocolate bar as \(CB(F, x, y, z)\). Two players take turns to cut the bar along a plane horizontally or vertically along the grooves, and eat the broken pieces. The player who manages to leave the opponent with a single bitter cube is the winner. In a prior work, we characterized the function \(f\) for a two-dimensional chocolate-bar game such that the Sprague-Grundy value of \(CB(f,y,z)\) is \(y\oplus z\). In this study, we characterize the function \(F\) such that the Sprague-Grundy value of \(CB(F, x, y, z)\) is \(x\oplus y\oplus z\).
For the entire collection see [Zbl 1495.91007].On the structure of misère impartial gameshttps://zbmath.org/1500.910402023-01-20T17:58:23.823708Z"Siegel, Aaron N."https://zbmath.org/authors/?q=ai:siegel.aaron-nSummary: We consider the abstract structure of the monoid \(\mathcal{M}\) of misère impartial game values. We present several new results, including a proof that the group of fractions of \(\mathcal{M}\) is almost torsion-free, a method of calculating the number of distinct games born by day \(7\), and some new results on the structure of prime games. We also include proofs of a few older results due to Conway, such as the cancellation theorem, that are essential to the analysis, but whose proofs are not readily available in the literature.
For the entire collection see [Zbl 1495.91007].All probabilities are equal, but some probabilities are more equal than othershttps://zbmath.org/1500.910412023-01-20T17:58:23.823708Z"Letsou, Christina"https://zbmath.org/authors/?q=ai:letsou.christina"Naeh, Shlomo"https://zbmath.org/authors/?q=ai:naeh.shlomo"Segal, Uzi"https://zbmath.org/authors/?q=ai:segal.uziSummary: There are several procedures for selecting people at random. Modern and ancient stories as well as some experiments suggest that individuals may not view all such lotteries as ``fair''. In this paper, we compare alternative procedures and show conditions under which some procedures are preferred to others. These procedures give all individuals an equal chance of being selected, but have different structures. We analyze these procedures as multi-stage lotteries. In line with previous literature, our analysis is based on the observation that decision makers are not indifferent between multi-stage lotteries and their probabilistic one-stage representations.On Stackelberg equilibrium in the sense of program strategies in Volterra functional operator gameshttps://zbmath.org/1500.910422023-01-20T17:58:23.823708Z"Chernov, Andreĭ V."https://zbmath.org/authors/?q=ai:chernov.andrei-vladimirovichSummary: For a nonlinear Volterra functional operator equation controlled by two players via finite-dimensional program controls, with integral objective functionals, we prove the existence of a Stackelberg equilibrium (in the style of M. S. Nikol'skiy). Along the way, we use our previously proved results on the continuous dependence of the state and of the functionals on finite-dimensional controls, together with the classical Weierstrass theorem. The fact that the first player's set of minimizers is a singleton is proved by the scheme of M. S.
Nikol'skiy applied earlier to a linear ordinary differential equation.The complexity of \((\mathsf{E}+\mathsf{Var})\)-equilibria, \(\mathsf{ESR}\)-equilibria, and \(\mathsf{SuperE}\)-equilibria for 2-players games with few cost valueshttps://zbmath.org/1500.910432023-01-20T17:58:23.823708Z"Georgiou, Chryssis"https://zbmath.org/authors/?q=ai:georgiou.chryssis"Mavronicolas, Marios"https://zbmath.org/authors/?q=ai:mavronicolas.marios"Monien, Burkhard"https://zbmath.org/authors/?q=ai:monien.burkhardSummary: We consider 2-player minimization games with very few cost values. Players are \textit{risk-averse} and play mixed strategies. The players care about minimizing some function other than expectation, or minimizing expectation with additional properties: \textit{expectation plus variance} (\textsf{E+Var}), or \textit{extended Sharpe ratio} (\textsf{ESR}), or \textit{expectation} (\textsf{E}) \textit{with the additional property that Variance is zero} \((\mathsf{Var}=0)\). These give rise to \((\mathsf{E}+\mathsf{Var})\)-equilibria, to \textit{\textsf{ESR}-equilibria}, and to \textit{\textsf{SuperE}-equilibria}, respectively: in an \((\mathsf{E}+\mathsf{Var})\)-equilibrium, no player could unilaterally reduce her \((\mathsf{E}+\mathsf{Var})\)-cost; in an \(\textsf{ESR}\)-equilibrium, no player could unilaterally reduce her \(\textsf{ESR}\)-cost; in a \textsf{SuperE}-equilibrium, \(\mathsf{Var}=0\) and no player could unilaterally reduce her \textsf{E}-cost. We show two complexity results:
\begin{itemize}
\item Deciding the existence of an \((\mathsf{E}+\mathsf{R})\)-equilibrium is strongly \(\mathcal{NP}\)-hard for 3-values games, where \(\mathsf{R}\) is a general \textit{risk valuation}, assuming that \(\mathsf{E}+\mathsf{R}\) is strictly quasiconcave and satisfies certain technical properties. \(\mathcal{NP}\)-hardness is inherited by \(\mathsf{E}+\mathsf{Var}\) and by \(\mathsf{ESR}\), which are shown to have these properties.
\item Deciding the existence of a \textsf{SuperE}-equilibrium is strongly \(\mathcal{NP}\)-hard for 3-values games, but computing one is in \(\mathcal{P}\) for 2-values games. These results identify a complexity separation between 2-values and 3-values games. We also identify certain combinatorial properties of \((\mathsf{E}+\mathsf{Var})\)-equilibria for 2-values games.
\end{itemize}Game-theoretic approach in lawhttps://zbmath.org/1500.910442023-01-20T17:58:23.823708Z"Mart'yanova, Elizaveta Yu."https://zbmath.org/authors/?q=ai:martyanova.elizaveta-yuSummary: The article is devoted to the study of the possible application of game theory for the purposes of lawmaking and law enforcement. The purpose of the study is to identify the conditions and limits of applying game theory to situations with legal content. It is proved that the legal behavior of a subject can be evaluated through the prism of the axioms of completeness, transitivity, continuity, substitution, and independence, and that the result can be used in determining the criterion of ``rationality of behavior'' and in establishing the presence or absence of signs of unfair behavior or abuse of law. The author concludes that the incompleteness of the game-theoretic approach to law, expressed in the limited range of situations to which it can be applied, does not detract from its significance as an analytical tool, whose results should be interpreted together with other methods of analyzing legal matters and situations with legal content.Should I stay or should I go? Congestion pricing and equilibrium selection in a transportation networkhttps://zbmath.org/1500.910452023-01-20T17:58:23.823708Z"Carbone, Enrica"https://zbmath.org/authors/?q=ai:carbone.enrica"Dixit, Vinayak V."https://zbmath.org/authors/?q=ai:dixit.vinayak-v"Rutstrom, E. Elisabet"https://zbmath.org/authors/?q=ai:rutstrom.e-elisabetSummary: When imposing traffic congestion pricing around downtown commercial centers, there is a concern that commercial activities will have to consider relocating due to reduced demand, at a cost to merchants. Concerns like these were important in the debates before the introduction of congestion charges in both London and Stockholm and influenced the final policy design choices. 
This study introduces a sequential experimental game to study reactions to congestion pricing in the commercial sector. In the game, merchants first make location choices. Consumers, who drive to do their shopping, subsequently choose where to shop. Initial responses to the introduction of congestion pricing and equilibrium selection adjustments over time are observed. These observations are compared to responses and adjustments in a condition where congestion pricing is reduced from an initially high level. Payoffs are non-linear and non-transparent, making it less than obvious that the efficient equilibrium will be selected and introducing the possibility that participants need to discover their preferences and anchor on past experiences. We find that initial responses reflect standard inverse price-demand relations, and that adjustments over time rely on signaling by consumers leading to the efficient equilibrium. There is also evidence that priming from initial experiences influences play somewhat. We confirm that commercial activities relocate following the introduction of congestion pricing and that the adjustment process is costly to merchants.Accountability as a warrant for trust: an experiment on sanctions and justifications in a trust gamehttps://zbmath.org/1500.910462023-01-20T17:58:23.823708Z"Herne, Kaisa"https://zbmath.org/authors/?q=ai:herne.kaisa"Lappalainen, Olli"https://zbmath.org/authors/?q=ai:lappalainen.olli"Setälä, Maija"https://zbmath.org/authors/?q=ai:setala.maija"Ylisalo, Juha"https://zbmath.org/authors/?q=ai:ylisalo.juhaSummary: Accountability is present in many types of social relations; for example, the accountability of elected representatives to voters is the key characteristic of representative democracy. We distinguish between two institutional mechanisms of accountability, namely the opportunity to punish and the requirement of a justification, and examine the separate and combined effects of these mechanisms on individual behavior. 
For this purpose, we designed a decision-making experiment where subjects engage in a three-player trust game with two senders and one responder. We ask whether holding the responder accountable increases senders' and responders' contributions in a trust game. When restricting the analysis to the first round, the requirement of justification seems to have a positive impact on senders' contributions. When the game is played repeatedly, the experience of previous rounds dominates the results and significant treatment effects are no longer seen. We also find that responders tend to justify their choices in terms of reciprocity, which is in line with observed behavior. Moreover, the treatment combining punishment and justification hinders justifications that appeal to pure self-interest.Mechanism design for pandemicshttps://zbmath.org/1500.910472023-01-20T17:58:23.823708Z"Maskin, Eric"https://zbmath.org/authors/?q=ai:maskin.eric-sSummary: Under normal circumstances, competitive markets do an excellent job of supplying the goods that members of society want and need. But in an emergency like a pandemic, unassisted markets may not suffice. Imagine, for example, that society suddenly needs to obtain tens (or even hundreds) of millions of COVID-19 virus test kits a week. Test kits for this virus are a new product, and so it may not even be clear who the relevant manufacturers are. If we had the luxury of time, a laissez-faire market might identify these manufacturers: the price of test kits would adjust until supply matched demand. But getting a new market of this size to equilibrate quickly is unrealistic. Furthermore, markets don't work well when there are concentrations of power on either the buying or selling side, as there might well be here. Finally, a test is, in part, a \textit{public} good (its benefits go not just to the person being tested, but to everyone he might come into contact with), and markets do not usually provide public goods adequately. 
Fortunately, mechanism design can be enlisted to help.Dynamically consistent objective and subjective rationalityhttps://zbmath.org/1500.910482023-01-20T17:58:23.823708Z"Bastianello, Lorenzo"https://zbmath.org/authors/?q=ai:bastianello.lorenzo"Faro, José Heleno"https://zbmath.org/authors/?q=ai:faro.jose-heleno"Santos, Ana"https://zbmath.org/authors/?q=ai:santos.ana-moura|santos.ana-isabelSummary: A group of experts, for instance climate scientists, is to advise a decision maker about the choice between two policies \(f\) and \(g\). Consider the following decision rule. If all experts agree that the expected utility of \(f\) is higher than the expected utility of \(g\), the unanimity rule applies, and \(f\) is chosen. Otherwise, the precautionary principle is implemented and the policy yielding the highest minimal expected utility is chosen. This decision rule may lead to time inconsistencies when adding an intermediate period of partial resolution of uncertainty. We show how to coherently reassess the initial set of experts' beliefs so that precautionary choices become dynamically consistent: new beliefs should be added until one obtains the smallest ``rectangular set'' that contains the original one. Our analysis offers a novel behavioral characterization of rectangularity and a prescriptive way to aggregate opinions in order to avoid sure regret.A multicriteria analysis of life satisfaction: assessing trust and distance effectshttps://zbmath.org/1500.910492023-01-20T17:58:23.823708Z"Daskalopoulou, Irene"https://zbmath.org/authors/?q=ai:daskalopoulou.irene"Karakitsiou, Athanasia"https://zbmath.org/authors/?q=ai:karakitsiou.athanasia"Malliou, Christina"https://zbmath.org/authors/?q=ai:malliou.christinaSummary: Sustainable societies require that a diverse set of risks (e.g. socio-economic, environmental, political and cultural) that intervene with peoples' wellbeing levels are systematically addressed. 
Here we focus on life satisfaction and the social cohesion effects driven by the perceptions of others in contemporary societies. We postulate that perceptions of risk as drawn from `otherness' are dependent upon citizens' evaluations of trust in key societal institutions and their perceptions of civic (socio-economic and cultural) distance. Trust is a risk mitigation factor, whereas distance exacerbates perceptions of exposure to various risk parameters. This constitutes a complex policy intervention challenge, suggesting that the use of decision-making tools able to handle a large set of information is appropriate. To that end, we propose the use of a hybrid TOPSIS-entropy multicriteria technique and test our trust and distance risk-effects hypotheses using case study data for Greece. After controlling for the socio-demographic and economic profile of respondents, we provide support for the role of trust in institutions and feelings of distance as determinants of life satisfaction. Important policy-level implications are derived on the basis of these findings. Improvements in life satisfaction might be pursued through policy interventions that aim at improving civil society institutions. Interventions might involve formal and/or informal institutions that affect both objective (e.g. safety/crime) and subjective (e.g. feelings of safety/disorder) quality of life judgements.Learning under unawarenesshttps://zbmath.org/1500.910502023-01-20T17:58:23.823708Z"Grant, Simon"https://zbmath.org/authors/?q=ai:grant.simon"Meneghel, Idione"https://zbmath.org/authors/?q=ai:meneghel.idione"Tourky, Rabee"https://zbmath.org/authors/?q=ai:tourky.rabeeSummary: We propose a model of learning when experimentation is possible, but unawareness and ambiguity matter. In this model, complete lack of information regarding the underlying data generating process is expressed as a (maximal) family of priors. 
These priors yield posterior inferences that become more precise as more information becomes available. As information accumulates, however, the individual's level of awareness as encoded in the state space may expand. Such newly learned states are initially seen as ambiguous, but as evidence accumulates there is a gradual reduction of ambiguity.Representing preorders with injective monotoneshttps://zbmath.org/1500.910512023-01-20T17:58:23.823708Z"Hack, Pedro"https://zbmath.org/authors/?q=ai:hack.pedro"Braun, Daniel A."https://zbmath.org/authors/?q=ai:braun.daniel-a"Gottwald, Sebastian"https://zbmath.org/authors/?q=ai:gottwald.sebastianSummary: We introduce a new class of real-valued monotones in preordered spaces, injective monotones. We show that the class of preorders for which they exist lies in between the class of preorders with strict monotones and preorders with countable multi-utilities, improving upon the known classification of preordered spaces through real-valued monotones. We extend several well-known results for strict monotones (Richter-Peleg functions) to injective monotones, we provide a construction of injective monotones from countable multi-utilities, and relate injective monotones to classic results concerning Debreu denseness and order separability. Along the way, we connect our results to Shannon entropy and the uncertainty preorder, obtaining new insights into how they are related. 
In particular, we show how injective monotones can be used to generalize some appealing properties of Jaynes' maximum entropy principle, which is considered a basis for statistical inference and serves as a justification for many regularization techniques that appear throughout machine learning and decision theory.Uncertainty and compound lotteries: calibrationhttps://zbmath.org/1500.910522023-01-20T17:58:23.823708Z"Halevy, Yoram"https://zbmath.org/authors/?q=ai:halevy.yoram"Ozdenoren, Emre"https://zbmath.org/authors/?q=ai:ozdenoren.emreSummary: This paper introduces a theoretical model of decision making in which preferences are defined on both Savage subjective acts and compound objective lotteries. Preferences are two-stage probabilistically sophisticated when the ranking of acts corresponds to the ranking of the respective compound lotteries induced by the acts through the decision maker's subjective belief. This family of preferences includes various theoretical models proposed in the literature to accommodate non-neutral attitude towards ambiguity. The principle of calibration relates preferences over acts and compound objective lotteries, and provides a foundation for the tight empirical association between probabilistic sophistication and reduction of compound lotteries for all two-stage probabilistically sophisticated preferences.Habit formation, self-deception, and self-controlhttps://zbmath.org/1500.910532023-01-20T17:58:23.823708Z"Hayashi, Takashi"https://zbmath.org/authors/?q=ai:hayashi.takashi.1|hayashi.takashi"Takeoka, Norio"https://zbmath.org/authors/?q=ai:takeoka.norioSummary: Recent research in psychology suggests that successful self-control is attributed to developing adaptive habits rather than resisting temptation. However, developing good habits itself is a self-regulating process, and people often fail to accumulate good habits. 
This study axiomatically characterizes a dynamic decision model where an agent may form a deceptive belief about his future preference: the agent correctly anticipates his future preference by considering the effect of habits; however, he is also tempted to ignore the habit formation. Self-control must be exerted to resist such a self-deceptive belief. Our model is flexible enough to accommodate a variety of habit-formation processes and explains behavioral puzzles related to gym attendance, self-control fatigue, and demand for commitment.Decision-based scenario clustering for decision-making under uncertaintyhttps://zbmath.org/1500.910542023-01-20T17:58:23.823708Z"Hewitt, Mike"https://zbmath.org/authors/?q=ai:hewitt.mike"Ortmann, Janosch"https://zbmath.org/authors/?q=ai:ortmann.janosch"Rei, Walter"https://zbmath.org/authors/?q=ai:rei.walterSummary: In order to make sense of future uncertainty, managers have long resorted to creating scenarios that are then used to evaluate how uncertainty affects decision-making. The large number of scenarios that are required to faithfully represent several sources of uncertainty leads to major computational challenges in using these scenarios in a decision-support context. Moreover, the complexity induced by the large number of scenarios can stop decision makers from reasoning about the interplay between the uncertainty modelled by the data and the decision-making processes (i.e., how uncertainty affects the decisions to be made). To meet this challenge, we propose a new approach to group scenarios based on the decisions associated with them. We introduce a graph structure on the scenarios based on the decision maker's opportunity cost of predicting the wrong scenario. This allows us to apply graph clustering methods and to obtain groups of scenarios with mutually acceptable decisions (i.e., decisions that remain efficient for all scenarios within the group). 
In the present paper, we test our approach by applying it in the context of stochastic optimization. Specifically, we use it as a means to derive both lower and upper bounds for stochastic network design models and fleet planning problems under uncertainty. Our numerical results indicate that our approach is particularly effective at deriving high-quality bounds when dealing with complex problems under time limitations.Decline, adopt or compromise? A dual hurdle model for advice utilizationhttps://zbmath.org/1500.910552023-01-20T17:58:23.823708Z"Himmelstein, Mark"https://zbmath.org/authors/?q=ai:himmelstein.markSummary: Research on advice utilization often operationalizes the construct via judge advisor systems (JAS), where a judge's belief is elicited, they are provided advice, and given an opportunity to revise their belief. Belief change, or weight of advice (WOA), is measured as the shift in the judge's belief proportional to the difference between their original belief and the advice. Several JAS studies have found that WOA typically takes on a trimodal distribution, with inflation at the boundary values of 0 (indicating a judge declined advice) and 1 (adoption of advice). A dual hurdle beta model is proposed to account for these inflations. In addition to being an innovative computational model to address this methodological challenge, it also serves as a descriptive theoretical model which posits that the decision process happens in two stages: an initial discrete ``choosing'' stage, where the judge opts to either decline, adopt, or compromise with advice; and a subsequent continuous ``averaging'' stage, which occurs only if the judge opts to compromise. The approach was assessed via reanalysis of three recent JAS studies reflective of popular topics in the literature, such as algorithmic advice utilization, egocentric discounting effects, and judgmental forecasting. 
In each case, new results were uncovered about how different correlates of advice utilization influence the decision process at either or both of the discrete and continuous stages, often in quite different ways, providing support for the descriptive theoretical model. A Bayesian graphical analysis framework is provided that can be applied to future research on advice utilization.On the economic foundations of decision theoryhttps://zbmath.org/1500.910562023-01-20T17:58:23.823708Z"Montesano, Aldo"https://zbmath.org/authors/?q=ai:montesano.aldoSummary: Economics bases choice theory on the mental experiment that introduces the choice correspondence, which associates with every set of possible actions the subset of preferred actions. If some conditions are satisfied, then the choice correspondence implies a binary preference ordering on actions and an ordinal utility function. This approach applies both to decisions under certainty and decisions under uncertainty. The preference ordering depends on the consequences of actions. Under certainty, there is only one consequence to every action, while, under uncertainty, many consequences are possible, associated with the states of the world. These consequences are represented by the action itself, the states of the world, and the corresponding outcomes. Current theories consider only outcomes, but some theories include state-dependent preferences. Preference for the action itself is not considered, but it might be relevant. The rationality of the theory is a different question from the rationality of the decision-maker. Moreover, the rationality of the theory may imply the rationality of a preference ordering, but this does not require the rationality of the decision-maker. It is only assumed that he/she behaves according to the calculation made by the theorist. 
The rationality of the preference ordering requires the rationality of the preferences on outcomes, of the expectations on the events, and of their connection with the preference ordering on actions. The normative relevance of rational preferences is removed by the introduction of many alternative rational theories, which justify contrasting behaviors in identical situations.Choice of mixed strategy in matrix game with nature by Hurwitz criterionhttps://zbmath.org/1500.910572023-01-20T17:58:23.823708Z"Ponomarëv, Stepan Yu."https://zbmath.org/authors/?q=ai:ponomarev.stepan-yu"Khutoretskiĭ, Aleksandr B."https://zbmath.org/authors/?q=ai:khutoretskii.aleksandr-bSummary: The article solves the problem of choosing a mixed strategy, optimal by the Hurwitz criterion, for an arbitrary matrix game against nature. We reduce the problem to solving \(n\) linear programming problems (where \(n\) is the number of scenarios). As far as we know, this is a new result. It can be used to make decisions in uncertain environments, if the game situation is repeated many times or a physical mixture of pure strategies is realizable.Admissible and Bayes decisions with fuzzy-valued losseshttps://zbmath.org/1500.910582023-01-20T17:58:23.823708Z"Shvedov, Alexey S."https://zbmath.org/authors/?q=ai:shvedov.alexey-sSummary: Some results of classical statistical decision theory are generalized by means of the theory of fuzzy sets. The concepts of an admissible decision in the restricted sense, an admissible decision in the broad sense, a Bayes decision in the restricted sense, and a Bayes decision in the broad sense are introduced. It is proved that any Bayes decision in the broad sense with a positive prior discrete density is admissible in the restricted sense. The class of Bayes decisions is shown to be complete under certain conditions on the loss function. 
Problems with a finite set of possible states are considered.Collective bias models in two-tier voting systems and the democracy deficithttps://zbmath.org/1500.910592023-01-20T17:58:23.823708Z"Kirsch, Werner"https://zbmath.org/authors/?q=ai:kirsch.werner"Toth, Gabor"https://zbmath.org/authors/?q=ai:toth.gabor.1|toth.gabor-zsoltSummary: We analyse optimal voting weights in two-tier voting systems. In our model, the overall population (or union) is split in groups (or member states) of different sizes. The individuals comprising the overall population constitute the first tier, and the council is the second tier. Each group has a representative in the council that casts votes on their behalf. By `optimal weights', we mean voting weights in the council which minimise the democracy deficit, i.e. the expected deviation of the council vote from a (hypothetical) popular vote.
We assume that the voters within each group interact via what we call a local collective bias or common belief (through tradition, common values, strong religious beliefs, etc.). We allow in addition an interaction across group borders via a global bias. Thus, the voting behaviour of each voter depends on the behaviour of all other voters. This correlation may be stronger between voters in the same group, but is in general not zero for voters in different groups. We call the respective voting measure a collective bias model (CBM). The `simple CBM' introduced in [\textit{W. Kirsch}, in: Power, voting, and voting power: 30 years after. Berlin: Springer. 365--387 (2013; Zbl 1419.91251)] and in particular the impartial culture and the impartial anonymous culture are special cases of our general model.
We compute the optimal weights in the large population limit. Those optimal weights are unique as long as there is no `complete' correlation between the groups. In this case, we obtain optimal weights which are the sum of a constant common to all groups and a term proportional to each group's population. If the correlation between voters in different groups is extremely strong, then the optimal weights are not unique. In fact, in this case, the weights are essentially arbitrary. We also analyse the conditions under which the optimal weights are negative, thus making it impossible to reach the theoretical minimum of the democracy deficit. This is a new aspect of the model owing to the correlation between votes belonging to different groups.When are committees of Condorcet winners Condorcet winning committees?https://zbmath.org/1500.910602023-01-20T17:58:23.823708Z"Aslan, Fatma"https://zbmath.org/authors/?q=ai:aslan.fatma"Dindar, Hayrullah"https://zbmath.org/authors/?q=ai:dindar.hayrullah"Lainé, Jean"https://zbmath.org/authors/?q=ai:laine.jeanSummary: We consider seat-posted (or designated-seat) committee elections, where disjoint sets of candidates compete for each seat. We assume that each voter has a collection of seat-wise strict rankings of candidates, which are extended to a strict ranking of committees by means of a preference extension. We investigate conditions upon preference extensions for which seat-wise Condorcet candidates, whenever all exist, form the Condorcet winner among committees. We characterize the domain of neutral preference extensions for which the committee of seat-wise winners is the Condorcet winning committee, first assuming the latter exists (Theorem 1) and then relaxing this assumption (Theorem 2). Neutrality means that preference extensions are not sensitive to the names of candidates.
Moreover, we show that these two characterizations can be stated regardless of which preference level is considered as a premise.An axiomatic re-characterization of the Kemeny rulehttps://zbmath.org/1500.910612023-01-20T17:58:23.823708Z"Can, Burak"https://zbmath.org/authors/?q=ai:can.burak"Pourpouneh, Mohsen"https://zbmath.org/authors/?q=ai:pourpouneh.mohsen"Storcken, Ton"https://zbmath.org/authors/?q=ai:storcken.tonSummary: The Kemeny rule is one of the well-studied decision rules. In this paper, we show that the Kemeny rule is the only rule that is unbiased, monotone, strongly tie-breaking, strongly gradual, and weighed tournamental. We show that these conditions are logically independent.Anonymous and neutral social choice: a unified framework for existence results, maximal domains and tie-breakinghttps://zbmath.org/1500.910622023-01-20T17:58:23.823708Z"Doğan, Onur"https://zbmath.org/authors/?q=ai:dogan.onur"Giritligil, Ayça Ebru"https://zbmath.org/authors/?q=ai:giritligil.ayca-ebruSummary: We present a group-theoretical method to analyze and compare necessary and sufficient conditions on the size of the social choice problem for the existence of anonymous, neutral and resolute social choice and social welfare rules in a unified framework. We define the largest domain of preference profiles that would allow for the existence of such aggregation rules when said conditions are not met. We propose a tie-breaking procedure to obtain resolute refinements of social choice rules, which preserves anonymity and neutrality.
Compatibility of this refinement procedure with simple monotonicity is compared with that of conventional tie-breaking mechanisms.The effect of unconditional preferences on Sen's paradoxhttps://zbmath.org/1500.910632023-01-20T17:58:23.823708Z"Dougherty, Keith L."https://zbmath.org/authors/?q=ai:dougherty.keith-l"Edward, Julian"https://zbmath.org/authors/?q=ai:edward.julianSummary: Sen's Liberal paradox describes a conflict between weak Pareto, minimal liberalism, and either transitivity or a best element over a domain of individual preferences. This paper examines variants of that paradox with varying amounts of unconditional preferences. We define a notion of unconditional preferences under which, in the absence of Pareto, there can be no cycles. We then define a stronger condition that makes an individual's preferences for her own private attributes independent of all other attributes. Under this assumption, there can be no cycles with or without Pareto. We also show there exists a social decision function satisfying those conditions. We then determine the probability of a cycle assuming a much weaker independence condition that does not restrict the domain. This probability converges to one as the number of non-private attributes within the social states increases. Finally, we use simulations to determine the probability that liberalism and Pareto conflict with best elements, maximal elements, and transitivity separately.On the probability of the Condorcet jury theorem or the miracle of aggregationhttps://zbmath.org/1500.910642023-01-20T17:58:23.823708Z"Romaniega Sancho, Álvaro"https://zbmath.org/authors/?q=ai:romaniega-sancho.alvaroSummary: The Condorcet jury theorem or the miracle of aggregation are frequently invoked to ensure the competence of some aggregate decision-making processes.
In this article, we explore the probability of the thesis predicted by the theorem (if there are enough voters, majority rule is a competent decision procedure) in different settings. We use tools from measure theory to conclude that it will happen almost surely or almost never, depending on the probability measure. In particular, it will fail almost surely for measures estimating the prior probability. To update this prior, either more evidence in favor of competence would be needed (so that a large likelihood term compensates a small prior term in Bayes' theorem) or a modification of the decision rule. The former includes the case of (rational) agents reversing their vote if their probability of voting for the right option is less than \(1/2\). Following the latter, we investigate how to obtain an almost sure competent information aggregation mechanism for almost any evidence on voter competence (including the less favorable ones). To do so, we replace simple majority rule with a weighted majority rule based on some stochastic weights correlated with epistemic rationality such that every voter is guaranteed a minimal weight equal to one. We also explore how to obtain these weights in a real setting.Revealed desirability: a novel instrument for social welfarehttps://zbmath.org/1500.910652023-01-20T17:58:23.823708Z"Barokas, Guy"https://zbmath.org/authors/?q=ai:barokas.guySummary: The note puts forward the idea of \textit{revealed desirability}, a novel instrument, which like revealed preference is observable from choice and important for individual and social welfare.
We provide the axiomatic foundation of the underlying individual choice model, preliminary experimental results that support the idea, and an appealing allocation rule that uses the revealed-desirability information along with the revealed-preference information.Welfare reducing vertical licensing in the presence of complementary inputshttps://zbmath.org/1500.910662023-01-20T17:58:23.823708Z"Lin, Yen-Ju"https://zbmath.org/authors/?q=ai:lin.yen-yu"Lin, Yan-Shu"https://zbmath.org/authors/?q=ai:lin.yan-shu"Shih, Pei-Cyuan"https://zbmath.org/authors/?q=ai:shih.pei-cyuanSummary: This research explores the welfare implications of vertical licensing when the final goods are produced by multiple complementary inputs. We spotlight the importance of two-part tariff input terms when there is a buyer-seller relationship after vertical licensing, which has different welfare ramifications depending on the product differentiation. When the products are less differentiated, our result shows welfare improving licensing, but when the products are more differentiated, the wholesale price is set above the supplier's marginal cost through licensing, leading to the problem of double marginalization and reducing welfare. This study offers various policy implications, which go up against conventional wisdom that welfare improving licensing may not be attainable by considering multiple complementary inputs.Input price discrimination, pricing contract and social welfarehttps://zbmath.org/1500.910672023-01-20T17:58:23.823708Z"Wang, Xingtang"https://zbmath.org/authors/?q=ai:wang.xingtangSummary: In this paper, we introduce input price discrimination into a vertical product differentiation model to analyze the impact of input price discrimination on social welfare. We find that input price discrimination improves the supply of high-quality products in the market and leads to an increase in consumer surplus.
The effect of input price discrimination on social welfare is influenced by the pricing contract for the input. When the input price is determined by the upstream firm, input price discrimination reduces social welfare, whereas it increases social welfare when the input price is determined by bargaining between the upstream and downstream firms.Identification in the random utility modelhttps://zbmath.org/1500.910682023-01-20T17:58:23.823708Z"Turansick, Christopher"https://zbmath.org/authors/?q=ai:turansick.christopherIn the paper, the random utility model is described and considered in detail. Three main results are formulated and proved. The first result characterizes which data sets have a unique random utility representation. The second result characterizes which distributions over preferences are observationally unique. The last result characterizes when the support of a random utility representation is uniquely identified.
Reviewer: Jonas Šiaulys (Vilnius)Buck-passing dumping in a garbage-dumping gamehttps://zbmath.org/1500.910692023-01-20T17:58:23.823708Z"Abe, Takaaki"https://zbmath.org/authors/?q=ai:abe.takaakiSummary: We study stable strategy profiles in a pure exchange game of bads, where each player dumps his or her bads such as garbage onto someone else. \textit{T. Hirai} et al. [Math. Soc. Sci. 51, No. 2, 162--170 (2006; Zbl 1184.91023)] show that cycle dumping, in which each player follows an ordering and dumps his or her bads onto the next player, is a strong Nash equilibrium and that self-disposal is \(\alpha\)-stable for some initial distributions of bads. In this paper, we show that a strategy profile of bullying, in which all players dump their bads onto a single player, becomes \(\alpha\)-stable for every exchange game of bads. We also provide a necessary and sufficient condition for a strategy profile to be \(\alpha\)-stable in an exchange game of bads. In addition, we show that repeating an exchange after the first exchange makes self-disposal stationary.To sell public or private goodshttps://zbmath.org/1500.910702023-01-20T17:58:23.823708Z"Loertscher, Simon"https://zbmath.org/authors/?q=ai:loertscher.simon"Marx, Leslie M."https://zbmath.org/authors/?q=ai:marx.leslie-mSummary: Traditional analysis takes the public or private nature of goods as given. However, technological advances, particularly related to digital goods such as non-fungible tokens, increasingly make rivalry a choice variable of the designer. This paper addresses the question of when a profit-maximizing seller prefers to provide an asset as a private good or as a public good. While the public good is subject to a free-rider problem, a profit-maximizing seller or designer faces a nontrivial quantity-exclusivity tradeoff, and so profits from collecting small payments from multiple agents can exceed the large payment from a single agent. 
We provide conditions under which the profit from the public good exceeds that from a private good. If the cost of production is sufficiently, but not excessively, large, then production is profitable only for the public good. Moreover, if the lower bound of the support of the buyers' value distribution is positive, then the profit from the public good is unbounded in the number of buyers, whereas the profit from selling the private good is never more than the upper bound of the support minus the cost. As the variance of the agents' distribution becomes smaller, public goods eventually outperform private goods, reflecting intuition based on complete information models, in which public goods always outperform private goods in terms of revenue.Technical note -- Revenue volatility under uncertain network effectshttps://zbmath.org/1500.910712023-01-20T17:58:23.823708Z"Baron, Opher"https://zbmath.org/authors/?q=ai:baron.opher"Hu, Ming"https://zbmath.org/authors/?q=ai:hu.ming"Malekian, Azarakhsh"https://zbmath.org/authors/?q=ai:malekian.azarakhshSummary: We study the revenue volatility of a monopolist selling a divisible good to consumers in the presence of local network externalities among consumers. The utility of consumers depends on their consumption level as well as those of their neighbors in a network through network externalities. In the eye of the seller, there exist uncertainties in the network externalities, which may be the result of unanticipated shocks or a lack of exact knowledge of the externalities. However, the seller has to commit to prices ex ante. We quantify the magnitude of revenue volatility under the optimal pricing in the presence of those random externalities. We consider both a given uncertainty set (from a robust optimization perspective) and a known uncertainty distribution (from a stochastic optimization perspective) and carry out the analyses separately. 
For a given uncertainty set, we show that the worst case of revenue fluctuation is determined by the largest eigenvalue of the matrix that represents the underlying network. Our results indicate that in networks with a smaller largest eigenvalue, the monopolist has a less volatile revenue. For the known uncertainty, we model the random noise in the form of a Wigner matrix and investigate large networks such as social networks. For such networks, we establish that the expected revenue is the sum of the revenue associated with the underlying expected network externalities and a term that depends on the noise variance and the weighted sum of all walks of different lengths in the expected network. We demonstrate that in a less connected network, the revenue is less volatile to uncertainties, and perhaps counterintuitively, the expected revenue increases with the level of uncertainty in the network. We show that a seller in the two settings favors the opposite type of network. In particular, if the underlying network is such that all the edge weights equal 1, the seller in the robust optimization setting prefers more asymmetry and the seller in the stochastic optimization setting prefers less asymmetry in the underlying network; by contrast, if the underlying network is such that the sum of all the edge weights is fixed, the seller in the robust optimization setting prefers less symmetry and the seller in the stochastic optimization setting prefers more asymmetry.Computing prices for target profits in contractshttps://zbmath.org/1500.910722023-01-20T17:58:23.823708Z"Ganesan, Ghurumuruhan"https://zbmath.org/authors/?q=ai:ganesan.ghurumuruhanSummary: Price discrimination for maximizing expected profit is a well-studied concept in economics and there are various methods that achieve the maximum given the user type distribution and the budget constraints. 
In many applications, particularly with regard to engineering and computing, it is often the case that the user type distribution is unknown or not accurately known. In this paper, we therefore propose and study a mathematical framework for price discrimination with \textit{target} profits under the contract-theoretic model. We first consider service providers with a given user type profile and determine sufficient conditions for achieving a target profit. Our proof is constructive in that it also provides a method to compute the quality-price tag menu. Next, we consider a dual scenario where the offered service qualities are predetermined and describe an iterative method to obtain nominal demand values that best match the qualities offered by the service provider while achieving a target profit-user satisfaction margin. We also illustrate our methods with design examples in both cases.
For the entire collection see [Zbl 1491.65006].Multiproduct pricing with discrete price setshttps://zbmath.org/1500.910732023-01-20T17:58:23.823708Z"Manchiraju, Chandrasekhar"https://zbmath.org/authors/?q=ai:manchiraju.chandrasekhar"Dawande, Milind"https://zbmath.org/authors/?q=ai:dawande.milind-w"Janakiraman, Ganesh"https://zbmath.org/authors/?q=ai:janakiraman.ganeshSummary: We study a multiproduct pricing problem in which the prices of the products are restricted to \textit{discrete} and \textit{finite} sets. The demand for a product is a function of the prices of all the products. The prices of the products can be changed through time, subject to the aggregate consumption of each resource not exceeding its availability over the planning horizon. The focus of our work is the deterministic variant of this problem (wherein customer-arrival rates are deterministic), which is a key subproblem whose solution can be used to build effective policies for the stochastic variant (as in [\textit{G. Gallego} and \textit{G. van Ryzin}, Oper. Res. 45, No. 1, 24--41 (1997; Zbl 0889.90052)]). When the demand rate of each product is a concave function of the prices, we obtain an efficient and effective solution to the deterministic problem; the worst-case optimality gap of our solution depends on the curvature of the objective function. We obtain a similar performance guarantee for our solution under the linear attraction demand model and a special case of the multinomial logit (MNL) demand model. For a general demand function, the worst-case optimality gap of our solution depends on the curvature of both the objective function and the demand function. 
For the special case where the demand rate of a product depends only on its own price and not on the prices of the other products, we show that the deterministic problem can be efficiently solved via a linear program.Dynamic double auctions: toward first besthttps://zbmath.org/1500.910742023-01-20T17:58:23.823708Z"Balseiro, Santiago R."https://zbmath.org/authors/?q=ai:balseiro.santiago-r"Mirrokni, Vahab"https://zbmath.org/authors/?q=ai:mirrokni.vahab-s"Paes Leme, Renato"https://zbmath.org/authors/?q=ai:paes-leme.renato"Zuo, Song"https://zbmath.org/authors/?q=ai:zuo.songSummary: We study the problem of designing dynamic double auctions for two-sided markets in which a platform intermediates the trade between one seller offering independent items to multiple buyers, repeatedly over a finite horizon, when agents have private values. Motivated by online platforms for advertising, ride-sharing, and freelancing markets, we seek to design mechanisms satisfying the following properties: \textit{no positive transfers}, that is, the platform never asks the seller to make payments nor are buyers ever paid, and \textit{periodic individual rationality}, that is, every agent derives a nonnegative utility from every trade opportunity. We provide mechanisms satisfying these requirements that are asymptotically efficient and budget balanced with high probability as the number of trading opportunities grows. Our mechanisms thus overcome well-known impossibility results preventing efficient bilateral trade without subsidies in static environments. Moreover, we show that the average expected profit obtained by the platform under these mechanisms asymptotically approaches ``first best'' (the maximum possible welfare generated by the market). 
We also extend our approach to general environments with complex, combinatorial preferences.Competition in online markets with auctions and posted priceshttps://zbmath.org/1500.910752023-01-20T17:58:23.823708Z"Maslov, Alexander"https://zbmath.org/authors/?q=ai:maslov.alexanderSummary: The paper studies an online consumer-to-consumer market with limited supply, where sellers may list their items by posted prices or auctions. I show that when there is competition among sellers, they use only posted prices in the equilibrium. This result contrasts with the findings for a monopolistic seller listing objects by auctions and posted prices on markets with infinite supply, where using both mechanisms is the equilibrium. The model helps to explain the trends documented in \textit{L. Einav} et al. [Econometrica 80, No. 4, 1387--1432 (2012; Zbl 1274.91195)].Bridging bargaining theory with the regulation of a natural monopolyhttps://zbmath.org/1500.910762023-01-20T17:58:23.823708Z"Saglam, Ismail"https://zbmath.org/authors/?q=ai:saglam.ismailSummary: In this paper, we integrate the bargaining theory with the problem of regulating a natural monopoly under symmetric information or asymmetric information with complete ignorance. We prove that the unregulated payoffs under symmetric information and the optimally regulated payoffs under asymmetric information define a pair of bargaining sets which are dual to (reflections of) each other. Thanks to this duality, the bargaining solution under asymmetric information can be obtained from the solution under symmetric information by permuting the implied payoffs of the monopolist and consumers provided that the bargaining rule satisfies anonymity and homogeneity. 
We also show that under symmetric (asymmetric) information the bargaining payoffs (permuted payoffs) obtained under the Egalitarian, Nash, and Kalai-Smorodinsky rules are equivalent to the Cournot-Nash payoffs of unregulated symmetric oligopolies, involving two, three, and four firms, respectively. Moreover, we characterize two bargaining rules using, in addition to (weak or strong) Pareto optimality, several new axioms that depend only on the essentials of the regulation problem.Retailer leadership under monopolistic competitionhttps://zbmath.org/1500.910772023-01-20T17:58:23.823708Z"Til'zo, Ol'ga A."https://zbmath.org/authors/?q=ai:tilzo.olga-aSummary: A modification of the Dixit-Stiglitz model, supplemented by retailing, is investigated. Specifically, various Stackelberg equilibrium situations are considered under the leadership of the retailer and free entry of manufacturers into the market. For each situation, detailed solutions are provided that take into account the preferences of the participants in the market interaction. This makes it clear which of the considered situations is most beneficial for the retailer, the manufacturers, and society as a whole. Moreover, optimal taxation is considered. We identify situations in which it is beneficial for the state to tax the producer and situations in which, on the contrary, it is beneficial to subsidize the producer.
We consider the case of the retailer's leadership; namely, we study two types of behavior: with and without the free entry condition. Earlier, we obtained the following result: to increase social welfare and/or consumer surplus, the government needs to subsidize (not tax!) retailers. In the present paper, we develop these results for the situation when the producer imposes an entrance fee on retailers.Maximum Nash welfare and other stories about EFXhttps://zbmath.org/1500.910792023-01-20T17:58:23.823708Z"Amanatidis, Georgios"https://zbmath.org/authors/?q=ai:amanatidis.georgios"Birmpas, Georgios"https://zbmath.org/authors/?q=ai:birmpas.georgios"Filos-Ratsikas, Aris"https://zbmath.org/authors/?q=ai:filos-ratsikas.aris"Hollender, Alexandros"https://zbmath.org/authors/?q=ai:hollender.alexandros"Voudouris, Alexandros A."https://zbmath.org/authors/?q=ai:voudouris.alexandros-aSummary: We consider the classic problem of fairly allocating indivisible goods among agents with additive valuation functions and explore the connection between two prominent fairness notions: maximum Nash welfare (MNW) and envy-freeness up to any good (EFX). We establish that an MNW allocation is always EFX as long as there are at most two possible values for the goods, whereas this implication is no longer true for three or more distinct values. As a notable consequence, this proves the existence of EFX allocations for these restricted valuation functions. While the efficient computation of an MNW allocation for two possible values remains an open problem, we present a novel algorithm for directly constructing EFX allocations in this setting.
Finally, we study the question of whether an MNW allocation implies any EFX guarantee for general additive valuation functions under a natural new interpretation of approximate EFX allocations.Strategy-proof club formation with indivisible club facilitieshttps://zbmath.org/1500.910802023-01-20T17:58:23.823708Z"Dutta, Bhaskar"https://zbmath.org/authors/?q=ai:dutta.bhaskar"Kar, Anirban"https://zbmath.org/authors/?q=ai:kar.anirban"Weymark, John A."https://zbmath.org/authors/?q=ai:weymark.john-aSummary: We investigate the strategy-proof provision and financing of indivisible club good facilities when individuals are subject to congestion costs that are non-decreasing in the number of other club members and in a private type parameter. An allocation rule specifies how the individuals are to be partitioned into clubs and how the costs of the facilities are to be shared by club members as a function of the types. We show that some combinations of our axioms are incompatible when congestion costs are continuous and strictly increasing in the type parameter, but that all of them are compatible if congestion costs are dichotomous and there is equal cost sharing. We present a number of examples of allocation rules with equal cost sharing and determine which of our axioms they satisfy when the congestion cost is linear in the type parameter. We also show that using iterative voting on ascending size to determine a club partition is not, in general, strategy-proof when each facility's cost is shared equally.Correction to: ``Equilibrium computation in resource allocation games''https://zbmath.org/1500.910812023-01-20T17:58:23.823708Z"Harks, Tobias"https://zbmath.org/authors/?q=ai:harks.tobias"Tan-Timmermans, Veerle"https://zbmath.org/authors/?q=ai:tan-timmermans.veerleFrom the text: This note contains a correction of Theorems 1 and 2 and the subroutine \textsc{Restore} of our article [ibid. 194, No. 1--2 (A), 1--34 (2022; Zbl 1494.91069)]. 
The correction leads to slightly increased sensitivity bounds (by a factor \(n\)) but all main results of the original paper contained in the previous version remain qualitatively intact. In the following, we describe the corrected sensitivity results of Theorems 1 and 2 and prove correctness and running time of the changed subroutine \textsc{Restore}. The full and corrected version including all changed bounds can be found at [``Equilibrium computation in resource allocation games'', Preprint, \url{arXiv:1612.00190}]. We thank Alex Skopalik who communicated to us that the proof of Theorem 1 is incorrect.Activity's resources models: cooperation and competitionhttps://zbmath.org/1500.910822023-01-20T17:58:23.823708Z"Novikov, Dmitry"https://zbmath.org/authors/?q=ai:novikov.dmitrii-p|novikov.dmitry-i|novikov.dmitri-a|novikov.dmitrySummary: Game-theoretical models of competition and cooperation of subjects of joint complex activity are considered for the case of limited resources. Substantive interpretations include competition for an external resource (including the dynamics of the subjects' experience), competition for the market, and competition for subordinates in matrix organizational structures.
For the entire collection see [Zbl 1495.93004].Online resource allocation with personalized learninghttps://zbmath.org/1500.910832023-01-20T17:58:23.823708Z"Zhalechian, Mohammad"https://zbmath.org/authors/?q=ai:zhalechian.mohammad"Keyvanshokooh, Esmaeil"https://zbmath.org/authors/?q=ai:keyvanshokooh.esmaeil"Shi, Cong"https://zbmath.org/authors/?q=ai:shi.cong"Van Oyen, Mark P."https://zbmath.org/authors/?q=ai:van-oyen.mark-pSummary: Joint online learning and resource allocation is a fundamental problem inherent in many applications. In a general setting, heterogeneous customers arrive sequentially, each of which can be allocated to a resource in an online fashion. Customers stochastically consume the resources, allocations yield stochastic rewards, and the system receives feedback outcomes with delay. We introduce a generic framework that judiciously synergizes online learning with a broad class of online resource allocation mechanisms, where the sequence of customer contexts is adversarial, and the customer reward and the resource consumption are stochastic and unknown. First, we propose an online algorithm for a general resource allocation problem, called personalized resource allocation while learning with delay, which strikes a three-way balance between exploration, exploitation, and hedging against adversarial arrival sequence. We provide a performance guarantee for this online algorithm in terms of Bayesian regret. Next, we develop our second online algorithm for an advance scheduling problem, called personalized advance scheduling while learning with delay (PAS-LD), and evaluate its theoretical performance. The PAS-LD algorithm has a more delicate structure and offers multiday scheduling while accounting for the no-show behavior of customers. We demonstrate the practicality and efficacy of our PAS-LD algorithm using clinical data from a partner health system. 
Our results show that the proposed algorithm performs well compared with several benchmark policies.Patent portfolios and firms' technological choiceshttps://zbmath.org/1500.910842023-01-20T17:58:23.823708Z"Comino, Stefano"https://zbmath.org/authors/?q=ai:comino.stefano"Manenti, Fabio M."https://zbmath.org/authors/?q=ai:manenti.fabio-mSummary: In many industrial sectors, firms amass large patent portfolios to reinforce their bargaining position with competitors. In a context where patents have a pure strategic nature, we discuss how the presence and effectiveness of a patent system affect the technology decisions of firms. Specifically, we present a game where firms choose whether to \textit{agglomerate} (i.e. develop technologies for the same technological territory) or to \textit{separate} (i.e. develop technologies for different territories) prior to taking their patenting decisions. We show that strong patents may distort technology choices, causing firms to follow inefficient technology trajectories in an attempt to reduce their competitors' patenting activity. We also discuss how such distortions change when a firm is prevented from obtaining its optimal number of patents.Equilibrium CEO contract with belief heterogeneityhttps://zbmath.org/1500.910852023-01-20T17:58:23.823708Z"Bianchi, Milo"https://zbmath.org/authors/?q=ai:bianchi.milo"Dana, Rose-Anne"https://zbmath.org/authors/?q=ai:dana.rose-anne"Jouini, Elyès"https://zbmath.org/authors/?q=ai:jouini.elyesSummary: Consider a firm owned by shareholders with heterogeneous beliefs and run by a manager who chooses random production plans. Shareholders do not observe the chosen plan but only its realization. The financial market consists of assets contingent on production realizations. A contract for the manager specifies her compensation as a function of the firm's production and possibly some restrictions to trade in the financial market. Shareholders are unrestricted.
We define a concept of equilibrium between the manager and shareholders such that the equilibrium production plan is unanimously preferred by the manager and the shareholders, markets clear and the manager has no incentive to cheat. We first analyze the properties of such equilibria and in particular show that the contract should restrict the manager from trading. We next provide a framework where such equilibria exist. We lastly study the properties of equilibrium compensations when shareholders have beliefs that can be ranked in terms of optimism towards the equilibrium plan. Specific attention is given to their departure from linear compensations.Risk aversion and equilibrium selection in a vertical contracting setting: an experimenthttps://zbmath.org/1500.910862023-01-20T17:58:23.823708Z"Pasquier, Nicolas"https://zbmath.org/authors/?q=ai:pasquier.nicolas"Bonroy, Olivier"https://zbmath.org/authors/?q=ai:bonroy.olivier"Garapin, Alexis"https://zbmath.org/authors/?q=ai:garapin.alexisSummary: The theoretical literature on vertical relationships usually assumes that beliefs about secret contracts take specific forms. In a recent paper, Eguia et al. [Games Econ. Behav. 109, 465--483 (2018; Zbl 1390.91041)] propose a new selection criterion that does not impose any restriction on beliefs. In this article, we extend their criterion by generalizing it to risk-averse retailers, and we show that risk aversion modifies the size of the belief subsets that support each equilibrium. We conduct an experiment which revisits that of Eguia et al. [loc. cit.]. We design a new treatment whose effect on equilibrium selection depends on the retailers' risk sensitivity. Experimental results confirm the treatment effect: the greater the sensitivity towards risk, the more the equilibrium played is consistent with passive beliefs. In addition, extending Eguia et al.'s [loc. cit.]
criterion to risk-averse retailers improves its predictive power on the equilibria played, especially for a population of retailers with moderate to extreme risk aversion.An agent-based model of consumer choice. An evaluation of the strategy of pricing and advertisinghttps://zbmath.org/1500.910872023-01-20T17:58:23.823708Z"Kot, Michał"https://zbmath.org/authors/?q=ai:kot.michalSummary: The authors develop an agent-based model of the market where firms and consumers exchange products. Consumers in the model are heterogeneous in terms of features, such as risk-aversion or owned assets, which impact their individual decisions. Consumers constantly learn about products' features through personal experience, word-of-mouth, or advertising, update their expectations and share their opinions with others. On the supply side of the model, firms can influence consumers with two marketing tools: advertising and pricing policy. A series of experiments has been conducted with the model to investigate the relationship between advertising and pricing and to understand the underlying mechanism. Marketing strategies have been evaluated in terms of generated profit and recommendations have been formulated.Negligence rules coping with hidden precautionhttps://zbmath.org/1500.910882023-01-20T17:58:23.823708Z"Schweizer, Urs"https://zbmath.org/authors/?q=ai:schweizer.ursSummary: This paper investigates the implementation of negligence rules when the negligent act constitutes a hidden action in the sense of principal-agent theory and where the available evidence is generated by a signal. Any liability rule exclusively based on the available evidence comes with an incentive threshold. Agents with cost savings from the negligent act above this threshold commit the negligent act whereas the others do not. In a first step, liability rules are examined that implement a given incentive threshold at minimum error costs.
As this is a linear programming problem, the present paper makes heavy use of duality theory. The multiplier of the incentive constraint, if understood as a shadow price, allows for an intuitive explanation of the results. As a second step, a comparative statics analysis with respect to the incentive threshold is provided. Surprisingly enough, the relation between the threshold and minimum error costs need not be monotonic.Optimal contracting under mean-volatility joint ambiguity uncertaintieshttps://zbmath.org/1500.910892023-01-20T17:58:23.823708Z"Sung, Jaeyoung"https://zbmath.org/authors/?q=ai:sung.jaeyoungSummary: We examine a continuous-time principal-agent problem under mean-volatility joint ambiguity uncertainties. Both the principal and the agent exhibit Gilboa-Schmeidler's [\textit{I. Gilboa} and \textit{D. Schmeidler}, J. Math. Econ. 18, No. 2, 141--153 (1989; Zbl 0675.90012)] extreme ambiguity aversion with exponential utilities. We distinguish between \textit{ex post} realized and \textit{ex ante} perceived volatilities, and argue that the second-best contract necessarily consists of two sharing rules: one for realized outcome and the other for realized volatility. The outcome-sharing rule is for uncertainty sharing and work incentives, as usual, and the volatility-sharing rule is to align the agent's worst prior with that of the principal. At optimum, their worst priors are symmetrized, and realized compensation is positively related to realized volatility. This theoretical positive relation can be consistent with popular managerial compensation practices such as restricted stock plus stock option grants.
A closed-form solution to a linear-quadratic example is provided.The complex interplay between COVID-19 and economic activityhttps://zbmath.org/1500.910902023-01-20T17:58:23.823708Z"Cerqueti, Roy"https://zbmath.org/authors/?q=ai:cerqueti.roy"Tramontana, Fabio"https://zbmath.org/authors/?q=ai:tramontana.fabio"Ventura, Marco"https://zbmath.org/authors/?q=ai:ventura.marcoSummary: We introduce a dynamical system to model the complex interaction between COVID-19 and economic activity. The model introduces some novelties not accounted for by SIR-like models. The equilibrium of the system is an unstable focus, with fluctuations having increasing size and periodicity. Numerical simulations of the model produce waves which reproduce the pandemic dynamics. By observing the stylized facts linking economics and the pandemic, and stating related reasonable assumptions, we obtain Lotka-Volterra co-dynamics. This outcome is confirmed by extensive simulations. The outcomes obtained qualitatively replicate some important stylized facts, deepening the knowledge about the role of some parameters in their origin and eventually in their shaping.A model for competition of technologies for limiting resourceshttps://zbmath.org/1500.910912023-01-20T17:58:23.823708Z"Mustafin, A."https://zbmath.org/authors/?q=ai:mustafin.almaz"Kantarbayeva, A."https://zbmath.org/authors/?q=ai:kantarbayeva.aliya-kSummary: A mathematical model for the development of technologies competing for common productive resources is proposed and analyzed. The model is based on the principles of evolutionary economics and is given by a ``consumer-resource'' system of equations. Consumers are homogeneous populations of firms employing the same technology. The output of firms is characterized by the production function with complementary factors. A technology can increase owing to the entry of new firms at a specific rate proportional to the output, and decrease due to the ruin of a firm.
Resources consumed enter the industry from the outside; unused resources leave the industry. The lower the minimum demand of a technology for a given resource, the higher its competitiveness with respect to this resource. We obtain the conditions for the coexistence of technologies, according to which each competitor should surpass the others in the efficiency of using one resource and be inferior to them in the efficiency of using other resources. We show the existence of two fundamentally different mechanisms of natural selection of the dominant technology, namely, by selection value and by the initial conditions. We investigate the potential possibility of regulating the technological diversity of the industry by managing the rates of resource supply.Protectionist demands in globalizationhttps://zbmath.org/1500.910922023-01-20T17:58:23.823708Z"Kıbrıs, Arzu"https://zbmath.org/authors/?q=ai:kibris.arzu"Kıbrıs, Özgür"https://zbmath.org/authors/?q=ai:kibris.ozgur"Gürdal, Mehmet Yiğit"https://zbmath.org/authors/?q=ai:gurdal.mehmet-yigitSummary: We construct a game theoretic model that offers to explain the increase in trade protectionism as a rational reaction of the voters to their increased concern that the policy choices of their governments are being influenced by international actors. More specifically, we construct a small open economy in which the citizens declare their most preferred tariff rate on an import good to their government. While the government has incentive not to deviate too much from the publicly demanded tariff rate, its final decision is determined after bargaining with a foreign lobby which offers benefits to the government in return of lowered tariffs. We show that the expectation of such foreign influence affects the citizens' voting behavior. Namely, they tend to vote for more protectionist policies. 
Moreover, this behavior leads to an increase in benefits by the foreign lobby to the government.Walrasian equilibrium without homogeneity and Walras' lawhttps://zbmath.org/1500.910932023-01-20T17:58:23.823708Z"D'Agata, Antonio"https://zbmath.org/authors/?q=ai:dagata.antonioSummary: This paper introduces two new boundary conditions that ensure the existence of a Walrasian equilibrium in monetary economies violating Walras' Law and homogeneity of degree zero of the excess demand function. The two conditions are more general than those adopted by the existing literature. The existence of a Walrasian equilibrium with free disposal is also considered. As a by-product of our analysis, a refinement of the celebrated Uzawa's equivalence theorem and extensions of the Hartman-Stampacchia and Poincaré-Miranda theorems are also provided.Research of cooperation strategy of government-enterprise digital transformation based on differential gamehttps://zbmath.org/1500.910942023-01-20T17:58:23.823708Z"Xie, Weihong"https://zbmath.org/authors/?q=ai:xie.weihong"Zheng, Diwen"https://zbmath.org/authors/?q=ai:zheng.diwen"Luo, Jianbin"https://zbmath.org/authors/?q=ai:luo.jianbin"Wang, Zhong"https://zbmath.org/authors/?q=ai:wang.zhong.1"Wang, Yongjian"https://zbmath.org/authors/?q=ai:wang.yongjian(no abstract)Matching with contracts: calculation of the complete set of stable allocationshttps://zbmath.org/1500.910952023-01-20T17:58:23.823708Z"Pepa Risma, Eliana"https://zbmath.org/authors/?q=ai:pepa-risma.elianaSummary: For a many-to-many matching model with contracts, where all the agents have substitutable preferences, we provide an algorithm to compute the full set of stable allocations. 
This is based on the lattice structure of such a set.Impact of financial crisis on economic growth: a stochastic modelhttps://zbmath.org/1500.910962023-01-20T17:58:23.823708Z"Tadmon, Calvin"https://zbmath.org/authors/?q=ai:tadmon.calvin"Tchaptchet, Eric Rostand Njike"https://zbmath.org/authors/?q=ai:njike-tchaptchet.eric-rostand(no abstract)Sharing the global outcomes of finite natural resource exploitation: a dynamic coalitional stability perspectivehttps://zbmath.org/1500.910972023-01-20T17:58:23.823708Z"Gonzalez, Stéphane"https://zbmath.org/authors/?q=ai:gonzalez.stephane"Rostom, Fatma Zahra"https://zbmath.org/authors/?q=ai:rostom.fatma-zahraSummary: The article explores the implications of natural resource scarcity in terms of global cooperation and trade. We investigate whether there exist stable international long-term agreements that take into account the disparities between countries in terms of geological endowments and productive capacity, while caring about future generations. For that purpose, we build an original cooperative game framework, where countries can form coalitions in order to optimize their discounted consumption stream in the long run, within the limits of their stock of natural resources. We use the concept of the strong sequential core that satisfies both coalitional stability and time consistency. We show that this set is nonempty, implying that an international long-term agreement along the optimal path will be self-enforcing.
The presented model sets out a conceptual framework for exploring the fair sharing of the fruits of global economic growth.The impact of geometry on monochrome regions in the flip Schelling processhttps://zbmath.org/1500.910982023-01-20T17:58:23.823708Z"Bläsius, Thomas"https://zbmath.org/authors/?q=ai:blasius.thomas"Friedrich, Tobias"https://zbmath.org/authors/?q=ai:friedrich.tobias"Krejca, Martin S."https://zbmath.org/authors/?q=ai:krejca.martin-s"Molitor, Louise"https://zbmath.org/authors/?q=ai:molitor.louiseUsing probability- and graph-theoretical approaches, the authors consider the model of self-organized segregation of agents of two types placed on a graph. In a nutshell, the model assumes that an agent tends to change his type depending on the predominant type of his neighborhood. For random geometric and Erdős-Rényi graphs, general results are shown for the expected fraction of monochrome edges (i.e. with similar-type vertices) of a graph. The ability of the joint neighborhood of two vertices to push them towards a similar type in the process of segregation is linked to a concept of decisiveness, formalized as the excess of the number of majority-type vertices over the minority type. It is shown that greater decisiveness of the joint neighborhood of two vertices, as compared to the decisiveness of their exclusive neighborhoods, leads to an elevated probability of the two vertices falling into the same type. Further, assuming a simple probabilistic model in which the initial types of vertices are randomly set with probabilities \(1/2\) for each of the two types, it is shown that decisiveness is related to the sheer size of the neighborhood. Formal results are supplemented by simulations. For random geometric graphs, where the vertices are connected depending on their spatial proximity and, therefore, common neighborhoods are easily formed, it is shown that segregation is likely to occur.
For the Erdős-Rényi graphs lacking spatial structure, on the other hand, the model eventually leads to a monochrome-edges state.
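The monochrome-edge fraction discussed in this review can be illustrated with a small simulation. The sketch below is purely illustrative and not the authors' construction: it samples an Erdős-Rényi graph, assigns each vertex one of two types uniformly at random, runs a few synchronous majority-flip rounds, and tracks the fraction of monochrome edges (all parameter values are arbitrary choices).

```python
import random

def erdos_renyi(n, p, rng):
    """Sample an undirected Erdos-Renyi graph G(n, p) as an adjacency list."""
    adj = [set() for _ in range(n)]
    for u in range(n):
        for v in range(u + 1, n):
            if rng.random() < p:
                adj[u].add(v)
                adj[v].add(u)
    return adj

def monochrome_fraction(adj, types):
    """Fraction of edges whose endpoints carry the same type."""
    mono = total = 0
    for u, nbrs in enumerate(adj):
        for v in nbrs:
            if u < v:  # count each undirected edge once
                total += 1
                mono += types[u] == types[v]
    return mono / total if total else 0.0

def majority_flip(adj, types):
    """One synchronous round: every vertex adopts the majority type of
    its neighborhood (ties keep the current type)."""
    new = list(types)
    for u, nbrs in enumerate(adj):
        ones = sum(types[v] for v in nbrs)
        if 2 * ones > len(nbrs):
            new[u] = 1
        elif 2 * ones < len(nbrs):
            new[u] = 0
    return new

rng = random.Random(0)
n = 200
adj = erdos_renyi(n, 0.05, rng)
types = [rng.randrange(2) for _ in range(n)]
before = monochrome_fraction(adj, types)
for _ in range(5):
    types = majority_flip(adj, types)
after = monochrome_fraction(adj, types)
print(f"monochrome fraction: {before:.3f} -> {after:.3f}")
```

With a random half-half initialization the starting fraction is close to \(1/2\); the flip rounds drive it sharply upward, consistent with the near-monochrome regime the review describes for Erdős-Rényi graphs.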
Reviewer: Dalkhat M. Ediev (Cherkessk)Optimal majority dynamics for the diffusion of an opinion when multiple alternatives are availablehttps://zbmath.org/1500.910992023-01-20T17:58:23.823708Z"Auletta, Vincenzo"https://zbmath.org/authors/?q=ai:auletta.vincenzo"Ferraioli, Diodato"https://zbmath.org/authors/?q=ai:ferraioli.diodato"Greco, Gianluigi"https://zbmath.org/authors/?q=ai:greco.gianluigiSummary: We consider opinion diffusion on social graphs where agents hold opinions and where social pressure leads them to conform to the opinion manifested by the majority of their neighbors. Within this setting, we look for dynamics that allows us to maximize the diffusion of a target opinion given the initial opinions of all agents. In particular, we focus on the setting where more than two opinions are available to the agents, and we show that the properties of this setting are entirely different from those characterizing the setting where agents hold binary opinions only. Indeed, while it is well-known that greedy dynamics are always optimal ones in the binary case, this is no longer true in our more general setting and -- rather surprisingly -- even if just three opinions are available. Moreover, while it is possible to decide in polynomial time if a dynamics leading to consensus exists when agents have two available opinions, the problem becomes computationally intractable with three opinions, regardless of the fraction of agents that have the target opinion as their initial opinion.Persuasion in networks: public signals and coreshttps://zbmath.org/1500.911002023-01-20T17:58:23.823708Z"Candogan, Ozan"https://zbmath.org/authors/?q=ai:candogan.ozanSummary: We consider a setting where agents in a social network take binary actions that exhibit local strategic complementarities. Their payoffs are affine and increasing in an underlying real-valued state of the world. 
An information designer commits to a signaling mechanism that publicly reveals a signal that is potentially informative about the state. She wants to maximize the expected number of agents who take action 1. We study the structure and design of optimal public signaling mechanisms. The designer's payoff is an increasing step function of the posterior mean (of the state) induced by the realization of her signal. We provide a convex optimization formulation and an algorithm that obtain an optimal public signaling mechanism whenever the designer's payoff admits this structure. This structure is prevalent, making our formulation and results useful well beyond persuasion in networks. In our problem, the step function is characterized in terms of the cores of the underlying network. The optimal mechanism is based on a ``double-interval partition'' of the set of states: it associates up to two subintervals of the set of states with each core, and when the state realization belongs to the interval(s) associated with a core, the mechanism publicly reveals this fact. In turn, this induces the agents in the relevant core to take action 1. We also provide a framework for obtaining asymptotically optimal public signaling mechanisms for a class of random networks. Our approach uses only the limiting degree distribution information, thereby making it useful even when the network structure is not fully known. Finally, we explore which networks are more amenable to persuasion, and show that more assortative connection structures lead to larger payoffs for the designer. Conversely, the dependence of the designer's payoff on the agents' degrees can be quite counterintuitive. 
In particular, we focus on networks sampled uniformly at random from the set of all networks consistent with a degree sequence and illustrate that when the degrees of some nodes increase, this can reduce the designer's expected payoff, despite an increase in the extent of (positive) network externalities.Seeding with costly network informationhttps://zbmath.org/1500.911012023-01-20T17:58:23.823708Z"Eckles, Dean"https://zbmath.org/authors/?q=ai:eckles.dean"Esfandiari, Hossein"https://zbmath.org/authors/?q=ai:esfandiari.hossein"Mossel, Elchanan"https://zbmath.org/authors/?q=ai:mossel.elchanan"Rahimian, M. Amin"https://zbmath.org/authors/?q=ai:rahimian.mohammad-aminSummary: We study the task of selecting \(k\) nodes, in a social network of size \(n\), to seed a diffusion with maximum expected spread size, under the independent cascade model with cascade probability \(p\). Most of the previous work on this problem (known as influence maximization) focuses on efficient algorithms to approximate the optimal seed set with provable guarantees given knowledge of the entire network; however, obtaining full knowledge of the network is often very costly in practice. Here we develop algorithms and guarantees for approximating the optimal seed set while bounding how much network information is collected. First, we study the achievable guarantees using a sublinear influence sample size. We provide an almost tight approximation algorithm with an additive \(\varepsilon\) \( n\) loss and show that the squared dependence of sample size on \(k\) is asymptotically optimal when \(\varepsilon\) is small. We then propose a probing algorithm that queries edges from the graph and use them to find a seed set with the same almost tight approximation guarantee. We also provide a matching (up to logarithmic factors) lower-bound on the required number of edges. This algorithm is implementable in field surveys or in crawling online networks. 
Our probing takes \(p\) as an input which may not be known in advance, and we show how to down-sample the probed edges to match the best estimate of \(p\) if they are collected with a higher probability. Finally, we test our algorithms on an empirical network to quantify the tradeoff between the cost of obtaining more refined network information and the benefit of the added information for guiding improved seeding strategies.Optimizing opinions with stubborn agentshttps://zbmath.org/1500.911022023-01-20T17:58:23.823708Z"Hunter, David Scott"https://zbmath.org/authors/?q=ai:hunter.david-scott"Zaman, Tauhid"https://zbmath.org/authors/?q=ai:zaman.tauhidSummary: We consider the problem of optimizing the placement of stubborn agents in a social network in order to maximally influence the population. We assume the network contains stubborn users whose opinions do not change, and nonstubborn users who can be persuaded. We further assume that the opinions in the network are in an equilibrium that is common to many opinion dynamics models, including the well-known DeGroot model. We develop a discrete optimization formulation for the problem of maximally shifting the equilibrium opinions in a network by targeting users with stubborn agents. The opinion objective functions that we consider are the opinion mean, the opinion variance, and the number of individuals whose opinion exceeds a fixed threshold. We show that the mean opinion is a monotone submodular function, allowing us to find a good solution using a greedy algorithm. We find that on real social networks in Twitter consisting of tens of thousands of individuals, a small number of stubborn agents can nontrivially influence the equilibrium opinions. Furthermore, we show that our greedy algorithm outperforms several common benchmarks. 
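The greedy selection that the monotone-submodularity result licenses can be sketched in a few lines. The following is an illustrative toy, not the paper's implementation: a simple coverage function (with made-up reach sets) stands in for the mean-opinion objective, and the standard greedy rule with the classical \((1 - 1/e)\) guarantee picks the seeds.

```python
def greedy_submodular(ground, f, k):
    """Greedily select k elements to maximize a monotone submodular set
    function f; the classical (1 - 1/e)-approximation heuristic."""
    chosen = set()
    for _ in range(k):
        best, best_gain = None, -1.0
        for x in ground - chosen:
            gain = f(chosen | {x}) - f(chosen)  # marginal gain of adding x
            if gain > best_gain:
                best, best_gain = x, gain
        chosen.add(best)
    return chosen

# Toy coverage objective: f(S) = number of nodes reached by seeds in S
# (the reach sets below are hypothetical).
reach = {"a": {1, 2, 3}, "b": {3, 4}, "c": {4, 5}}
f = lambda S: len(set().union(*(reach[x] for x in S))) if S else 0

seeds = greedy_submodular(set(reach), f, 2)
print(seeds, f(seeds))  # picks {'a', 'c'}, covering all 5 nodes
```

Here greedy first takes "a" (gain 3) and then "c" (gain 2, versus 1 for "b"), which in this tiny instance is also the exact optimum.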
We then propose an opinion dynamics model where users communicate noisy versions of their opinions, communications are random, users grow more stubborn with time, and there is heterogeneity in how users' stubbornness increases. We prove that, under fairly general conditions on the stubbornness rates of the individuals, the opinions in this model converge to the same equilibrium as the DeGroot model, despite the randomness and user heterogeneity in the model.Flock effect drives information fractal propagationhttps://zbmath.org/1500.911032023-01-20T17:58:23.823708Z"Nian, Fuzhong"https://zbmath.org/authors/?q=ai:nian.fuzhong"Cui, Yuanlin"https://zbmath.org/authors/?q=ai:cui.yuanlin"Yang, Yang"https://zbmath.org/authors/?q=ai:yang.yang.17"Wang, Xingyuan"https://zbmath.org/authors/?q=ai:wang.xingyuanSummary: In this paper, first, the structure of social networks was investigated from a fractal perspective and the fractal network was built from online social networks. Then, inspired by the flock effect, the characteristics of ``separation'', ``cohesion'' and ``alignment'' in information propagation were studied from the inherent granularity structure of fractal communities. Accordingly, the agreement degree and controversy degree between community members at different granularities were defined. And the ``seeking common ground while shelving differences'' propagation rule in the fractal network was presented. On this basis, a new S2I propagation model was proposed. In this model, communicators were divided into two states, I and OI. People in state I have a tendency to express support for a message, while those in state OI have a tendency to oppose it. The information propagation process and characteristics were analyzed further by using the S2I model.
Finally, the effectiveness of S2I model was verified by comparative experiments of simulation and real data analysis.Profit maximization for competitive social advertisinghttps://zbmath.org/1500.911042023-01-20T17:58:23.823708Z"Shi, Qihao"https://zbmath.org/authors/?q=ai:shi.qihao"Wang, Can"https://zbmath.org/authors/?q=ai:wang.can"Ye, Deshi"https://zbmath.org/authors/?q=ai:ye.deshi"Chen, Jiawei"https://zbmath.org/authors/?q=ai:chen.jiawei"Zhou, Sheng"https://zbmath.org/authors/?q=ai:zhou.sheng"Feng, Yan"https://zbmath.org/authors/?q=ai:feng.yan"Chen, Chun"https://zbmath.org/authors/?q=ai:chen.chun"Huang, Yanhao"https://zbmath.org/authors/?q=ai:huang.yanhaoSummary: In social advertising, the social platform host may run marketing campaigns for multiple competing clients simultaneously. In this case, each client comes up with a budget and an influence spread requirement. The host runs campaigns by allocating a set of seed nodes for each client. If the influence spread triggered by a seed set meets the requirement, the host can earn the budget from the corresponding client. In this paper, we study the problem of profit maximization, considering that different seeds incur different costs. Given all the clients' requirements met, we aim to find the optimal seed allocation with minimum cost. Under the competitive K-LT propagation model, we show the profit maximization problem is NP-hard and NP-hard to approximate with any factor. To find a feasible solution, we propose an effective algorithm that iteratively selects a candidate set and obtains an approximate allocation. 
The experimental results over a real-world dataset validate the effectiveness of the proposed methods.Rumor correction maximization problem in social networkshttps://zbmath.org/1500.911052023-01-20T17:58:23.823708Z"Zhang, Yapu"https://zbmath.org/authors/?q=ai:zhang.yapu"Yang, Wenguo"https://zbmath.org/authors/?q=ai:yang.wenguo"Du, Ding-Zhu"https://zbmath.org/authors/?q=ai:du.ding-zhuSummary: Admittedly, innovations can spread rapidly in online social networks, while the spread of malicious rumors can lead to a series of negative consequences. Therefore, it is necessary to take effective measures to limit the influence of negative information. In reality, people will become an adopter of innovations after being influenced by their friends. Meanwhile, they can be more likely to become a follower if they have received relevant information in advance. Motivated by these observations, we study the rumor correction maximization problem using both seed and boost nodes. We first focus on the boost nodes and propose the \textit{boosting rumor correction maximization} (BRCM) problem under the \textit{boosting independent cascade} model. We prove that the BRCM problem is NP-hard, and the objective function is non-submodular. To handle it, we devise an efficient algorithm with a data-dependent approximation ratio. To explore the seed nodes, the \textit{seed selection} problem and \textit{minimum seed selection} problem are proposed, respectively. Accordingly, we design two efficient algorithms. 
Finally, extensive empirical results in three networks manifest the efficiency of our approaches and show superiority over other baselines.A model of semantic completion in generative episodic memoryhttps://zbmath.org/1500.911062023-01-20T17:58:23.823708Z"Fayyaz, Zahra"https://zbmath.org/authors/?q=ai:fayyaz.zahra"Altamimi, Aya"https://zbmath.org/authors/?q=ai:altamimi.aya"Zoellner, Carina"https://zbmath.org/authors/?q=ai:zoellner.carina"Klein, Nicole"https://zbmath.org/authors/?q=ai:klein.nicole"Wolf, Oliver T."https://zbmath.org/authors/?q=ai:wolf.oliver-t"Cheng, Sen"https://zbmath.org/authors/?q=ai:cheng.sen"Wiskott, Laurenz"https://zbmath.org/authors/?q=ai:wiskott.laurenzSummary: Many studies have suggested that episodic memory is a generative process, but most computational models adopt a storage view. In this article, we present a model of the generative aspects of episodic memory. It is based on the central hypothesis that the hippocampus stores and retrieves selected aspects of an episode as a memory trace, which is necessarily incomplete. At recall, the neocortex reasonably fills in the missing parts based on general semantic information in a process we call semantic completion. The model combines two neural network architectures known from machine learning, the vector-quantized variational autoencoder (VQ-VAE) and the pixel convolutional neural network (PixelCNN). As episodes, we use images of digits and fashion items (MNIST) augmented by different backgrounds representing context. The model is able to complete missing parts of a memory trace in a semantically plausible way up to the point where it can generate plausible images from scratch, and it generalizes well to images not trained on. Compression as well as semantic completion contribute to a strong reduction in memory requirements and robustness to noise. 
Finally, we also model an episodic memory experiment and can reproduce that semantically congruent contexts are always recalled better than incongruent ones, high attention levels improve memory accuracy in both cases, and contexts that are not remembered correctly are more often remembered semantically congruently than completely wrong. This model contributes to a deeper understanding of the interplay between episodic memory and semantic information in the generative process of recalling the past.Algorithm of the correction of bigram method for the problem of the text author identificationhttps://zbmath.org/1500.911072023-01-20T17:58:23.823708Z"Voronina, M. Yu."https://zbmath.org/authors/?q=ai:voronina.m-yu"Kislitsyn, A. A."https://zbmath.org/authors/?q=ai:kislitsyn.alexey-a"Orlov, Yu. N."https://zbmath.org/authors/?q=ai:orlov.yurii-nSummary: The paper proposes a model for recognizing authors of literary texts based on the proximity of an individual text to the author's standard. The standard is the empirical frequency distribution of letter combinations, constructed according to all reliably known works of the author. Proximity is understood in the sense of the norm in L1. The author of an unknown text is assigned the one to whose standard the text under test is closest. For identification, a library of authors is used, each of which has a sufficiently large number of works defining the corresponding standards of two letter combinations. Testing of this identification method on the authors of the library has shown that it is very accurate. In the analyzed corpus of texts, 1783 texts of 100 authors were collected, the recognition error by the best method turned out to be 0.12. It is important that after the exclusion of erroneously recognized texts, a library of 88 authors and 1450 texts remained, each of which was identified correctly. 
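The nearest-standard rule described in this summary can be sketched directly: build an empirical bigram (letter-pair) frequency distribution per author, and assign a text to the author whose standard minimizes the \(L^1\) distance. The mini-corpora below are made up for illustration, standing in for the paper's library of author standards.

```python
from collections import Counter

def bigram_dist(text):
    """Empirical frequency distribution of adjacent letter pairs."""
    letters = [c for c in text.lower() if c.isalpha()]
    pairs = Counter(zip(letters, letters[1:]))
    total = sum(pairs.values())
    return {bg: n / total for bg, n in pairs.items()}

def l1(p, q):
    """L1 distance between two sparse frequency distributions."""
    keys = set(p) | set(q)
    return sum(abs(p.get(k, 0.0) - q.get(k, 0.0)) for k in keys)

def identify(text, standards):
    """Assign the text to the author whose standard is closest in L1."""
    d = bigram_dist(text)
    return min(standards, key=lambda author: l1(d, standards[author]))

# Toy "standards" built from tiny hypothetical corpora.
standards = {
    "A": bigram_dist("the theory of the game and the theory of measure"),
    "B": bigram_dist("zzzap zzzoom buzzing fuzzy jazz pizzazz"),
}
print(identify("a theorem on the measure of a game", standards))  # -> A
```

The test text shares many bigrams ("th", "he", "me", "ea", ...) with author A's corpus and almost none with B's, so the minimum-distance rule assigns it to A.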
The problem under study is the assessment of the probability that there is no standard of the author of the tested text among the library standards. To solve it, the paper analyzes the dependence of the probability of erroneous identification on the length of the text. Using the example of an unmistakably determined subgroup of texts, it turned out that the empirical probability of correct recognition of a text fragment, although it decreases with a decrease in the length of the fragment, still exceeds 0.5 up to the fragmentation of the text into 10 parts. If we take smaller fragments, some of them are identified incorrectly. If the correct standard is excluded from consideration, the second closest standard is assigned as such, but it turns out to be unstable: the ambiguity of such identification of the author of fragments occurs already when the text is cut into 4 fragments. Thus, the stability of the identification of the author of text fragments can be proposed as a criterion for the correctness of the method.Valuation of a DB underpin hybrid pension under a regime-switching Lévy modelhttps://zbmath.org/1500.911082023-01-20T17:58:23.823708Z"Ai, Meiqiao"https://zbmath.org/authors/?q=ai:ai.meiqiao"Zhang, Zhimin"https://zbmath.org/authors/?q=ai:zhang.zhimin.1"Zhong, Wei"https://zbmath.org/authors/?q=ai:zhong.wei|zhong.wei.1|zhong.wei.2Summary: This paper studies the valuation problem of the defined benefit (DB) underpin guarantee. We consider that the salary process follows a geometric Brownian motion, and the stochastic price index process of the funds in the defined contribution (DC) account is modeled by a regime-switching Lévy process. Under this framework, the explicit valuation formula of the DB underpin option is derived by the Fourier cosine series expansion (COS) method, and the corresponding error analysis is provided. Numerous simulation experiments are performed to illustrate the accuracy and efficiency of the proposed method. 
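The COS method referenced in this abstract recovers a density from its characteristic function via a truncated Fourier cosine expansion, in the style of Fang and Oosterlee's construction. The sketch below uses the standard normal as a self-checking test case; it is only an illustration of the expansion itself, not of the paper's regime-switching Lévy model or the DB underpin payoff.

```python
import cmath
import math

def cos_density(cf, x, a, b, N):
    """Approximate a density at x on the truncation interval [a, b]
    from its characteristic function cf via the COS expansion."""
    s = 0.0
    for k in range(N):
        u = k * math.pi / (b - a)
        # Cosine coefficient: A_k = 2/(b-a) * Re[ cf(u) * exp(-i*u*a) ]
        A = 2.0 / (b - a) * (cf(u) * cmath.exp(-1j * u * a)).real
        w = 0.5 if k == 0 else 1.0  # first term gets weight 1/2
        s += w * A * math.cos(u * (x - a))
    return s

# Standard normal: cf(u) = exp(-u^2 / 2); compare against the exact pdf.
cf = lambda u: cmath.exp(-0.5 * u * u)
approx = cos_density(cf, 0.0, -10.0, 10.0, 64)
exact = 1.0 / math.sqrt(2.0 * math.pi)
print(approx, exact)
```

Because the cosine coefficients of a Gaussian decay super-exponentially, a modest number of terms (here 64 on \([-10, 10]\)) already reproduces the exact value \(1/\sqrt{2\pi}\) to high precision, which is the kind of efficiency the abstract's numerical experiments exploit.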
In addition, the convergence of this method and its sensitivity with respect to various model parameters are analyzed.Optimal pension fund management under risk and uncertainty: the case study of Polandhttps://zbmath.org/1500.911092023-01-20T17:58:23.823708Z"Baltas, I."https://zbmath.org/authors/?q=ai:baltas.ioannis-d"Szczepański, M."https://zbmath.org/authors/?q=ai:szczepanski.marek"Dopierala, L."https://zbmath.org/authors/?q=ai:dopierala.l"Kolodziejczyk, K."https://zbmath.org/authors/?q=ai:kolodziejczyk.krzysztof"Weber, Gerhard-Wilhelm"https://zbmath.org/authors/?q=ai:weber.gerhard-wilhelm"Yannacopoulos, A. N."https://zbmath.org/authors/?q=ai:yannacopoulos.athanasios-nSummary: During the last decade, and especially after the financial crisis, the problem of providing supplementary pensions to retirees has attracted a lot of attention from official bodies, as well as private financial institutions, worldwide. In this effort, there are various possible solutions, one of which is provided by pension fund schemes. Essentially, a pension fund scheme constitutes an independent legal entity that represents accumulated wealth stemming from the pooled contributions of its members. The aim of the proposed research is to study the problem of optimal management of defined contribution (DC) pension fund schemes within general, complex and (as much as possible) realistic frameworks. From both a theoretical and a practical point of view, one of the most important issues regarding fund management is the construction of an optimal investment portfolio, because the success of a DC plan crucially depends on the effective investment of the available funds.
Even though this problem has been heavily studied in the related literature, the vast majority of the available works focuses: (i) on simple stylized models which allow for a very general understanding and are mainly based on intentionally unrealistic assumptions in order to provide closed-form (and paradigmatic) solutions, and (ii) on risk levels (unrealistic) rather than uncertainty (realistic). This chapter presents preliminary results/general ideas of our project and aims to provide a detailed and (as much as possible) realistic framework that takes into account the exposure of the fund portfolio to several market risks as well as model uncertainty with respect to the evolution of several unknown market parameters that govern the behavior of the fund portfolio. Our research will be directed towards the new public and occupational pension schemes in Poland.
For the entire collection see [Zbl 1481.92004].Optimal control of investment in a collective pension insurance model: study of singular nonlinear problems for integro-differential equationshttps://zbmath.org/1500.911102023-01-20T17:58:23.823708Z"Belkina, T. A."https://zbmath.org/authors/?q=ai:belkina.tatyana-andreevna"Konyukhova, N. B."https://zbmath.org/authors/?q=ai:konyukhova.nadja-b"Kurochkin, S. V."https://zbmath.org/authors/?q=ai:kurochkin.sergey-vladimirovichSummary: For a collective pension insurance model (dual risk model), the optimal control of investments aimed at maximizing the survival probability of an insurance company is considered. The search for an optimal strategy by applying dynamic programming leads to singular nonlinear boundary value problems for integro-differential equations. In the case of an exponential premium size distribution, these problems are studied analytically. Numerical results are presented and compared with previous computations in the case of simple investment strategies (risky and risk-free) in the considered model.On the Gompertz-Makeham law: a useful mortality model to deal with human mortalityhttps://zbmath.org/1500.911112023-01-20T17:58:23.823708Z"Castellares, Fredy"https://zbmath.org/authors/?q=ai:castellares.fredy"Patrício, Silvio"https://zbmath.org/authors/?q=ai:patricio.silvio-c"Lemonte, Artur J."https://zbmath.org/authors/?q=ai:lemonte.artur-joseSummary: The Gompertz-Makeham model was introduced as an extension of the Gompertz model in the second half of the 19th century by the British actuary William M. Makeham. Since then, this model has been successfully used in biology, actuarial science, and demography to describe mortality patterns in numerous species (including humans), determine policies in insurance, establish actuarial tables and growth models. 
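For reference, the law combines an age-independent accident hazard (Makeham's term) with Gompertz's exponentially increasing senescent term; in a common parameterization (the symbols here are ours, not necessarily the authors'), the force of mortality and the implied survival function are
\[ \mu(x) = \lambda + \alpha e^{\beta x}, \qquad S(x) = \exp\Bigl(-\lambda x - \tfrac{\alpha}{\beta}\bigl(e^{\beta x} - 1\bigr)\Bigr), \qquad \lambda, \alpha, \beta > 0. \]
Setting \(\lambda = 0\) recovers the original Gompertz model.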
In this paper, we derive some structural properties of the Gompertz-Makeham model in statistics, demography, and actuarial sciences, and present some others already introduced in the literature. All the structural properties we provide are expressed in closed form, which eliminates the need to evaluate them by numerical integration. In addition, we study the estimation of the Gompertz-Makeham model parameters through the discrete Poisson and Bell distributions. In particular, we verify that the recently introduced discrete Bell distribution can be an interesting alternative to the Poisson distribution, mainly because it is suitable to deal with overdispersion, unlike the Poisson distribution. On the basis of real mortality datasets, we compute the remaining life expectancy for several countries and verify that the Gompertz-Makeham model, especially under the Bell distribution, provides proper results to deal with human mortality in practice.Usage-based insurance -- impact on insurers and potential implications for InsurTechhttps://zbmath.org/1500.911122023-01-20T17:58:23.823708Z"Che, Xin"https://zbmath.org/authors/?q=ai:che.xin"Liebenberg, Andre"https://zbmath.org/authors/?q=ai:liebenberg.andre-p"Xu, Jianren"https://zbmath.org/authors/?q=ai:xu.jianrenSummary: Insurers are increasingly embracing the InsurTech ecosystem. The most important InsurTech-related trend in automobile insurance is usage-based insurance (UBI), which enables insurers to access and incorporate drivers' behavioral risk factors in actuarial pricing. Using a difference-in-difference research design with firm fixed effects, we provide evidence that UBI improves private passenger auto liability (PPAL) insurers' underwriting performance by reducing their loss ratio. However, the improvement in underwriting performance is only significant among early UBI adopters, highlighting the early-mover advantage in InsurTech. Also, UBI produces benefits only when it matures.
Our findings are robust to analyses that address potential reverse causality and self-selection bias. Additional tests show that early UBI adopters experience a significant increase in their market share, implying UBI's advantage in attracting low-risk drivers from other insurers. The overall performance effect of UBI programs for early adopters is a 1\% increase in ROA and a 3\% increase in ROE. The policy implications of our findings from the perspective of insurers should be of interest to firms' management, actuaries, investors, and rating agencies.Distributionally robust goal-reaching optimization in the presence of background riskhttps://zbmath.org/1500.911132023-01-20T17:58:23.823708Z"Chi, Yichun"https://zbmath.org/authors/?q=ai:chi.yichun"Xu, Zuo Quan"https://zbmath.org/authors/?q=ai:xu.zuoquan"Zhuang, Sheng Chao"https://zbmath.org/authors/?q=ai:zhuang.shengchaoSummary: In this article, we examine the effect of background risk on portfolio selection and optimal reinsurance design under the criterion of maximizing the probability of reaching a goal. Following the literature, we adopt dependence uncertainty to model the dependence ambiguity between financial risk (or insurable risk) and background risk. Because the goal-reaching objective function is nonconcave, these two problems pose highly unconventional and challenging issues for which classical optimization techniques often fail. Using a quantile formulation method, we derive the optimal solutions explicitly. The results show that the presence of background risk does not alter the shape of the solution but instead changes the parameter value of the solution.
Finally, numerical examples are given to illustrate the results and verify the robustness of our solutions.Optimal long-term Tier 1 employee pension management with an application to Chinese urban areashttps://zbmath.org/1500.911142023-01-20T17:58:23.823708Z"Ji, Bingbing"https://zbmath.org/authors/?q=ai:ji.bingbing"Chen, Zhiping"https://zbmath.org/authors/?q=ai:chen.zhiping"Consigli, Giorgio"https://zbmath.org/authors/?q=ai:consigli.giorgio"Yan, Zhe"https://zbmath.org/authors/?q=ai:yan.zheSummary: We formulate a stochastic optimization problem from the perspective of an investment committee responsible for Tier 1 social security pension policies and whose decisions are bound to have relevant economic and social consequences. The adopted modelling approach combines canonical \textit{multistage stochastic programming} (MSP) with \textit{dynamic stochastic control} (DSC): the first applies to the short-medium term, the second to the long-term. Through the combined framework, we are able to span a long planning horizon without jeopardizing the accuracy of scenario tree based medium-term planning. We apply this methodology to the Chinese pension system, which relies on two large reference areas for rural and urban populations. In this article, we concentrate on the ever-growing \textit{urban public pension} system, which is facing significant challenges due to a declining workforce and a rapidly ageing population. This welfare area, originally conceived as a \textit{pay-as-you-go} (PAYG) system, has undergone several recent reforms to enhance its long-term sustainability and reduce the interventions of the central government required to improve its funding condition. Among those relevant in our setting, is the reduction of policy constraints that until 2015 severely limited the possibility to invest in assets other than traditional, locally traded, long-term fixed income securities. 
We propose an optimization model in which the decisions of the investment management aim at significantly reducing central government interventions as a last resort liquidity provider and progressively improving the system funding condition. A rich set of computational and economic evidence is presented to validate the methodology and clarify its potential benefits to pension system efficiency.Dynamic fund protection for property marketshttps://zbmath.org/1500.911152023-01-20T17:58:23.823708Z"Siu, Tak Kuen"https://zbmath.org/authors/?q=ai:siu.tak-kuen"Nguyen, Ha"https://zbmath.org/authors/?q=ai:nguyen.ha-q|nguyen.ha-hoang|nguyen.kha-v|nguyen.ha-thai|nguyen.ha-nam|nguyen.ha-x"Wang, Ning"https://zbmath.org/authors/?q=ai:wang.ningSummary: This article aims to investigate, from an academic perspective, a potential application of dynamic fund protection to protect a mortgagor of a property against the downside risk due to falling property price. The valuation of the dynamic fund protection is discussed through modeling the property price and interest rate, which may be considered to be two key factors having a material impact on the mortgagor. Specifically, a mean-reverting process is used to describe the property price and the Heath-Jarrow-Morton theory is used to model the interest rate. The valuation is done via the use of a forward measure approach. The numerical solution to the pricing partial differential equation is obtained via applying the finite difference method. Numerical results with some model parameters being estimated from the data on an Australian residential property index and Australian zero-coupon yields and forward rates are provided. 
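The mean-reverting price dynamics used for the property index can be illustrated with a basic Euler discretization (a generic Ornstein-Uhlenbeck-type sketch with hypothetical parameters, not the model calibrated to the Australian data):

```python
import math
import random

def simulate_mean_reverting(x0, theta, kappa, sigma, dt, n_steps, seed=0):
    """Euler path of dX_t = kappa * (theta - X_t) dt + sigma dW_t,
    where theta is the long-run level and kappa the reversion speed."""
    rng = random.Random(seed)
    x, path = x0, [x0]
    for _ in range(n_steps):
        x += kappa * (theta - x) * dt + sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0)
        path.append(x)
    return path
```

With sigma set to zero the path relaxes deterministically toward theta, which makes the pull of the mean-reversion term easy to see.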
The implications of the numerical results for the potential implementation of the dynamic fund protection are discussed.Semiparametric regression for dual population mortalityhttps://zbmath.org/1500.911162023-01-20T17:58:23.823708Z"Venter, Gary"https://zbmath.org/authors/?q=ai:venter.gary-g"Şahin, Şule"https://zbmath.org/authors/?q=ai:sahin.sule-onselSummary: Parameter shrinkage applied optimally can always reduce error and projection variances from those of maximum likelihood estimation. Many variables that actuaries use are on numerical scales, like age or year, which require parameters at each point. Rather than shrinking these toward zero, nearby parameters are better shrunk toward each other. Semiparametric regression is a statistical discipline for building curves across parameter classes using shrinkage methodology. It is similar to but more parsimonious than cubic splines. We introduce it in the context of Bayesian shrinkage and apply it to joint mortality modeling for related populations. Bayesian shrinkage of slope changes of linear splines is an approach to semiparametric modeling that evolved in the actuarial literature. It has some theoretical and practical advantages, like closed-form curves, direct and transparent determination of degree of shrinkage and of placing knots for the splines, and quantifying goodness of fit. It is also relatively easy to apply to the many nonlinear models that arise in actuarial work. 
We find that it compares well to a more complex state-of-the-art statistical spline shrinkage approach on a popular example from that literature.Robust optimal reinsurance in minimizing the penalized expected time to reach a goalhttps://zbmath.org/1500.911172023-01-20T17:58:23.823708Z"Yuan, Yu"https://zbmath.org/authors/?q=ai:yuan.yu.1"Liang, Zhibin"https://zbmath.org/authors/?q=ai:liang.zhibin.1"Han, Xia"https://zbmath.org/authors/?q=ai:han.xiaSummary: This paper studies a robust optimal reinsurance problem for an ambiguity-averse insurer, who does not have perfect information in the drift term of insurance process. The objective is to minimize the robust value involving the expected time to reach a given capital level before ruin and a penalization of model ambiguity. By using the techniques of stochastic control theory and exponential transformation, we derive the closed-form expressions of the optimal reinsurance strategy and the associated value function for the risk model with cheap reinsurance. For the non-cheap reinsurance, we prove that there exists a ``safe level'' such that the optimization problem becomes a trivial one when the initial surplus is below this safe level. Therefore, for this case, we focus on solving the corresponding boundary-value problems when the initial surplus is greater than the safe level, and the value function is obtained explicitly as well. Furthermore, we investigate the influence of model ambiguity in theory. Some properties and numerical examples are also presented to show the impact of model parameters on the optimal results.Modeling the risk in mortality projectionshttps://zbmath.org/1500.911182023-01-20T17:58:23.823708Z"Zhu, Nan"https://zbmath.org/authors/?q=ai:zhu.nan"Bauer, Daniel"https://zbmath.org/authors/?q=ai:bauer.daniel-jSummary: This paper presents and applies models for the valuation and management of mortality-contingent exposures. 
Such exposures include insurance and pension benefits, as well as novel mortality-linked securities traded in financial markets. Unlike conventional approaches to modeling mortality, we consider the stochastic evolution of \textit{mortality projections} rather than realized \textit{mortality rates}. Relying on a time series of age-specific mortality forecasts, we develop a set of stochastic models that -- unlike conventional mortality models -- capture the evolution of mortality forecasts over the past 50 years. In particular, the dynamics of our models reflect the substantial observed variability of long-term projections and are therefore particularly well-suited for financial applications where long-term demographic uncertainty is relevant.The optimal payoff for a Yaari investorhttps://zbmath.org/1500.911192023-01-20T17:58:23.823708Z"Boudt, K."https://zbmath.org/authors/?q=ai:boudt.kris"Dragun, K."https://zbmath.org/authors/?q=ai:dragun.k"Vanduffel, S."https://zbmath.org/authors/?q=ai:vanduffel.stevenSummary: \textit{M. E. Yaari}'s dual theory of choice under risk [Econometrica 55, 95--115 (1987; Zbl 0616.90005)] is the natural counterpart of expected utility theory. While optimal payoff choice for an expected utility maximizer is well studied in the literature, less is known about the optimal payoff for a Yaari investor. We perform a fairly general analysis and derive optimal payoffs in a variety of relevant cases. As a main contribution, we provide the optimal payoff for a Yaari investor under a variance constraint; thus, extending mean-variance optimization to distorted expectation-variance optimization. 
We also derive the optimal payoff for an investor who aims to outperform an external benchmark under the requirement that the payoff stays in the neighbourhood of this benchmark.Portfolio optimization with tri-objective for index fund managementhttps://zbmath.org/1500.911202023-01-20T17:58:23.823708Z"Chen, Yao-Tsung"https://zbmath.org/authors/?q=ai:chen.yao-tsung"Sheng, Yang"https://zbmath.org/authors/?q=ai:sheng.yang(no abstract)The effects of errors in means, variances, and correlations on the mean-variance frameworkhttps://zbmath.org/1500.911212023-01-20T17:58:23.823708Z"Chung, Munki"https://zbmath.org/authors/?q=ai:chung.munki"Lee, Yongjae"https://zbmath.org/authors/?q=ai:lee.yongjae"Kim, Jang Ho"https://zbmath.org/authors/?q=ai:kim.jang-ho-robert|kim.jangho"Kim, Woo Chang"https://zbmath.org/authors/?q=ai:kim.woo-chang"Fabozzi, Frank J."https://zbmath.org/authors/?q=ai:fabozzi.frank-jSummary: The mean-variance (MV) framework has been a fundamental tenet of investment management, yet it has been criticized for being too sensitive to parameter estimation errors. Hence, it is important to understand how the errors in parameters affect the MV framework. Although a number of researchers have studied how errors in parameters affect MV optimal portfolios, these studies do not show the complete picture. The MV framework is a tool for systematic evaluation of investment alternatives based on the risk-return trade-off, and MV optimal portfolios are its outputs. In this study, we investigate the effect of errors in parameters on the entire MV framework. We analyze the Sharpe ratio distribution of all possible portfolios, which represents how investments are evaluated under the risk-return trade-off. 
While means have been widely considered as the most important parameter in the MV optimization, our full-distributional analyses reveal that correlations mostly dominate other parameters.Expected utility theory on general affine GARCH modelshttps://zbmath.org/1500.911222023-01-20T17:58:23.823708Z"Escobar-Anel, Marcos"https://zbmath.org/authors/?q=ai:escobar-anel.marcos"Spies, Ben"https://zbmath.org/authors/?q=ai:spies.ben"Zagst, Rudi"https://zbmath.org/authors/?q=ai:zagst.rudiSummary: Expected utility theory has produced abundant analytical results in continuous-time finance, but with very little success for discrete-time models. Assuming the underlying asset price follows a general affine GARCH model which allows for non-Gaussian innovations, our work produces an approximate closed-form recursive representation for the optimal strategy under a constant relative risk aversion (CRRA) utility function. We provide conditions for optimality and demonstrate that the optimal wealth is also an affine GARCH. In particular, we fully develop the application to the IG-GARCH model hence accommodating negatively skewed and leptokurtic asset returns. Relying on two popular daily parametric estimations, our numerical analyses give a first window into the impact of the interaction of heteroscedasticity, skewness and kurtosis on optimal portfolio solutions. We find that losses arising from following Gaussian (suboptimal) strategies, or Merton's static solution, can be up to 2.5\% and 5\%, respectively, assuming low-risk aversion of the investor and using a five-years time horizon.Optimal investment policy in a multi-stage problem with bankruptcy and stage-by-stage probability constraintshttps://zbmath.org/1500.911232023-01-20T17:58:23.823708Z"Golubin, A. 
Y."https://zbmath.org/authors/?q=ai:golubin.a-yu|golubin.alexey-ySummary: The present paper studies a multi-stage portfolio optimization problem with bankruptcy and stage-by-stage value-at-risk (VaR) constraints that bound the probability of a given percentage shortfall of the investor's capital at each stage. The goal function is the mean value of the investor's final capital. Making use of the stage-by-stage VaR constraints and a multivariate normal model for the rates of return, the method of dynamic programming is applied. Owing to peculiarities of this optimal control problem for a Markov chain with a set of absorbing states, the optimal investment policy turns out to be relatively simple: at each stage, the optimal portfolio depends only on the stage number, not on the current value of the investor's capital. The initial problem is thus reduced to a sequence of one-stage portfolio optimization problems. The analysis of such a one-stage problem is mainly based on known results providing sufficient and necessary conditions for the fulfillment of Slater's constraint qualification, as well as conditions for optimality. In addition, we extend the obtained results to the non-normal case by using elliptical distributions.Uncertainty linguistic summarizer to evaluate the performance of investment fundshttps://zbmath.org/1500.911242023-01-20T17:58:23.823708Z"Grajales, Carlos Alexander"https://zbmath.org/authors/?q=ai:grajales.carlos-alexander"Medina Hurtado, Santiago"https://zbmath.org/authors/?q=ai:medina-hurtado.santiagoSummary: This chapter proposes a methodology to implement the uncertain linguistic summarizer posed in Liu's uncertain logic to measure the performance of investment funds in the Colombian capital market. The algorithm extracts a truth value for a set of linguistic summaries, written as propositions in predicate logic, where the terms for the quantifier, subject, and predicate are unsharp.
The linguistic summarizer proves to be autonomous, successful, efficient, and close to human language. Furthermore, the implementation has a general scope and could become a data mining tool under uncertainty. The propositions found characterize the investment fund data meaningfully. Finally, a corollary that allows the summaries to be obtained more quickly is presented.
For the entire collection see [Zbl 1492.90011].Risk contributions of lambda quantileshttps://zbmath.org/1500.911252023-01-20T17:58:23.823708Z"Ince, A."https://zbmath.org/authors/?q=ai:ince.a-nejat"Peri, I."https://zbmath.org/authors/?q=ai:peri.ilaria"Pesenti, S."https://zbmath.org/authors/?q=ai:pesenti.silvana-mSummary: Risk contributions of portfolios form an indispensable part of risk-adjusted performance measurement. The risk contribution of a portfolio, e.g. in the Euler or Aumann-Shapley framework, is given by the partial derivatives of a risk measure applied to the portfolio profit and loss in the direction of the asset units. For risk measures that are not positively homogeneous of degree 1, however, known capital allocation principles do not apply. We study the class of lambda quantile risk measures that includes the well-known value-at-risk as a special case but for which no known allocation rule is applicable. We prove differentiability and derive explicit formulae of the derivatives of lambda quantiles with respect to their portfolio composition, that is, their risk contribution. For this purpose, we define lambda quantiles on the space of portfolio compositions and consider generic (also non-linear) portfolio operators. We further derive the Euler decomposition of lambda quantiles for generic portfolios and show that lambda quantiles are homogeneous in the space of portfolio compositions, with a homogeneity degree that depends on the portfolio composition and the lambda function. This result is in stark contrast to the positive homogeneity properties of risk measures defined in the space of random variables, which admit a constant homogeneity degree. We introduce a generalised version of Euler contributions and Euler allocation rule, which are compatible with risk measures of any homogeneity degree and non-linear but homogeneous portfolios. 
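The classical Euler rule that is being generalized rests on Euler's homogeneity theorem: if a risk measure \(\rho\) is positively homogeneous of degree \(\tau\) in the portfolio composition \(x = (x_1, \dots, x_d)\), then
\[ \sum_{i=1}^{d} x_i \, \frac{\partial \rho(x)}{\partial x_i} = \tau \, \rho(x), \]
so for \(\tau = 1\) the terms \(x_i \, \partial \rho / \partial x_i\) add up to the portfolio risk exactly. The point of the paper is that for lambda quantiles the degree \(\tau\) itself depends on \(x\) and on the lambda function, which is why a generalized allocation rule is needed.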
These concepts are illustrated by a non-linear portfolio using financial market data.Optimal dynamic momentum strategieshttps://zbmath.org/1500.911262023-01-20T17:58:23.823708Z"Li, Kai"https://zbmath.org/authors/?q=ai:li.kai"Liu, Jun"https://zbmath.org/authors/?q=ai:liu.jun.9Summary: We explicitly solve for the optimal dynamic trading strategy between a riskless asset and a risky asset with momentum. The optimal portfolio weight depends not only on the momentum, as in Merton's framework, but also on the historical price path, which contrasts with Merton's solution. Because of their path dependence, optimal portfolio weights have a wide distribution for a given level of momentum; for example, investors may short the risky asset if it has rebound price paths but leverage if it has hump-shaped price paths. This effect tends to be the most significant after large price swings. Path dependence is solved with explicit formulas and presented with heuristic statistics.
To illustrate the method, we evaluate the size, value and momentum anomalies and find overwhelming empirical evidence of the outperformance of our methodology compared to standard methods for constructing characteristic-sorted portfolios.Group sparse enhanced indexation model with adaptive beta valuehttps://zbmath.org/1500.911282023-01-20T17:58:23.823708Z"Xu, Fengmin"https://zbmath.org/authors/?q=ai:xu.fengmin"Ma, Jieao"https://zbmath.org/authors/?q=ai:ma.jieao"Lu, Haibing"https://zbmath.org/authors/?q=ai:lu.haibingSummary: Enhanced indexing, which has been used by professional portfolio managers for decades, is a portfolio management strategy that attempts to increase returns by building a portfolio around core, index-like positions and adding tactical tilts toward specific styles or individual stocks. This paper proposes an improved enhanced indexation model by considering the systematic risk, measured by the beta value, and the industry rotation phenomenon. The systematic risk is the risk related to the stock market as a whole and can be reasonably controlled to improve portfolio performance, by actively tracking and forecasting the market trend. Sector rotation refers to the investment strategy of taking money that's invested in one stock market industry and moving it to another, by taking advantage of the historical performances of specific industries during different phases of the cycle. Specifically, our model aims to find a small set of industries that are most likely to thrive in the anticipated future, which is mathematically realized by dividing stocks into industries and minimizing their \(L_{2,1}\) norm. To evaluate our strategy, we conducted extensive numerical experiments against some major world indices, e.g. CSI 300, S\&P 500, FTSE 100 and Nikkei 225.
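The group-sparsity penalty mentioned above is simple to state in code (a minimal sketch; the index-based grouping of weights by industry is schematic):

```python
import math

def l21_norm(weights, groups):
    """L_{2,1} norm of a weight vector under a partition into groups:
    the sum over groups of the Euclidean norm of that group's weights.
    Minimizing it drives entire groups (industries) of weights to zero."""
    return sum(math.sqrt(sum(weights[i] ** 2 for i in group)) for group in groups)
```

For example, with weights [3, 4, 0, 5, 12] and groups [[0, 1], [2], [3, 4]] the value is 5 + 0 + 13 = 18.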
The experimental result shows that our approach can generate sparse portfolios with excellent out-of-sample excess returns and high robustness after deducting transaction costs.Dynamic quantile function modelshttps://zbmath.org/1500.911292023-01-20T17:58:23.823708Z"Chen, Wilson Ye"https://zbmath.org/authors/?q=ai:chen.wilson-ye"Peters, Gareth W."https://zbmath.org/authors/?q=ai:peters.gareth-william"Gerlach, Richard H."https://zbmath.org/authors/?q=ai:gerlach.richard-h"Sisson, Scott A."https://zbmath.org/authors/?q=ai:sisson.scott-aSummary: Motivated by the need for effectively summarising, modelling, and forecasting the distributional characteristics of intra-daily returns, as well as the recent work on forecasting histogram-valued time-series in the area of symbolic data analysis, we develop a time-series model for forecasting quantile-function-valued (QF-valued) daily summaries for intra-daily returns. We call this model the dynamic quantile function (DQF) model. Instead of a histogram, we propose to use a \(g\)-and-\(h\) quantile function to summarise the distribution of intra-daily returns. We work with a Bayesian formulation of the DQF model in order to make statistical inference while accounting for parameter uncertainty; an efficient MCMC algorithm is developed for sampling-based posterior inference. Using ten international market indices and approximately 2000 days of out-of-sample data from each market, the performance of the DQF model compares favourably, in terms of forecasting VaR of intra-daily returns, against the interval-valued and histogram-valued time-series models. Additionally, we demonstrate that the QF-valued forecasts can be used to forecast VaR measures at the daily timescale via a simple quantile regression model on daily returns (QR-DQF). 
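The \(g\)-and-\(h\) summary named above is based on Tukey's transformation of a standard normal quantile; a sketch of the quantile function (the location/scale notation a, b is ours, not necessarily the paper's):

```python
import math
import statistics

def g_and_h_quantile(u, a=0.0, b=1.0, g=0.1, h=0.05):
    """Tukey g-and-h quantile function Q(u) = a + b * T(z) with z = Phi^{-1}(u)
    and T(z) = (exp(g*z) - 1) / g * exp(h * z**2 / 2); g controls skewness and
    h tail heaviness (g -> 0 gives the limit T(z) = z * exp(h * z**2 / 2))."""
    z = statistics.NormalDist().inv_cdf(u)
    core = z if g == 0 else (math.exp(g * z) - 1.0) / g
    return a + b * core * math.exp(0.5 * h * z * z)
```

Fitting the four parameters to each day's intra-daily returns then condenses that day's distribution into a four-number summary.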
In certain markets, the resulting QR-DQF model is able to provide competitive VaR forecasts for daily returns.High dimensional Markovian trading of a single stockhttps://zbmath.org/1500.911302023-01-20T17:58:23.823708Z"Elliott, Robert"https://zbmath.org/authors/?q=ai:elliott.robert-j"Madan, Dilip B."https://zbmath.org/authors/?q=ai:madan.dilip-b"Wang, King"https://zbmath.org/authors/?q=ai:wang.king-hangSummary: OU processes with long term drifts that are tempered fractional Lévy processes reduce to a \(d+1\) dimensional Markovian system when the parameter \(d\) is an integer. Markovian optimization problems are formulated for the proportion of a dollar to be invested in a risky stock following the specified dynamics. The objective evaluates the cumulated discounted returns to a dollar being invested continuously through time. Risk sensitivity is accomplished by maximizing a conservative financial valuation seen as a nonlinear expectation. Trading policies are determined by solutions of nonlinear partial integro-differential equations. The policies are evaluated on a quantized set of representative Markovian states in the higher dimensions. Gaussian process regressions are then employed to deliver general functions of the state. The nonlinear policy functions deliver good trading outcomes on simulated data. The policy functions are then applied to trading \textit{SPY} from 2008 through 2020 with good results. They are also employed to trade 874 stocks over a four year period with reasonable results. Only three policy functions trained on one year of \textit{SPY} data for 2020 are reported on. It is conjectured that a variety of functions may be trained on other data sets over other periods and selections may then be made for the functions actually traded on a particular stock at a particular time from this collection. 
The underlying dynamics may also be further enriched by allowing for a Markov chain of states that code changes in the parameter values for the driving Lévy process.Monitoring stock market returns: a stochastic approachhttps://zbmath.org/1500.911312023-01-20T17:58:23.823708Z"Peovski, Filip"https://zbmath.org/authors/?q=ai:peovski.filip"Cvetkoska, Violeta"https://zbmath.org/authors/?q=ai:cvetkoska.violeta"Trpeski, Predrag"https://zbmath.org/authors/?q=ai:trpeski.predrag"Ivanovski, Igor"https://zbmath.org/authors/?q=ai:ivanovski.igor(no abstract)Moments of integrated exponential Lévy processes and applications to Asian options pricinghttps://zbmath.org/1500.911322023-01-20T17:58:23.823708Z"Brignone, Riccardo"https://zbmath.org/authors/?q=ai:brignone.riccardoSummary: We find explicit formulas for the moments of the time integral of an exponential Lévy process. We consider both the cases of unconditional moments and conditional on the Lévy process level at the endpoints of the time interval. We propose a new methodology for reconstructing the unknown density of the time integral based on unconditional moments and an efficient simulation scheme based on conditional moments. These methodologies are applied for Asian option pricing, an important problem in financial literature.Semi-robust replication of barrier-style claims on price and volatilityhttps://zbmath.org/1500.911332023-01-20T17:58:23.823708Z"Carr, Peter"https://zbmath.org/authors/?q=ai:carr.peter-p"Lee, Roger"https://zbmath.org/authors/?q=ai:lee.roger-y|lee.roger-w"Lorig, Matthew"https://zbmath.org/authors/?q=ai:lorig.matthew-jSummary: We show how to price and replicate a variety of barrier-style claims written on the log price \(X\) and quadratic variation \(\langle X\rangle\) of a risky asset. Our framework assumes no arbitrage, frictionless markets and zero interest rates. We model the risky asset as a strictly positive continuous semimartingale with an independent volatility process. 
The volatility process may exhibit jumps and may be non-Markovian. As hedging instruments, we use only the underlying risky asset, zero-coupon bonds, and European calls and puts with the same maturity as the barrier-style claim. We consider knock-in, knock-out and rebate claims in single and double barrier varieties.Positive XVAshttps://zbmath.org/1500.911342023-01-20T17:58:23.823708Z"Crépey, Stéphane"https://zbmath.org/authors/?q=ai:crepey.stephaneSummary: Since the 2008 crisis, derivative dealers charge their clients various add-ons, dubbed XVAs, meant to account for counterparty risk and its capital and funding implications. As banks cannot replicate jump-to-default related cash flows, deals trigger wealth transfers and shareholders need to set capital at risk. We devise an XVA policy, whereby so-called contra-liabilities and cost of capital are sourced from bank clients at trade inceptions, on top of the fair valuation of counterparty risk, in order to guarantee to the shareholders a hurdle rate \(h \) on their capital at risk. The resulting all-inclusive XVA formula reads (CVA+FVA+KVA), where C stands for credit, F for funding, and where the KVA is a cost of capital risk premium. All these XVA metrics are portfolio-wide, nonnegative and, despite the fact that we include the default of the bank itself in our modeling, they are ultimately unilateral. This makes them naturally in line with the requirement that capital at risk and reserve capital should not decrease simply because the credit risk of the bank has worsened. 
An application of this approach to a dealer bank reveals, in particular, the XVA implications of the centrally cleared hedging side of the derivative portfolio of the bank.Pricing of spread and exchange options in a rough jump-diffusion markethttps://zbmath.org/1500.911352023-01-20T17:58:23.823708Z"Hainaut, Donatien"https://zbmath.org/authors/?q=ai:hainaut.donatienSummary: Asset dynamics with rough volatility have recently received a great deal of attention in finance because they are consistent with empirical observations. This article provides a detailed analysis of the impact of roughness on prices of spread and exchange options. We consider a bivariate extension of the rough Heston model with jumps and derive the joint characteristic functions of asset log-returns under the risk neutral measure and under a measure using a risky asset as numeraire. These characteristic functions are expressed in terms of solutions of fractional differential equations (FDE's). To infer these FDE's, we rewrite the rough model as an infinite dimensional Markov process and propose a finite dimensional approximation. Next, we show that characteristic functions of log-returns admit a representation in terms of forward differential equations. FDE's are obtained by passing to the limit. Spread and exchange options are valued with a two- or one-dimensional discrete Fourier transform. The numerical illustration reveals that considering a rough instead of a Brownian volatility does not systematically increase exchange option prices.Proof of non-convergence of the short-maturity expansion for the SABR modelhttps://zbmath.org/1500.911362023-01-20T17:58:23.823708Z"Lewis, Alan L."https://zbmath.org/authors/?q=ai:lewis.alan-l"Pirjol, Dan"https://zbmath.org/authors/?q=ai:pirjol.danSummary: We study the convergence properties of the short maturity expansion of option prices in the uncorrelated log-normal \((\beta = 1)\) SABR model. 
In this model, the option time-value can be represented as an integral of the form \(V(T) = \int_0^\infty e^{- \frac{u^2}{2T}} g(u) \mathrm{d}u\) with \(g(u)\) a `payoff function' which is given by an integral over the McKean kernel \(\mathcal{G}(t,s)\). We study the analyticity properties of the function \(g(u)\) in the complex \(u\)-plane and show that it is holomorphic in the strip \(| \mathfrak{I}(u) | < \pi\). Using this result, we show that the \(T\)-series expansion of \(V(T)\) and implied volatility are asymptotic (non-convergent for any \(T>0\)). In a certain limit, which can be defined either as the large volatility limit \(\sigma_0 \to \infty\) at fixed \(\omega = 1\), or the small vol-of-vol limit \(\omega \to 0\) at fixed \(\omega \sigma_0\), the short maturity \(T\)-expansion for the implied volatility has a finite convergence radius \(T_c = \frac{1.32}{\omega \sigma_0}\).\( G\)-expectation approach to stochastic orderinghttps://zbmath.org/1500.911372023-01-20T17:58:23.823708Z"Ly, Sel"https://zbmath.org/authors/?q=ai:ly.sel"Privault, Nicolas"https://zbmath.org/authors/?q=ai:privault.nicolasSummary: This paper studies stochastic ordering under nonlinear expectations \(\mathcal{E}_{\mathcal{G}}\) generated by solutions of \( G \)-backward stochastic differential equations (\(G \)-BSDEs) defined on \( G \)-expectation spaces. We derive sufficient conditions for the convex, increasing convex, and monotonic \( G \)-stochastic orderings of \( G \)-diffusion processes at terminal time. Our approach relies on comparison properties for \( G \)-forward-backward stochastic differential equations (\(G\)-FBSDEs) and on relevant extensions of convexity, monotonicity and continuous dependence properties for the solutions of associated Hamilton-Jacobi-Bellman (HJB) equations. 
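As a classical (linear-expectation) counterpart to the convex \(G\)-stochastic ordering studied above, the defining property \(X \leq_{cx} Y\) iff \(\mathbb{E}[f(X)] \leq \mathbb{E}[f(Y)]\) for every convex \(f\) can be checked directly on toy discrete laws. The sketch below uses hypothetical distributions (not from the paper), with \(Y\) a mean-preserving spread of \(X\):

```python
import numpy as np

# Hypothetical discrete laws with equal means; Y is a mean-preserving
# spread of X, hence X <=_cx Y in the classical convex order.
x_vals, x_prob = np.array([-1.0, 1.0]), np.array([0.5, 0.5])
y_vals, y_prob = np.array([-2.0, 0.0, 2.0]), np.array([0.25, 0.5, 0.25])

assert np.isclose(x_vals @ x_prob, y_vals @ y_prob)  # same mean

# E[f(X)] <= E[f(Y)] for a few convex test functions, including a
# call-style payoff max(t - K, 0).
for f in (np.abs, np.square, np.exp, lambda t: np.maximum(t - 0.5, 0.0)):
    assert f(x_vals) @ x_prob <= f(y_vals) @ y_prob + 1e-12
```

Under a nonlinear \(G\)-expectation the inequality has to hold under every measure in the ambiguity set, which is where the comparison results for \(G\)-FBSDEs and the associated HJB equations enter.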
Applications of \( G \)-stochastic ordering to contingent claim superhedging price comparison under ambiguous coefficients are provided.Implied price processes anchored in statistical realizationshttps://zbmath.org/1500.911382023-01-20T17:58:23.823708Z"Madan, Dilip B."https://zbmath.org/authors/?q=ai:madan.dilip-b"Wang, King"https://zbmath.org/authors/?q=ai:wang.king-hangSummary: It is observed that statistical and risk neutral densities of compound Poisson processes are unconstrained relative to each other. Continuous processes are too constrained and generally not consistent with market data. Pure jump limit laws deliver operational models simultaneously consistent with both data sets with the additional imposition of no measure change on the arbitrarily small moves. The measure change density must have a finite Hellinger distance from unity linking the two worlds. Models are constructed using the bilateral gamma and the CGMY models for the risk neutral specification. They are linked to the physical process by measure change models. The resulting models simultaneously calibrate statistical tail probabilities and option prices. These models have up to eight or ten parameters, permitting the study of risk-reward relations at a finer level. Rewards measured by power variations of the up and down moves are observed to value negatively (positively) the even (odd) variations of their own side with the converse holding for the opposite side.Empirical analysis of rough and classical stochastic volatility models to the SPX and VIX marketshttps://zbmath.org/1500.911392023-01-20T17:58:23.823708Z"Rømer, Sigurd Emil"https://zbmath.org/authors/?q=ai:romer.sigurd-emilSummary: We conduct an empirical analysis of rough and classical stochastic volatility models to the SPX and VIX options markets. Our analysis focusses primarily on calibration quality and is split in two parts. 
In part one, we perform a historical calibration to SPX options over the years 2004--2019 of a selection of models that include the one-factor rough Bergomi and rough Heston models. In part two, we consider three calibration dates with low, typical, and high volatility, examine a wide selection of models, and calibrate both to SPX options alone and jointly to SPX and VIX options. The key results are as follows: The rough Bergomi and rough Heston models fail to create a term structure of smile effect that is sufficiently pronounced for SPX options. Moreover, we discover that short-expiry SPX smiles generally are more symmetric than long-expiry smiles, a feature that we find these models likewise fail to reproduce. We propose an alternative volatility model driven by two Ornstein-Uhlenbeck processes that uses a non-standard transformation function. Calibrating it to SPX options, we obtain almost perfect fits, and calibrating it jointly to SPX and VIX options, we obtain very decent fits. This suggests, contrary to what one might be led to believe based on much of the existing literature, that the joint SPX-VIX calibration problem is largely solvable with classical two-factor volatility, all without roughness and jumps.Valuation of volatility derivatives with time-varying volatility: an analytical probabilistic approach using a mixture distribution for pricing nonlinear payoff volatility derivatives in discrete observation casehttps://zbmath.org/1500.911402023-01-20T17:58:23.823708Z"Rujivan, Sanae"https://zbmath.org/authors/?q=ai:rujivan.sanaeSummary: In this paper, we present an analytical probabilistic approach for pricing nonlinear payoff volatility derivatives with discrete sampling by assuming that the underlying asset price evolves according to the Black-Scholes model with time-varying volatility. 
A major difficulty in solving the pricing problem analytically is that the volatility of the underlying asset price is time-varying, so that the realized variance is distributed according to a mixture distribution whose probability density function is unknown. By utilizing the properties of a linear combination of noncentral chi-square random variables, we can calculate the expectation of the square root of the realized variance analytically and provide formulas for pricing volatility swaps, volatility options, and variance options, including put-call parity relationships. Furthermore, we demonstrate an interesting application of our formulas by constructing simple closed-form approximate formulas for pricing the volatility derivatives for the constant elasticity of variance model. Finally, Monte Carlo simulations are conducted to illustrate the performance of our approach, and the effects of price volatility on the fair strike prices of the volatility derivatives are investigated and analyzed through several numerical experiments.Asset price bubbles in markets with transaction costshttps://zbmath.org/1500.911412023-01-20T17:58:23.823708Z"Biagini, Francesca"https://zbmath.org/authors/?q=ai:biagini.francesca"Reitsam, Thomas"https://zbmath.org/authors/?q=ai:reitsam.thomasSummary: We study asset price bubbles in market models with proportional transaction costs \(\lambda\in (0, 1)\) and finite time horizon \(T\) in the setting of \textit{W. Schachermayer} [Lect. Notes Math. 2123, 317--331 (2014; Zbl 1390.91286)]. By following \textit{M. Herdegen} and \textit{M. Schweizer} [Int. J. Theor. Appl. Finance 19, No. 4, Article ID 1650022, 44 p. (2016; Zbl 1350.91019)], we define the fundamental value \( F\) of a risky asset \(S \) as the price of a super-replicating portfolio for a position terminating in one unit of the asset and zero cash. We then obtain a dual representation for the fundamental value by using the super-replication theorem of \textit{W. Schachermayer} [Math. 
Financ. Econ. 8, No. 4, 383--398 (2014; Zbl 1309.91136)]. We say that an asset price has a bubble if its fundamental value differs from the ask-price \((1+\lambda)S \). We investigate the impact of transaction costs on asset price bubbles and show that our model intrinsically includes the birth of a bubble.Asset pricing under smooth ambiguity in continuous timehttps://zbmath.org/1500.911422023-01-20T17:58:23.823708Z"Hansen, Lars Peter"https://zbmath.org/authors/?q=ai:hansen.lars-peter"Miao, Jianjun"https://zbmath.org/authors/?q=ai:miao.jianjunSummary: We study asset pricing implications of a revealing and tractable formulation of smooth ambiguity investor preferences in a continuous-time environment. Investors do not observe a hidden Markov state and instead make inferences about this state using past data. We show that ambiguity about this hidden state distribution alters investor decisions and equilibrium asset prices. Our continuous-time formulation allows us to apply recursive filtering and Hamilton-Jacobi-Bellman methods to solve the modified decision problem. Using such methods, we show how characterizations of portfolio allocations and local uncertainty-return tradeoffs change when investors are ambiguity-averse.A comparative study of corporate credit ratings prediction with machine learninghttps://zbmath.org/1500.911432023-01-20T17:58:23.823708Z"Doğan, Seyyide"https://zbmath.org/authors/?q=ai:dogan.seyyide"Büyükkör, Yasin"https://zbmath.org/authors/?q=ai:buyukkor.yasin"Atan, Murat"https://zbmath.org/authors/?q=ai:atan.muratSummary: Credit scores are critical for financial sector investors and government officials, so it is important to develop reliable, transparent and appropriate tools for obtaining ratings. The aim of this study is to predict company credit scores with machine learning and modern statistical methods, both in sectoral and aggregated data. 
Analyses are performed on 1881 companies operating in three different sectors that applied for loans from Turkey's largest public bank. The results of the experiment are compared in terms of classification accuracy, sensitivity, specificity, precision, and the Matthews correlation coefficient. When credit ratings are estimated on a sectoral basis, it is observed that the classification rate changes considerably. Considering the analysis results, it is seen that logistic regression analysis, support vector machines, random forest and XGBoost have better performance than decision tree and \(k\)-nearest neighbour for all data sets.Pricing of debt and equity in a financial network with comonotonic endowmentshttps://zbmath.org/1500.911442023-01-20T17:58:23.823708Z"Banerjee, Tathagata"https://zbmath.org/authors/?q=ai:banerjee.tathagata"Feinstein, Zachary"https://zbmath.org/authors/?q=ai:feinstein.zacharySummary: In this paper, we present formulas for the valuation of debt and equity of firms in a financial network under comonotonic endowments. We demonstrate that the comonotonic setting provides a lower bound and Jensen's inequality provides an upper bound to the price of debt under Eisenberg-Noe financial networks with bankruptcy costs. Such financial networks encode the interconnection of firms through debt claims. The proposed pricing formulas consider the realized, endogenous recovery rate on debt claims. We endogenously construct the comonotonic endowment setting from an equity maximizing standpoint with capital transfers. 
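The Eisenberg-Noe clearing mechanism referenced above determines interbank payments as the fixed point \(p = \min(\bar{p}, e + \Pi^\top p)\), where \(\bar{p}\) collects nominal obligations, \(e\) the outside endowments, and \(\Pi\) the relative liabilities. A minimal sketch on a hypothetical three-bank network (toy numbers, and without the bankruptcy costs of the paper's setting):

```python
import numpy as np

# L[i, j]: nominal liability of bank i to bank j (hypothetical toy network).
L = np.array([[0.0, 2.0, 1.0],
              [1.0, 0.0, 2.0],
              [1.0, 1.0, 0.0]])
e = np.array([0.2, 0.1, 0.3])        # outside endowments

p_bar = L.sum(axis=1)                # total nominal obligations per bank
Pi = np.divide(L, p_bar[:, None], out=np.zeros_like(L),
               where=p_bar[:, None] > 0)

# Picard iteration started from p_bar is monotone decreasing and converges
# to the greatest clearing payment vector.
p = p_bar.copy()
for _ in range(1000):
    p_next = np.minimum(p_bar, e + Pi.T @ p)
    if np.max(np.abs(p_next - p)) < 1e-12:
        break
    p = p_next
```

Banks with \(p_i < \bar{p}_i\) default, and their recovery rate \(p_i/\bar{p}_i\) is determined inside the model; this endogenous recovery is the feature the pricing formulas above take into account.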
We conclude by numerically comparing the network valuation problem with two single-firm baseline heuristics that can, respectively, approximate the price of debt and equity.Model-based approach for scenario design: stress test severity and banks' resiliencyhttps://zbmath.org/1500.911452023-01-20T17:58:23.823708Z"Barbieri, Paolo Nicola"https://zbmath.org/authors/?q=ai:barbieri.paolo-nicola"Lusignani, Giuseppe"https://zbmath.org/authors/?q=ai:lusignani.giuseppe"Prosperi, Lorenzo"https://zbmath.org/authors/?q=ai:prosperi.lorenzo"Zicchino, Lea"https://zbmath.org/authors/?q=ai:zicchino.leaSummary: After the financial crisis, evaluating the financial health of banks under stressed scenarios has become common practice among supervisors. According to supervisory guidelines, the adverse scenarios prepared for stress tests need to be severe but plausible. The first contribution of this paper is to propose a model-based approach to assess the severity of the scenarios. To do so, we use a large Bayesian VAR model estimated on the Italian economy where potential spillovers between the macroeconomy and the aggregate banking sector are explicitly considered. We show that the 2018 exercise has been the most severe so far, in particular, due to the path of GDP, the stock market index and the 3-month Euribor rate. Our second contribution is an evaluation of whether the resilience of the Italian banking sector to adverse scenarios has increased over time (for example, thanks to improved risk management practices induced by greater awareness of risks that come with performing stress test exercises). To this end, we construct counterfactual exercises by recalibrating the scenarios of the 2016 and 2018 exercises so that they have the same severity as the 2014 exercise. We find that in 2018, the economy would have experienced a smaller decline in loans compared to the previous exercises. This implies that banks could afford to deleverage less, i.e. 
maintain a higher exposure to risk in their balance sheets. We interpret this as evidence of increased resilience.Robust leverage dynamics without commitmenthttps://zbmath.org/1500.911462023-01-20T17:58:23.823708Z"Li, Shilin"https://zbmath.org/authors/?q=ai:li.shilin"Yang, Jinqiang"https://zbmath.org/authors/?q=ai:yang.jinqiang"Zhao, Siqi"https://zbmath.org/authors/?q=ai:zhao.siqiSummary: This paper analyzes the dynamic capital structure choices with model uncertainty. We find that robustness concerns from shareholders and creditors have distinct implications. Creditor ambiguity aversion allows a firm to take advantage of the debt tax shield in no-commitment equilibrium because ambiguity aversion serves as a commitment device that disciplines the leverage ratchet effect and even results in debt buybacks. In contrast, shareholder ambiguity aversion mitigates overborrowing incentives only when the default option is out-of-the-money. If the default option is sufficiently in-the-money, ambiguity-averse shareholders are tempted to adopt a more aggressive debt policy because they can transfer model uncertainty to creditors upon default. Interestingly, we show that the commitment against future debt dilution could be suboptimal because of inefficient ambiguity sharing. Finally, we highlight that model uncertainty and volatility have distinct impacts on target leverage, default, and debt capacity.A critical look at the Aumann-Serrano and Foster-Hart measures of riskinesshttps://zbmath.org/1500.911472023-01-20T17:58:23.823708Z"Chew, Soo Hong"https://zbmath.org/authors/?q=ai:chew.soo-hong"Sagi, Jacob S."https://zbmath.org/authors/?q=ai:sagi.jacob-sSummary: \textit{S. Hart} [J. Polit. Econ. 119, No. 4, 617--638 (2011; \url{doi:10.1086/662222})] argues that the \textit{R. J. Aumann} and \textit{R. Serrano} [J. Polit. Econ. 116, No. 5, 810--836 (2008; Zbl 1341.91040)] and \textit{D. P. Foster} and \textit{S. Hart} [J. Polit. Econ. 117, No. 
5, 785--814 (2009; \url{doi:10.1086/644840})] measures of riskiness have an objective and universal appeal with respect to a subset of expected utility preferences, \({{\mathcal{U}}}_H\). We show that mean-riskiness decision-making criteria using either measure violate expected utility and are generally inconsistent with optimal portfolio choices made by investors with preferences in \({{\mathcal{U}}}_H\). We also demonstrate that riskiness measures satisfying Hart's other behavioral requirements do not generally exist when his argument is generalized to incorporate non-expected utility preferences. Finally, we identify other attributes of the Aumann-Serrano and Foster-Hart measures that raise concerns over their operationalizability and usefulness in various decision making, risk management, and risk assessment settings.Vulnerability-CoVaR: investigating the crypto-markethttps://zbmath.org/1500.911482023-01-20T17:58:23.823708Z"Waltz, Martin"https://zbmath.org/authors/?q=ai:waltz.martin"Singh, Abhay Kumar"https://zbmath.org/authors/?q=ai:singh.abhay-kumar"Okhrin, Ostap"https://zbmath.org/authors/?q=ai:okhrin.ostapSummary: This paper proposes an important extension to conditional value-at-risk (CoVaR), the popular systemic risk measure, and investigates its properties on the cryptocurrency market. The proposed vulnerability-CoVaR (VCoVaR) is defined as the value-at-risk (VaR) of a financial system or institution, given that at least one other institution is equal or below its VaR. The VCoVaR relaxes normality assumptions and is estimated via copula. While important theoretical findings of the measure are detailed, the empirical study analyses how different distressing events of the cryptocurrencies impact the risk level of each other. The results show that litecoin displays the largest impact on bitcoin and that each cryptocurrency is significantly affected if an event of joint distress among the remaining market participants occurs. 
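The step from CoVaR to VCoVaR described above can be illustrated empirically. The sketch below uses a Gaussian toy model in place of the paper's copula estimation; the correlation matrix, sample size, and level are illustrative assumptions only:

```python
import numpy as np

rng = np.random.default_rng(0)
n, alpha = 200_000, 0.05

# Simulated returns of two distressing assets (x1, x2) and a target asset y.
cov = [[1.0, 0.6, 0.5],
       [0.6, 1.0, 0.4],
       [0.5, 0.4, 1.0]]
x1, x2, y = rng.multivariate_normal([0.0, 0.0, 0.0], cov, size=n).T

def var_q(z):
    # Lower-tail VaR at level alpha (quantile of the return distribution).
    return np.quantile(z, alpha)

var_y = var_q(y)
# CoVaR: VaR of y given that x1 sits at or below its own VaR.
covar_y = var_q(y[x1 <= var_q(x1)])
# VCoVaR: VaR of y given that AT LEAST ONE of x1, x2 is at or below its VaR.
vcovar_y = var_q(y[(x1 <= var_q(x1)) | (x2 <= var_q(x2))])
```

With positive dependence, both conditional quantiles come out below the unconditional `var_y`, reflecting the worsened tail of `y` under distress of the other market participants.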
The VCoVaR is shown to capture domino effects better than other CoVaR extensions.Inferring mechanisms of auditory attentional modulation with deep neural networkshttps://zbmath.org/1500.920092023-01-20T17:58:23.823708Z"Kuo, Ting-Yu"https://zbmath.org/authors/?q=ai:kuo.ting-yu"Liao, Yuanda"https://zbmath.org/authors/?q=ai:liao.yuanda"Li, Kai"https://zbmath.org/authors/?q=ai:li.kai"Hong, Bo"https://zbmath.org/authors/?q=ai:hong.bo"Hu, Xiaolin"https://zbmath.org/authors/?q=ai:hu.xiaolinSummary: Humans have an exceptional ability to extract specific audio streams of interest in a noisy environment; this is known as the cocktail party effect. It is widely accepted that this ability is related to selective attention, a mental process that enables individuals to focus on a particular object. Evidence suggests that sensory neurons can be modulated by top-down signals transmitted from the prefrontal cortex. However, exactly how the projection of attention signals to the cortex and subcortex influences the cocktail party effect is unclear. We constructed computational models to study whether attentional modulation is more effective at earlier or later stages for solving the cocktail party problem along the auditory pathway. We modeled the auditory pathway using deep neural networks (DNNs), which can generate representational neural patterns that resemble those of the human brain. We constructed a series of DNN models in which the main structures were autoencoders. We then trained these DNNs on a speech separation task derived from the dichotic listening paradigm, a common paradigm to investigate the cocktail party effect. We next analyzed the modulation effects of attention signals during all stages. Our results showed that the attentional modulation effect is more effective at the lower stages of the DNNs. 
This suggests that the projection of attention signals to lower stages within the auditory pathway plays a more significant role than projection to the higher stages in solving the cocktail party problem. This prediction could be tested using neurophysiological experiments.Assortment and reciprocity mechanisms for promotion of cooperation in a model of multilevel selectionhttps://zbmath.org/1500.920622023-01-20T17:58:23.823708Z"Cooney, Daniel B."https://zbmath.org/authors/?q=ai:cooney.daniel-bSummary: In the study of the evolution of cooperation, many mechanisms have been proposed to help overcome the self-interested cheating that is individually optimal in the prisoners' dilemma game. These mechanisms include assortative or networked social interactions, other-regarding preferences considering the payoffs of others, reciprocity rules to establish cooperation as a social norm, and multilevel selection involving simultaneous competition between individuals favoring cheaters and competition between groups favoring cooperators. In this paper, we build on recent work studying PDE replicator equations for multilevel selection to understand how within-group mechanisms of assortment, other-regarding preferences, and both direct and indirect reciprocity can help to facilitate cooperation in concert with evolutionary competition between groups. We consider a group-structured population in which interactions between individuals consist of prisoners' dilemma games and study the dynamics of multilevel competition determined by the payoffs individuals receive when interacting according to these within-group mechanisms. We find that the presence of each of these mechanisms acts synergistically with multilevel selection for the promotion of cooperation, decreasing the strength of between-group competition required to sustain long-time cooperation and increasing the collective payoff achieved by the population. 
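The collective payoff of a group in the prisoners' dilemma setting above can be made concrete for a well-mixed (mean-field) group. With hypothetical payoff values \(R, S, T, P\) chosen so that \(T + S > 2R\), the group-average payoff peaks at an intermediate cooperator fraction rather than at full cooperation:

```python
import numpy as np

# Prisoners' dilemma payoffs (hypothetical values with T + S > 2R, the
# regime where an intermediate cooperator fraction is socially optimal).
R, S, T, P = 3.0, 0.0, 7.0, 1.0   # reward, sucker, temptation, punishment

def mean_payoff(x):
    """Average payoff in a well-mixed group with cooperator fraction x."""
    return x * (x * R + (1 - x) * S) + (1 - x) * (x * T + (1 - x) * P)

xs = np.linspace(0.0, 1.0, 1001)
x_star = xs[np.argmax(mean_payoff(xs))]
```

For these payoffs the maximizer is \(x^* = 5/6\), and the group does strictly better there than the all-cooperator value \(R\); when \(T + S \leq 2R\) the maximum sits at \(x = 1\) instead.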
However, we find that only other-regarding preferences allow for the achievement of socially optimal collective payoffs for prisoners' dilemma games in which average payoff is maximized by an intermediate mix of cooperators and defectors. For the other three mechanisms, the multilevel dynamics remain susceptible to a shadow of lower-level selection, as the collective outcome fails to exceed the payoff of the all-cooperator group.Ship navigation in narrowness passes and channels in uncertain conditions: intelligent decision supporthttps://zbmath.org/1500.930612023-01-20T17:58:23.823708Z"Kondratenko, Yuriy"https://zbmath.org/authors/?q=ai:kondratenko.yuriy-p"Sidorenko, Serhiy"https://zbmath.org/authors/?q=ai:sidorenko.serhiySummary: The intelligent approach and methods for increasing the safety of ships' navigation in channels, fairways, rivers, straits, etc. based on automation of the processes of decision making in uncertain conditions are considered in the chapter. The structure of the decision-making units based on fuzzy-logic devices, namely fuzzification, aggregation, accumulation and solving devices, is presented to highlight their complementary roles in accomplishing desired target functions. Novel fast-acting working algorithms for a qualitative coordinates fuzzification device for a case of the first Gaussian form of membership function are proposed. The results of extensive computer simulations confirm the efficiency of the proposed algorithms as well as fuzzy models of ships passing afloat narrow places and channels. Thus some new insight into the underlying complexities of physical constraints and limitations has been achieved while confirming the validity of the proposed novel results. The tasks of the computerized control and decision support system design, including hierarchy, reconfiguration, structure-parametrical optimisation, reliability and survivability, are discussed.
For the entire collection see [Zbl 1495.93004].Multi-rate threshold FlipThemhttps://zbmath.org/1500.940372023-01-20T17:58:23.823708Z"Leslie, David"https://zbmath.org/authors/?q=ai:leslie.david-s"Sherfield, Chris"https://zbmath.org/authors/?q=ai:sherfield.chris"Smart, Nigel P."https://zbmath.org/authors/?q=ai:smart.nigel-pSummary: A standard method to protect data and secrets is to apply threshold cryptography in the form of secret sharing. This is motivated by the acceptance that adversaries will compromise systems at some point, and hence using threshold cryptography provides a defence in depth. The existence of such powerful adversaries has also motivated the introduction of game theoretic techniques into the analysis of systems, e.g. via the FlipIt game of \textit{M. van Dijk} et al. [J. Cryptology 26, No. 4, 655--713 (2013; Zbl 1283.94089)]. This work further analyses the case of FlipIt when used with multiple resources, dubbed FlipThem in prior papers. We examine two key extensions of the FlipThem game to more realistic scenarios: namely separate costs and strategies on each resource, and a learning approach obtained using so-called fictitious play in which players do not know opponent costs and do not assume opponent rationality.
For the entire collection see [Zbl 1493.68009].
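The FlipIt game underlying FlipThem can be sketched with a minimal simulation of a single resource. Here both players flip at independent Poisson times, a memoryless strategy choice made for illustration; the rates are toy assumptions, and the actual FlipIt/FlipThem analyses also net move costs against control time:

```python
import numpy as np

rng = np.random.default_rng(1)
T = 200_000.0
rate_a, rate_d = 1.0, 3.0   # toy flip rates for attacker and defender

def poisson_times(rate, horizon, rng):
    """Sorted event times of a Poisson process on [0, horizon]."""
    n = rng.poisson(rate * horizon)
    return np.sort(rng.uniform(0.0, horizon, n))

ta = poisson_times(rate_a, T, rng)
td = poisson_times(rate_d, T, rng)

# Merge flips; the resource belongs to whoever moved last.
times = np.concatenate([ta, td])
who = np.concatenate([np.ones_like(ta), np.zeros_like(td)])  # 1 = attacker
order = np.argsort(times)
times, who = times[order], who[order]

# Attacker's control time: the intervals following attacker flips
# (the initial interval before any flip is left with the defender).
gaps = np.diff(np.append(times, T))
attacker_share = float(np.sum(gaps * who) / T)
```

With memoryless play the last mover at any instant is the attacker with probability `rate_a / (rate_a + rate_d)`, so `attacker_share` should settle near 0.25 here; FlipThem extends this picture to multiple resources with thresholds and, in the setting above, per-resource rates learned via fictitious play.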