Recent zbMATH articles in MSC 62https://zbmath.org/atom/cc/622023-01-20T17:58:23.823708ZWerkzeugEstimate of traffic emissions through multiscale second order models with heterogeneous datahttps://zbmath.org/1500.352082023-01-20T17:58:23.823708Z"Balzotti, Caterina"https://zbmath.org/authors/?q=ai:balzotti.caterina"Briani, Maya"https://zbmath.org/authors/?q=ai:briani.mayaSummary: In this paper we propose a multiscale traffic model, based on the family of Generic Second Order Models, which integrates multiple trajectory data into the velocity function. This combination of a second order macroscopic model with microscopic information allows us to reproduce significant variations in speed and acceleration that strongly influence traffic emissions. We obtain accurate approximations even with a few trajectory data. The proposed approach is therefore a computationally efficient and highly accurate tool for calculating macroscopic traffic quantities and estimating emissions.A distribution function from population genetics statistics using Stirling numbers of the first kind: asymptotics, inversion and numerical evaluationhttps://zbmath.org/1500.410092023-01-20T17:58:23.823708Z"Chen, Swaine L."https://zbmath.org/authors/?q=ai:chen.swaine-l"Temme, Nico M."https://zbmath.org/authors/?q=ai:temme.nico-mThe authors consider the cumulative distribution function
\[
S_{n,m}(\theta):=\frac{1}{(\theta)_n}\sum_{k=m}^n(-1)^{n-k}S_n^{(k)}\theta^k, \qquad \theta>0,
\]
where \(S_n^{(k)}\) are the Stirling numbers of the first kind and \((\theta)_n\) denotes the Pochhammer symbol. The main interest of the authors is the study of the inversion problem: ``given a certain \(s\in(0,1)\) and fixed natural numbers \(m\le n\), find \(\theta>0\) solution of the equation \(S_{n,m}(\theta)=s\)''. To accomplish this task, the authors introduce first some properties of the Stirling numbers and their sums, such as recurrence relations. They also summarize some earlier results, like contour integral representations of \(S_{n,m}(\theta)\) and \(T_{n,m}(\theta):=1-S_{n,m}(\theta)\).
The authors indicate the interest of this study in the area of population genetics statistics. This paper improves several results derived by the authors in previous papers, providing additional results about the asymptotic approximation of the cumulative distribution functions \(S_{n,m}(\theta)\) and \(T_{n,m}(\theta)\).
The main result, however, is the derivation of two algorithms for the approximate computation of the solution \(\theta\) of the inversion problem \(S_{n,m}(\theta)=s\). One of the algorithms is based on Newton iterations; the second is based on asymptotic approximations derived from the above-mentioned contour integral representations.
It is shown that there is some loss of accuracy near the transition values, but these points are less important for population genetics statistics. In any case, the error can always be reduced by adding more terms to the expansion.
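A minimal numerical sketch of the forward evaluation and the inversion may help fix ideas. The code below is ours, not the authors' algorithms: it builds the unsigned Stirling numbers \(|S_n^{(k)}|\) from the standard recurrence and replaces the Newton/asymptotic schemes by plain bisection, assuming (as the inversion problem presupposes) that \(S_{n,m}\) is monotone in \(\theta\) on the bracketing interval; all function names are hypothetical.

```python
def unsigned_stirling_row(n):
    """Unsigned Stirling numbers of the first kind |S_n^{(k)}| for k = 0..n,
    via the recurrence c(i+1, k) = c(i, k-1) + i * c(i, k)."""
    row = [1]
    for i in range(n):
        new = [0] * (len(row) + 1)
        for k, v in enumerate(row):
            new[k] += i * v      # element i joins an existing cycle
            new[k + 1] += v      # element i opens a new cycle
        row = new
    return row

def S(n, m, theta):
    """S_{n,m}(theta) = (1/(theta)_n) * sum_{k=m}^n |S_n^{(k)}| theta^k."""
    row = unsigned_stirling_row(n)
    pochhammer = sum(c * theta**k for k, c in enumerate(row))  # (theta)_n
    return sum(row[k] * theta**k for k in range(m, n + 1)) / pochhammer

def invert(n, m, s, lo=1e-9, hi=1e6, tol=1e-12):
    """Solve S_{n,m}(theta) = s by bisection (a simple stand-in for the
    paper's Newton iterations and asymptotic approximations)."""
    while hi - lo > tol * max(1.0, lo):
        mid = 0.5 * (lo + hi)
        if S(n, m, mid) < s:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

For instance, `invert(5, 2, 0.5)` returns the \(\theta\) at which the distribution function \(S_{5,2}\) crosses \(1/2\).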
Reviewer: José L. Lopez (Pamplona)Probabilistic solutions of integral equations from optimal controlhttps://zbmath.org/1500.450012023-01-20T17:58:23.823708Z"Lefebvre, Mario"https://zbmath.org/authors/?q=ai:lefebvre.marioThe author gives a probabilistic interpretation of the solution to a certain inhomogeneous Fredholm integral equation of the second kind which originates from an optimal control problem for autoregressive processes of order 1. He then uses this interpretation to obtain an approximate solution of a generalization of the integral equation. He also compares this approximate solution with the one obtained by the classical Neumann series in various particular cases. In his examples, the constructed approximate solutions are better than the ones obtained with the classical Neumann series, and the convergence can even be faster.
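For readers who want to see the Neumann-series benchmark in action, here is a small sketch (ours; the kernel, grid, and function names are illustrative and not taken from the paper) that approximates the solution of a Fredholm equation of the second kind \(f(x) = g(x) + \lambda \int_a^b K(x,y) f(y)\,dy\) by iterating \(f \leftarrow g + \lambda K f\) on a trapezoidal grid, which converges when \(|\lambda|\,\|K\| < 1\):

```python
import numpy as np

def neumann_solve(g, K, lam, a=0.0, b=1.0, n=201, iters=60):
    """Approximate f(x) = g(x) + lam * int_a^b K(x, y) f(y) dy
    by truncating the Neumann series (fixed-point iteration)."""
    x = np.linspace(a, b, n)
    w = np.full(n, (b - a) / (n - 1))
    w[0] *= 0.5
    w[-1] *= 0.5                      # trapezoidal quadrature weights
    Kmat = K(x[:, None], x[None, :])  # kernel sampled on the grid
    gv = g(x)
    f = gv.copy()
    for _ in range(iters):
        f = gv + lam * Kmat @ (w * f)
    return x, f
```

With the separable kernel \(K(x,y)=xy\), \(g(x)=x\) and \(\lambda=1\) on \([0,1]\), the exact solution is \(f(x)=\tfrac{3}{2}x\), which the iteration reproduces to grid accuracy.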
Reviewer: Qi Lu (Chengdu)A non-commutative Bayes' theoremhttps://zbmath.org/1500.460522023-01-20T17:58:23.823708Z"Parzygnat, Arthur J."https://zbmath.org/authors/?q=ai:parzygnat.arthur-j"Russo, Benjamin P."https://zbmath.org/authors/?q=ai:russo.benjamin-pOver the last 20 years, there has been a lot of interest in generalizing Bayesian inference from classical probability to quantum probability, see for example [\textit{H. Barnum} and \textit{E. Knill}, J. Math. Phys. 43, No. 5, 2097--2106 (2002; Zbl 1059.81027); \textit{M. S. Leifer}, AIP Conf. Proc. 889, 172--186 (2007; Zbl 1138.81334); \textit{B. Coecke} and \textit{R. W. Spekkens}, Synthese 186, No. 3, 651--696 (2012; Zbl 1275.60006); \textit{B. Jacobs}, Electron. Proc. Theor. Comput. Sci. (EPTCS) 287, 225--238 (2019; Zbl 1486.60004)]. Among the main challenges are to find a well-behaved generalization of Bayesian updating to the quantum setting and to develop methods for computing it.
Together with other recent preprints by the same authors, the present paper contributes to this line of investigation. It studies a particularly natural, but restrictive notion of Bayesian inversion in the Heisenberg picture: given finite-dimensional C*-algebras \(\mathcal{A}\) and \(\mathcal{B}\) with states \(\omega : \mathcal{A} \to \mathbb{C}\) and \(\xi : \mathcal{B} \to \mathbb{C}\), the authors define a quantum channel \(G : \mathcal{A} \to \mathcal{B}\) to be a Bayesian inverse of a quantum channel \(F : \mathcal{B} \to \mathcal{A}\) if
\[
\xi(G(A) B) = \omega(A F(B)) \qquad \forall A \in \mathcal{A}, \: B \in \mathcal{B}.
\]
The main result of the paper is a necessary and sufficient condition for when the Bayesian inverse \(G\) exists (Theorems 5.62 and 6.22). The obvious necessary condition \(\xi = \omega \circ F\) is assumed throughout, but is found not to be sufficient. Much of the subtlety with the problem of existence of \(G\) is due to the difficulties that arise in the case where \(\xi\) does not have full support, and the authors emphasize that their careful treatment of this aspect improves upon other works significantly.
Prior to solving the existence question, Section 3 discusses some basics of the above notion of Bayesian inversion, and in particular explains the sense in which the Bayesian inverse \(G\) is unique up to almost sure equality. Section 4 provides a good selection of special cases of the above problem, and provides a number of more concrete criteria for the existence of a Bayesian inverse. These criteria take the form of commutativity conditions, which suggests that Bayesian inverses in the above sense may be relatively rare.
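In the fully commutative case the defining condition above reduces to classical Bayes' rule, which is easy to verify numerically. The sketch below (ours, purely illustrative) models states as probability vectors and channels as stochastic matrices acting in the Heisenberg picture, constructs the Bayesian inverse \(G\) by Bayes' rule, and checks \(\xi(G(A)B)=\omega(AF(B))\) on random observables:

```python
import numpy as np

rng = np.random.default_rng(0)

# State omega on A = functions on a 3-point set X; state xi on B = functions on Y.
p = rng.random(3)
p /= p.sum()
f = rng.random((3, 4))
f /= f.sum(axis=1, keepdims=True)   # channel densities f(y|x), rows sum to 1
q = p @ f                           # xi = omega o F, the pushforward state
g = (f * p[:, None]) / q[None, :]   # Bayes' rule: g(x|y) = f(y|x) p(x) / q(y)

def omega(a): return p @ a
def xi(b): return q @ b
def F(b): return f @ b              # Heisenberg picture: (Fb)(x) = sum_y f(y|x) b(y)
def G(a): return g.T @ a            # Bayesian inverse: (Ga)(y) = sum_x g(x|y) a(x)

a = rng.random(3)
b = rng.random(4)
assert np.isclose(xi(G(a) * b), omega(a * F(b)))  # the defining condition
```

Here \(\xi=\omega\circ F\) holds by construction and \(\xi\) has full support, which is exactly the regime where the classical inverse exists; the subtleties treated in the paper arise when \(\xi\) lacks full support or the algebras fail to commute.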
Reviewer: Tobias Fritz (Innsbruck)On the density for sums of independent exponential, Erlang and gamma variateshttps://zbmath.org/1500.600082023-01-20T17:58:23.823708Z"Levy, Edmond"https://zbmath.org/authors/?q=ai:levy.edmondThe paper begins with the formula for the hypo-exponential density for the sum of independent exponentials having pairwise distinct parameters.
The author points out that this density has a divided-difference structure, which immediately suggests a novel perspective from which to further explore the densities of sums of independent exponentials.
The paper advances a succinct representation for the density of a sum of independent Erlang distributed variables and demonstrates agreement with other papers in which such formulae have been found (and often rediscovered) by other means.
The author extends these results further: using the tools of fractional calculus (see for example [\textit{K. B. Oldham} and \textit{J. Spanier}, The fractional calculus. Theory and applications of differentiation and integration to arbitrary order. Elsevier, Amsterdam (1974; Zbl 0292.26011)]), a representation is also derived for the density of sums of independent gamma random variables with distinct parameters. The paper concludes by showing how this approach produces the density function itself.
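The hypo-exponential density that opens the review admits a very short implementation. The sketch below (ours; the function name is hypothetical) evaluates the classical formula \(f(t)=\sum_i\bigl(\prod_{j\ne i}\tfrac{\lambda_j}{\lambda_j-\lambda_i}\bigr)\lambda_i e^{-\lambda_i t}\) for pairwise distinct rates, and can be checked against the two-rate convolution \(\tfrac{\lambda_1\lambda_2}{\lambda_2-\lambda_1}(e^{-\lambda_1 t}-e^{-\lambda_2 t})\):

```python
import math

def hypoexp_pdf(t, rates):
    """Density at t of a sum of independent Exp(rate_i) variables
    with pairwise distinct rates (the hypo-exponential density)."""
    total = 0.0
    for i, li in enumerate(rates):
        weight = 1.0
        for j, lj in enumerate(rates):
            if j != i:
                weight *= lj / (lj - li)   # divided-difference coefficients
        total += weight * li * math.exp(-li * t)
    return total
```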
Reviewer: Romeo Negrea (Timişoara)A probabilistic interpretation of the Dzhrbashyan fractional integralhttps://zbmath.org/1500.600092023-01-20T17:58:23.823708Z"Zhao, Dazhi"https://zbmath.org/authors/?q=ai:zhao.dazhi"Yu, Guozhu"https://zbmath.org/authors/?q=ai:yu.guozhu"Yu, Tao"https://zbmath.org/authors/?q=ai:yu.tao"Zhang, Lu"https://zbmath.org/authors/?q=ai:zhang.luSummary: Physical and probabilistic interpretations of the fractional derivatives and integrals are basic problems to their applications. In this paper, we establish a relation between the Dzhrbashyan fractional integral and the expectation of a corresponding random variable by constructing the cumulative distribution function. As examples, interpretations of the Riemann-Liouville fractional integral and Kober integral operator are given. Furthermore, probabilistic interpretations of the Caputo fractional derivative and the fractional integral of a function with respect to another function are discussed too. With the help of probabilistic interpretations proposed in this paper, models described by fractional derivatives and integrals can be endowed with corresponding statistical meanings, while some statistical physics models can be rewritten in fractional calculus too.Lorenz and polarization orderings of the double-Pareto lognormal distribution and other size distributionshttps://zbmath.org/1500.600102023-01-20T17:58:23.823708Z"Okamoto, Masato"https://zbmath.org/authors/?q=ai:okamoto.masatoSummary: Polarization indices such as the Foster-Wolfson index have been developed to measure the extent of clustering in a few classes with wide gaps between them in terms of income distribution. However, \textit{X. Zhang} and \textit{Kanbur} [``What difference do polarization measures make? An application to China'', J. Dev. Stud. 37, 85--98 (2001)] failed to empirically find clear differences between polarization and inequality indices in the measurement of intertemporal distributional changes. 
This paper addresses this `distinction' problem on the level of the respective underlying stochastic orders, the polarization order (PO) in distributions divided into two nonoverlapping classes and the Lorenz order (LO) of inequality in distributions. More specifically, this paper investigates whether a distribution \(F\) can be either more or less polarized than a distribution \(H\) in terms of the PO if \(F\) is more unequal than \(H\) in terms of the LO. Furthermore, this paper derives conditions for the LO and PO of the double-Pareto lognormal (dPLN) distribution. The derived conditions are applicable to sensitivity analyses of inequality and polarization indices with respect to distributional changes. From this application, a suggestion for appropriate two-class polarization indices is made.Central limit theorem for bifurcating Markov chains under pointwise ergodic conditionshttps://zbmath.org/1500.600132023-01-20T17:58:23.823708Z"Penda, S. Valère Bitseki"https://zbmath.org/authors/?q=ai:bitseki-penda.s-valere"Delmas, Jean-François"https://zbmath.org/authors/?q=ai:delmas.jean-francoisSummary: Bifurcating Markov chains (BMC) are Markov chains indexed by a full binary tree representing the evolution of a trait along a population where each individual has two children. We provide a central limit theorem for general additive functionals of BMC, and prove the existence of three regimes. This corresponds to a competition between the reproducing rate (each individual has two children) and the ergodicity rate for the evolution of the trait. This is in contrast with the work of \textit{J. Guyon} [Ann. Appl. Probab. 17, No. 5--6, 1538--1569 (2007; Zbl 1143.62049)], where the considered additive functionals are sums of martingale increments, and only one regime appears. Our result can be seen as a discrete time version, but with general trait evolution, of results in the time continuous setting of branching particle system from [\textit{R. Adamczak} and \textit{P. 
Miłoś}, Electron. J. Probab. 20, Paper No. 42, 35 p. (2015; Zbl 1321.60035)], where the evolution of the trait is given by an Ornstein-Uhlenbeck process.A CLT for second difference estimators with an application to volatility and intensityhttps://zbmath.org/1500.600142023-01-20T17:58:23.823708Z"Stoltenberg, Emil A."https://zbmath.org/authors/?q=ai:stoltenberg.emil-aas"Mykland, Per A."https://zbmath.org/authors/?q=ai:mykland.per-aslak"Zhang, Lan"https://zbmath.org/authors/?q=ai:zhang.lanSummary: In this paper, we introduce a general method for estimating the quadratic covariation of one or more spot parameter processes associated with continuous time semimartingales, and present a central limit theorem that has this class of estimators as one of its applications. The class of estimators we introduce, that we call Two-Scales Quadratic Covariation \((\text{TSQC})\) estimators, is based on sums of increments of second differences of the observed processes, and the intervals over which the differences are computed are rolling and overlapping. This latter feature lets us take full advantage of the data, and, by sufficiency considerations, ought to outperform estimators that are based on only one partition of the observational window. Moreover, a two-scales approach is employed to deal with asymptotic bias terms in a systematic manner, thus automatically giving consistent estimators without having to work out the form of the bias term on a case-to-case basis. We highlight the versatility of our central limit theorem by applying it to a novel leverage effect estimator that does not belong to the class of \(\text{TSQC}\) estimators. The principal empirical motivation for the present study is that the discrete times at which a continuous time semimartingale is observed might depend on features of the observable process other than its level, such as its spot-volatility process. 
As an application of the \(\text{TSQC}\) estimators, we therefore show how they may be used to estimate the quadratic covariation between the spot-volatility process and the intensity process of the observation times, when both of these are taken to be semimartingales. The finite sample properties of this estimator are studied by way of a simulation experiment, and we also apply this estimator in an empirical analysis of the Apple stock. Our analysis of the Apple stock indicates a rather strong correlation between the spot volatility process of the log-prices process and the times at which this stock is traded and hence observed.The multivariate functional de Jong CLThttps://zbmath.org/1500.600182023-01-20T17:58:23.823708Z"Döbler, Christian"https://zbmath.org/authors/?q=ai:dobler.christian"Kasprzak, Mikołaj"https://zbmath.org/authors/?q=ai:kasprzak.mikolaj-j"Peccati, Giovanni"https://zbmath.org/authors/?q=ai:peccati.giovanniSummary: We prove a multivariate functional version of \textit{P. de Jong}'s CLT [J. Multivariate Anal. 34, No. 2, 275--289 (1990; Zbl 0709.60019)] yielding that, given a sequence of vectors of Hoeffding-degenerate U-statistics, the corresponding empirical processes on \([0, 1]\) weakly converge in the Skorohod space as soon as their fourth cumulants in \(t=1\) vanish asymptotically and a certain strengthening of the Lindeberg-type condition is verified. As an application, we lift to the functional level the `universality of Wiener chaos' phenomenon first observed in [\textit{I. Nourdin} et al., Ann. Probab. 38, No.
5, 1947--1985 (2010; Zbl 1246.60039)].Geometric ergodicity of the multivariate COGARCH(1,1) processhttps://zbmath.org/1500.600452023-01-20T17:58:23.823708Z"Stelzer, Robert"https://zbmath.org/authors/?q=ai:stelzer.robert"Vestweber, Johanna"https://zbmath.org/authors/?q=ai:vestweber.johannaUnder an irreducibility assumption, the authors deduce sufficient conditions for the uniqueness of the stationary distribution of the MUCOGARCH volatility process \(Y\), for convergence to it at an exponential rate, and for finiteness of some \(p\)-moment of the stationary distribution. The MUCOGARCH model (see [\textit{R. Stelzer}, Bernoulli 16, No. 1, 80--115 (2010; Zbl 1200.62110)]) extends the classical generalized autoregressive conditionally heteroscedastic (GARCH) time series models introduced by \textit{T. Bollerslev} [J. Econom. 31, 307--327 (1986; Zbl 0616.62119)]. They show this using the theory of Markov processes, see, e.g., [\textit{D. Down} et al., Ann. Probab. 23, No. 4, 1671--1691 (1995; Zbl 0852.60075); \textit{S. P. Meyn} and \textit{R. L. Tweedie}, Adv. Appl. Probab. 25, No. 3, 487--517 (1993; Zbl 0781.60052)].
Reviewer: Romeo Negrea (Timişoara)Uniform in time propagation of chaos for a Moran modelhttps://zbmath.org/1500.600472023-01-20T17:58:23.823708Z"Cloez, Bertrand"https://zbmath.org/authors/?q=ai:cloez.bertrand"Corujo, Josué"https://zbmath.org/authors/?q=ai:corujo.josue-m|corujo.josueSummary: This article studies the limit of the empirical distribution induced by a mutation-selection multi-allelic Moran model. Our results include a uniform in time bound for the propagation of chaos in \(\mathbb{L}^p\) of order \(\sqrt{N}\), and the proof of the asymptotic normality with zero mean and explicit variance, when the number of individuals tend towards infinity, for the approximation error between the empirical distribution and its limit. Additionally, we explore the interpretation of this Moran model as a particle process whose empirical probability measure approximates a quasi-stationary distribution, in the same spirit as the Fleming-Viot particle systems.Bayesian estimation of incompletely observed diffusionshttps://zbmath.org/1500.600512023-01-20T17:58:23.823708Z"van der Meulen, Frank"https://zbmath.org/authors/?q=ai:van-der-meulen.frank-h"Schauer, Moritz"https://zbmath.org/authors/?q=ai:schauer.moritzSummary: We present a general framework for Bayesian estimation of incompletely observed multivariate diffusion processes. Observations are assumed to be discrete in time, noisy and incomplete. We assume the drift and diffusion coefficient depend on an unknown parameter. A data-augmentation algorithm for drawing from the posterior distribution is presented which is based on simulating diffusion bridges conditional on a noisy incomplete observation at an intermediate time. The dynamics of such filtered bridges are derived and it is shown how these can be simulated using a generalised version of the guided proposals introduced in
[\textit{M. Schauer} et al., Bernoulli 23, No. 4A, 2917--2950 (2017; Zbl 1415.65022)].Optimal signal detection in some spiked random matrix models: likelihood ratio tests and linear spectral statisticshttps://zbmath.org/1500.620012023-01-20T17:58:23.823708Z"Banerjee, Debapratim"https://zbmath.org/authors/?q=ai:banerjee.debapratim"Ma, Zongming"https://zbmath.org/authors/?q=ai:ma.zongmingSummary: We study signal detection by likelihood ratio tests in a number of spiked random matrix models, including but not limited to Gaussian mixtures and spiked Wishart covariance matrices. We work directly with multi-spiked cases in these models and with flexible priors on signal components that allow dependence across spikes. We derive asymptotic normality for the log-likelihood ratios when the signal-to-noise ratios are below certain bounds. In addition, the log-likelihood ratios can be asymptotically decomposed as weighted sums of a collection of statistics which we call bipartite signed cycles. Based on this decomposition, we show that below the bounds we could always achieve the asymptotically optimal powers of likelihood ratio tests via tests based on linear spectral statistics which have polynomial time complexity.Exact minimax risk for linear least squares, and the lower tail of sample covariance matriceshttps://zbmath.org/1500.620022023-01-20T17:58:23.823708Z"Mourtada, Jaouad"https://zbmath.org/authors/?q=ai:mourtada.jaouadSummary: We consider random-design linear prediction and related questions on the lower tail of random matrices. It is known that, under boundedness constraints, the minimax risk is of order \(d/n\) in dimension \(d\) with \(n\) samples. Here, we study the minimax expected excess risk over the full linear class, depending on the distribution of covariates. First, the least squares estimator is exactly minimax optimal in the well-specified case, for every distribution of covariates. 
We express the minimax risk in terms of the distribution of statistical leverage scores of individual samples, and deduce a minimax lower bound of \(d/(n-d+1)\) for any covariate distribution, nearly matching the risk for Gaussian design. We then obtain sharp nonasymptotic upper bounds for covariates that satisfy a ``small ball''-type regularity condition in both well-specified and misspecified cases.
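These rates are easy to probe empirically. The following Monte Carlo sketch (ours; the dimensions, seed, and tolerances are illustrative) estimates the expected excess risk of least squares under a well-specified Gaussian design, where the exact value \(\sigma^2 d/(n-d-1)\) is classical and nearly matches the distribution-free lower bound \(\sigma^2 d/(n-d+1)\):

```python
import numpy as np

rng = np.random.default_rng(1)
d, n, sigma, reps = 5, 20, 1.0, 2000

risks = []
for _ in range(reps):
    X = rng.standard_normal((n, d))                   # Gaussian design, covariance I
    beta = rng.standard_normal(d)
    y = X @ beta + sigma * rng.standard_normal(n)
    beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)  # least squares estimator
    risks.append(float(np.sum((beta_hat - beta) ** 2)))  # excess risk when Sigma = I

mean_risk = sum(risks) / reps
lower_bound = sigma**2 * d / (n - d + 1)     # distribution-free minimax lower bound
gaussian_exact = sigma**2 * d / (n - d - 1)  # exact expected risk, Gaussian design
```

With these values the bound is \(5/16=0.3125\) and the exact Gaussian risk is \(5/14\approx 0.357\); the empirical mean should land near the latter.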
Our main technical contribution is the study of the lower tail of the smallest singular value of empirical covariance matrices at small values. We establish a lower bound on this lower tail, valid for any distribution in dimension \(d\ge 2\), together with a matching upper bound under a necessary regularity condition. Our proof relies on the PAC-Bayes technique for controlling empirical processes, and extends an analysis of Oliveira devoted to a different part of the lower tail.Construction of a class of copula using the finite difference methodhttps://zbmath.org/1500.650912023-01-20T17:58:23.823708Z"Bagré, Remi Guillaume"https://zbmath.org/authors/?q=ai:bagre.remi-guillaume"Béré, Frédéric"https://zbmath.org/authors/?q=ai:bere.frederic"Loyara, Vini Yves Bernadin"https://zbmath.org/authors/?q=ai:loyara.vini-yves-bernadinSummary: The definition of a copula function and the study of its properties are at the same time not obvious tasks, as there is no general method for constructing them. In this paper, we present a method that allows us to obtain a class of copula as a solution to a boundary value problem. For this, we use the finite difference method which is a common technique for finding approximate solutions of partial differential equations which consists in solving a system of relations (numerical scheme) linking the values of the unknown functions at certain points sufficiently close to each other.Computational intelligence. A methodological introduction. 
With contributions from Frank Klawonn and Christian Moeweshttps://zbmath.org/1500.680012023-01-20T17:58:23.823708Z"Kruse, Rudolf"https://zbmath.org/authors/?q=ai:kruse.rudolf"Mostaghim, Sanaz"https://zbmath.org/authors/?q=ai:mostaghim.sanaz"Borgelt, Christian"https://zbmath.org/authors/?q=ai:borgelt.christian"Braune, Christian"https://zbmath.org/authors/?q=ai:braune.christian"Steinbrecher, Matthias"https://zbmath.org/authors/?q=ai:steinbrecher.matthiasThe book presents a thorough exposition of the main concepts of computational intelligence. It is divided into four parts, neural networks, evolutionary algorithms, fuzzy systems and Bayesian networks, that are very well covered. The book is self-contained, all the necessary notions for understanding the concepts are included. Moreover, the four parts are independent, the reader may study only one part without needing to read another part in order to understand the notions.
The book has plenty of examples that make the understanding of the concepts easier, contains high-quality figures that present various problems, representations or results obtained from different simulations, and many algorithms written in pseudocode. Each chapter has its separate bibliography section.
It is an interesting book that may serve a wide audience very well, providing material for researchers and students as well as for practitioners in industry.
For the preceding editions see [Zbl 1283.68280; Zbl 1358.68003].
Reviewer: Catalin Stoean (Craiova)Experimental tests of Lieb-Robinson boundshttps://zbmath.org/1500.810432023-01-20T17:58:23.823708Z"Cheneau, Marc"https://zbmath.org/authors/?q=ai:cheneau.marcSummary: Judging by the enormous body of work that it has inspired, \textit{E. H. Lieb} and \textit{D. W. Robinson}'s 1972 article [``The finite group velocity of quantum spin systems'', Commun. Math. Phys. 28, 251--257 (1972; \url{doi:10.1007/BF01645779})] on the ``Finite group velocity of quantum spin systems'' can be regarded as a ``high-impact paper'', as research accountants say. But for more than 30 years, this major contribution to quantum physics has remained pretty much unnoticed. Lieb and Robinson's work eventually found a large audience in the 2000s, with the rapid and concomitant development of quantum information theory and new experimental platforms. In this chapter, I will first remind the reader of the central result of Lieb and Robinson's work, which is the emergence of a local causality structure in the dynamics of non-relativistic quantum systems, and which manifests itself in the exponential suppression of the commutator of any two operators outside an effective light cone in space-time. I will then review the experiments that most closely relate to this finding, in the sense that they reveal the group velocity of information propagation in ``real'' quantum systems. Finally, as an outlook, I will attempt to make a connection with the quantum version of the butterfly effect recently studied in chaotic quantum systems.
For the entire collection see [Zbl 1491.46002].Maximal speed of propagation in open quantum systemshttps://zbmath.org/1500.810532023-01-20T17:58:23.823708Z"Breteaux, Sébastien"https://zbmath.org/authors/?q=ai:breteaux.sebastien"Faupin, Jérémy"https://zbmath.org/authors/?q=ai:faupin.jeremy"Lemm, Marius"https://zbmath.org/authors/?q=ai:lemm.marius"Sigal, Israel Michael"https://zbmath.org/authors/?q=ai:sigal.israel-michaelSummary: We prove a maximal velocity bound for the dynamics of Markovian open quantum systems. The dynamics is described by one-parameter semigroups of quantum channels satisfying the von Neumann-Lindblad equation. Our result says that dynamically evolving states are contained inside a suitable light cone up to polynomial errors. We also give a bound on the slope of the light cone, i.e. the maximal propagation speed. The result implies an upper bound on the speed of propagation of local perturbations of stationary states in open quantum systems.
For the entire collection see [Zbl 1491.46002].Predicting slow relaxation timescales in open quantum systemshttps://zbmath.org/1500.810542023-01-20T17:58:23.823708Z"Poulsen, Felipe"https://zbmath.org/authors/?q=ai:poulsen.felipe"Hansen, Thorsten"https://zbmath.org/authors/?q=ai:hansen.thorsten"Reuter, Matthew G."https://zbmath.org/authors/?q=ai:reuter.matthew-gSummary: Molecules in open systems may be modeled using so-called reduced descriptions that keep focus on the molecule while including the effects of the environment. Mathematically, the matrices governing the Markovian equations of motion for the reduced density matrix, such as the Lindblad and Redfield equations, belong to the family of non-normal matrices. Tools for predicting the behavior of normal matrices (e.g., eigenvalue decompositions) are inadequate for describing the qualitative dynamics of systems governed by non-normal matrices. For example, such a system may relax to equilibrium on timescales much longer than expected from the eigenvalues. In this paper we contrast normal and non-normal matrices, expose mathematical tools for analyzing non-normal matrices, and apply these tools to a representative example system. We show how these tools may be used to predict dissipation timescales at both intermediate and asymptotic times, and we compare these tools to the conventional eigenvalue analyses. Interactions between the molecule and the environment, while generally resulting in dissipation on long timescales, can directly induce transient or even amplified behavior on short and intermediate timescales.Exact solution of the \(\Phi_2^3\) finite matrix modelhttps://zbmath.org/1500.810602023-01-20T17:58:23.823708Z"Kanomata, Naoyuki"https://zbmath.org/authors/?q=ai:kanomata.naoyuki"Sako, Akifumi"https://zbmath.org/authors/?q=ai:sako.akifumiSummary: We find the exact solutions of the \(\Phi_2^3\) finite matrix model (Grosse-Wulkenhaar model). 
In the \(\Phi_2^3\) finite matrix model, multipoint correlation functions are expressed as \(G_{|a_1^1 \dots a_{N_1}^1| \dots |a_1^B \dots a_{N_B}^B|} \). The \(\sum_{i = 1}^B N_i\)-point function denoted by \(G_{|a_1^1 \dots a_{N_1}^1| \dots |a_1^B \dots a_{N_B}^B|}\) is given by the sum over all Feynman diagrams (ribbon graphs) on Riemann surfaces with \(B\)-boundaries, and each \(| a_1^i \cdots a_{N_i}^i |\) corresponds to the Feynman diagrams having \(N_i\)-external lines from the \(i\)-th boundary. It is known that any \(G_{|a_1^1\dots a_{N_1}^1|\dots|a_1^B\dots a_{N_B}^B|}\) can be expressed using \(G_{|a^1|\dots|a^n|}\) type \(n\)-point functions. Thus we focus on rigorous calculations of \(G_{|a^1|\dots|a^n|}\). The formula for \(G_{|a^1|\dots|a^n|}\) is obtained, and it is achieved by using the partition function \(\mathcal{Z}[J]\) calculated by the Harish-Chandra-Itzykson-Zuber integral. We give \(G_{|a|}\), \(G_{|ab|}\), \(G_{|a|b|}\), and \(G_{|a|b|c|}\) as the specific simple examples. All of them are described by using the Airy functions.The Lieb-Oxford lower bounds on the Coulomb energy, their importance to electron density functional theory, and a conjectured tight bound on exchangehttps://zbmath.org/1500.810742023-01-20T17:58:23.823708Z"Perdew, John P."https://zbmath.org/authors/?q=ai:perdew.john-p"Sun, Jianwei"https://zbmath.org/authors/?q=ai:sun.jianweiSummary: \textit{E. H. Lieb} and \textit{S. Oxford} [``Improved lower bound on the indirect Coulomb energy'', J. Quantum Chem. 19, No. 3, 427--439 (1981; \url{doi:10.1002/qua.560190306})] derived rigorous lower bounds, in the form of local functionals of the electron density, on the indirect part of the Coulomb repulsion energy. The greatest lower bound for a given electron number \(N\) depends monotonically upon \(N\), and the \(N\to\infty\) limit is a bound for all \(N\).
These bounds have been shown to apply to the exact density functionals for the exchange- and exchange-correlation energies that must be approximated for an accurate and computationally efficient description of atoms, molecules, and solids. A tight bound on the exact exchange energy has been derived therefrom for two-electron ground states, and is conjectured to apply to all spin-unpolarized electronic ground states. Some of these and other exact constraints have been used to construct two generations of non-empirical density functionals beyond the local density approximation: the Perdew-Burke-Ernzerhof (PBE) generalized gradient approximation (GGA), and the strongly constrained and appropriately normed (SCAN) meta-GGA.
For the entire collection see [Zbl 1491.46003].Information theoretical statistical discrimination measures for electronic densitieshttps://zbmath.org/1500.810852023-01-20T17:58:23.823708Z"Laguna, Humberto G."https://zbmath.org/authors/?q=ai:laguna.humberto-g"Salazar, Saúl J. C."https://zbmath.org/authors/?q=ai:salazar.saul-j-c"Sagar, Robin P."https://zbmath.org/authors/?q=ai:sagar.robin-pSummary: Information theoretical measures are examined as methodologies for optimizing linear and non-linear parameters to obtain the best densities for particular classes of functions. We focus on the use of Gaussian type functions to represent the hydrogen atom, and examine combinations of these functions which have been used in the STO-\(n\)G basis sets. The densities obtained from these procedures are compared and contrasted to those obtained from energy optimization, and from least-squares fitting to the wave function and to the density, by evaluation of density expectation values and comparisons to their exact values. We show how densities obtained from the optimization of Kullback-Leibler (KL) measures yield better results in general, as compared to the ones obtained from energy optimization or least-squares fitting procedures. Furthermore, these types of densities are observed to provide exact results in the case of two expectation values, for all the studied classes of functions. The densities obtained from optimization of the cumulative residual KL measures, based on survival densities, provide the most accurate tail behaviour of the densities and hence the most accurate higher-order moments.The classical Jellium and the Laughlin phasehttps://zbmath.org/1500.810902023-01-20T17:58:23.823708Z"Rougerie, Nicolas"https://zbmath.org/authors/?q=ai:rougerie.nicolasSummary: I discuss results bearing on a variational problem of a new type, inspired by fractional quantum Hall physics. 
In the latter context, the main result reviewed herein can be spelled out as ``the phase of independent quasi-holes generated from Laughlin's wave-function is stable against external potentials and weak long-range interactions''. The main ingredient of the proof is a connection between fractional quantum Hall wave-functions and statistical mechanics problems that generalize the two-dimensional one-component plasma (jellium model). Universal bounds on the density of such systems, coined ``Incompressibility estimates'', are obtained via the construction of screening regions for any configuration of points with positive electric charges. The latter regions are patches of constant, negative electric charge density, whose shape is optimized for the total system (points plus patch) not to generate any electric potential in its exterior.
For the entire collection see [Zbl 1491.46003].Multiplicity, localization, and domains in the Hartree-Fock ground state of the two-dimensional Hubbard modelhttps://zbmath.org/1500.820152023-01-20T17:58:23.823708Z"Matsuyama, Kazue"https://zbmath.org/authors/?q=ai:matsuyama.kazue"Greensite, Jeff"https://zbmath.org/authors/?q=ai:greensite.jeffSummary: We explore certain properties of the Hartree-Fock approximation to the ground state of the two-dimensional Hubbard model, emphasizing the fact that in the Hartree approach there is an enormous multiplicity of self-consistent solutions which are nearly degenerate in energy, reminiscent of a spin glass, but which may differ substantially in other bulk properties. It is argued that this multiplicity is physically relevant at low temperatures. We study the localization properties of the one-particle wavefunctions comprising the Hartree-Fock states, and find that these are unlocalized at small and moderate values of \(U/t\), in particular in the stripe region, but become highly localized at values corresponding to strong repulsion. 
We also find rectangular domains as well as stripes in the stripe region of the phase diagram, and study pair correlations in the neighborhood of half-filling.Complete classification of Friedmann-Lemaître-Robertson-Walker solutions with linear equation of state: parallelly propagated curvature singularities for general geodesicshttps://zbmath.org/1500.830382023-01-20T17:58:23.823708Z"Harada, Tomohiro"https://zbmath.org/authors/?q=ai:harada.tomohiro"Igata, Takahisa"https://zbmath.org/authors/?q=ai:igata.takahisa"Sato, Takuma"https://zbmath.org/authors/?q=ai:sato.takuma"Carr, Bernard"https://zbmath.org/authors/?q=ai:carr.bernard-jSummary: We completely classify the Friedmann-Lemaître-Robertson-Walker solutions with spatial curvature \(K = 0, \pm 1\) for perfect fluids with linear equation of state \(p = w \rho\), where \(\rho\) and \(p\) are the energy density and pressure, without assuming any energy conditions. We extend our previous work to include all geodesics and parallelly propagated (p.p.) curvature singularities, showing that no non-null geodesic emanates from or terminates at the null portion of conformal infinity and that the initial singularity for \(K = 0, -1\) and \(-5/3 < w < -1\) is a null non-scalar polynomial curvature singularity. We thus obtain the Penrose diagrams for all possible cases and identify \(w = -5/3\) as a critical value for both the future big-rip singularity and the past null conformal boundary.The effective BRKGA algorithm for the \(k\)-medoids clustering problemhttps://zbmath.org/1500.900552023-01-20T17:58:23.823708Z"Brito, Jose Andre"https://zbmath.org/authors/?q=ai:brito.jose-andre-m"Semaan, Gustavo"https://zbmath.org/authors/?q=ai:semaan.gustavo-silva"Fadel, Augusto"https://zbmath.org/authors/?q=ai:fadel.augusto-cesarSummary: This paper presents a biased random-key genetic algorithm for \(k\)-medoids clustering problem. A novel heuristic operator was implemented and combined with a parallelized local search procedure. 
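For context, the objective that both PAM and the proposed BRKGA minimize — the total distance of points to their assigned medoids — can be illustrated with a minimal alternating \(k\)-medoids heuristic (a sketch with hypothetical function names; this is not the authors' BRKGA, which evolves random-key vectors decoding to medoid sets and uses this cost as fitness):

```python
import numpy as np

def k_medoids(X, k, n_iter=100, seed=0):
    """Minimal alternating k-medoids heuristic (illustration only;
    not the BRKGA proposed in the paper)."""
    rng = np.random.default_rng(seed)
    n = len(X)
    # Full pairwise Euclidean distance matrix.
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    medoids = rng.choice(n, size=k, replace=False)
    for _ in range(n_iter):
        # Assignment step: each point joins its nearest medoid.
        labels = np.argmin(D[:, medoids], axis=1)
        # Update step: each medoid becomes the cluster member that
        # minimizes the summed distance to the other members.
        new = medoids.copy()
        for j in range(k):
            members = np.where(labels == j)[0]
            if members.size:
                new[j] = members[np.argmin(D[np.ix_(members, members)].sum(axis=1))]
        if np.array_equal(new, medoids):
            break
        medoids = new
    cost = D[np.arange(n), medoids[labels]].sum()
    return medoids, labels, cost
```

Unlike \(k\)-means, the cluster centers are restricted to be data points, which is what makes the linear integer programming formulation mentioned below applicable.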
Experiments carried out with fifty literature data sets of small, medium, and large sizes, considering several numbers of clusters, showed that the proposed algorithm outperformed eight other algorithms, including the classic PAM and CLARA algorithms. Furthermore, using the results of a linear integer programming formulation, we found that our algorithm obtained the globally optimal solutions in most cases and, despite its stochastic nature, showed stability in both the quality of the solutions obtained and the number of generations required to produce them. In addition, a relative validation index (average silhouette) was applied to the solutions (clusterings) produced by the algorithms; again, our method performed well, producing clusters with a good structure.Admissible and Bayes decisions with fuzzy-valued losseshttps://zbmath.org/1500.910582023-01-20T17:58:23.823708Z"Shvedov, Alexey S."https://zbmath.org/authors/?q=ai:shvedov.alexey-sSummary: Some results of classical statistical decision theory are generalized by means of the theory of fuzzy sets. The concepts of an admissible decision in the restricted sense, an admissible decision in the broad sense, a Bayes decision in the restricted sense, and a Bayes decision in the broad sense are introduced. It is proved that any Bayes decision in the broad sense with positive prior discrete density is admissible in the restricted sense. The class of Bayes decisions is shown to be complete under certain conditions on the loss function.
Problems with a finite set of possible states are considered.On the Gompertz-Makeham law: a useful mortality model to deal with human mortalityhttps://zbmath.org/1500.911112023-01-20T17:58:23.823708Z"Castellares, Fredy"https://zbmath.org/authors/?q=ai:castellares.fredy"Patrício, Silvio"https://zbmath.org/authors/?q=ai:patricio.silvio-c"Lemonte, Artur J."https://zbmath.org/authors/?q=ai:lemonte.artur-joseSummary: The Gompertz-Makeham model was introduced as an extension of the Gompertz model in the second half of the 19th century by the British actuary William M. Makeham. Since then, this model has been successfully used in biology, actuarial science, and demography to describe mortality patterns in numerous species (including humans), determine policies in insurance, and establish actuarial tables and growth models. In this paper, we derive some structural properties of the Gompertz-Makeham model in statistics, demography, and actuarial sciences, and present others already introduced in the literature. All structural properties we provide are expressed in closed form, which eliminates the need to evaluate them by numerical integration. In addition, we study the estimation of the Gompertz-Makeham model parameters through the discrete Poisson and Bell distributions. In particular, we verify that the recently introduced discrete Bell distribution can be an interesting alternative to the Poisson distribution, mainly because it is suitable to deal with overdispersion, unlike the Poisson distribution.
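In its standard parameterization (the notation here is illustrative, not necessarily the authors'), the Gompertz-Makeham force of mortality adds an age-independent Makeham term to the exponentially growing Gompertz term, and the survival function follows in closed form by integrating the hazard:

```python
import math

def gm_hazard(x, lam, alpha, beta):
    """Force of mortality at age x: Makeham term lam plus
    Gompertz term alpha * exp(beta * x)."""
    return lam + alpha * math.exp(beta * x)

def gm_survival(x, lam, alpha, beta):
    """Closed-form survival function
    S(x) = exp(-lam*x - (alpha/beta) * (exp(beta*x) - 1)),
    obtained by integrating the hazard; no numerical quadrature needed."""
    return math.exp(-lam * x - (alpha / beta) * (math.exp(beta * x) - 1.0))
```

Remaining life expectancy at age \(x\) is then an integral of \(S(x+t)/S(x)\), which is the kind of quantity the closed-form structural properties above are meant to serve.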
On the basis of real mortality datasets, we compute the remaining life expectancy for several countries and verify that the Gompertz-Makeham model, especially under the Bell distribution, provides sound results for dealing with human mortality in practice.Semiparametric regression for dual population mortalityhttps://zbmath.org/1500.911162023-01-20T17:58:23.823708Z"Venter, Gary"https://zbmath.org/authors/?q=ai:venter.gary-g"Şahin, Şule"https://zbmath.org/authors/?q=ai:sahin.sule-onselSummary: Parameter shrinkage applied optimally can always reduce error and projection variances from those of maximum likelihood estimation. Many variables that actuaries use are on numerical scales, like age or year, which require parameters at each point. Rather than shrinking these toward zero, nearby parameters are better shrunk toward each other. Semiparametric regression is a statistical discipline for building curves across parameter classes using shrinkage methodology. It is similar to but more parsimonious than cubic splines. We introduce it in the context of Bayesian shrinkage and apply it to joint mortality modeling for related populations. Bayesian shrinkage of slope changes of linear splines is an approach to semiparametric modeling that evolved in the actuarial literature. It has some theoretical and practical advantages, like closed-form curves, direct and transparent determination of the degree of shrinkage and of knot placement for the splines, and quantification of goodness of fit. It is also relatively easy to apply to the many nonlinear models that arise in actuarial work.
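The linear-spline construction behind this kind of shrinkage can be sketched as a design matrix whose truncated-line columns carry the slope changes; shrinking those coefficients toward zero makes nearby ages or years share a common slope (a minimal sketch under assumed notation, not the authors' code):

```python
import numpy as np

def linear_spline_design(x, knots):
    """Design matrix [1, x, (x - k1)_+, ..., (x - kJ)_+].
    The coefficients on the truncated columns are the slope changes
    that Bayesian shrinkage pulls toward each other (via zero)."""
    x = np.asarray(x, dtype=float)
    cols = [np.ones_like(x), x] + [np.maximum(x - k, 0.0) for k in knots]
    return np.column_stack(cols)
```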
We find that it compares well to a more complex state-of-the-art statistical spline shrinkage approach on a popular example from that literature.Expected utility theory on general affine GARCH modelshttps://zbmath.org/1500.911222023-01-20T17:58:23.823708Z"Escobar-Anel, Marcos"https://zbmath.org/authors/?q=ai:escobar-anel.marcos"Spies, Ben"https://zbmath.org/authors/?q=ai:spies.ben"Zagst, Rudi"https://zbmath.org/authors/?q=ai:zagst.rudiSummary: Expected utility theory has produced abundant analytical results in continuous-time finance, but with very little success for discrete-time models. Assuming the underlying asset price follows a general affine GARCH model which allows for non-Gaussian innovations, our work produces an approximate closed-form recursive representation for the optimal strategy under a constant relative risk aversion (CRRA) utility function. We provide conditions for optimality and demonstrate that the optimal wealth is also an affine GARCH. In particular, we fully develop the application to the IG-GARCH model hence accommodating negatively skewed and leptokurtic asset returns. Relying on two popular daily parametric estimations, our numerical analyses give a first window into the impact of the interaction of heteroscedasticity, skewness and kurtosis on optimal portfolio solutions. 
We find that losses arising from following Gaussian (suboptimal) strategies, or Merton's static solution, can be up to 2.5\% and 5\%, respectively, assuming low risk aversion of the investor and a five-year time horizon.Dynamic quantile function modelshttps://zbmath.org/1500.911292023-01-20T17:58:23.823708Z"Chen, Wilson Ye"https://zbmath.org/authors/?q=ai:chen.wilson-ye"Peters, Gareth W."https://zbmath.org/authors/?q=ai:peters.gareth-william"Gerlach, Richard H."https://zbmath.org/authors/?q=ai:gerlach.richard-h"Sisson, Scott A."https://zbmath.org/authors/?q=ai:sisson.scott-aSummary: Motivated by the need for effectively summarising, modelling, and forecasting the distributional characteristics of intra-daily returns, as well as the recent work on forecasting histogram-valued time-series in the area of symbolic data analysis, we develop a time-series model for forecasting quantile-function-valued (QF-valued) daily summaries for intra-daily returns. We call this model the dynamic quantile function (DQF) model. Instead of a histogram, we propose to use a \(g\)-and-\(h\) quantile function to summarise the distribution of intra-daily returns. We work with a Bayesian formulation of the DQF model in order to make statistical inference while accounting for parameter uncertainty; an efficient MCMC algorithm is developed for sampling-based posterior inference. Using ten international market indices and approximately 2000 days of out-of-sample data from each market, the performance of the DQF model compares favourably, in terms of forecasting VaR of intra-daily returns, against the interval-valued and histogram-valued time-series models. Additionally, we demonstrate that the QF-valued forecasts can be used to forecast VaR measures at the daily timescale via a simple quantile regression model on daily returns (QR-DQF).
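In its usual form (shown for illustration; the parameter names are assumptions, not the paper's notation), the \(g\)-and-\(h\) quantile function transforms a standard normal quantile \(z_p\), with \(g\) controlling skewness and \(h\) controlling tail heaviness:

```python
import math
from statistics import NormalDist

def g_and_h_quantile(p, a=0.0, b=1.0, g=0.0, h=0.0):
    """Q(p) = a + b * T(z) * exp(h * z**2 / 2), where z is the standard
    normal p-quantile, T(z) = (exp(g*z) - 1)/g for g != 0 and T(z) = z
    in the limit g -> 0; g = h = 0 recovers the normal quantile a + b*z."""
    z = NormalDist().inv_cdf(p)
    core = z if g == 0.0 else (math.exp(g * z) - 1.0) / g
    return a + b * core * math.exp(h * z * z / 2.0)
```

Fitting \((a, b, g, h)\) to each day's intra-daily returns then yields the QF-valued daily summary that the DQF model forecasts.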
In certain markets, the resulting QR-DQF model is able to provide competitive VaR forecasts for daily returns.Model-based approach for scenario design: stress test severity and banks' resiliencyhttps://zbmath.org/1500.911452023-01-20T17:58:23.823708Z"Barbieri, Paolo Nicola"https://zbmath.org/authors/?q=ai:barbieri.paolo-nicola"Lusignani, Giuseppe"https://zbmath.org/authors/?q=ai:lusignani.giuseppe"Prosperi, Lorenzo"https://zbmath.org/authors/?q=ai:prosperi.lorenzo"Zicchino, Lea"https://zbmath.org/authors/?q=ai:zicchino.leaSummary: After the financial crisis, evaluating the financial health of banks under stressed scenarios has become common practice among supervisors. According to supervisory guidelines, the adverse scenarios prepared for stress tests need to be severe but plausible. The first contribution of this paper is to propose a model-based approach to assess the severity of the scenarios. To do so, we use a large Bayesian VAR model estimated on the Italian economy where potential spillovers between the macroeconomy and the aggregate banking sector are explicitly considered. We show that the 2018 exercise has been the most severe so far, in particular, due to the path of GDP, the stock market index and the 3-month Euribor rate. Our second contribution is an evaluation of whether the resilience of the Italian banking sector to adverse scenarios has increased over time (for example, thanks to improved risk management practices induced by greater awareness of risks that come with performing stress test exercises). To this scope, we construct counterfactual exercises by recalibrating the scenarios of the 2016 and 2018 exercises so that they have the same severity as the 2014 exercise. We find that in 2018, the economy would have experienced a smaller decline in loans compared to the previous exercises. This implies that banks could afford to deleverage less, i.e. maintain a higher exposure to risk in their balance sheets. 
We interpret this as evidence of increased resilience.Vulnerability-CoVaR: investigating the crypto-markethttps://zbmath.org/1500.911482023-01-20T17:58:23.823708Z"Waltz, Martin"https://zbmath.org/authors/?q=ai:waltz.martin"Singh, Abhay Kumar"https://zbmath.org/authors/?q=ai:singh.abhay-kumar"Okhrin, Ostap"https://zbmath.org/authors/?q=ai:okhrin.ostapSummary: This paper proposes an important extension to conditional value-at-risk (CoVaR), the popular systemic risk measure, and investigates its properties on the cryptocurrency market. The proposed vulnerability-CoVaR (VCoVaR) is defined as the value-at-risk (VaR) of a financial system or institution, given that at least one other institution is equal or below its VaR. The VCoVaR relaxes normality assumptions and is estimated via copula. While important theoretical findings of the measure are detailed, the empirical study analyses how different distressing events of the cryptocurrencies impact the risk level of each other. The results show that litecoin displays the largest impact on bitcoin and that each cryptocurrency is significantly affected if an event of joint distress among the remaining market participants occurs. 
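A minimal Monte Carlo reading of this definition can be sketched as follows (an illustrative empirical estimator only; the paper's actual estimation is copula-based):

```python
import numpy as np

def vcovar(y, others, alpha=0.05):
    """Empirical VCoVaR: the alpha-quantile (VaR) of y restricted to
    scenarios in which at least one of the other series is at or below
    its own marginal alpha-quantile (its VaR)."""
    y = np.asarray(y, dtype=float)
    others = np.asarray(others, dtype=float)       # shape (k, n)
    tails = np.quantile(others, alpha, axis=1, keepdims=True)
    cond = (others <= tails).any(axis=0)           # "at least one in distress"
    return np.quantile(y[cond], alpha)
```

With positively dependent series, the conditioning shifts the distribution of \(y\) downward, so the VCoVaR sits below the unconditional VaR.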
The VCoVaR is shown to capture domino effects better than other CoVaR extensions.PathOGiST: a novel method for clustering pathogen isolates by combining multiple genotyping signalshttps://zbmath.org/1500.920772023-01-20T17:58:23.823708Z"Katebi, Mohsen"https://zbmath.org/authors/?q=ai:katebi.mohsen"Feijao, Pedro"https://zbmath.org/authors/?q=ai:feijao.pedro"Booth, Julius"https://zbmath.org/authors/?q=ai:booth.julius"Mansouri, Mehrdad"https://zbmath.org/authors/?q=ai:mansouri.mehrdad"La, Sean"https://zbmath.org/authors/?q=ai:la.sean"Sweeten, Alex"https://zbmath.org/authors/?q=ai:sweeten.alex"Miraskarshahi, Reza"https://zbmath.org/authors/?q=ai:miraskarshahi.reza"Nguyen, Matthew"https://zbmath.org/authors/?q=ai:nguyen.matthew"Wong, Johnathan"https://zbmath.org/authors/?q=ai:wong.johnathan"Hsiao, William"https://zbmath.org/authors/?q=ai:hsiao.william"Chauve, Cedric"https://zbmath.org/authors/?q=ai:chauve.cedric"Chindelevitch, Leonid"https://zbmath.org/authors/?q=ai:chindelevitch.leonidSummary: In this paper we study the problem of clustering bacterial isolates into epidemiologically related groups from next-generation sequencing data. Existing methods for this problem mainly use a single genotyping signal, and either use a distance-based method with a pre-specified number of clusters, or a phylogenetic tree-based method with a pre-specified threshold. We propose PathOGiST, an algorithmic framework for clustering bacterial isolates by leveraging multiple genotypic signals and calibrated thresholds. PathOGiST uses different genotypic signals, clusters the isolates based on these individual signals with correlation clustering, and combines the clusterings based on the individual signals through consensus clustering. We implemented and tested PathOGiST on three different bacterial pathogens -- \textit{Escherichia coli}, \textit{Yersinia pseudotuberculosis}, and \textit{Mycobacterium tuberculosis} -- and we conclude by discussing further avenues to explore.
For the entire collection see [Zbl 1496.92004].Stratified test alleviates batch effects in single-cell datahttps://zbmath.org/1500.920782023-01-20T17:58:23.823708Z"Liang, Shaoheng"https://zbmath.org/authors/?q=ai:liang.shaoheng"Liang, Qingnan"https://zbmath.org/authors/?q=ai:liang.qingnan"Chen, Rui"https://zbmath.org/authors/?q=ai:chen.rui"Chen, Ken"https://zbmath.org/authors/?q=ai:chen.kenSummary: Analyzing single-cell sequencing data across batches is challenging. We find that the van Elteren test, a stratified version of the Wilcoxon rank-sum test, elegantly mitigates the problem. We also modified the common language effect size to supplement this test, further improving its utility. On both simulated and real patient data we show the ability of the van Elteren test to control for false positives and false negatives. The effect size also estimates the differences between cell types more accurately.
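The van Elteren statistic is a weighted combination of per-stratum Wilcoxon rank-sum statistics; a compact sketch (a minimal illustration assuming the design-free weights and no ties, not the authors' implementation):

```python
import numpy as np

def _midranks(v):
    """Ranks with ties averaged (midranks)."""
    order = np.argsort(v, kind="stable")
    ranks = np.empty(len(v))
    sv = v[order]
    i = 0
    while i < len(v):
        j = i
        while j + 1 < len(v) and sv[j + 1] == sv[i]:
            j += 1
        ranks[order[i:j + 1]] = (i + j) / 2 + 1
        i = j + 1
    return ranks

def van_elteren(strata):
    """Stratified Wilcoxon rank-sum (van Elteren) statistic.

    strata: list of (x, y) pairs, one per stratum (e.g. per batch).
    Returns a chi-square(1) statistic; the variance formula in the
    denominator assumes no ties within a stratum.
    """
    num, den = 0.0, 0.0
    for x, y in strata:
        x, y = np.asarray(x, float), np.asarray(y, float)
        m, n = len(x), len(y)
        ranks = _midranks(np.concatenate([x, y]))
        w = 1.0 / (m + n + 1)              # "design-free" van Elteren weight
        W = ranks[:m].sum()                # rank sum of the first group
        num += w * (W - m * (m + n + 1) / 2.0)
        den += w * w * m * n * (m + n + 1) / 12.0
    return num * num / den
```

Large values, referred to a \(\chi^2(1)\) distribution, indicate a shift between the two groups that is consistent across strata (batches).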
For the entire collection see [Zbl 1496.92004].A topological data analysis approach on predicting phenotypes from gene expression datahttps://zbmath.org/1500.920792023-01-20T17:58:23.823708Z"Mandal, Sayan"https://zbmath.org/authors/?q=ai:mandal.sayan"Guzmán-Sáenz, Aldo"https://zbmath.org/authors/?q=ai:guzman-saenz.aldo"Haiminen, Niina"https://zbmath.org/authors/?q=ai:haiminen.niina"Basu, Saugata"https://zbmath.org/authors/?q=ai:basu.saugata"Parida, Laxmi"https://zbmath.org/authors/?q=ai:parida.laxmiSummary: The goal of this study was to investigate if gene expression measured from RNA sequencing contains enough signal to separate healthy and afflicted individuals in the context of phenotype prediction. We observed that standard machine learning methods alone performed somewhat poorly on the disease phenotype prediction task; therefore we devised an approach augmenting machine learning with topological data analysis.
We describe a framework for predicting phenotype values by utilizing gene expression data transformed into sample-specific topological signatures by employing feature subsampling and persistent homology. The topological data analysis approach developed in this work yielded improved results on Parkinson's disease phenotype prediction when measured against standard machine learning methods.
This study confirms that gene expression can be a useful indicator of the presence or absence of a condition, and the subtle signal contained in this high dimensional data reveals itself when considering the intricate topological connections between expressed genes.
For the entire collection see [Zbl 1496.92004].