Recent zbMATH articles in MSC 68https://zbmath.org/atom/cc/682023-01-20T17:58:23.823708ZWerkzeugA probabilistic logic between \(LPP_1\) and \(LPP_2\)https://zbmath.org/1500.030092023-01-20T17:58:23.823708Z"Dautović, Šejla"https://zbmath.org/authors/?q=ai:dautovic.sejlaThis short paper introduces a logic, called LPP\(_{\frac{3}{2}}\), that sits between LPP\(_1\) and LPP\(_2\). Background on the two previously introduced logics and the overall approach can be found in [\textit{Z. Ognjanović} et al., Probability logics. Probability-based formalization of uncertain reasoning. Cham: Springer (2016; Zbl 1371.03001)].
The basic logic is in all three cases propositional. Probabilistic operators of the form \(P_{\geq r}\alpha\) are then introduced with the following intended interpretation: the probability of the formula \(\alpha\) is greater than or equal to \(r\). The logic introduced here allows the mixing of classical and probabilistic formulae but does not permit the iteration of probabilistic operators. In this sense, the new logic sits between LPP\(_1\) and LPP\(_2\). After introducing and briefly motivating the logic, the paper gives the main characteristics of the new logic:
\begin{itemize}
\item[1.] The LPP\(_{\frac{3}{2}}\) satisfiability problem is NP-complete.
\item[2.] An axiomatisation scheme comprising six axioms and three rules of inference is sound and strongly complete for this logic.
\end{itemize}
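To make the syntactic restriction concrete (the example formulas below are the reviewer's own illustration, not taken from the paper): classical and probabilistic formulae may be combined by Boolean connectives, but probabilistic operators may not be nested.

```latex
% allowed in LPP_{3/2}: mixing a classical and a probabilistic formula
\alpha \land P_{\geq \frac{1}{2}}(\alpha \rightarrow \beta)
% not allowed: an iterated probabilistic operator
P_{\geq \frac{1}{2}}\, P_{\geq \frac{1}{3}}\, \alpha
```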
Reviewer: Jürgen Landes (München)On rainbow mean colorings of treeshttps://zbmath.org/1500.050182023-01-20T17:58:23.823708Z"Hallas, James"https://zbmath.org/authors/?q=ai:hallas.james"Salehi, Ebrahim"https://zbmath.org/authors/?q=ai:salehi.ebrahim"Zhang, Ping"https://zbmath.org/authors/?q=ai:zhang.ping.1Summary: A mean coloring of a connected graph \(G\) of order 3 or more is an edge coloring of \(G\) with positive integers such that the mean of the colors of the edges incident with every vertex is an integer. The associated color of a vertex is its chromatic mean. If distinct vertices have distinct chromatic means, then the edge coloring is a rainbow mean coloring of \(G\). The maximum vertex color in a rainbow mean coloring \(c\) is the rainbow mean index of \(c\), while the rainbow mean index of the graph \(G\) is the minimum rainbow mean index among all rainbow mean colorings of \(G\). In this paper, rainbow mean colorings of trees are investigated.
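These definitions are easy to check mechanically. Below is a minimal sketch (the function names are the reviewer's own) that verifies a rainbow mean coloring of a short path and reads off its rainbow mean index.

```python
from fractions import Fraction

def chromatic_means(edges, coloring):
    """Chromatic mean of each vertex: the mean of the colors of its incident edges."""
    incident = {}
    for e in edges:
        for v in e:
            incident.setdefault(v, []).append(coloring[e])
    return {v: Fraction(sum(cols), len(cols)) for v, cols in incident.items()}

def is_rainbow_mean(edges, coloring):
    """True if every chromatic mean is an integer and all means are distinct."""
    means = chromatic_means(edges, coloring)
    if any(m.denominator != 1 for m in means.values()):
        return False  # some chromatic mean is not an integer
    return len(set(means.values())) == len(means)  # all means distinct

# Path 0-1-2-3 with edge colors 1, 3, 5
edges = [(0, 1), (1, 2), (2, 3)]
coloring = {(0, 1): 1, (1, 2): 3, (2, 3): 5}
```

Here the chromatic means are 1, 2, 4 and 5: all integers and pairwise distinct, so this is a rainbow mean coloring with rainbow mean index 5.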
For the entire collection see [Zbl 1495.05003].Role coloring bipartite graphshttps://zbmath.org/1500.050202023-01-20T17:58:23.823708Z"Pandey, Sukanya"https://zbmath.org/authors/?q=ai:pandey.sukanya"Sahlot, Vibha"https://zbmath.org/authors/?q=ai:sahlot.vibhaThe \(k\)-role colouring problem has as input an undirected graph \(G\) and a positive integer \(k\). The question is then whether there is a surjective function \(\alpha: V(G) \to \{1,2,\ldots,k\}\) such that if \(\alpha(u)=\alpha(v)\) then, letting \(N(w)\) denote the neighbourhood of a vertex \(w\) as usual, we have \(\alpha(N(u))=\alpha(N(v))\). This notion apparently has applications in social science. Previous work by various groups of authors had shown that for general graphs this problem is polynomial-time soluble when \(k=1\) and is NP-complete for \(k\geq 2\).
The aim of the paper under review is to study what happens for the class of bipartite graphs. The \(k\)-role colouring problem is trivial for \(k\leq 2\) for this class. The main contribution of the paper is to show that the \(k\)-role colouring problem is NP-complete on bipartite graphs for fixed \(k\geq 3\). The paper also gives a characterisation of so-called bipartite chain graphs which are 3-role-colourable and shows that 2-role colouring is NP-complete on the class of ``almost bipartite graphs''.
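Verifying the defining condition for a given candidate colouring is straightforward (the hardness lies in deciding whether one exists); a minimal sketch, with names chosen by the reviewer:

```python
def is_role_coloring(adj, alpha, k):
    """Check that alpha is a surjective k-role colouring: vertices with the
    same colour must see exactly the same set of colours in their
    neighbourhoods, i.e. alpha(u) = alpha(v) implies alpha(N(u)) = alpha(N(v))."""
    if set(alpha.values()) != set(range(1, k + 1)):
        return False  # alpha must be onto {1, ..., k}
    role_nbhd = {}
    for v, colour in alpha.items():
        seen = frozenset(alpha[u] for u in adj[v])
        if role_nbhd.setdefault(colour, seen) != seen:
            return False
    return True

# The 4-cycle 0-1-2-3-0 with its proper 2-colouring
adj = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
alpha = {0: 1, 1: 2, 2: 1, 3: 2}
```

On the 4-cycle, the proper 2-colouring is a 2-role colouring, consistent with the problem being trivial for \(k\leq 2\) on bipartite graphs.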
Reviewer: David B. Penman (Colchester)On the extraconnectivity of arrangement graphshttps://zbmath.org/1500.050292023-01-20T17:58:23.823708Z"Cheng, Eddie"https://zbmath.org/authors/?q=ai:cheng.eddie"Lipták, László"https://zbmath.org/authors/?q=ai:liptak.laszlo"Tian, Daniel"https://zbmath.org/authors/?q=ai:tian.danielSummary: Extraconnectivity generalizes the concept of connectivity of a graph but it is more difficult to compute. In this note, we compute the \(g\)-extraconnectivity of the arrangement graph for small \(g\) (with \(g\le 6)\) with the help of a computer program. In addition, we provide an asymptotic result for general \(g\).
For the entire collection see [Zbl 1495.05003].On double signal number of a graphhttps://zbmath.org/1500.050302023-01-20T17:58:23.823708Z"Xaviour, X. Lenin"https://zbmath.org/authors/?q=ai:xaviour.x-lenin"Mary, S. Ancy"https://zbmath.org/authors/?q=ai:mary.s-ancySummary: A set \(S\) of vertices in a connected graph \({G=(V,E)}\) is called a signal set if every vertex not in \(S\) lies on a signal path between two vertices from \(S\). A set \(S\) is called a double signal set of \(G\) if for each pair of vertices \(x,y \in G\) there exist \(u,v \in S\) such that \(x,y \in L[u,v]\). The double signal number \(\operatorname{dsn}(G)\) of \(G\) is the minimum cardinality of a double signal set. Any double signal set of cardinality \(\operatorname{dsn}(G)\) is called a \(\operatorname{dsn}\)-set of \(G\). In this paper we introduce and initiate the study of some properties of the double signal number of a graph. We also give relations between the geodetic number, the signal number and the double signal number for some classes of graphs.On the energy of transposition graphshttps://zbmath.org/1500.050342023-01-20T17:58:23.823708Z"DeDeo, M. R."https://zbmath.org/authors/?q=ai:dedeo.michelle-rSummary: We analyze and compare properties of Cayley graphs of permutation groups called transposition graphs, as this family of graphs has better degree and diameter properties than other families of graphs. Cayley graphs are directly related to the properties of their generator sets, and thus Cayley graphs of permutation groups generated by transpositions inherit almost all of the properties of the hypercube. In particular, we study properties of the complete transposition graph, the (transposition) star graph, the bubble-sort graph, the modified bubble-sort graph and the binary hypercube and use these properties to determine bounds on the energy of these graphs.
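The summary does not recall the definition of graph energy; assuming the standard one (the sum of the absolute values of the adjacency eigenvalues), a short sketch computes it for the binary hypercube mentioned above.

```python
import itertools
import numpy as np

def graph_energy(adjacency):
    """Graph energy: sum of the absolute values of the adjacency eigenvalues."""
    return float(np.sum(np.abs(np.linalg.eigvalsh(adjacency))))

def hypercube(n):
    """Adjacency matrix of the binary hypercube Q_n (vertices = 0/1 strings,
    edges between strings at Hamming distance one)."""
    verts = list(itertools.product((0, 1), repeat=n))
    a = np.zeros((len(verts), len(verts)))
    for i, u in enumerate(verts):
        for j, v in enumerate(verts):
            if sum(x != y for x, y in zip(u, v)) == 1:
                a[i, j] = 1
    return a
```

For \(Q_3\) the adjacency spectrum is \(\pm 3\) (once each) and \(\pm 1\) (three times each), so the energy is \(3+3\cdot 1+3\cdot 1+3=12\).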
For the entire collection see [Zbl 1495.05003].An exact bound on the number of chips of parallel chip-firing games that stabilizehttps://zbmath.org/1500.050422023-01-20T17:58:23.823708Z"Bu, Alan"https://zbmath.org/authors/?q=ai:bu.alan"Choi, Yunseo"https://zbmath.org/authors/?q=ai:choi.yunseo"Xu, Max"https://zbmath.org/authors/?q=ai:xu.max-wenqiangSummary: \textit{P. M. Kominers} and \textit{S. D. Kominers} [ibid. 95, No. 1, 9--13 (2010; Zbl 1230.05212)] showed that any parallel chip-firing game on \(G(V, E)\) with at least \(4|E|-|V|\) chips stabilizes with an eventual period of length 1. We make this bound exact: we prove that any parallel chip-firing game with more than \(3|E|-|V|\) or fewer than \(|E|\) chips must stabilize and that for every number of chips within this range, there exists some parallel chip-firing game with that many chips that does not stabilize. In addition, as do Kominers and Kominers [loc. cit.], we provide an upper bound on the number of rounds before the game stabilizes.Positive integer powers of certain tridiagonal matrices and corresponding anti-tridiagonal matriceshttps://zbmath.org/1500.150042023-01-20T17:58:23.823708Z"Beiranvand, Mohammad"https://zbmath.org/authors/?q=ai:beiranvand.mohammad"Kamalvand, Mojtaba Ghasemi"https://zbmath.org/authors/?q=ai:kamalvand.mojtaba-ghasemiSummary: In this paper, we first derive a general expression for the entries of the \(m\)th power \((m\in\mathbb{N})\) of two certain types of tridiagonal matrices of arbitrary order. Secondly, we present a method for computing the positive integer powers of the anti-tridiagonal matrix corresponding to these matrices.
Also, we give Maple 18 procedures in order to verify our calculations.Universal points in the asymptotic spectrum of tensorshttps://zbmath.org/1500.150202023-01-20T17:58:23.823708Z"Christandl, Matthias"https://zbmath.org/authors/?q=ai:christandl.matthias"Vrana, Péter"https://zbmath.org/authors/?q=ai:vrana.peter"Zuiddam, Jeroen"https://zbmath.org/authors/?q=ai:zuiddam.jeroenSummary: Motivated by the problem of constructing fast matrix multiplication algorithms, \textit{V. Strassen} [J. Reine Angew. Math. 384, 102--152 (1988; Zbl 0631.68033); J. Reine Angew. Math. 375/376, 406--443 (1987; Zbl 0621.68026)] introduced and developed the theory of asymptotic spectra of tensors. For any sub-semiring \(\mathcal{X}\) of tensors (under direct sum and tensor product), the duality theorem that is at the core of this theory characterizes basic asymptotic properties of the elements of \(\mathcal{X}\) in terms of the asymptotic spectrum of \(\mathcal{X} \), which is defined as the collection of semiring homomorphisms from \(\mathcal{X}\) to the non-negative reals with a natural monotonicity property. The asymptotic properties characterized by this duality encompass fundamental problems in complexity theory, combinatorics and quantum information. Universal spectral points are elements in the asymptotic spectrum of the semiring of all tensors. Finding all universal spectral points suffices to find the asymptotic spectrum of any sub-semiring. The construction of non-trivial universal spectral points has been an open problem for more than thirty years. We construct, for the first time, a family of non-trivial universal spectral points over the complex numbers, called quantum functionals. We moreover prove that the quantum functionals precisely characterise the asymptotic slice rank of complex tensors. 
Our construction, which relies on techniques from quantum information theory and representation theory, connects the asymptotic spectrum of tensors to the quantum marginal problem and entanglement polytopes.A logic based approach to finding real singularities of implicit ordinary differential equationshttps://zbmath.org/1500.340152023-01-20T17:58:23.823708Z"Seiler, Werner M."https://zbmath.org/authors/?q=ai:seiler.werner-m"Seiß, Matthias"https://zbmath.org/authors/?q=ai:seiss.matthias"Sturm, Thomas"https://zbmath.org/authors/?q=ai:sturm.thomasSummary: We discuss the effective computation of geometric singularities of implicit ordinary differential equations over the real numbers using methods from logic. Via the Vessiot theory of differential equations, geometric singularities can be characterised as points where the behaviour of a certain linear system of equations changes. These points can be discovered using a specifically adapted parametric generalisation of Gaussian elimination combined with heuristic simplification techniques and real quantifier elimination methods. We demonstrate the relevance and applicability of our approach with computational experiments using a prototypical implementation in \textsc{Reduce}.DNN expression rate analysis of high-dimensional PDEs: application to option pricinghttps://zbmath.org/1500.350092023-01-20T17:58:23.823708Z"Elbrächter, Dennis"https://zbmath.org/authors/?q=ai:elbrachter.dennis"Grohs, Philipp"https://zbmath.org/authors/?q=ai:grohs.philipp"Jentzen, Arnulf"https://zbmath.org/authors/?q=ai:jentzen.arnulf"Schwab, Christoph"https://zbmath.org/authors/?q=ai:schwab.christophSummary: We analyze approximation rates by deep ReLU networks of a class of multivariate solutions of Kolmogorov equations which arise in option pricing. Key technical devices are deep ReLU architectures capable of efficiently approximating tensor products. 
Combined with results concerning the approximation of well-behaved (i.e., fulfilling some smoothness properties) univariate functions, this provides insights into rates of deep ReLU approximation of multivariate functions with tensor structures. We apply this in particular to the model problem given by the price of a European maximum option on a basket of \(d\) assets within the Black-Scholes model. We prove that the solution to the \(d\)-variate option pricing problem can be approximated up to an \(\varepsilon\)-error by a deep ReLU network with depth \(\mathcal{O}\big(\ln(d)\ln(\varepsilon^{-1})+\ln(d)^2\big)\) and \(\mathcal{O}\big(d^{2+\frac{1}{n}}\varepsilon^{-\frac{1}{n}}\big)\) nonzero weights, where \(n\in\mathbb{N}\) is arbitrary (with the constant implied in \(\mathcal{O}(\cdot)\) depending on \(n)\). The techniques developed in the constructive proof are of independent interest in the analysis of the expressive power of deep neural networks for solution manifolds of PDEs in high dimension.Concurrent multiparameter learning demonstrated on the Kuramoto-Sivashinsky equationhttps://zbmath.org/1500.352272023-01-20T17:58:23.823708Z"Pachev, Benjamin"https://zbmath.org/authors/?q=ai:pachev.benjamin"Whitehead, Jared P."https://zbmath.org/authors/?q=ai:whitehead.jared-p"McQuarrie, Shane A."https://zbmath.org/authors/?q=ai:mcquarrie.shane-aSummary: We develop an algorithm based on the nudging data assimilation scheme for the concurrent (on-the-fly) estimation of scalar parameters for a system of evolutionary dissipative partial differential equations in which the state is partially observed. The algorithm takes advantage of the error that results from nudging a system with incorrect parameters with data from the true system. The intuitive nature of the algorithm makes its extension to several different systems immediate, and it allows for recovery of multiple parameters simultaneously.
We test the method on the Kuramoto-Sivashinsky equation in one dimension and demonstrate its efficacy in this context.Constructive deep ReLU neural network approximationhttps://zbmath.org/1500.410022023-01-20T17:58:23.823708Z"Herrmann, Lukas"https://zbmath.org/authors/?q=ai:herrmann.lukas"Opschoor, Joost A. A."https://zbmath.org/authors/?q=ai:opschoor.joost-a-a"Schwab, Christoph"https://zbmath.org/authors/?q=ai:schwab.christophSummary: We propose an efficient, \textit{deterministic algorithm for constructing exponentially convergent deep neural network (DNN) approximations} of multivariate, analytic maps \(f:[-1,1]^K\rightarrow\mathbb{R}\). We address in particular networks with the rectified linear unit (ReLU) activation function. Similar results and proofs apply for many other popular activation functions. The algorithm is based on collocating \(f\) in deterministic families of grid points with small Lebesgue constants, and by a-priori (i.e., ``offline'') emulation of a spectral basis with DNNs to prescribed fidelity. Assuming availability of \(N\) function values of a possibly corrupted, numerical approximation \(\breve{f}\) of \(f\) in \([-1,1]^K\) and a bound on \(\Vert f-\breve{f} \Vert_{L^\infty ({[-1,1]^K})} \), we provide an explicit, computational construction of a ReLU DNN which attains accuracy \(\varepsilon\) (depending on \(N\) and \(\Vert f-\breve{f} \Vert_{L^\infty ({[-1,1]^K})})\) uniformly, with respect to the inputs. For analytic maps \(f: [-1,1]^K\rightarrow\mathbb{R}\), we prove \textit{exponential convergence of expression and generalization errors} of the constructed ReLU DNNs. 
Specifically, for every target accuracy \(\varepsilon \in (0,1)\), there exists \(N\) depending also on \(f\) such that the error of the construction algorithm with \(N\) evaluations of \(\breve{f}\) as input in the norm \(L^\infty ([-1,1]^K;\mathbb{R})\) is smaller than \(\varepsilon\) up to an additive data-corruption bound \(\Vert f-\breve{f} \Vert_{L^\infty ({[-1,1]^K})}\) multiplied by a factor growing slowly with \(1/\varepsilon\); the number of non-zero DNN weights grows polylogarithmically with respect to \(1/\varepsilon\). The algorithmic construction of the ReLU DNNs which realize the approximations is explicit and deterministic in terms of the function values of \(\breve{f}\) in tensorized Clenshaw-Curtis grids in \([-1,1]^K\). We illustrate the proposed methodology by a constructive algorithm for (offline) computations of posterior expectations in Bayesian PDE inversion.Exponential ReLU DNN expression of holomorphic maps in high dimensionhttps://zbmath.org/1500.410082023-01-20T17:58:23.823708Z"Opschoor, J. A. A."https://zbmath.org/authors/?q=ai:opschoor.joost-a-a"Schwab, Ch."https://zbmath.org/authors/?q=ai:schwab.christoph"Zech, J."https://zbmath.org/authors/?q=ai:zech.jakobSummary: For a parameter dimension \(d\in\mathbb{N}\), we consider the approximation of many-parametric maps \(u:[-\,1,1]^d\rightarrow\mathbb{R}\) by deep ReLU neural networks. The input dimension \(d\) may possibly be large, and we assume quantitative control of the domain of holomorphy of \(u\): i.e., \(u\) admits a holomorphic extension to a Bernstein polyellipse \(\mathcal{E}_{\rho_1}\times\cdots\times\mathcal{E}_{\rho_d}\subset\mathbb{C}^d\) of semiaxis sums \(\rho_i> 1\) containing \([-\,1,1]^d\). We establish the exponential rate \(O(\exp(-bN^{1/(d+1)}))\) of expressive power in terms of the total NN size \(N\) and of the input dimension \(d\) of the ReLU NN in \(W^{1,\infty}([-1,1]^d)\).
The constant \(b> 0\) depends on \((\rho_j)_{j=1}^d\), which characterizes the coordinate-wise sizes of the Bernstein-ellipses for \(u\). We also prove exponential convergence in stronger norms for the approximation by DNNs with more regular, so-called ``rectified power unit'' activations. Finally, we extend DNN expression rate bounds also to two classes of non-holomorphic functions, in particular to \(d\)-variate, Gevrey-regular functions, and, by composition, to certain multivariate probability distribution functions with Lipschitz marginals.Chaotic trigonometric Haar wavelet with focus on image encryptionhttps://zbmath.org/1500.420172023-01-20T17:58:23.823708Z"Ahadpour, Sodeif"https://zbmath.org/authors/?q=ai:ahadpour.sodeif"Sadra, Yaser"https://zbmath.org/authors/?q=ai:sadra.yaserThe authors introduce the chaotic trigonometric Haar wavelet transform to study problems of image encryption. The ingredients are the Haar wavelet transform and chaotic trigonometric maps.
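The review names the classical Haar wavelet transform as one ingredient; here is a minimal sketch of one transform level (the chaotic trigonometric modification of the paper is not reproduced):

```python
import numpy as np

def haar_step(x):
    """One level of the orthonormal discrete Haar wavelet transform."""
    x = np.asarray(x, dtype=float)
    pairs = x.reshape(-1, 2)
    s = (pairs[:, 0] + pairs[:, 1]) / np.sqrt(2)  # approximation coefficients
    d = (pairs[:, 0] - pairs[:, 1]) / np.sqrt(2)  # detail coefficients
    return s, d

def haar_inverse_step(s, d):
    """Invert one Haar step (perfect reconstruction)."""
    x = np.empty(2 * len(s))
    x[0::2] = (s + d) / np.sqrt(2)
    x[1::2] = (s - d) / np.sqrt(2)
    return x
```

The transform is orthonormal, so it preserves energy and is exactly invertible, the property a transform-based encryption scheme relies on for lossless decryption.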
Reviewer: Françoise Bastin (Liège)Wavelet neural networks functional approximation and applicationhttps://zbmath.org/1500.420202023-01-20T17:58:23.823708Z"Zeglaoui, Anis"https://zbmath.org/authors/?q=ai:zeglaoui.anis"Ben Mabrouk, Anouar"https://zbmath.org/authors/?q=ai:benmabrouk.anouar|ben-mabrouk.anouar"Kravchenko, Oleg V."https://zbmath.org/authors/?q=ai:kravchenko.oleg-vIn this paper, the authors investigate wavelet neural network approximation, in which neural networks and wavelets are combined to approximate functions. They prove that any element of \(L^p(\mu)\) can be approximated by a combination of wavelet functions.
This paper is organized as follows. In Sec. 2, a review of neural networks and wavelet neural networks is presented. In Sec. 3, wavelet functional approximations are reviewed. Section 4 is devoted to presenting the main results. In Sec. 5, the proofs of the main results are developed. Section 6 presents some experimental results.
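As a toy illustration of approximating a function by a combination of wavelet functions (a generic least-squares fit chosen by the reviewer, not the authors' network architecture or training scheme):

```python
import numpy as np

def mexican_hat(t):
    """The Mexican-hat (Ricker) wavelet, a common wavelet-network activation."""
    return (1.0 - t**2) * np.exp(-t**2 / 2.0)

# Approximate f(x) = sin(x) on [-3, 3] by a linear combination of
# dilated/translated wavelets, with coefficients fitted by least squares.
x = np.linspace(-3.0, 3.0, 200)
f = np.sin(x)
atoms = np.column_stack([
    mexican_hat((x - b) / a)
    for a in (0.5, 1.0, 2.0)          # dilations
    for b in np.linspace(-3.0, 3.0, 7)  # translations
])
coeffs, *_ = np.linalg.lstsq(atoms, f, rcond=None)
max_err = np.max(np.abs(atoms @ coeffs - f))
```

Even this small dictionary of 21 atoms fits \(\sin\) on the interval with small maximal error, illustrating the density result in a concrete, low-dimensional setting.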
Reviewer: Mehdi Rashidi-Kouchi (Kerman)Optimal adaptive control of partially uncertain linear continuous-time systems with state delayhttps://zbmath.org/1500.490192023-01-20T17:58:23.823708Z"Moghadam, Rohollah"https://zbmath.org/authors/?q=ai:moghadam.rohollah"Jagannathan, S."https://zbmath.org/authors/?q=ai:jagannathan.sridhar|jagannathan.sarangapani|jagannathan.shriram|jagannathan.srinivasan|jagannathan.suresh"Narayanan, Vignesh"https://zbmath.org/authors/?q=ai:narayanan.vignesh"Raghavan, Krishnan"https://zbmath.org/authors/?q=ai:raghavan.krishnanSummary: In this chapter, the optimal adaptive control (OAC) of partially unknown linear continuous-time systems with state delay is introduced by using integral reinforcement learning (IRL). A quadratic cost function over infinite time horizon is considered and a value function is defined by considering the delayed state. A delay algebraic Riccati equation (DARE) is obtained using the value functional and utilized along with stationarity condition to obtain optimal control policy. Subsequently, it has been shown that the optimal control policy renders the closed-loop system asymptotically stable under mild conditions when the dynamics are known. Then, to overcome the need for drift dynamics, an actor-critic framework using the IRL approach is introduced for OAC. A novel update law for tuning the parameters of the critic function is derived. Lyapunov theory is employed to demonstrate the boundedness of the closed-loop system when the OAC uses periodic and event sampled feedback. Finally, future work involving an image sensor as part of the OAC loop using deep neural network based RL is reported. A simulation example is included to illustrate the effectiveness of the proposed approach.
For the entire collection see [Zbl 1492.49001].Closed-form geodesics and optimization for Riemannian logarithms of Stiefel and flag manifoldshttps://zbmath.org/1500.530562023-01-20T17:58:23.823708Z"Nguyen, Du"https://zbmath.org/authors/?q=ai:nguyen.du-dinhSummary: We provide two closed-form geodesic formulas for a family of metrics on Stiefel manifolds recently introduced by \textit{K. Hüper} et al. [J. Geom. Mech. 13, No. 1, 55--72 (2021; Zbl 1477.58006)], reparameterized by two positive numbers, having both the embedded and canonical metrics as special cases. The closed-form formulas allow us to compute geodesics by matrix exponential in reduced dimension for low-rank Stiefel manifolds. We follow the approach of minimizing the squared Frobenius distance between a geodesic's ending point and a given point on the manifold to compute the logarithm map and geodesic distance between two endpoints, using Fréchet derivatives to compute the gradient of this objective function. We focus on two optimization methods, \textit{gradient descent} and L-BFGS. This leads to a new framework to compute the geodesic distance for manifolds with known geodesic formula but no closed-form logarithm map. We show the approach works well for Stiefel as well as flag manifolds. The logarithm map could be used to compute the Riemannian center of mass for these manifolds equipped with the above metrics. The method to translate directional derivatives using Fréchet derivatives to a gradient could potentially be applied to other matrix equations.Persistent homology in \(\ell_\infty\) metrichttps://zbmath.org/1500.550032023-01-20T17:58:23.823708Z"Beltramo, Gabriele"https://zbmath.org/authors/?q=ai:beltramo.gabriele"Skraba, Primoz"https://zbmath.org/authors/?q=ai:skraba.primozBased on the \(l_{\infty}\)-metric, the paper examines a number of classical complexes, including the Čech, Vietoris-Rips, and Alpha complexes.
``We define two new families of flag complexes, the Alpha flag and Minibox complexes, and prove their equivalence to Čech complexes in homological degrees zero and one. Moreover, we provide algorithms for finding Minibox edges of two, three, and higher-dimensional points.'' Moreover, using these algorithms, the paper presents computational experiments on random points. This approach can be useful for computing the persistent homology of such complexes.
Reviewer: Sang-Eon Han (Jeonju)Bayesian topological signal processinghttps://zbmath.org/1500.550052023-01-20T17:58:23.823708Z"Oballe, Christopher"https://zbmath.org/authors/?q=ai:oballe.christopher"Cherne, Alan"https://zbmath.org/authors/?q=ai:cherne.alan"Boothe, Dave"https://zbmath.org/authors/?q=ai:boothe.dave"Kerick, Scott"https://zbmath.org/authors/?q=ai:kerick.scott"Franaszczuk, Piotr J."https://zbmath.org/authors/?q=ai:franaszczuk.piotr-j"Maroulas, Vasileios"https://zbmath.org/authors/?q=ai:maroulas.vasileiosThis paper develops an interpretable framework for signal processing via sublevel set homology. It establishes interpretable links between the sublevel set persistence diagrams of signals and their frequency domain, which are used in time series analysis. The authors explore a Bayesian framework for time series classification and find that Bayesian topological features are as useful as other well-known features, such as those from power spectral densities and continuous wavelets. Finally, they apply their results to electroencephalography, a neuroimaging technique wherein electrodes are placed on a subject's head to measure local changes in voltage over time, reported as a collection of time series.
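For a sampled signal, the 0-dimensional sublevel set persistence diagram referred to above can be computed by a standard union-find sweep over the samples in order of increasing value (a generic sketch, not the authors' Bayesian pipeline):

```python
def sublevel_persistence(signal):
    """0-dimensional sublevel-set persistence of a sampled 1-D signal.
    Components of the sublevel sets are born at local minima and die,
    by the elder rule, when they merge into an older component."""
    n = len(signal)
    parent = [None] * n  # union-find forest; None = sample not yet added

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    pairs = []
    for i in sorted(range(n), key=lambda i: signal[i]):
        parent[i] = i
        for j in (i - 1, i + 1):
            if 0 <= j < n and parent[j] is not None:
                ri, rj = find(i), find(j)
                if ri == rj:
                    continue
                # the component whose minimum is higher (younger) dies now
                old, young = (ri, rj) if signal[ri] <= signal[rj] else (rj, ri)
                if signal[young] < signal[i]:  # skip zero-persistence pairs
                    pairs.append((signal[young], signal[i]))
                parent[young] = old
    pairs.append((min(signal), float("inf")))  # the global minimum never dies
    return sorted(pairs)
```

For the signal \([0, 2, 1, 3]\), the local minimum at height 1 is born at 1 and dies at the merge height 2, while the global minimum persists forever.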
Reviewer: Qingyun Zeng (Philadelphia)A view of computational models for image segmentationhttps://zbmath.org/1500.650052023-01-20T17:58:23.823708Z"Antonelli, Laura"https://zbmath.org/authors/?q=ai:antonelli.laura"De Simone, Valentina"https://zbmath.org/authors/?q=ai:de-simone.valentina"di Serafino, Daniela"https://zbmath.org/authors/?q=ai:di-serafino.danielaSummary: Image segmentation is a central topic in image processing and computer vision and a key issue in many applications, e.g., in medical imaging, microscopy, document analysis and remote sensing. According to the human perception, image segmentation is the process of dividing an image into non-overlapping regions. These regions, which may correspond, e.g., to different objects, are fundamental for the correct interpretation and classification of the scene represented by the image. The division into regions is not unique, but it depends on the application, i.e., it must be driven by the final goal of the segmentation and hence by the most significant features with respect to that goal. Thus, image segmentation can be regarded as a strongly ill-posed problem. A classical approach to deal with ill posedness consists in incorporating in the model a-priori information about the solution, e.g., in the form of penalty terms. In this work we provide a brief overview of basic computational models for image segmentation, focusing on edge-based and region-based variational models, as well as on statistical and machine-learning approaches. We also sketch numerical methods that are applied in computing solutions to these models. 
In our opinion, our view can help the readers identify suitable classes of methods for solving their specific problems.Approximating matrix eigenvalues by subspace iteration with repeated random sparsificationhttps://zbmath.org/1500.650142023-01-20T17:58:23.823708Z"Greene, Samuel M."https://zbmath.org/authors/?q=ai:greene.samuel-m"Webber, Robert J."https://zbmath.org/authors/?q=ai:webber.robert-j"Berkelbach, Timothy C."https://zbmath.org/authors/?q=ai:berkelbach.timothy-c"Weare, Jonathan"https://zbmath.org/authors/?q=ai:weare.jonathanTraditional numerical methods for calculating matrix eigenvalues are known to be prohibitively expensive for high-dimensional problems. Iterative random sparsification methods allow for the estimation of a single dominant eigenvalue at reduced cost by exploiting repeated random sampling and averaging. In the present work, a subspace iteration approach with repeated random sparsification for the estimation of multiple dominant eigenvalues is proposed. This is based on a version of subspace iteration with several nonstandard design choices to increase the method's stability under random perturbations. Random perturbations are introduced to the vectors at each iteration to promote sparsity. These perturbations are optimized to have the smallest possible mean square magnitude, while preserving the original vector in expectation. The resulting randomized subspace iteration builds on previous methods for calculating the single dominant eigenvalue within the fast randomized iteration framework. However, the idea of extending from estimating one dominant eigenvalue to multiple dominant eigenvalues is novel; so is the random sparsification technique. The performance of the proposed method is illustrated for several benchmark problems in quantum chemistry.
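The review describes random perturbations that promote sparsity while preserving the iterate in expectation; a generic unbiased sparsifier in that spirit (not the authors' variance-optimized scheme) can be sketched as follows:

```python
import numpy as np

def sparsify(x, m, rng):
    """Unbiased random sparsification: keep entry i with probability p_i
    (at most 1, proportional to |x_i|, targeting about m surviving entries)
    and reweight kept entries by 1/p_i, so that E[sparsify(x)] = x."""
    x = np.asarray(x, dtype=float)
    p = np.minimum(1.0, m * np.abs(x) / np.sum(np.abs(x)))
    keep = rng.random(x.shape) < p
    out = np.zeros_like(x)
    out[keep] = x[keep] / p[keep]  # reweighting makes the estimate unbiased
    return out
```

Averaging many independent sparsified copies recovers the original vector, which is the unbiasedness property the randomized subspace iteration relies on.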
Reviewer: Kanakadurga Sivakumar (Chennai)Numerical approximation to the MEW equation for the single solitary wave and different types of interactions of the solitary waveshttps://zbmath.org/1500.650372023-01-20T17:58:23.823708Z"Başhan, Ali"https://zbmath.org/authors/?q=ai:bashan.ali"Uçar, Yusuf"https://zbmath.org/authors/?q=ai:ucar.yusuf"Yağmurlu, N. Murat"https://zbmath.org/authors/?q=ai:yagmurlu.nuri-murat"Esen, Alaattin"https://zbmath.org/authors/?q=ai:esen.alaattinSummary: The main motivation of the current study is to obtain better approximate solutions of the modified equal width wave (MEW) equation. In order to achieve this aim, the strengths of two numerical methods are combined and an extended literature survey has been carried out. Quartic B-spline base functions have been utilized since the first-order and second-order weighting coefficients that are needed for space discretization are obtained directly. As test problems, twelve different applications of the single solitary wave and four different applications of the interaction between two solitary waves are solved successfully. All of the approximate solutions have been compared to nearly fifty various earlier applications existing in the literature. Also, the rate of convergence is given along with error norms. Comparisons show that the current method obtains better results than most of the common earlier methods.Computational intelligence. A methodological introduction.
With contributions from Frank Klawonn and Christian Moeweshttps://zbmath.org/1500.680012023-01-20T17:58:23.823708Z"Kruse, Rudolf"https://zbmath.org/authors/?q=ai:kruse.rudolf"Mostaghim, Sanaz"https://zbmath.org/authors/?q=ai:mostaghim.sanaz"Borgelt, Christian"https://zbmath.org/authors/?q=ai:borgelt.christian"Braune, Christian"https://zbmath.org/authors/?q=ai:braune.christian"Steinbrecher, Matthias"https://zbmath.org/authors/?q=ai:steinbrecher.matthiasThe book presents a thorough exposition of the main concepts of computational intelligence. It is divided into four parts: neural networks, evolutionary algorithms, fuzzy systems and Bayesian networks, all of which are very well covered. The book is self-contained: all the notions necessary for understanding the concepts are included. Moreover, the four parts are independent, so the reader may study any single part without needing to read the others.
The book has plenty of examples that make the understanding of the concepts easier, contains high-quality figures that present various problems, representations or results obtained from different simulations, and many algorithms written in pseudocode. Each chapter has its separate bibliography section.
It is an interesting book that may serve a wide audience very well, providing material for researchers and students as well as people working in industry.
For the preceding editions see [Zbl 1283.68280; Zbl 1358.68003].
Reviewer: Catalin Stoean (Craiova)The art of algorithm designhttps://zbmath.org/1500.680022023-01-20T17:58:23.823708Z"Mohanty, Sachi Nandan"https://zbmath.org/authors/?q=ai:mohanty.sachi-nandan"Tripathy, Pabitra Kumar"https://zbmath.org/authors/?q=ai:tripathy.pabitra-kumar"Satpathy, Suneeta"https://zbmath.org/authors/?q=ai:satpathy.suneetaThis book is a new contender in the crowded market of textbooks on algorithm design and analysis. With about 300 pages it is much less extensive than [Introduction to algorithms. 2nd ed. Cambridge, MA: MIT Press (2001; Zbl 1047.68161)] by \textit{T. H. Cormen} et al. but still aims for a succinct and feasible introduction to the key concepts of algorithm design. It largely follows the canon of a first course on algorithms, omitting, e.g., advanced data structures and network flows, and wraps it up with a brief sampler of methods for hard problems, matrices, polynomials and FFT, and number-theoretic algorithms. This is combined with a strong focus on detailed examples and many, many pages of specific source code in C. One omission from the foundational canon, perhaps, is that no type of binary search tree is covered. Still, for that first course, the selection of content in Chapters 1 through 6 is reasonable. I'm a little torn about Chapters 7 through 9 because, despite the nice glimpse beyond, the authors present a change of pace as well as a jump in content. I wonder if more content in the style of the earlier chapters would have told a more cohesive story. Perhaps a future second edition would (like for most books) benefit from some touches on formatting, math typesetting, and images.
I think that the book best serves future engineers or students who wish both to learn programming basics and get a first taste of algorithm design while doing so. Because of the wealth of examples and detailed code, I could see the book as a valid choice for self-study, while a lecturer would otherwise provide many such examples on the board. I also expect that it would be a great help for students who have a hard time grasping the translation of abstract concepts into code or who simply desire a larger selection of examples of code that does something useful in order to boost their learning of programming. For aspiring theoreticians or for students successfully past their first programming course, I'd say that there are more suitable choices on the market, either offering more theorems and proofs, or offering insights on implementing advanced algorithms. But when have we had a book that serves everyone?
Reviewer: Stefan Kratsch (Berlin)Mathematics for computer graphicshttps://zbmath.org/1500.680032023-01-20T17:58:23.823708Z"Vince, John"https://zbmath.org/authors/?q=ai:vince.john-aThese days nobody can imagine a world without computer graphics. It is omnipresent, from simple animations up to AI applications in computer vision. However, behind computer graphics stands a wide range of advanced mathematics to be understood. It is a challenge to explain this theory in an easy-to-follow way, but this book shows that it is possible. It consists of twenty chapters. The first five chapters cover a general introduction to the topic, including number sets, algebra, trigonometry and coordinate systems. This basis is applied in the following chapters treating determinants, vector and matrix algebra, complex numbers, geometric transforms, quaternion algebra, quaternions in space, interpolation, curves, patches, analytical geometry and barycentric coordinates. Then, the more modern subject of geometric algebra is presented, followed by differential and integral calculus. The last chapter presents many worked examples. This sixth edition is full of worked examples and colour illustrations and is therefore very practical and easy to follow. It can thus serve as a base reference for those making their first steps into computer graphics. It can be used both as a textbook for a computer graphics course and for self-study by practitioners and starting researchers alike.
For the fifth edition see [Zbl 1375.68005].
Reviewer: Agnieszka Lisowska (Sosnowiec)An introduction to artificial intelligence based on reproducing kernel Hilbert spaceshttps://zbmath.org/1500.680042023-01-20T17:58:23.823708Z"Pereverzyev, Sergei"https://zbmath.org/authors/?q=ai:pereverzev.sergei-vThis is a very beautiful book; however, some mathematical knowledge is required. It deals with one of the most important subjects in supervised statistical machine learning, namely learning with regularization. Regularization is explored with the aid of kernels, and different methods are explained and investigated.
The book is composed of five chapters with an additional index. The first chapter introduces learning algorithms with kernels in reproducing kernel Hilbert spaces. For simplicity, the quality of learning algorithms is measured by the squared error. Chapter two deals with the formulation and derivation of the learning problem in the form of Fredholm integral equations. Chapter three deals with concepts and techniques of regularization theory. Chapter four contains examples of how the concept of reproducing kernel Hilbert spaces and regularization theory can contribute to supervised machine learning. Chapter five presents results on real data sets.
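The regularized kernel learning scheme the book builds on can be illustrated with kernel ridge regression in a reproducing kernel Hilbert space; the following is a minimal sketch under the squared-error setting, where the kernel choice, the regularization parameter and the data are illustrative, not taken from the book:

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    """Gaussian (RBF) kernel matrix between row-sample arrays X and Y."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def krr_fit(X, y, lam=1e-3, gamma=1.0):
    """Kernel ridge regression: solve (K + lam*n*I) alpha = y."""
    n = len(X)
    K = rbf_kernel(X, X, gamma)
    return np.linalg.solve(K + lam * n * np.eye(n), y)

def krr_predict(X_train, alpha, X_new, gamma=1.0):
    """Representer-theorem prediction f(x) = sum_i alpha_i k(x_i, x)."""
    return rbf_kernel(X_new, X_train, gamma) @ alpha
```

By the representer theorem, the RKHS minimizer of the regularized squared error is a kernel expansion over the training points, which is all the sketch computes.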
Everyone with mathematical background and interested in learning theory and regularization should, or rather must, read this book.
Reviewer: Andreas Wichert (Lisboa)Unique key Horn functionshttps://zbmath.org/1500.680052023-01-20T17:58:23.823708Z"Bérczi, Kristóf"https://zbmath.org/authors/?q=ai:berczi.kristof"Boros, Endre"https://zbmath.org/authors/?q=ai:boros.endre"Čepek, Ondřej"https://zbmath.org/authors/?q=ai:cepek.ondrej"Kučera, Petr"https://zbmath.org/authors/?q=ai:kucera.petr"Makino, Kazuhisa"https://zbmath.org/authors/?q=ai:makino.kazuhisaThe concept of Horn functions has been widely investigated under different guises such as directed hypergraphs in graph theory and combinatorics [\textit{G. Ausiello} et al., SIAM J. Comput. 15, 418--431 (1986; Zbl 0602.68056); CISM Courses Lect. 284, 125--157 (1984; Zbl 0571.68087)], as implication systems in machine learning [\textit{M. Arias} and \textit{J. L. Balcázar}, Mach. Learn. 85, No. 3, 273--297 (2011; Zbl 1237.68106); Lect. Notes Comput. Sci. 5809, 156--170 (2009; Zbl 1262.68059)], database theory
[\textit{W. W. Armstrong}, in: Information processing 74. Proceedings of IFIP congress 74. Amsterdam etc.: North-Holland Publishing Company; New York: American Elsevier Publishing Company. 580--583 (1974; Zbl 0296.68038); \textit{D. Maier}, J. Assoc. Comput. Mach. 27, 664--674 (1980; Zbl 0466.68085)], where the set of all functional dependencies defines a unique pure Horn function associated to the given database, and as lattices and closure systems in algebra and concept lattice analysis [\textit{N. Caspard} and \textit{B. Monjardet}, Discrete Appl. Math. 127, No. 2, 241--269 (2003; Zbl 1026.06008); \textit{J. L. Guigues} and \textit{V. Duquenne}, Math. Sci. Hum. 95, 5--18 (1986; Zbl 07635224)]. Horn functions form a fundamental subclass of Boolean functions endowed with interesting structural and computational properties. The satisfiability problem can be solved in linear time, while the equivalence of such formulas can be decided in polynomial time [\textit{W. F. Dowling} and \textit{J. H. Gallier}, J. Log. Program. 1, 267--284 (1984; Zbl 0593.68062)]. A \textit{key} in a relational database is a set of attributes whose values determine uniquely the values of all other attributes.
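The linear-time satisfiability test mentioned above rests on unit propagation (forward chaining); a minimal polynomial-time sketch, with an illustrative clause encoding (a dedicated data structure would be needed for the true linear-time bound):

```python
def horn_sat(clauses):
    """Unit-propagation satisfiability test for Horn formulas.
    Each clause is a pair (body, head): body is an iterable of variables,
    head is a variable (pure Horn clause body -> head) or None for a
    purely negative clause (body -> False)."""
    true = set()
    changed = True
    while changed:
        changed = False
        for body, head in clauses:
            if head is not None and head not in true and set(body) <= true:
                true.add(head)
                changed = True
    # satisfiable iff no negative clause has its whole body derived
    return all(head is not None or not set(body) <= true
               for body, head in clauses)
```

The set `true` computed here is exactly the forward-chaining closure of the empty-body facts, the same closure operator that underlies keys of functional dependencies.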
A pure Horn function is called \textit{key Horn} if the body of any of its implicates is a key of the function. Key Horn functions are a generalization of \textit{hydra functions} introduced in [\textit{R. H. Sloan} et al., Theor. Comput. Sci. 658, Part B, 417--428 (2017; Zbl 1357.68151)], where a \(2\)-approximation algorithm was given, while the minimization remains NP-hard even in this special case [\textit{P. Kučera}, Theor. Comput. Sci. 658, Part B, 399--416 (2017; Zbl 1357.68150)]. Finding a shortest conjunctive normal form of a given Horn function with respect to multiple relevant measures (number of clauses, number of literals, etc.) is computationally hard. The authors provided logarithmic factor approximation algorithms for general key Horn functions with respect to the above-mentioned measures in [SIAM J. Comput. 51, No. 1, 116--138 (2022; Zbl 07488097)].
This paper is concerned with the structure of the keys of a pure Horn function, being in particular interested in finding Sperner hypergraphs \(\mathcal{B}\) that form the set of minimal keys of a pure Horn function, where \(\mathcal{B}\) is called a \textit{unique key hypergraph}, while the corresponding Horn function \(h_{\mathcal{B}}\) is called a \textit{unique key Horn function}.
The synopsis of the paper goes as follows.
\begin{itemize}
\item[\S 2] is concerned with definitions and notations.
\item[\S 3] gives a characterization of unique key hypergraphs and unique key Horn functions, showing particularly that cuts of a matroid form a unique key hypergraph.
\item[\S 4] addresses the special case when every hyperedge has size two, showing that recognizing unique key graphs is co-NP-complete. Subsequently, several classes of graphs for which the recognition problem can be decided in polynomial time are identified.
\item[\S 5] provides an algorithm generating all minimal keys of a pure Horn function with polynomial delay, which can be used to generate all candidate keys of a relation, improving results in [\textit{C. L. Lucchesi} and \textit{S. L. Osborn}, J. Comput. Syst. Sci. 17, 270--279 (1978; Zbl 0395.68025); \textit{M. Hermann} and \textit{B. Sertkaya}, Lect. Notes Comput. Sci. 4933, 158--168 (2008; Zbl 1131.68537)]. Furthermore, it is shown that the problem of finding a minimum key of a pure Horn function and that of finding a minimum target set of a graph are closely related. The algorithm can be used to generate all minimal target sets with polynomial delay, provided that the thresholds are bounded by a constant.
\end{itemize}
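For contrast with the polynomial-delay algorithm of \S 5, a naive exponential-time enumeration of minimal keys can be sketched as follows; the encoding of functional dependencies as (body, head) pairs is illustrative:

```python
from itertools import combinations

def closure(attrs, fds):
    """Forward-chaining closure of an attribute set under functional
    dependencies, each given as (body, single-head-attribute)."""
    closed = set(attrs)
    changed = True
    while changed:
        changed = False
        for body, head in fds:
            if head not in closed and set(body) <= closed:
                closed.add(head)
                changed = True
    return closed

def minimal_keys(universe, fds):
    """Enumerate all minimal keys by increasing candidate size
    (exponential-time baseline, unlike the paper's algorithm)."""
    universe = set(universe)
    keys = []
    for r in range(1, len(universe) + 1):
        for cand in combinations(sorted(universe), r):
            cand = set(cand)
            if any(k <= cand for k in keys):
                continue  # contains a smaller key, hence not minimal
            if closure(cand, fds) == universe:
                keys.append(cand)
    return keys
```

Scanning candidates by increasing size guarantees minimality: any proper subset of a candidate has already been tested, so a candidate containing a previously found key can be skipped.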
Reviewer: Hirokazu Nishimura (Tsukuba)Near-optimal NP-hardness of approximating \textsc{Max} \(k\)-\(\mathrm{CSP}_R\)https://zbmath.org/1500.680062023-01-20T17:58:23.823708Z"Manurangsi, Pasin"https://zbmath.org/authors/?q=ai:manurangsi.pasin"Nakkiran, Preetum"https://zbmath.org/authors/?q=ai:nakkiran.preetum"Trevisan, Luca"https://zbmath.org/authors/?q=ai:trevisan.lucaThis article studies approximability of a maximization version of constraint satisfaction problems over a non-Boolean domain \(R\) of arity \(k\), denoted by \textsc{Max} \(k\)-\(\mathrm{CSP}_R\). Since optimal results are available whenever \(k \geq R\), this work investigates the case \(k< R\). In particular, this work shows the following:
\begin{itemize}
\item[1.] Assuming \(\mathrm{P}\neq \mathrm{NP}\), \textsc{Max} \(k\)-\(\mathrm{CSP}_R\) is hard to approximate to within a \(k^{O(k)} \log(R)^{k/2}/R^{k-1}\) factor, for \(k \geq 2\) and \(R \geq 16\).
\item[2.] By extending (in a black-box manner) the algorithm of \textit{G. Kindler} et al. [in: SODA 2016, 1705--1714 (2016; Zbl 1410.68171)] for \textsc{Max} \(k\)-\(\mathrm{CSP}_R\), the authors show that \textsc{Max} \(k\)-\(\mathrm{CSP}_R\) can be approximated to within an \(\Omega(\log(R)/R^{k-1})\) factor.
\item[3.] Finally, this work also discusses how to combine the dictatorship test of \textit{S. Khot} and \textit{R. Saket} [Lect. Notes Comput. Sci. 9134, 822--833 (2015; Zbl 1440.68103)] with the recently proved \(2\)-to-\(1\) Theorem to obtain NP-hardness of approximating \textsc{Max} \(k\)-\(\mathrm{CSP}_R\) to within an \(O(2^{O(k)} \log(kR) / R^{k-1})\) factor. In particular, this ratio is better than the ratio from item 1 for \(k \geq 3\). Also, by directly analysing the completeness case of the dictatorship test of Khot and Saket, this work obtains an improved Unique Games hardness-of-approximation ratio of \(O(k^2\log(kR)/R^{k-1})\) for \textsc{Max} \(k\)-\(\mathrm{CSP}_R\).
\end{itemize}
On the technical side, the result from item 1 uses a standard reduction from Unique Games with completeness close to \(1/2\), in the spirit of the breakthrough work of \textit{S. Khot} et al. [SIAM J. Comput. 37, No. 1, 319--357 (2007; Zbl 1135.68019)]. In particular, the reduction is based on a 2-query noise-stability test. However, the analysis of the soundness case of the dictatorship test is fairly involved, and this work overcomes multiple technical challenges.
The idea of the approximation algorithm from item 2 can be sketched as follows: simulate an instance of \textsc{Max} \(k\)-\(\mathrm{CSP}_R\) by an instance of \textsc{Max} \(2\)-\(\mathrm{CSP}_R\) by projecting each pair of variables from each constraint to a two-variable constraint. Then, the work uses the algorithm of Kindler et al. [loc. cit.] to solve the instance of arity \(2\) and, using this solution as ``advice'', solves the original instance.
Finally, the idea of extending the result of Khot and Saket consists in using the fact that, due to the proof of the \(2\)-to-\(1\) Conjecture, one has access to NP-hard Unique Games instances with completeness close to \(1/2\). Hence, the paper analyses the dictatorship test with the aforementioned completeness.
This paper is very well written, and it is of particular interest to anyone interested in studying \textsc{Max} \(k\)-\(\mathrm{CSP}\) over a non-Boolean domain \(R\), for which the question of what is the optimal approximation when \(k<R\) remains open.
Reviewer: Aleksa Stankovic (Stockholm)Approximating \(k\)-connected \(m\)-dominating setshttps://zbmath.org/1500.680072023-01-20T17:58:23.823708Z"Nutov, Zeev"https://zbmath.org/authors/?q=ai:nutov.zeevThe paper deals with a wealth of graph-theoretic algorithms, their mutual relations and gradual improvements of complexity estimations, providing 25 references to the literature. It may not be without practical relevance when supporting different aspects of network reliability; it will be interesting mainly for specialists in estimating such complexities.
Reviewer: Gunther Schmidt (München)Choquet-based fuzzy rough setshttps://zbmath.org/1500.680082023-01-20T17:58:23.823708Z"Theerens, Adnan"https://zbmath.org/authors/?q=ai:theerens.adnan"Lenz, Oliver Urs"https://zbmath.org/authors/?q=ai:lenz.oliver-urs"Cornelis, Chris"https://zbmath.org/authors/?q=ai:cornelis.chrisThis paper introduces a generalization of fuzzy rough sets based on Ordered Weighted Average (OWA) operators; namely, fuzzy rough sets based on the well-known Choquet integral (Choquet-based fuzzy rough sets). This generalization preserves some desirable theoretical properties (relation monotonicity, set monotonicity, duality). It also allows an integration of outlier detection algorithms, enhancing robustness of machine learning algorithms based on fuzzy rough sets.
OWA operators can be understood as Choquet integrals w.r.t. symmetric measures (i.e., measures fulfilling \(\mu(A) = \mu(B)\) whenever \(|A| = |B|\)). Therefore, the authors look for examples of non-symmetric measures for their Choquet-based fuzzy rough sets. These measures can incorporate an outlier score or unreliability/inaccuracy of observations. The authors provide several examples of such non-symmetric measures (fuzzy removal, weighted ordered weighted average). These measures have a clear intuitive motivation in terms of vague quantification.
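The construction can be illustrated numerically with the discrete Choquet integral and the symmetric capacity that recovers an OWA operator; a minimal sketch with illustrative names:

```python
def choquet(values, mu):
    """Discrete Choquet integral of `values` (dict: element -> value)
    w.r.t. a capacity `mu` (function: frozenset -> [0, 1], monotone,
    with mu(empty set) = 0 and mu(universe) = 1)."""
    items = sorted(values.items(), key=lambda kv: kv[1], reverse=True)
    total, top = 0.0, frozenset()
    for i, (elem, v) in enumerate(items):
        top = top | {elem}  # the i+1 largest-valued elements
        v_next = items[i + 1][1] if i + 1 < len(items) else 0.0
        total += (v - v_next) * mu(top)
    return total

def owa_capacity(weights):
    """Symmetric capacity mu(A) = sum of the first |A| OWA weights,
    so choquet(values, owa_capacity(w)) equals the OWA aggregation."""
    return lambda A: sum(weights[:len(A)])
```

Replacing `owa_capacity` by a non-symmetric capacity, e.g. one that discounts elements flagged as outliers, yields the more general Choquet-based aggregation discussed above while the symmetric case collapses to OWA.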
Based on the theoretical results, the authors provide an experimental evaluation of the proposed methods for the task of classification.
Reviewer: Antonín Dvořák (Ostrava)Application of general regression neural network in identifying interfacial parameters under mixed-mode fracturehttps://zbmath.org/1500.740532023-01-20T17:58:23.823708Z"Junling, Hou"https://zbmath.org/authors/?q=ai:junling.hou"Xuan, Lu"https://zbmath.org/authors/?q=ai:xuan.lu"Qun, Li"https://zbmath.org/authors/?q=ai:qun.liSummary: It is generally accepted that the cohesive zone model is a common and highly effective method for simulating interlaminar damage in multilayer materials or structures. However, difficulties are encountered in identifying the interfacial parameters of the cohesive zone model, since it is time-consuming and expensive to experimentally obtain these parameters. In particular, some engineering materials or components cannot be measured directly under real engineering conditions. In this paper, based on the bilinear traction-separation relation of coupled mixed-mode cohesive law, a machine learning-based approach is constructed to identify the cohesive interfacial parameters by using the general regression neural network. No separate measurements are required, and seven independent interfacial parameters in mixed-mode fracture problems can be conveniently determined through machine learning-based analysis. In this case, only an experimental force-displacement curve is needed. A numerical example under a mixed-mode displacement load is considered to validate the proposed method.
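The general regression neural network at the core of this identification approach is, in essence, a Gaussian-kernel-weighted average of training targets (a Nadaraya-Watson estimator); a minimal sketch, with an illustrative smoothing parameter and hypothetical data, not the paper's:

```python
import numpy as np

def grnn_predict(X_train, y_train, x, sigma=0.5):
    """GRNN prediction: Gaussian-kernel-weighted average of the
    training targets, with smoothing parameter sigma."""
    d2 = ((X_train - x) ** 2).sum(axis=1)
    w = np.exp(-d2 / (2.0 * sigma ** 2))
    return float(w @ y_train / w.sum())
```

In the paper's setting the training inputs would be simulated force-displacement curves and the targets the interfacial parameters; here plain vectors stand in for both.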
The results show that this machine learning-based approach provides an effective and general pathway to identify the interfacial parameters under mixed-mode fracture.Ensemble wavelet-learning approach for predicting the effective mechanical properties of concrete composite materialshttps://zbmath.org/1500.740582023-01-20T17:58:23.823708Z"Linghu, Jiale"https://zbmath.org/authors/?q=ai:linghu.jiale"Dong, Hao"https://zbmath.org/authors/?q=ai:dong.hao"Cui, Junzhi"https://zbmath.org/authors/?q=ai:cui.junzhiSummary: This paper proposes a high-accuracy and efficient ensemble wavelet-neural network method to predict the equivalent mechanical parameters of concrete composites. The doubly random uncertainties in structural heterogeneities and mechanical properties of concrete composites result in a challenging task to handle high-dimensional data properties, highly-complex mappings and huge computational cost for the repeated prediction of their mechanical parameters. The significant characteristics of this study are: (i) The random uncertainties both of structural heterogeneities and mechanical properties of concrete composites are modeled based on authors' previous work and Weibull probabilistic model, respectively. (ii) Asymptotic homogenization method (AHM) and the proposed background mesh technique are introduced to thoroughly extract the doubly random geometric and material characteristics of concrete composites for establishing concrete material databases. (iii) The wavelet transform is used to preprocess the high-dimensional data features of the material database, and the wavelet coefficients are used as the new input neurons of the artificial neural network (ANN) to establish the ensemble wavelet-neural network model. 
It should be noted that the wavelet-based learning strategy can not only extract important data features and resist noise from the material database, but also greatly reduce the input data of the neural networks relative to the entire material database, ensuring successful training of the neural networks. Finally, numerical experiments indicate that the proposed ensemble approach is a robust method for the high-accuracy and efficient prediction of equivalent mechanical properties of concrete composites.Sparse identification of nonlinear dynamics with low-dimensionalized flow representationshttps://zbmath.org/1500.760682023-01-20T17:58:23.823708Z"Fukami, Kai"https://zbmath.org/authors/?q=ai:fukami.kai"Murata, Takaaki"https://zbmath.org/authors/?q=ai:murata.takaaki"Zhang, Kai"https://zbmath.org/authors/?q=ai:zhang.kai.1|zhang.kai.2|zhang.kai.5|zhang.kai.3|zhang.kai.4"Fukagata, Koji"https://zbmath.org/authors/?q=ai:fukagata.kojiSummary: We perform a sparse identification of nonlinear dynamics (SINDy) for low-dimensionalized complex flow phenomena. We first apply the SINDy with two regression methods, the thresholded least square algorithm and the adaptive least absolute shrinkage and selection operator, which show reasonable ability with a wide range of sparsity constants in our preliminary tests, to a two-dimensional single cylinder wake at \(Re_D=100\), its transient process and a wake of two-parallel cylinders, as examples of high-dimensional fluid data. To handle these high-dimensional data with SINDy whose library matrix is suitable for low-dimensional variable combinations, a convolutional neural network-based autoencoder (CNN-AE) is utilized. The CNN-AE is employed to map a high-dimensional dynamics into a low-dimensional latent space. The SINDy then seeks a governing equation of the mapped low-dimensional latent vector.
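The thresholded least-squares regression mentioned above can be sketched as sequentially thresholded least squares, the classical sparse-regression step of SINDy; the library matrix, threshold and data below are illustrative:

```python
import numpy as np

def stlsq(Theta, dXdt, threshold=0.1, n_iter=10):
    """Sequentially thresholded least squares: the sparse regression step
    of SINDy. Theta is the library matrix (samples x candidate terms),
    dXdt the time derivatives (samples x state dimension)."""
    Xi, *_ = np.linalg.lstsq(Theta, dXdt, rcond=None)
    for _ in range(n_iter):
        small = np.abs(Xi) < threshold
        Xi[small] = 0.0
        for k in range(dXdt.shape[1]):
            big = ~small[:, k]
            if big.any():  # refit the surviving coefficients of column k
                Xi[big, k], *_ = np.linalg.lstsq(Theta[:, big], dXdt[:, k],
                                                 rcond=None)
    return Xi
```

Applied to derivative data from, say, \(\dot{x} = -2x\) with a library of monomials, the procedure zeroes the spurious terms and refits the surviving coefficient.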
Temporal evolution of high-dimensional dynamics can be provided by combining the predicted latent vector by SINDy with the CNN decoder which can remap the low-dimensional latent vector to the original dimension. The SINDy can provide a stable solution as the governing equation of the latent dynamics and the CNN-SINDy-based modelling can reproduce high-dimensional flow fields successfully, although more terms are required to represent the transient flow and the two-parallel cylinder wake than the periodic shedding. A nine-equation turbulent shear flow model is finally considered to examine the applicability of SINDy to turbulence, although without using CNN-AE. The present results suggest that the proposed scheme with an appropriate parameter choice enables us to analyse high-dimensional nonlinear dynamics with interpretable low-dimensional manifolds.Physics-driven learning of the steady Navier-Stokes equations using deep convolutional neural networkshttps://zbmath.org/1500.760712023-01-20T17:58:23.823708Z"Ma, Hao"https://zbmath.org/authors/?q=ai:ma.hao"Zhang, Yuxuan"https://zbmath.org/authors/?q=ai:zhang.yuxuan"Thuerey, Nils"https://zbmath.org/authors/?q=ai:thurey.nils"Hu, Xiangyu"https://zbmath.org/authors/?q=ai:hu.xiangyu"Haidn, Oskar J."https://zbmath.org/authors/?q=ai:haidn.oskar-jSummary: Recently, physics-driven deep learning methods have shown particular promise for the prediction of physical fields, especially to reduce the dependency on large amounts of pre-computed training data. In this work, we target the physics-driven learning of complex flow fields with high resolutions. We propose the use of \textit{Convolutional neural networks} (CNN) based U-net architectures to efficiently represent and reconstruct the input and output fields, respectively. By introducing Navier-Stokes equations and boundary conditions into loss functions, the physics-driven CNN is designed to predict corresponding steady flow fields directly. 
In particular, this prevents many of the difficulties associated with approaches employing fully connected neural networks. Several numerical experiments are conducted to investigate the behavior of the CNN approach, and the results indicate that a first-order accuracy has been achieved. Specifically for the case of a flow around a cylinder, different flow regimes can be learned and the adhered ``twin-vortices'' are predicted correctly. The numerical results also show that the training for multiple cases is accelerated significantly, especially for the difficult cases at low Reynolds numbers, and when limited reference solutions are used as supplementary learning targets.Clifford quantum cellular automata: trivial group in 2D and Witt group in 3Dhttps://zbmath.org/1500.810182023-01-20T17:58:23.823708Z"Haah, Jeongwan"https://zbmath.org/authors/?q=ai:haah.jeongwanSummary: We study locality preserving automorphisms of operator algebras on \(D\)-dimensional uniform lattices of prime \(p\)-dimensional qudits (quantum cellular automata, QCAs), specializing in those that are translation invariant (TI) and map every prime \(p\)-dimensional Pauli matrix to a tensor product of Pauli matrices (Clifford). We associate antihermitian forms of the unit determinant over Laurent polynomial rings to TI Clifford QCA with lattice boundaries and prove that the form determines the QCA up to Clifford circuits and shifts (trivial). It follows that every 2D TI Clifford QCA is trivial since the antihermitian form in this case is always trivial. Furthermore, we prove that for any \(D\), the fourth power of any TI Clifford QCA is trivial. We present explicit examples of nontrivial TI Clifford QCA for \(D = 3\) and any odd prime \(p\) and show that the Witt group of the finite field \(\mathbb{F}_p\) is a subgroup of the group \(\mathfrak{C}(D = 3, p)\) of all TI Clifford QCA modulo trivial ones.
That is, \(\mathfrak{C}(D = 3, p \equiv 1 \operatorname{mod} 4) \supseteq \mathbb{Z}_2 \times \mathbb{Z}_2\) and \(\mathfrak{C}(D = 3, p \equiv 3 \operatorname{mod} 4) \supseteq \mathbb{Z}_4\). The examples are found by disentangling the ground state of a commuting Pauli Hamiltonian, which is constructed by coupling layers of prime dimensional toric codes such that an exposed surface has an anomalous topological order that is not realizable by commuting Pauli Hamiltonians strictly in two dimensions. In an appendix independent of the main body of this paper, we revisit a recent theorem of Freedman and Hastings that any two-dimensional QCA, which is not necessarily Clifford or translation invariant, is a constant depth quantum circuit followed by a shift. We give a more direct proof of the theorem without using any ancillas.
{\copyright 2021 American Institute of Physics}Classical verification of quantum computationshttps://zbmath.org/1500.810192023-01-20T17:58:23.823708Z"Mahadev, Urmila"https://zbmath.org/authors/?q=ai:mahadev.urmilaSummary: We present the first protocol allowing a classical computer to interactively verify the result of an efficient quantum computation. We achieve this by constructing a measurement protocol, which enables a classical verifier to use a quantum prover as a trusted measurement device. The protocol forces the prover to behave as follows: the prover must construct an \(n\) qubit state of his choice, measure each qubit in the Hadamard or standard basis as directed by the verifier, and report the measurement results to the verifier. The soundness of this protocol is enforced based on the assumption that the learning with errors problem is computationally intractable for efficient quantum machines.Evaluation of quantum cryptanalysis on SPECKhttps://zbmath.org/1500.810212023-01-20T17:58:23.823708Z"Anand, Ravi"https://zbmath.org/authors/?q=ai:anand.ravi"Maitra, Arpita"https://zbmath.org/authors/?q=ai:maitra.arpita"Mukhopadhyay, Sourav"https://zbmath.org/authors/?q=ai:mukhopadhyay.souravSummary: In this work, all the versions of SPECK are evaluated against a quantum adversary in terms of Grover's algorithm. We extensively study the resource requirements for quantum key search under the known-plaintext-attack model and show that our estimation provides better results than the existing efforts. Further, for the first time, we explore differential cryptanalysis of SPECK in a quantum framework, with encouraging results. For both cases, the quantum resources are evaluated in terms of several parameters, i.e., the T-depth of the circuits and the number of qubits required for the attacks. Experiments are performed in the IBM-Q environment to support our claims.
For the entire collection see [Zbl 1490.94002].Strong privacy-preserving two-party scalar product quantum protocolhttps://zbmath.org/1500.810262023-01-20T17:58:23.823708Z"Shi, Run-hua"https://zbmath.org/authors/?q=ai:shi.runhua"Zhang, Mingwu"https://zbmath.org/authors/?q=ai:zhang.mingwuSummary: Under the assumption that the parties do not change their private inputs during the whole protocol execution, we present a probabilistic quantum protocol for secure two-party scalar product without the help of any third party, which can ensure the security of the strong privacy of two parties. Especially, the communication complexity of this protocol achieves \(O(1)\), and thus it is more suitable for applications with big data.Planar metamaterial analogue of electromagnetically induced transparency for a miniature refractive index sensorhttps://zbmath.org/1500.810732023-01-20T17:58:23.823708Z"Li, Rong"https://zbmath.org/authors/?q=ai:li.rong"Kong, Xiang-kun"https://zbmath.org/authors/?q=ai:kong.xiangkun"Liu, Shao-bin"https://zbmath.org/authors/?q=ai:liu.shaobin"Liu, Zhi-ming"https://zbmath.org/authors/?q=ai:liu.zhiming"Li, Yu-meng"https://zbmath.org/authors/?q=ai:li.yumengSummary: A planar metamaterial of analogue of Electromagnetically Induced Transparency (EIT-like) is theoretically and experimentally investigated. The EIT-like metamaterial unit cell, whose electrical size is only \(0.2\lambda_0\), consists of a split ring resonator (SRR) and a folded-line pair resonator (FLPR). For the TM wave incidence, a sharp transmitted peak with high quality factor can be observed. The EIT-like metamaterial is fabricated and measured. Good agreement is obtained between the simulation and measurement results. 
A miniature refractive index sensor based on the proposed metamaterial is simulated and exhibits high sensitivity and Figure of Merit (FOM).Variational convolutional neural networks classifiershttps://zbmath.org/1500.820122023-01-20T17:58:23.823708Z"Huang, Fangyu"https://zbmath.org/authors/?q=ai:huang.fangyu"Tan, Xiaoqing"https://zbmath.org/authors/?q=ai:tan.xiaoqing"Huang, Rui"https://zbmath.org/authors/?q=ai:huang.rui"Xu, Qingshan"https://zbmath.org/authors/?q=ai:xu.qingshanIn the introduction part of the article, we find a short review on significant literature related to NISQ (noisy intermediate scale quantum) devices, quantum computers, QML (quantum machine learning), QNN (quantum neural networks), HOC (hierarchical quantum classifiers), VQTN (variational quantum tensor networks), PQC (parameterized quantum circuits), CNN (convolutional neural networks) and its quantum version. The aim of the paper here is to present a VCNN (variational convolutional neural networks) algorithm intended to solve classification problems. The algorithm utilizes parametric quantum circuits. A résumé of the contribution is: starting from a CNN architecture, convolutional and pooling layers are replaced by PQC. The second section of the paper is devoted to a detailed description and presentation of the new Algorithm 1: The VCNN algorithm, together with data encoding, circuit architecture, learning process and complexity analysis. The third section contains the results on numerical simulations. The VCNN experiments have been done on the classical MNIST and Fashion MNIST datasets using -- according to this report -- a special TensorFlow Quantum platform. Performance results on binary classification, multi-class classification, three-class classification, four-class classification and resilience to noise are shown and discussed. In the last section, some conclusions, motivations and future trends are presented.
Reviewer: Claudia Simionescu-Badea (Wien)Notes on complexity growth rate, grand potential and partition functionhttps://zbmath.org/1500.830292023-01-20T17:58:23.823708Z"Sun, Wei"https://zbmath.org/authors/?q=ai:sun.wei|sun.fei|sun.wei.2|sun.wei.1|sun.wei.3"Ge, Xian-Hui"https://zbmath.org/authors/?q=ai:ge.xianhuiSummary: We examine the complexity/volume conjecture and further investigate the possible connections between complexity and partition function. The complexity/partition function relation is then utilized to study the complexity of the thermofield double state of extended SYK models for various conditions. The difference between the complexity/partition function relation with the complexity/action duality comes from the fact that partition function can be evaluated from more than one saddle point, so they differ at most at the non-perturbative level. We further analyze free energy and the growth rate of complexity in the neutral AdS-Vaidya black hole formed by collapsing an uncharged spherically symmetric thin shell of null fluid. The relation between the late-time complexity growth rate and black hole/SYK wormhole phase transition is also discussed. Finally, we check the Lloyd bound with our proposal.Machine-learning the classification of spacetimeshttps://zbmath.org/1500.830392023-01-20T17:58:23.823708Z"He, Yang-Hui"https://zbmath.org/authors/?q=ai:he.yang-hui"Ipiña, Juan Manuel Pérez"https://zbmath.org/authors/?q=ai:ipina.juan-manuel-perezSummary: On the long-established classification problems in general relativity we take a novel perspective by adopting fruitful techniques from machine learning and modern data-science. In particular, we model Petrov's classification of spacetimes, and show that a feed-forward neural network can achieve high degree of success. 
We also show how data visualization techniques with dimensionality reduction can help analyze the underlying patterns in the structure of the different types of spacetimes.On the approximability of robust network designhttps://zbmath.org/1500.900072023-01-20T17:58:23.823708Z"Al-Najjar, Yacine"https://zbmath.org/authors/?q=ai:al-najjar.yacine"Ben-Ameur, Walid"https://zbmath.org/authors/?q=ai:ben-ameur.walid"Leguay, Jérémie"https://zbmath.org/authors/?q=ai:leguay.jeremieSummary: Given the dynamic nature of traffic, we investigate the variant of robust network design where we have to determine the capacity to reserve on each link so that each demand vector belonging to a polyhedral set can be routed. The objective is either to minimize congestion or a linear cost. Routing is assumed to be fractional and dynamic (i.e., dependent on the current traffic vector). We first prove that the robust network design problem with minimum congestion cannot be approximated within any constant factor. Then, using the ETH conjecture, we get an \(\Omega(\frac{\log n}{\log\log n})\) lower bound for the approximability of this problem. This implies that the well-known \(O(\log n)\) approximation ratio established by \textit{H. Räcke} [in: Proceedings of the 40th annual ACM symposium on theory of computing, STOC 2008. Victoria, Canada, May 17--20, 2008. New York, NY: Association for Computing Machinery (ACM). 255--264 (2008; Zbl 1231.68051)] in 2008 is tight. Using Lagrange relaxation, we obtain a new proof of the \(O(\log n)\) approximation. An important consequence of the Lagrange-based reduction and our inapproximability results is that the robust network design problem with linear reservation cost cannot be approximated within any constant ratio. This answers a long-standing open question of \textit{C. Chekuri} [``Routing and network design with robustness to changing or uncertain traffic demands'', ACM SIGACT News 38, No. 3, 106--129 (2007; \url{doi:10.1145/1324215.1324236})].
We also give another proof of the result of \textit{N. Goyal} et al. [Lect. Notes Comput. Sci. 5757, 277--288 (2009; Zbl 1256.90046)] stating that the optimal linear cost under static routing can be \(\Omega(\log n)\) more expensive than the cost obtained under dynamic routing. Finally, we show that even if only two given paths are allowed for each commodity, the robust network design problem with minimum congestion or linear cost is hard to approximate within some constant.Dynamic Katz and related network measureshttps://zbmath.org/1500.900082023-01-20T17:58:23.823708Z"Arrigo, Francesca"https://zbmath.org/authors/?q=ai:arrigo.francesca"Higham, Desmond J."https://zbmath.org/authors/?q=ai:higham.desmond-j"Noferini, Vanni"https://zbmath.org/authors/?q=ai:noferini.vanni"Wood, Ryan"https://zbmath.org/authors/?q=ai:wood.ryanSummary: We study walk-based centrality measures for time-ordered network sequences. For the case of standard dynamic walk-counting, we show how to derive and compute centrality measures induced by analytic functions. We also prove that dynamic Katz centrality, based on the resolvent function, has the unique advantage of allowing computations to be performed entirely at the node level. 
We then consider two distinct types of backtracking and develop a framework for capturing dynamic walk combinatorics when either or both is disallowed.Optimisation of electrical network configuration: complexity and algorithms for ring topologieshttps://zbmath.org/1500.900102023-01-20T17:58:23.823708Z"Barth, Dominique"https://zbmath.org/authors/?q=ai:barth.dominique"Mautor, Thierry"https://zbmath.org/authors/?q=ai:mautor.thierry"de Moissac, Arnaud"https://zbmath.org/authors/?q=ai:de-moissac.arnaud"Watel, Dimitri"https://zbmath.org/authors/?q=ai:watel.dimitri"Weisser, Marc-Antoine"https://zbmath.org/authors/?q=ai:weisser.marc-antoineSummary: We consider power distribution networks containing source nodes producing electricity and nodes representing electricity consumers. These sources and these consumers are interconnected by a switched network. Configuring this network consists in deciding which switches are activated and the orientation of the links between these switches, so as to obtain a directed acyclic graph (DAG) from the producer nodes to the consumer nodes. This DAG is valid if the electric flow it induces satisfies the demand of each consumer without exceeding the production capacity of each source and the flow capacity of each switch. We show that the problem of deciding if such a valid DAG exists is NP-complete. In the case where such a valid DAG exists, we study the problem of determining a valid DAG that balances the ratio between the amount of electricity produced and the maximum production capacity for each source. 
We show that this minimization problem is also NP-complete in the general case but that it becomes polynomial in the case of ring network topologies.The Hilbert basis of the cut cone over the complete graph \(K_6\)https://zbmath.org/1500.900312023-01-20T17:58:23.823708Z"Laburthe, François"https://zbmath.org/authors/?q=ai:laburthe.francoisSummary: Given a polyhedral cone \(C\), generated by the integer vectors \(x_1,\dots, x_n\), the set of integer vectors of \(C\) is an additive semi-group, whose minimal set of generators (for linear combinations with coefficients in \(\mathbb{Z}^+\)) is called the \textit{Hilbert basis} of \(C\). Integer points of \(C\) which are not in the integer cone \(\mathbb{Z}^+(x_1,\dots, x_n)\) are called \textit{quasi}-\(h\) \textit{points}.
The set of cuts on a graph forms a Hilbert basis for all graphs strictly smaller than \(K_6\), but not for \(K_6\).
Two results are proven here:
\begin{itemize}
\item The Hilbert basis for the cut cone over \(K_6\) is explicitly given: it consists of the 31 non-zero cuts and of the 15 vectors \(d^e\) defined for each edge \(e\) by: \(d_f^e = 2\) for \(f \neq e\) and \(d_e^e = 4\).
\item The quasi-\(h\) points for \(K_6\) are characterized: they are exactly the \(d^e + n \delta(v)\) for \(v\) not adjacent to \(e\) and \(n \in \mathbb{Z}^+\).
\end{itemize}
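The generators in the first item are small enough to enumerate directly. A minimal Python sketch (illustrative only, not from the paper) builds the 15 edges of \(K_6\), the 31 non-zero cuts, and the 15 vectors \(d^e\):

```python
from itertools import combinations

# Vertices and edges of the complete graph K6.
V = range(6)
E = list(combinations(V, 2))  # 15 edges

# The cut delta(S) is the 0/1 vector, indexed by edges, with a 1
# exactly on the edges having one endpoint in S.
def cut(S):
    return tuple(1 if (u in S) != (v in S) else 0 for (u, v) in E)

# delta(S) = delta(V \ S), so the distinct non-zero cuts correspond
# to proper non-empty subsets S up to complementation: 2^5 - 1 = 31.
cuts = {cut(set(S)) for k in range(1, 6) for S in combinations(V, k)}

# The 15 extra Hilbert-basis vectors d^e: entry 4 on edge e, 2 elsewhere.
d = [tuple(4 if f == e else 2 for f in E) for e in E]

print(len(E), len(cuts), len(d))  # 15 31 15
```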
For the entire collection see [Zbl 0875.90002].Transverse wave: an impartial color-propagation game inspired by social influence and quantum nimhttps://zbmath.org/1500.910222023-01-20T17:58:23.823708Z"Burke, Kyle"https://zbmath.org/authors/?q=ai:burke.kyle-w"Ferland, Matthew"https://zbmath.org/authors/?q=ai:ferland.matthew"Teng, Shang-Hua"https://zbmath.org/authors/?q=ai:teng.shang-hua.1Summary: In this paper, we study \textsf{Transverse Wave}, a colorful, impartial combinatorial game played on a two-dimensional grid. We are drawn to this game because of its apparent simplicity, contrasting intractability, and intrinsic connection to two other combinatorial games, one about social influences and another inspired by quantum superpositions. More precisely, we show that \textsf{Transverse Wave} is at the intersection of two other games, the social-influence-derived \textsf{Friend Circle} and the superposition-based \textsf{Demi-Quantum Nim}. \textsf{Transverse Wave} is also connected with Schaefer's logic game \textsf{Avoid True} from the 1970s. In addition to analyzing the mathematical structures and computational complexity of \textsf{Transverse Wave}, we provide a web-based version of the game. Furthermore, we formulate a basic network-influence game, called \textsf{Demographic Influence}, which simultaneously generalizes \textsf{Node-Kayles} and \textsf{Demi-Quantum Nim} (which in turn contains \textsf{Nim}, \textsf{Avoid True}, and \textsf{Transverse Wave} as particular cases). These connections illuminate a \textit{lattice order} of games, induced by special-case/generalization relationships, fundamental to both the design and comparative analysis of combinatorial games.
For the entire collection see [Zbl 1495.91007].Neural information processing and computations of two-input synapseshttps://zbmath.org/1500.920082023-01-20T17:58:23.823708Z"Kim, Soon Ho"https://zbmath.org/authors/?q=ai:kim.soonho"Woo, Junhyuk"https://zbmath.org/authors/?q=ai:woo.junhyuk"Choi, Kiri"https://zbmath.org/authors/?q=ai:choi.kiri"Choi, Mooyoung"https://zbmath.org/authors/?q=ai:choi.mooyoung"Han, Kyungreem"https://zbmath.org/authors/?q=ai:han.kyungreemSummary: Information processing in artificial neural networks is largely dependent on the nature of neuron models. While commonly used models are designed for linear integration of synaptic inputs, accumulating experimental evidence suggests that biological neurons are capable of nonlinear computations for many converging synaptic inputs via homo- and heterosynaptic mechanisms. This nonlinear neuronal computation may play an important role in complex information processing at the neural circuit level. Here we characterize the dynamics and coding properties of neuron models on synaptic transmissions delivered from two hidden states. The neuronal information processing is influenced by the cooperative and competitive interactions among synapses and the coherence of the hidden states. Furthermore, we demonstrate that neuronal information processing under two-input synaptic transmission can be mapped to linearly nonseparable XOR as well as basic AND/OR operations. In particular, mixtures of linear and nonlinear neuron models outperform neural networks consisting of only one type of model on the fashion-MNIST test.
This study provides a computational framework for assessing information processing of neuron and synapse models that may be beneficial for the design of brain-inspired artificial intelligence algorithms and neuromorphic systems.Deep large-scale multi-task learning network for gene expression inferencehttps://zbmath.org/1500.920282023-01-20T17:58:23.823708Z"Dizaji, Kamran Ghasedi"https://zbmath.org/authors/?q=ai:dizaji.kamran-ghasedi"Chen, Wei"https://zbmath.org/authors/?q=au:Chen, Wei"Huang, Heng"https://zbmath.org/authors/?q=ai:huang.hengSummary: Gene expression profiling empowers many biological studies in various fields by comprehensively characterizing cellular status under different experimental conditions. Despite the recent advances in high-throughput technologies, profiling the whole-genome set is still challenging and expensive. Based on the fact that there is high correlation among the expression patterns of different genes, the above issue can be addressed by a cost-effective approach that collects only a small subset of genes, called landmark genes, as the representative of the entire genome set and estimates the remaining ones, called target genes, via a computational model. Several shallow and deep regression models have been presented in the literature for inferring the expressions of target genes. However, the shallow models suffer from underfitting due to their insufficient capacity in capturing the complex nature of gene expression data, and the existing deep models are prone to overfitting because they do not use the interrelations of target genes in the learning framework. To address these challenges, we formulate gene expression inference as a multi-task learning problem and propose a novel deep multi-task learning algorithm that automatically learns the biological interrelations among target genes and utilizes such information to enhance the prediction.
In particular, we employ a multi-layer sub-network with low-dimensional latent variables for learning the interrelations among target genes (\textit{i.e.} distinct predictive tasks), and impose a seamless, easy-to-implement regularization on deep models. Unlike conventional, complicated multi-task learning methods, which can only deal with tens or hundreds of tasks, our proposed algorithm can effectively learn the interrelations from the large-scale \((\sim 10,000)\) tasks on the gene expression inference problem, and does not suffer from cost-prohibitive operations. Experimental results indicate the superiority of our method compared to the existing gene expression inference models and alternative multi-task learning algorithms on two large-scale datasets.
For the entire collection see [Zbl 1496.92005].Time series adjustment enhancement of hierarchical modeling of \textit{Arabidopsis thaliana} gene interactionshttps://zbmath.org/1500.920542023-01-20T17:58:23.823708Z"Allen, Edward E."https://zbmath.org/authors/?q=ai:allen.edward-e"Farrell, John"https://zbmath.org/authors/?q=ai:farrell.john.1"Harkey, Alexandria F."https://zbmath.org/authors/?q=ai:harkey.alexandria-f"John, David J."https://zbmath.org/authors/?q=ai:john.david-j"Muday, Gloria"https://zbmath.org/authors/?q=ai:muday.gloria"Norris, James L."https://zbmath.org/authors/?q=ai:norris.james-l-iii"Wu, Bo"https://zbmath.org/authors/?q=ai:wu.boSummary: Network models of gene interactions, using time course gene transcript abundance data, are computationally created using a genetic algorithm designed to incorporate hierarchical Bayesian methods with time series adjustments. The posterior probabilities of interaction between pairs of genes are based on likelihoods of directed acyclic graphs. This algorithm is applied to transcript abundance data collected from \textit{Arabidopsis thaliana} genes. This study extends the underlying statistical and mathematical theory of the Norris-Patton likelihood by including time series adjustments.
For the entire collection see [Zbl 1496.92004].Approximation algorithms for a genetic diagnostics problemhttps://zbmath.org/1500.920552023-01-20T17:58:23.823708Z"Kosaraju, S. Rao"https://zbmath.org/authors/?q=ai:kosaraju.s-rao"Schäffer, Alejandro A."https://zbmath.org/authors/?q=ai:schaffer.alejandro-a"Biesecker, Leslie G."https://zbmath.org/authors/?q=ai:biesecker.leslie-gSummary: We define and study a combinatorial problem called `weighted diagnostic cover' (WDC) that models the use of a laboratory technique called \textit{genotyping} in the diagnosis of an important class of chromosomal aberrations. An optimal solution to WDC would enable us to define a genetic assay that maximizes the diagnostic power for a specified cost of laboratory work. We develop approximation algorithms for WDC by making use of the well-known problem SET COVER, for which the \textit{greedy} heuristic has been extensively studied. We prove worst-case performance bounds on the \textit{greedy} heuristic for WDC and for another heuristic we call \textit{directional greedy}. We implemented both heuristics. We also implemented a local search heuristic that takes the solutions obtained by \textit{greedy} and \textit{dir-greedy} and applies swaps until they are locally optimal. We report their performance on a real data set that is representative of the options that a clinical geneticist faces for the real diagnostic problem. Many open problems related to WDC remain, both of theoretical interest and practical importance.
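For reference, the \textit{greedy} heuristic for SET COVER that the bounds above build on can be sketched in a few lines (a generic unweighted version, not the authors' WDC-specific implementation):

```python
def greedy_set_cover(universe, sets):
    """Repeatedly pick the set covering the most uncovered elements;
    the classic (ln|U| + 1)-approximation for SET COVER."""
    uncovered = set(universe)
    cover = []
    while uncovered:
        best = max(sets, key=lambda s: len(uncovered & s))
        if not uncovered & best:
            raise ValueError("instance is infeasible")
        cover.append(best)
        uncovered -= best
    return cover

U = {1, 2, 3, 4, 5}
S = [{1, 2, 3}, {2, 4}, {3, 4}, {4, 5}]
print(greedy_set_cover(U, S))  # [{1, 2, 3}, {4, 5}]
```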
For the entire collection see [Zbl 1492.68014].A 3.5-approximation algorithm for sorting by intergenic transpositionshttps://zbmath.org/1500.920572023-01-20T17:58:23.823708Z"Oliveira, Andre Rodrigues"https://zbmath.org/authors/?q=ai:oliveira.andre-rodrigues"Jean, Géraldine"https://zbmath.org/authors/?q=ai:jean.geraldine"Fertin, Guillaume"https://zbmath.org/authors/?q=ai:fertin.guillaume"Brito, Klairton Lima"https://zbmath.org/authors/?q=ai:brito.klairton-lima"Dias, Ulisses"https://zbmath.org/authors/?q=ai:dias.ulisses"Dias, Zanoni"https://zbmath.org/authors/?q=ai:dias.zanoniSummary: Genome rearrangements affect large stretches of genomes during evolution. One of the most studied genome rearrangements is the \textit{transposition}, which occurs when a sequence of genes is moved to another position inside the genome. Mathematical models have been used to estimate the evolutionary distance between two different genomes based on genome rearrangements. However, many of these models have focused only on the (order of the) genes of a genome, disregarding other important elements in it. Recently, researchers have shown that considering existing regions between each pair of genes, called \textit{intergenic regions}, can enhance the distance estimation in realistic data. In this work, we study the transposition distance between two genomes, but we also consider intergenic regions, a problem we name sorting permutations by intergenic transpositions (SbIT). We show that this problem is NP-hard and propose a 3.5-approximation algorithm for it.
For the entire collection see [Zbl 1496.92004].Heuristics for reversal distance between genomes with duplicated geneshttps://zbmath.org/1500.920582023-01-20T17:58:23.823708Z"Siqueira, Gabriel"https://zbmath.org/authors/?q=ai:siqueira.gabriel"Brito, Klairton Lima"https://zbmath.org/authors/?q=ai:brito.klairton-lima"Dias, Ulisses"https://zbmath.org/authors/?q=ai:dias.ulisses"Dias, Zanoni"https://zbmath.org/authors/?q=ai:dias.zanoniSummary: In comparative genomics, one goal is to find similarities between genomes of different organisms. Comparisons using genome features like genes, gene order, and regulatory sequences are carried out with this purpose in mind. Genome rearrangements are mutational events that affect large extensions of the genome. They are responsible for creating extant species with conserved genes in different positions across genomes. Close species -- from an evolutionary point of view -- tend to have the same set of genes or share most of them. When we consider gene order to compare two genomes, it is possible to use a parsimony criterion to estimate how close the species are. We are interested in the shortest sequence of genome rearrangements capable of transforming one genome into the other, which is named \textit{rearrangement distance}.
Reversal is one of the most studied genome rearrangements events. This event acts in a segment of the genome, inverting the position and possibly the orientation of genes in it. When the genome has no gene repetition, a common approach is to map it as a permutation such that each element represents a conserved block. When genomes have replicated genes, this mapping is usually performed using strings. The number of replicas depends on the organisms being compared, but in many scenarios, it tends to be small. In this work, we study the reversal distance between genomes with duplicated genes considering that the orientation of genes is unknown. We present three heuristics that use techniques like genetic algorithms and local search. We conduct experiments using a database of simulated genomes and compared our results with other algorithms from the literature.
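The reversal event described above is straightforward to state in code. A toy Python helper (illustrative only; signs model orientation, which the paper actually treats as unknown):

```python
def reversal(genome, i, j):
    """Apply a reversal to the segment genome[i..j] (inclusive):
    genes are reversed in order and flipped in orientation,
    with a negative sign encoding reversed orientation."""
    segment = [-g for g in reversed(genome[i:j + 1])]
    return genome[:i] + segment + genome[j + 1:]

# Reversing the middle segment of (+1, -4, -3, -2, +5)
# sorts it into the identity in a single step.
print(reversal([1, -4, -3, -2, 5], 1, 3))  # [1, 2, 3, 4, 5]
```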
For the entire collection see [Zbl 1496.92004].A randomized parallel algorithm for efficiently finding near-optimal universal hitting setshttps://zbmath.org/1500.920732023-01-20T17:58:23.823708Z"Ekim, Barış"https://zbmath.org/authors/?q=ai:ekim.baris"Berger, Bonnie"https://zbmath.org/authors/?q=ai:berger.bonnie"Orenstein, Yaron"https://zbmath.org/authors/?q=ai:orenstein.yaronSummary: As the volume of next generation sequencing data increases, an urgent need for algorithms to efficiently process the data arises. \textit{Universal hitting sets} (UHS) were recently introduced as an alternative to the central idea of minimizers in sequence analysis with the hopes that they could more efficiently address common tasks such as computing hash functions for read overlap, sparse suffix arrays, and Bloom filters. A UHS is a set of \(k\)-mers that hit every sequence of length \(L\), and can thus serve as indices to \(L\)-long sequences. Unfortunately, methods for computing small UHSs are not yet practical for real-world sequencing instances due to their serial and deterministic nature, which leads to long runtimes and high memory demands when handling typical values of \(k\) (e.g. \(k > 13\)). To address this bottleneck, we present two algorithmic innovations to significantly decrease runtime while keeping memory usage low: (i) we leverage advanced theoretical and architectural techniques to parallelize and decrease memory usage in calculating \(k\)-mer hitting numbers; and (ii) we build upon techniques from randomized set cover to select universal \(k\)-mers much faster. We implemented these innovations in PASHA, the first randomized parallel algorithm for generating near-optimal UHSs, which newly handles \(k > 13\). We demonstrate empirically that PASHA produces sets only slightly larger than those of serial deterministic algorithms; moreover, the set size is provably guaranteed to be within a small constant factor of the optimal size.
PASHA's runtime and memory usage improve on those of the current best algorithms by orders of magnitude. We expect our newly practical construction of UHSs to be adopted in many high-throughput sequence analysis pipelines.
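At toy scale the UHS definition itself is easy to check exhaustively. The following brute-force Python sketch (tiny parameters over a binary alphabet, unrelated to PASHA's actual algorithm) finds a minimum-size UHS for \(k=2\), \(L=3\):

```python
from itertools import combinations, product

def hits_all(kmers, k, L, alphabet="01"):
    """A set of k-mers is a universal hitting set for length L if
    every L-long string contains at least one of them."""
    return all(
        any("".join(s[i:i + k]) in kmers for i in range(L - k + 1))
        for s in product(alphabet, repeat=L)
    )

# Exhaustively search for a smallest UHS for k=2, L=3 over {0,1}.
all_kmers = ["".join(p) for p in product("01", repeat=2)]
best = next(
    set(c)
    for size in range(1, len(all_kmers) + 1)
    for c in combinations(all_kmers, size)
    if hits_all(set(c), 2, 3)
)
print(len(best), sorted(best))  # 3 ['00', '01', '11']
```

Real instances (\(k > 13\), four-letter alphabet) are of course far beyond such enumeration, which is exactly the gap PASHA addresses.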
For the entire collection see [Zbl 1496.92005].A hybrid dynamical systems perspective on reinforcement learning for cyber-physical systems: vistas, open problems, and challengeshttps://zbmath.org/1500.930442023-01-20T17:58:23.823708Z"Poveda, Jorge I."https://zbmath.org/authors/?q=ai:poveda.jorge-i"Teel, Andrew R."https://zbmath.org/authors/?q=ai:teel.andrew-rSummary: The next generation of autonomous cyber-physical systems will integrate a variety of heterogeneous computation, communication, and control algorithms. This integration will lead to closed-loop systems with highly intertwined interactions between the digital world and the physical world. For these systems, designing robust and optimal data-driven control algorithms necessitates fundamental breakthroughs at the intersection of different areas such as adaptive and learning-based control, optimal and robust control, hybrid dynamical systems theory, and network control, to name just a few. Motivated by this necessity, control techniques inspired by ideas of reinforcement learning have emerged as a promising paradigm that could potentially integrate most of the key desirable features. However, while significant results in reinforcement learning have been developed during the last decades, the literature is still missing a systematic framework for the design and analysis of reinforcement learning-based controllers that can safely and systematically integrate the intrinsic continuous-time and discrete-time dynamics that emerge in cyber-physical systems. Motivated by this limitation, and by recent theoretical frameworks developed for the analysis of hybrid systems, in this chapter we explore some vistas and open problems that could potentially be addressed by merging tools from reinforcement learning and hybrid dynamical systems theory, and which could have significant implications for the development of the next generation of autonomous cyber-physical systems.
For the entire collection see [Zbl 1492.49001].Observation-assisted heuristic synthesis of covert attackers against unknown supervisorshttps://zbmath.org/1500.930682023-01-20T17:58:23.823708Z"Lin, Liyong"https://zbmath.org/authors/?q=ai:lin.liyong"Tai, Ruochen"https://zbmath.org/authors/?q=ai:tai.ruochen"Zhu, Yuting"https://zbmath.org/authors/?q=ai:zhu.yuting"Su, Rong"https://zbmath.org/authors/?q=ai:su.rongSummary: In this work, we address the problem of synthesizing covert attackers in the setup where the model of the plant is available to the adversary, but the model of the supervisor is not. To compensate for the lack of knowledge of the supervisor, we assume that the adversary has recorded a (prefix-closed) finite set of observations of the runs of the closed-loop system, which can be used for assisting the synthesis. We present a heuristic algorithm for the synthesis of covert damage-reachable attackers, based on the model of the plant and the (finite) set of observations, by transforming the problem into an instance of the partial-observation supervisor synthesis problem. The heuristic algorithm developed in this paper may allow the adversary to synthesize covert attackers without having to know the model of the supervisor, which could be hard to obtain in practice. For simplicity, we shall only consider covert attackers that are able to carry out sensor replacement attacks and actuator disablement attacks.
The effectiveness of our approach is illustrated on a water tank example adapted from the literature.Tightly secure ring-LWE based key encapsulation with short ciphertextshttps://zbmath.org/1500.940162023-01-20T17:58:23.823708Z"Albrecht, Martin R."https://zbmath.org/authors/?q=ai:albrecht.martin-r"Orsini, Emmanuela"https://zbmath.org/authors/?q=ai:orsini.emmanuela"Paterson, Kenneth G."https://zbmath.org/authors/?q=ai:paterson.kenneth-g"Peer, Guy"https://zbmath.org/authors/?q=ai:peer.guy"Smart, Nigel P."https://zbmath.org/authors/?q=ai:smart.nigel-pSummary: We provide a tight security proof for an IND-CCA Ring-LWE based Key Encapsulation Mechanism that is derived from a generic construction of \textit{A. W. Dent} [Lect. Notes Comput. Sci. 2898, 133--151 (2003; Zbl 1123.94336)].
Such a tight reduction is not known for the generic construction. The resulting scheme has shorter ciphertexts than can be achieved with other generic constructions of Dent or by using the well-known Fujisaki-Okamoto constructions.
Our tight security proof is obtained by reducing to the security of the underlying Ring-LWE problem, avoiding an intermediate reduction to a CPA-secure encryption scheme. The proof technique may be of interest for other schemes based on LWE and Ring-LWE.
For the entire collection see [Zbl 1493.68010].Non-interactive provably secure attestations for arbitrary RSA prime generation algorithmshttps://zbmath.org/1500.940182023-01-20T17:58:23.823708Z"Benhamouda, Fabrice"https://zbmath.org/authors/?q=ai:benhamouda.fabrice"Ferradi, Houda"https://zbmath.org/authors/?q=ai:ferradi.houda"Géraud, Rémi"https://zbmath.org/authors/?q=ai:geraud.remi"Naccache, David"https://zbmath.org/authors/?q=ai:naccache.davidSummary: RSA public keys are central to many cryptographic applications; hence their validity is of primary concern to the scrupulous cryptographer. The most relevant properties of an RSA public key \((n, e)\) depend on the factors of \(n\): are they properly generated primes? are they large enough? is \(e\) co-prime with \(\phi (n)\)? etc. And of course, it is out of the question to reveal \(n\)'s factors.
Generic non-interactive zero-knowledge (NIZK) proofs can be used to prove such properties. However, NIZK proofs are not practical at all. For some very specific properties, specialized proofs exist, but such ad hoc proofs are naturally hard to generalize.
This paper proposes a new type of general-purpose compact non-interactive proofs, called attestations, allowing the key generator to convince any third party that \(n\) was properly generated. The proposed construction applies to any prime generation algorithm, and is provably secure in the Random Oracle Model.
As a typical implementation instance, for 138-bit security, verifying or generating an attestation requires \(k=1024\) prime generations. For this instance, each processed message will later need to be signed or encrypted 14 times by the final users of the attested moduli.
For the entire collection see [Zbl 1493.68010].Security analysis of Even-Mansour structure hash functionshttps://zbmath.org/1500.940212023-01-20T17:58:23.823708Z"Chen, Shiwei"https://zbmath.org/authors/?q=ai:chen.shiwei"Cui, Ting"https://zbmath.org/authors/?q=ai:cui.ting"Jin, Chenhui"https://zbmath.org/authors/?q=ai:jin.chenhuiSummary: In this paper, we mainly focus on the security of Even-Mansour structure hash functions, including preimage attack resistance and multi-block collision attack resistance.
Firstly, we focus on the Even-Mansour structure hash function with two iterations. Based on the permutation used in the Even-Mansour structure hash function, we construct two new functions \(f_1\) and \(f_2\), and find partial input-output invariants of the function \(f_1\). Then, using these partial input-output invariants and meet-in-the-middle techniques, we present a preimage attack on the Even-Mansour structure hash function with two iterations, with time complexity \(2^{a(2^a - 1)} + 2^a + 2^{n - 2a}\) functional operations of \(f_1\) or \(f_2\) and memory of \(2^a\) \(a\)-bit values, where \(a 2^a \le n\) and \(n\) is the size of the hash value.
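The meet-in-the-middle step invoked here can be illustrated generically: to find a pair \((x, y)\) with \(f_1(x) = f_2(y)\), one tabulates one side in a hash table and looks up the other, trading memory for time. A toy Python model (hypothetical 8-bit stand-ins for \(f_1\) and \(f_2\), not the Streebog-related attack itself):

```python
def meet_in_the_middle(f, g, xs, ys):
    """Find (x, y) with f(x) == g(y) using O(|xs|) memory and
    O(|xs| + |ys|) evaluations instead of trying all |xs|*|ys| pairs."""
    table = {f(x): x for x in xs}
    for y in ys:
        if g(y) in table:
            return table[g(y)], y
    return None

# Hypothetical 8-bit 'functions' standing in for f1 and f2.
f1 = lambda x: (31 * x + 7) % 256
f2 = lambda y: (17 * y + 3) % 256
x, y = meet_in_the_middle(f1, f2, range(256), range(256))
assert f1(x) == f2(y)
```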
Secondly, we extend the Even-Mansour structure hash function to one with an arbitrary number of iterations. Utilizing the property that both the beginning and the end of every iteration in the Even-Mansour structure XOR in the message or a transform of the message, we construct many related chaining values in each iteration, so that the number of final chaining values equals the product of the numbers of output chaining values of the individual iterations. We thereby propose a multi-block collision attack on Even-Mansour structure hash functions with time complexity \(t2^{\frac{s}{2t}}\) queries of the \(F\) permutation and memory complexity \(O(2^{s/2})\), where \(t\) is the number of blocks of the collision message and \(s\) is the size of the truncated hash value.
For the entire collection see [Zbl 1487.68013].A lattice-based certificateless public key encryption with equality test in standard modelhttps://zbmath.org/1500.940242023-01-20T17:58:23.823708Z"Dung Hoang Duong"https://zbmath.org/authors/?q=ai:dung-hoang-duong."Susilo, Willy"https://zbmath.org/authors/?q=ai:susilo.willy"Minh Kim Bui"https://zbmath.org/authors/?q=ai:minh-kim-bui."Thanh Xuan Khuc"https://zbmath.org/authors/?q=ai:thanh-xuan-khuc.Summary: Certificateless public key encryption (\textsf{CL-PKE}) solves the problems of establishing a public-key infrastructure for traditional public key encryption and resolving key escrow for identity-based encryption. Equality test is an extremely useful property that enables checking whether two ciphertexts encrypt the same message. \textit{H. Qu} et al. [Inf. Sci. 462, 76--92 (2018; Zbl 1440.94075)] introduced the notion of certificateless public key encryption with equality test (\textsf{CL-PKEET}), together with four types of adversaries, which solves the certificate management and key escrow problems of public key encryption with equality test (\textsf{PKEET}) and identity-based encryption with equality test (\textsf{IBEET}), and proposed the first \textsf{CL-PKEET} scheme, based on the Bilinear Diffie-Hellman assumption in the random oracle model. In this paper, we propose the first lattice-based \textsf{CL-PKEET} in the standard model whose security is reduced to the hardness of the learning with errors problem. In particular, we prove that our scheme is secure against the two types of selective-identity adversaries introduced by Qu et al. [loc. cit.].
For the entire collection see [Zbl 1497.94002].Streebog compression function as PRF in secret-key settingshttps://zbmath.org/1500.940322023-01-20T17:58:23.823708Z"Kiryukhin, V. A."https://zbmath.org/authors/?q=ai:kiryukhin.v-aSummary: The security of many keyed hash-based cryptographic constructions (such as HMAC) depends on the fact that the underlying compression function \(\mathsf{g}(H,M)\) is a pseudorandom function (PRF). This paper presents key-recovery algorithms for 7 rounds (of 12) of the Streebog compression function. Two cases are considered, depending on which input serves as the secret key: the previous state \(H\) or the message block \(M\). The proposed methods implicitly show that the Streebog compression function has a large security margin as a PRF in the above-mentioned secret-key settings.Efficient arithmetic in (pseudo-)Mersenne prime order fieldshttps://zbmath.org/1500.940452023-01-20T17:58:23.823708Z"Nath, Kaushik"https://zbmath.org/authors/?q=ai:nath.kaushik"Sarkar, Palash"https://zbmath.org/authors/?q=ai:sarkar.palashSummary: Elliptic curve cryptography is based upon elliptic curves defined over finite fields. Operations over such elliptic curves require arithmetic over the underlying field. In particular, fast implementations of multiplication and squaring over the finite field are required for performing efficient elliptic curve cryptography. The present work considers the problem of obtaining efficient algorithms for field multiplication and squaring. From a theoretical point of view, we present a number of algorithms for multiplication/squaring and reduction which are appropriate for different settings. Our algorithms collect together and generalize ideas which are scattered across various papers and codes. At the same time, we also introduce new ideas to improve upon existing works. A key theoretical feature of our work is that we provide formal statements and detailed proofs of correctness of the different reduction algorithms that we describe.
On the implementation side, a total of fourteen primes are considered, covering all previously proposed cryptographically relevant (pseudo-)Mersenne prime order fields at various security levels. For each of these fields, we provide 64-bit assembly implementations of the relevant multiplication and squaring algorithms targeted towards two different modern Intel architectures. We were able to find previous 64-bit implementations for six of the fourteen primes considered in this work. On the Haswell and Skylake processors of Intel, for all six primes where previous implementations are available, our implementations outperform them.Verifiable inner product encryption schemehttps://zbmath.org/1500.940482023-01-20T17:58:23.823708Z"Soroush, Najmeh"https://zbmath.org/authors/?q=ai:soroush.najmeh"Iovino, Vincenzo"https://zbmath.org/authors/?q=ai:iovino.vincenzo"Rial, Alfredo"https://zbmath.org/authors/?q=ai:rial.alfredo"Roenne, Peter B."https://zbmath.org/authors/?q=ai:ronne.peter-b"Ryan, Peter Y. A."https://zbmath.org/authors/?q=ai:ryan.peter-y-aSummary: In the standard setting of functional encryption (FE), we assume both the Central Authority (CA) and the encryptors to run their respective algorithms faithfully. \textit{S. Badrinarayanan} et al. [Lect. Notes Comput. Sci. 10032, 557--587 (2016; Zbl 1407.94078)] proposed the concept of verifiable FE, which essentially guarantees that dishonest encryptors and authorities, even when colluding together, are not able to generate ciphertexts and tokens that give ``inconsistent'' results. They also provide a compiler turning any perfectly correct FE into a verifiable FE, but do not give efficient constructions.
In this paper we improve on this situation by considering Inner-Product Encryption (IPE), which is a special case of functional encryption and a primitive that has attracted wide interest from both practitioners and researchers in the last decade. Specifically, we construct the first efficient verifiable IPE (VIPE) scheme according to the inner-product functionality of \textit{J. Katz} et al. [ibid. 4965, 146--162 (2008; Zbl 1149.94323)]. To instantiate the general construction of Badrinarayanan et al., we need to overcome several additional challenges. In particular, we construct the first efficient perfectly correct IPE scheme. Our VIPE satisfies unconditional verifiability, whereas its privacy relies on the DLin assumption.
For the entire collection see [Zbl 1496.94004].On the stability of periodic binary sequences with zone restrictionhttps://zbmath.org/1500.940492023-01-20T17:58:23.823708Z"Su, Ming"https://zbmath.org/authors/?q=ai:su.ming"Wang, Qiang"https://zbmath.org/authors/?q=ai:wang.qiang.1|wang.qiangSummary: The traditional global stability measure for sequences is hard to determine because of the large search space. We propose the \(k\)-error linear complexity with a zone restriction for measuring the local stability of sequences. For several classes of sequences, we demonstrate that the \(k\)-error linear complexity is identical to the \(k\)-error linear complexity within a zone, while the length of the zone is much smaller than the whole period when the \(k\)-error linear complexity is large. These sequences have periods \(2^n\), or \(2^v r\) (\(r\) an odd prime and 2 primitive modulo \(r\)), or \(2^v p_1^{s_1} \cdots p_n^{s_n}\) (\(p_i\) an odd prime and 2 primitive modulo \(p_i^2\), where \(1\le i \le n\)), respectively. In particular, we completely determine the spectrum of 1-error linear complexity with any zone length for an arbitrary \(2^n\)-periodic binary sequence.An improvement of a key exchange protocol relying on polynomial mapshttps://zbmath.org/1500.940512023-01-20T17:58:23.823708Z"Suzuki, Keita"https://zbmath.org/authors/?q=ai:suzuki.keita"Nuida, Koji"https://zbmath.org/authors/?q=ai:nuida.kojiSummary: \textit{K. Akiyama} et al. [ibid. 11, Article ID 1950003, 11 p. (2019; Zbl 1467.94023)] proposed a post-quantum key exchange protocol that is based on the hardness of solving a system of multivariate nonlinear polynomial equations but has a design strategy different from ordinary multivariate cryptography. Their protocol has two versions, an original one and a modified one, where the modified one trades strengthened security for a nonzero error probability in establishing a common key.
In fact, the evaluation in their paper suggests that the probability of failing to establish a common key with the modified protocol under the proposed parameter set is impractically high. In this paper, we improve the success probability of Akiyama et al.'s modified key exchange protocol significantly while preserving its security, by restricting each component of the correct common key from the whole coefficient field to a small subset of it. We give theoretical and experimental evaluations showing that our proposed parameter set for our protocol is expected to achieve both a failure probability of \(2^{-120}\) and a \(128\)-bit security level.Privacy-preserving and yet robust collaborative filtering recommender as a servicehttps://zbmath.org/1500.940522023-01-20T17:58:23.823708Z"Tang, Qiang"https://zbmath.org/authors/?q=ai:tang.qiangSummary: In this paper, we provide a general system structure for latent factor based collaborative filtering recommenders by formulating them into model training and prediction computing stages. Aiming at pragmatic solutions, we first show how to construct a privacy-preserving and yet robust model training stage based on existing solutions. Then, we propose two cryptographic protocols to realize a privacy-preserving prediction computing stage, depending on whether or not an extra proxy is involved.
For the entire collection see [Zbl 1497.94002].Updatable all-but-one dual projective hashing and its applicationshttps://zbmath.org/1500.940612023-01-20T17:58:23.823708Z"Zhang, Kai"https://zbmath.org/authors/?q=ai:zhang.kai.5|zhang.kai.3|zhang.kai.4|zhang.kai.1|zhang.kai.2"Jiang, Zhe"https://zbmath.org/authors/?q=ai:jiang.zhe"Gong, Junqing"https://zbmath.org/authors/?q=ai:gong.junqing"Qian, Haifeng"https://zbmath.org/authors/?q=ai:qian.haifengSummary: Dual projective hashing is an extension of Cramer-Shoup projective hashing, which implies lossy trapdoor functions (LTDFs) and deterministic PKE schemes secure with respect to hard-to-invert auxiliary input. In this paper, we introduce the notion of updatable all-but-one dual projective hashing (UDPH) based on the all-but-one variant of dual projective hashing, which allows us to investigate continuous leakage with invisible key updates in the same context. In particular,
\par i) we give a general construction of leakage-resilient all-but-one LTDFs via UDPH, which yields higher efficiency than existing direct leakage-resilient all-but-one LTDF constructions based on MDDH and SXDH. Concretely, our generic framework can be instantiated with the \(k\)-LIN, DCR, QR and LWE assumptions in the standard model.
\par ii) we present a modular framework for leakage-resilient deterministic PKEs with hard-to-invert auxiliary input, which is proven secure under the newly introduced continuous-leakage-resilient, indistinguishability-based strong-privacy security model with invisible key update. Compared with the known MDDH/SXDH-based schemes, our constructions can be instantiated with more widely accepted assumptions, including \(k\)-LIN, DCR, QR and LWE.
For the entire collection see [Zbl 1487.68013].An optimized inner product argument with more application scenarioshttps://zbmath.org/1500.940622023-01-20T17:58:23.823708Z"Zhang, Zongyang"https://zbmath.org/authors/?q=ai:zhang.zongyang"Zhou, Zibo"https://zbmath.org/authors/?q=ai:zhou.zibo"Li, Weihan"https://zbmath.org/authors/?q=ai:li.weihan"Tao, Hongyu"https://zbmath.org/authors/?q=ai:tao.hongyuSummary: The inner product argument is an effective tool to reduce communication complexity in many cryptographic protocols. \textit{J. Bootle} et al. [Lect. Notes Comput. Sci. 9666, 327--357 (2016; Zbl 1369.94520)] presented an inner product argument whose statement includes two vector commitments to two vectors and asserts that the inner product of the two vectors equals a public scalar. \textit{B. Bünz} et al. [ibid. 12551, 1--18 (2020; Zbl 1485.94132)] then presented an inner product argument whose statement includes only one vector commitment to two vectors. In this paper, we first summarize the scenarios in which the inner product arguments of Bootle et al. [loc. cit.] and Bünz et al. [loc. cit.] are used. Then we propose and implement an improved inner product argument for the same statement as Bootle et al. [loc. cit.]. Our argument has a lower communication complexity of \(4\log_2n\), an improvement of about 30\% when \(n=8192\). Moreover, as most existing inner product argument protocols have a recursive structure, we determine the most appropriate round at which to stop the recursion, which yields a better communication complexity.
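The recursive halving underlying such inner product arguments rests on a simple folding identity: after combining the two halves of each vector with a challenge \(x\), the new inner product equals the old one plus two cross terms that the prover commits to, so each round halves the vector length and \(\log_2 n\) rounds suffice. The following toy check of that identity over a prime field illustrates only this generic recursion, not the paper's optimized protocol; the modulus and all names are illustrative:

```python
# Toy check of the folding identity behind recursive inner product arguments:
# with a' = x*a_lo + x^{-1}*a_hi and b' = x^{-1}*b_lo + x*b_hi, one has
# <a', b'> = <a, b> + x^2 * <a_lo, b_hi> + x^{-2} * <a_hi, b_lo>  (mod q).
q = 2**61 - 1  # a prime modulus, chosen only for the demo

def inner(u, v):
    return sum(ui * vi for ui, vi in zip(u, v)) % q

def fold(vec, w_lo, w_hi):
    h = len(vec) // 2
    return [(w_lo * lo + w_hi * hi) % q for lo, hi in zip(vec[:h], vec[h:])]

a = [3, 1, 4, 1, 5, 9, 2, 6]
b = [2, 7, 1, 8, 2, 8, 1, 8]
x = 123456789
x_inv = pow(x, -1, q)            # modular inverse (Python 3.8+)

h = len(a) // 2
L = inner(a[:h], b[h:])          # left cross term, committed by the prover
R = inner(a[h:], b[:h])          # right cross term
a2 = fold(a, x, x_inv)           # a' = x*a_lo + x^{-1}*a_hi
b2 = fold(b, x_inv, x)           # b' = x^{-1}*b_lo + x*b_hi
lhs = inner(a2, b2)
rhs = (inner(a, b) + x * x % q * L + x_inv * x_inv % q * R) % q
assert lhs == rhs                # half-length vectors, same inner product relation
```

The communication cost per round is the two cross-term commitments, which is where bounds of the form \(c\log_2 n\) come from.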
For the entire collection see [Zbl 1487.68013].A pairing-less identity-based blind signature with message recovery scheme for cloud-assisted serviceshttps://zbmath.org/1500.940712023-01-20T17:58:23.823708Z"Kumar, Mahender"https://zbmath.org/authors/?q=ai:kumar.mahender"Chand, Satish"https://zbmath.org/authors/?q=ai:chand.satishSummary: The rapid growth of big data forces many organizations to shift their data and services, such as digital rights management, e-payment, and e-voting systems, to the cloud. In such cloud-assisted services, the blind signature scheme is one of the cryptographic tools that provide data integrity and user anonymity. It allows a user to ask the signer to sign a message without disclosing any information about its content to the signer. Several blind signature schemes have been proposed, but due to their expensive computation and bandwidth costs they are impractical for cloud-assisted as well as Internet-based environments. In this paper, we propose a new provably secure identity-based blind signature scheme with message recovery (IDBS-MR) using elliptic curve cryptography. The proposed IDBS-MR scheme does not transmit the message with the signature; instead, the message is recovered during the verification round, so the scheme has the smallest message-signature length. The security analysis shows that the proposed IDBS-MR scheme is secure against existential forgery under adaptive chosen message and ID attacks (EF-ID-CMA) in the random oracle model (ROM), under the assumption that the ECDL problem is hard, and achieves the blindness property. The performance analysis shows that our scheme is efficient compared to related existing schemes.
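The blinding idea at the heart of such schemes can be illustrated with a classical Schnorr-style blind signature: the user randomizes the signer's commitment and challenge, so the signer's view is statistically independent of the final signature. The sketch below is emphatically not the IDBS-MR scheme (it has no identity-based keys and no message recovery) and uses a tiny multiplicative group that is far too small for security; all parameters and names are illustrative:

```python
import hashlib

# Toy Schnorr-style blind signature illustrating blinding only; NOT the
# paper's IDBS-MR scheme, and the group below is insecure by design.
p, q, g = 23, 11, 2            # g has order q in Z_p^* (2^11 ≡ 1 mod 23)
x = 7                          # signer's secret key
y = pow(g, x, p)               # signer's public key

def H(msg: bytes, R: int) -> int:
    return int.from_bytes(hashlib.sha256(msg + bytes([R])).digest(), "big") % q

# Signer: commit.
r = 5
R = pow(g, r, p)

# User: blind the commitment and the challenge with random a, b.
a, b = 3, 9
R_blind = (R * pow(g, a, p) * pow(y, b, p)) % p
c_blind = H(b"hello", R_blind)     # the challenge the user actually wants signed
c = (c_blind + b) % q              # blinded challenge sent to the signer

# Signer: respond without ever seeing the message or R_blind.
s = (r + c * x) % q

# User: unblind the response.
s_final = (s + a) % q

# Anyone can verify: g^{s'} == R' * y^{c'}.
assert pow(g, s_final, p) == (R_blind * pow(y, c_blind, p)) % p
```

Since the signer sees only \((c, s)\) while verifiers see \((c_{\text{blind}}, s_{\text{final}})\), the random offsets \(a, b\) prevent the signer from linking a signing session to the resulting signature, which is the blindness property the abstract refers to.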
For the entire collection see [Zbl 1497.94002].