Recent zbMATH articles in MSC 65Fhttps://zbmath.org/atom/cc/65F2024-02-28T19:32:02.718555ZWerkzeugPebbling game and alternative basis for high performance matrix multiplicationhttps://zbmath.org/1527.150012024-02-28T19:32:02.718555Z"Schwartz, Oded"https://zbmath.org/authors/?q=ai:schwartz.oded"Vaknin, Noa"https://zbmath.org/authors/?q=ai:vaknin.noaSummary: Matrix multiplication is one of the most extensively used kernels in scientific computing. Although subcubic algorithms exist, most high performance implementations are based on the classical \(\Theta(n^3)\) matrix multiplication. Designing an algorithm that obtains even modest improvements in performance over existing implementations requires carefully addressing challenges such as reducing computation costs, communication costs, and memory footprint. We provide the first high performance general matrix-matrix multiplication that utilizes the alternative basis method on Strassen's algorithm. We reduce the basis transformation overheads and decrease the memory footprint of the bilinear phase by using the pebbling game optimization scheme, consequently improving both arithmetic and communication costs. Our algorithm outperforms DGEMM on feasible matrix dimensions starting at \(n=96\).
It obtains a speedup that increases with matrix dimension, up to nearly \(\times 2\) when running sequentially, and even larger speedups for certain matrix dimensions when running in parallel.Approximate completely positive semidefinite factorizations and their rankshttps://zbmath.org/1527.150082024-02-28T19:32:02.718555Z"Abbasi, Paria"https://zbmath.org/authors/?q=ai:abbasi.paria"Klingler, Andreas"https://zbmath.org/authors/?q=ai:klingler.andreas"Netzer, Tim"https://zbmath.org/authors/?q=ai:netzer.timSummary: In this paper we show the existence of approximate \textit{completely positive semidefinite (cpsd)} factorizations with a cpsd-rank bounded above (almost) independently of the cpsd-rank of the initial matrix. This is particularly relevant since the cpsd-rank of a matrix cannot, in general, be upper bounded by a function only depending on its size. For this purpose, we make use of the Approximate Carathéodory Theorem in order to construct an approximate matrix with a low-rank Gram representation. We then employ the Johnson-Lindenstrauss Lemma to reduce the dependence of the cpsd-rank on the size to logarithmic.Localization sets for Pareto eigenvalues with applicationshttps://zbmath.org/1527.150152024-02-28T19:32:02.718555Z"He, Jun"https://zbmath.org/authors/?q=ai:he.jun"Liu, Yanmin"https://zbmath.org/authors/?q=ai:liu.yanmin"Shen, Xiaowei"https://zbmath.org/authors/?q=ai:shen.xiaoweiSummary: Two localization sets for Pareto eigenvalues of matrices are established to provide some checkable sufficient conditions for the copositivity of matrices, which also answer the open question (c) in [\textit{A. Seeger}, Linear Algebra Appl. 292, No. 1--3, 1--14 (1999; Zbl 1016.90067)].
Finally, numerical experiments are reported to show the efficiency of the proposed localization sets.Generalized matrix nearness problemshttps://zbmath.org/1527.150172024-02-28T19:32:02.718555Z"Li, Zihao"https://zbmath.org/authors/?q=ai:li.zihao"Lim, Lek-Heng"https://zbmath.org/authors/?q=ai:lim.lek-hengSummary: We show that the global minimum solution of \(\Vert A-BXC\Vert\) can be found in closed form with singular value decompositions and generalized singular value decompositions for a variety of constraints on \(X\) involving rank, norm, symmetry, two-sided product, and prescribed eigenvalue. This extends the solution of Friedland-Torokhti for the generalized rank-constrained approximation problem to other constraints and provides an alternative solution for rank constraint in terms of singular value decompositions. For more complicated constraints on \(X\) involving structures such as Toeplitz, Hankel, circulant, nonnegativity, stochasticity, positive semidefiniteness, prescribed eigenvector, etc., we prove that a simple iterative method is linearly and globally convergent to the global minimum solution.A note on numerical ranges of tensorshttps://zbmath.org/1527.150242024-02-28T19:32:02.718555Z"Chandra Rout, Nirmal"https://zbmath.org/authors/?q=ai:rout.nirmal-chandra"Panigrahy, Krushnachandra"https://zbmath.org/authors/?q=ai:panigrahy.krushnachandra"Mishra, Debasisha"https://zbmath.org/authors/?q=ai:mishra.debasisha\textit{R. Ke} et al. [Linear Algebra Appl. 508, 100--132 (2016; Zbl 1346.15025)] introduced tensor numerical ranges using tensor inner products and tensor norms via \(k\)-mode product. The authors study the numerical range and the numerical radius for even-order square tensors via the Einstein product, and establish the convexity of the numerical range. They also develop algorithms to compute the numerical ranges of tensors, thus designing very efficient algorithms for the calculation of their eigenvalues. 
Finally, they investigate some properties of the numerical range of the Moore-Penrose inverse of a tensor.
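For intuition in the matrix case (to which even-order tensors reduce under the standard matrix unfolding of the Einstein product), the numerical range can be traced with the classical rotation method: for each angle \(\theta\), the top eigenvector of the Hermitian part of \(e^{i\theta}A\) yields a boundary point. The following NumPy sketch is illustrative only; the discretization and names are assumptions, not the authors' algorithm.

```python
import numpy as np

def numerical_range_boundary(A, n_angles=360):
    """Approximate boundary points of the numerical range (field of values)
    of a square matrix A.  For each rotation angle theta, the eigenvector of
    the largest eigenvalue of the Hermitian part of e^{i theta} A gives a
    boundary point z = x* A x (x is unit-norm from eigh)."""
    pts = []
    for theta in np.linspace(0.0, 2 * np.pi, n_angles, endpoint=False):
        B = np.exp(1j * theta) * A
        H = (B + B.conj().T) / 2          # Hermitian part of the rotated matrix
        _, V = np.linalg.eigh(H)          # eigenvalues ascending
        x = V[:, -1]                      # eigenvector of the largest eigenvalue
        pts.append(x.conj() @ A @ x)
    return np.array(pts)

# For a Hermitian matrix the numerical range is the real interval
# [lambda_min, lambda_max]; here that interval is [0, 3].
A = np.diag([0.0, 1.0, 3.0]).astype(complex)
pts = numerical_range_boundary(A)
```

By convexity (which the authors establish in the tensor setting as well), the full numerical range is the convex hull of these boundary points.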
Reviewer: Mohammad Sal Moslehian (Mashhad)On an algorithm converging to hyperstochastic tensorshttps://zbmath.org/1527.150252024-02-28T19:32:02.718555Z"Hardt, Will"https://zbmath.org/authors/?q=ai:hardt.will"Yin, John Bohan"https://zbmath.org/authors/?q=ai:yin.john-bohanFrom the text: In this letter, we wish to clarify the scope of a procedure due to \textit{A. Shashua} et al. [in: Computer vision - ECCV 2006. Lecture notes in computer science, Vol. 3954. Berlin, Heidelberg: Springer; 595--608 (2006)] for generating hyperstochastic tensors. Specifically, we will explain that the convergence of their
algorithm to a hyperstochastic tensor has only been established for inputs which are nonnegative tensors with no zero slices and positive permanent, and we will show by example
that this convergence is not guaranteed without the assumption of positive permanent.Group-invariant tensor train networks for supervised learninghttps://zbmath.org/1527.150272024-02-28T19:32:02.718555Z"Sprangers, Brent"https://zbmath.org/authors/?q=ai:sprangers.brent"Vannieuwenhoven, Nick"https://zbmath.org/authors/?q=ai:vannieuwenhoven.nickSummary: Invariance has recently proven to be a powerful inductive bias in machine learning models. One such class of predictive or generative models are tensor networks. We introduce a new numerical algorithm to construct a basis of tensors that are invariant under the action of normal matrix representations of an arbitrary finite group. This method can be up to several orders of magnitude faster than previous approaches. The group-invariant tensors are then combined into a group-invariant tensor train network, which can be used as a supervised machine learning model. We applied this model to a protein binding classification problem, taking into account problem-specific invariances, and obtained prediction accuracy in line with state-of-the-art deep learning approaches.Semidefinite relaxation methods for tensor absolute value equationshttps://zbmath.org/1527.150282024-02-28T19:32:02.718555Z"Zhou, Anwa"https://zbmath.org/authors/?q=ai:zhou.anwa"Liu, Kun"https://zbmath.org/authors/?q=ai:liu.kun"Fan, Jinyan"https://zbmath.org/authors/?q=ai:fan.jinyanSummary: In this paper, we consider the tensor absolute value equations (TAVEs). When one tensor is row diagonal with odd order, we show that the TAVEs can be reduced to an algebraic equation; when it is row diagonal and nonsingular with even order, we prove that the TAVEs is equivalent to a polynomial complementary problem. When no tensor is row diagonal, we formulate the TAVEs equivalently as polynomial optimization problems in two different ways. Each of them can be solved by Lasserre's hierarchy of semidefinite relaxations. The finite convergence properties are also discussed. 
Numerical experiments show the efficiency of the proposed methods.Least squares Hermitian problem of a kind of quaternion tensor equationhttps://zbmath.org/1527.150352024-02-28T19:32:02.718555Z"Jiang, Hua"https://zbmath.org/authors/?q=ai:jiang.hua"Yuan, Shi-Fang"https://zbmath.org/authors/?q=ai:yuan.shifang"Cao, Yu-Zhe"https://zbmath.org/authors/?q=ai:cao.yu-zhe(no abstract)Introducing the class of semidoubly stochastic matrices: a novel scaling approach for rectangular matriceshttps://zbmath.org/1527.150392024-02-28T19:32:02.718555Z"Knight, Philip A."https://zbmath.org/authors/?q=ai:knight.philip-a"le Gorrec, Luce"https://zbmath.org/authors/?q=ai:le-gorrec.luce"Mouysset, Sandrine"https://zbmath.org/authors/?q=ai:mouysset.sandrine"Ruiz, Daniel"https://zbmath.org/authors/?q=ai:ruiz.danielSummary: It is easy to verify that if \(\mathbf{A}\) is a doubly stochastic matrix, then both its normal equations \(\mathbf{AA}^T\) and \(\mathbf{A}^T\mathbf{A}\) are also doubly stochastic, but the converse is not true. In this paper, we introduce and analyze the complete class of nonnegative matrices whose normal equations are doubly stochastic. This class contains and extends the class of doubly stochastic matrices to the rectangular case. In particular, we characterize these matrices in terms of their row and column sums and provide results regarding their nonzero structure. We then consider the diagonal equivalence of any rectangular nonnegative matrix to a matrix of this new class, and we identify the properties for such a diagonal equivalence to exist. To this end, we present a scaling algorithm and establish the conditions for its convergence.
We also provide numerical experiments to highlight the behavior of the algorithm in the general case.Perturbation upper bounds for singular subspaces with a kind of heteroskedastic noise and its application in clusteringhttps://zbmath.org/1527.620402024-02-28T19:32:02.718555Z"Zhang, Zhentao"https://zbmath.org/authors/?q=ai:zhang.zhentao"Wang, Jinru"https://zbmath.org/authors/?q=ai:wang.jinru(no abstract)A probabilistic linear solver based on a multilevel Monte Carlo methodhttps://zbmath.org/1527.650012024-02-28T19:32:02.718555Z"Acebrón, Juan A."https://zbmath.org/authors/?q=ai:acebron.juan-aSummary: We describe a new Monte Carlo method based on a multilevel method for computing the action of the resolvent matrix over a vector. The method is based on the numerical evaluation of the Laplace transform of the matrix exponential, which is computed efficiently using a multilevel Monte Carlo method. Essentially, it requires generating suitable random paths which evolve through the indices of the matrix according to the probability law of a continuous-time Markov chain governed by the associated Laplacian matrix. The convergence of the proposed multilevel method has been discussed, and several numerical examples were run to test the performance of the algorithm. These examples concern the computation of some metrics of interest in the analysis of complex networks, and the numerical solution of a boundary-value problem for an elliptic partial differential equation. 
In addition, the algorithm was conveniently parallelized, and the scalability analyzed and compared with the results of other existing Monte Carlo methods for solving linear algebra systems.Error estimators and their analysis for CG, Bi-CG and GMREShttps://zbmath.org/1527.650202024-02-28T19:32:02.718555Z"Jain, P."https://zbmath.org/authors/?q=ai:jain.prashant-k|jain.pooja|jain.prabhav|jain.parag|jain.pramod-k|jain.priyanka|jain.parul|jain.priti|jain.pankaj.1|jain.pankaj|jain.padam-j|jain.pushp|jain.pragya|jain.pawan-kumar|jain.pragati|jain.pallavi|jain.pravin-c|jain.prateek|jain.praveen|jain.pushpa|jain.praving-c|jain.piyush|jain.pramila|jain.pushpendra-k|jain.payal|jain.prachi|jain.prabha|jain.paras|jain.prakash-c|jain.padam-c|jain.priyank|jain.praphula-kumar|jain.preeti|jain.pankhuri"Manglani, K."https://zbmath.org/authors/?q=ai:manglani.k"Venkatapathi, M."https://zbmath.org/authors/?q=ai:venkatapathi.murugesanSummary: The demands of accuracy in measurements and engineering models today render the condition number of problems larger. While a corresponding increase in the precision of floating point numbers ensured stable computing, the uncertainty in convergence when using residue as a stopping criterion has increased. We present an analysis of the uncertainty in convergence when using relative residue as a stopping criterion for the iterative solution of linear systems, and the resulting over/under computation for a given tolerance in error. This shows that error estimation is significant for an efficient or accurate solution even when the condition number of the matrix is not large. An \(\mathcal{O}(1)\) error estimator for iterations of the CG algorithm was proposed more than two decades ago. Recently, an \(\mathcal{O}(k^2)\) error estimator was described for the GMRES algorithm which allows for non-symmetric linear systems as well, where \(k\) is the iteration number. We suggest a minor modification in this GMRES error estimation for increased stability.
In this work, we also propose an \(\mathcal{O}(n)\) error estimator for the \(A\)-norm and \(l_2\)-norm of the error vector in the Bi-CG algorithm. The robust performance of these estimates as a stopping criterion results in increased savings and accuracy in computation as the condition number and size of problems increase.An asynchronous parallel method for linear systemshttps://zbmath.org/1527.650212024-02-28T19:32:02.718555Z"You, Zhaoyong"https://zbmath.org/authors/?q=ai:you.zhaoyong"Wang, Chuanglong"https://zbmath.org/authors/?q=ai:wang.chuanglongSummary: In this paper, we present a new asynchronous parallel method for solving the linear system of equations \(Ax=b\), where \(A\) is a nonsingular H-matrix. The method combines the usual asynchronous method
[\textit{G. M. Baudet}, J. Assoc. Comput. Mach. 25, 226--244 (1978; Zbl 0372.68015)]
and multisplitting of matrices
[\textit{D. P. O'Leary} and \textit{R. E. White}, SIAM J. Algebraic Discrete Methods 6, 630--640 (1985; Zbl 0582.65018)].
Convergence is established, the rate of convergence is discussed, and the region of the relaxation parameter \(\omega\) is given.
For the entire collection see [Zbl 0847.00053].Faster deterministic pseudoinverse-free block extension of Motzkin method for large consistent linear systemshttps://zbmath.org/1527.650222024-02-28T19:32:02.718555Z"Zhao, Jing"https://zbmath.org/authors/?q=ai:zhao.jing.6"Zhang, Jianhua"https://zbmath.org/authors/?q=ai:zhang.jianhua.1Summary: Recently, a fast deterministic block Kaczmarz (FDBK) method was presented which uses a greedy criterion for the row selection and involves pseudoinverse-free computation. In this work, we introduce a maximum residual rule into FDBK and develop a new block Kaczmarz method, which can also be considered a fast deterministic pseudoinverse-free block extension of the Motzkin (FBEM) method. In addition, we prove that FBEM converges linearly to the unique least-norm solution of the linear systems. Furthermore, by incorporating the Polyak momentum technique into the FBEM iteration method, we establish an accelerated variant of FBEM (mFBEM) and show its global linear convergence. Numerical examples using artificial and real datasets demonstrate the effectiveness of FBEM as well as mFBEM.The sequential quadratic programming for symmetric Pareto eigenvalue complementarity problemhttps://zbmath.org/1527.650232024-02-28T19:32:02.718555Z"Zhu, Lin"https://zbmath.org/authors/?q=ai:zhu.lin"Lei, Yuan"https://zbmath.org/authors/?q=ai:lei.yuan"Xie, Jiaxin"https://zbmath.org/authors/?q=ai:xie.jiaxinSummary: A sequential quadratic programming (SQP) algorithm framework is proposed to solve the symmetric Pareto eigenvalue complementarity problem (EiCP). In the algorithm framework, the search direction can be constructed by solving a quadratic programming subproblem in each iteration, and the precise step size can be obtained to ensure the reduction of the penalty function. Moreover, the global convergence of this algorithm is discussed. In order to improve the computational efficiency, a simple SQP algorithm is recommended for large symmetric Pareto EiCPs.
Various numerical results of the symmetric Pareto EiCPs are reported to illustrate the efficiency of the proposed algorithms.Sparse sampling Kaczmarz-Motzkin method with linear convergencehttps://zbmath.org/1527.650242024-02-28T19:32:02.718555Z"Yuan, Ziyang"https://zbmath.org/authors/?q=ai:yuan.ziyang"Zhang, Hui"https://zbmath.org/authors/?q=ai:zhang.hui.5"Wang, Hongxia"https://zbmath.org/authors/?q=ai:wang.hongxiaSummary: The randomized sparse Kaczmarz method was recently proposed to recover sparse solutions of linear systems. In this work, we introduce a greedy variant of the randomized sparse Kaczmarz method by employing the sampling Kaczmarz-Motzkin method and prove its linear convergence in expectation with respect to the Bregman distance in the noiseless and noisy cases. This greedy variant can be viewed as a unification of the sampling Kaczmarz-Motzkin method and the randomized sparse Kaczmarz method, and hence inherits the merits of these two methods. Numerically, we report a couple of experimental results to demonstrate its superiority.
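For orientation, the plain randomized sparse Kaczmarz iteration that the abstract builds on (a standard Kaczmarz projection step on an auxiliary variable, followed by soft-thresholding to promote sparsity) can be sketched in a few lines. The greedy sampling Kaczmarz-Motzkin row selection of the paper is not reproduced here, and all names and parameters are assumptions.

```python
import numpy as np

def soft_threshold(z, lam):
    return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

def sparse_kaczmarz(A, b, lam=0.5, iters=50000, seed=0):
    """Plain randomized sparse Kaczmarz: a Kaczmarz step on the auxiliary
    variable z, then soft-thresholding to obtain the sparse primal iterate x.
    Rows are sampled with probability proportional to their squared norm."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    row_norms2 = np.einsum('ij,ij->i', A, A)
    rows = rng.choice(m, size=iters, p=row_norms2 / row_norms2.sum())
    z = np.zeros(n)
    x = np.zeros(n)
    for i in rows:
        z -= (A[i] @ x - b[i]) / row_norms2[i] * A[i]
        x = soft_threshold(z, lam)
    return x

# Consistent overdetermined system whose unique solution is sparse.
rng = np.random.default_rng(1)
A = rng.standard_normal((60, 30))
x_true = np.zeros(30)
x_true[[3, 11, 27]] = [1.5, -2.0, 1.0]
b = A @ x_true
x_hat = sparse_kaczmarz(A, b)
```

Because the system above is consistent with a unique solution, the iterates converge to it regardless of the thresholding parameter; in the underdetermined setting the method instead converges to a regularized sparse solution.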
{{\copyright} 2021 John Wiley \& Sons, Ltd.}Kernel-independent adaptive construction of \(\mathcal{H}^2\)-matrix approximationshttps://zbmath.org/1527.650252024-02-28T19:32:02.718555Z"Bauer, M."https://zbmath.org/authors/?q=ai:bauer.maximilian"Bebendorf, M."https://zbmath.org/authors/?q=ai:bebendorf.mario"Feist, B."https://zbmath.org/authors/?q=ai:feist.berndSummary: A method for the kernel-independent construction of \(\mathcal{H}^2\)-matrix approximations to non-local operators is proposed. Special attention is paid to the adaptive construction of nested bases. As a side result, new error estimates for adaptive cross approximation (ACA) are presented which have implications on the pivoting strategy of ACA.Fast gradient method for low-rank matrix estimationhttps://zbmath.org/1527.650262024-02-28T19:32:02.718555Z"Li, Hongyi"https://zbmath.org/authors/?q=ai:li.hongyi"Peng, Zhen"https://zbmath.org/authors/?q=ai:peng.zhen"Pan, Chengwei"https://zbmath.org/authors/?q=ai:pan.chengwei"Zhao, Di"https://zbmath.org/authors/?q=ai:zhao.diSummary: Projected gradient descent and its Riemannian variant belong to a typical class of methods for low-rank matrix estimation. This paper proposes a new Nesterov's Accelerated Riemannian Gradient algorithm using efficient orthographic retraction and tangent space projection. The subspace relationship between iterative and extrapolated sequences on the low-rank matrix manifold provides computational convenience. With perturbation analysis of truncated singular value decomposition and two retractions, we systematically analyze the local convergence of gradient algorithms and Nesterov's variants in the Euclidean and Riemannian settings. Theoretically, we estimate the exact rate of local linear convergence under different parameters using the spectral radius in a closed form and give the optimal convergence rate and the corresponding momentum parameter. 
When the parameter is unknown, the adaptive restart scheme can avoid the oscillation problem caused by high momentum, thus approaching the optimal convergence rate. Extensive numerical experiments confirm the estimations of convergence rate and demonstrate that the proposed algorithm is competitive with first-order methods for matrix completion and matrix sensing.On Bernoulli series approximation for the matrix cosinehttps://zbmath.org/1527.650272024-02-28T19:32:02.718555Z"Defez, Emilio"https://zbmath.org/authors/?q=ai:defez.emilio"Ibáñez, Javier"https://zbmath.org/authors/?q=ai:ibanez.jacinto-javier"Alonso, José M."https://zbmath.org/authors/?q=ai:alonso.jose-miguel"Alonso-Jordá, Pedro"https://zbmath.org/authors/?q=ai:alonso-jorda.pedroSummary: This paper presents a new series expansion based on Bernoulli matrix polynomials to approximate the matrix cosine function. An approximation based on this series is not a straightforward exercise since there exist different options to implement such a solution. We dive into these options and include a thorough comparison of performance and accuracy in the experimental results section that shows the benefits and downsides of each one. Also, a comparison with the Padé approximation is included. The algorithms have been implemented in MATLAB and in CUDA for NVIDIA GPUs.
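For reference, one common baseline for the matrix cosine is a truncated Taylor series in \(A^2\) combined with argument scaling and double-angle recovery. The sketch below is that generic baseline, not the Bernoulli matrix-polynomial expansion of the paper; the term count and names are assumptions.

```python
import numpy as np

def cos_taylor(A, terms=12):
    """Matrix cosine via a truncated Taylor series in A^2, with scaling by
    powers of two and the double-angle recovery cos(2X) = 2 cos(X)^2 - I.
    A plain-series sketch, not the Bernoulli-polynomial expansion."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    # Scale A down until its norm is at most 1 so the series converges fast.
    norm = np.linalg.norm(A, np.inf)
    s = max(0, int(np.ceil(np.log2(norm)))) if norm > 1.0 else 0
    X = A / 2.0**s
    X2 = X @ X
    C = np.eye(n)
    term = np.eye(n)
    for k in range(1, terms):
        # term_k = (-1)^k X^(2k) / (2k)!  via a stable recurrence
        term = term @ X2 * (-1.0 / ((2 * k - 1) * (2 * k)))
        C = C + term
    for _ in range(s):                    # undo the scaling via double angle
        C = 2.0 * (C @ C) - np.eye(n)
    return C

A = np.array([[0.0, 1.0], [-1.0, 0.0]])  # A @ A = -I, so cos(A) = cosh(1) * I
C = cos_taylor(A)
```

The double-angle squaring step is shared by most series-based matrix-cosine algorithms; the expansions differ in how few terms they need per scaled evaluation.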
{{\copyright} 2020 John Wiley \& Sons, Ltd.}Analytical modeling of matrix-vector multiplication on multicore processorshttps://zbmath.org/1527.650282024-02-28T19:32:02.718555Z"Gareev, Roman A."https://zbmath.org/authors/?q=ai:gareev.roman-a"Akimova, Elena N."https://zbmath.org/authors/?q=ai:akimova.elena-nSummary: The efficiency of matrix-vector multiplication is of considerable importance. No current approaches can optimize this sufficiently well under severe time constraints. All major existing methods are based on either manual-tuning or auto-tuning and can therefore be time-consuming. We introduce an alternative model-driven approach, which is used to map the implementation of matrix-vector multiplication to a target architecture and analytically obtain its parameters. The approach yields the performance that is competitive with optimized Basic Linear Algebra Subprograms (BLAS)-like dense linear algebra libraries without the need for manual-tuning or auto-tuning. Our method provides competitive performance across hardware architectures and can be utilized to obtain single-threaded and multi-threaded implementations on multicore processors. We expect that this approach allows the community to progress from valuable engineering solutions to techniques with a broader application.
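The loop structure that such analytic performance models reason about can be illustrated with a blocked matrix-vector product. This is a structural sketch only: block sizes and names are assumptions, and in Python the tile kernel is delegated to NumPy, so no performance claim is implied.

```python
import numpy as np

def matvec_blocked(A, x, mb=64, nb=64):
    """Blocked matrix-vector product y = A x.  The row and column ranges are
    traversed in mb-by-nb tiles, so each tile of A, the slice of x, and the
    partial slice of y can stay resident in cache; analytic models predict
    performance from exactly this traversal order and tile footprint."""
    m, n = A.shape
    y = np.zeros(m)
    for i0 in range(0, m, mb):
        i1 = min(i0 + mb, m)
        for j0 in range(0, n, nb):
            j1 = min(j0 + nb, n)
            # tile kernel: accumulate the contribution of one block of A
            y[i0:i1] += A[i0:i1, j0:j1] @ x[j0:j1]
    return y

rng = np.random.default_rng(0)
A = rng.standard_normal((300, 170))
x = rng.standard_normal(170)
y = matvec_blocked(A, x)
```

In a compiled implementation, the tile kernel would additionally be vectorized and the row blocks distributed across threads, which is where the model's architecture parameters enter.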
{{\copyright} 2021 John Wiley \& Sons, Ltd.}An improvement of the Kurchatov method by means of a parametric modificationhttps://zbmath.org/1527.650352024-02-28T19:32:02.718555Z"Hernández-Verón, Miguel A."https://zbmath.org/authors/?q=ai:hernandez-veron.miguel-angel"Yadav, Nisha"https://zbmath.org/authors/?q=ai:yadav.nisha"Magreñán, Á. Alberto"https://zbmath.org/authors/?q=ai:magrenan.angel-alberto"Martínez, Eulalia"https://zbmath.org/authors/?q=ai:martinez.eulalia"Singh, Sukhjit"https://zbmath.org/authors/?q=ai:singh.sukhjit.2(no abstract)A numerical solution for a quasi solution of the time-fractional stochastic backward parabolic equationhttps://zbmath.org/1527.650882024-02-28T19:32:02.718555Z"Nasiri, T."https://zbmath.org/authors/?q=ai:nasiri.tayebeh"Zakeri, A."https://zbmath.org/authors/?q=ai:zakeri.ali"Aminataei, A."https://zbmath.org/authors/?q=ai:aminataei.azimThe authors consider a time-fractional stochastic backward parabolic equation driven by standard Brownian motion. The time-fractional derivative is taken in the Caputo sense. Using the minimization of a least-squares functional, a stochastic variational formulation, Fréchet differentiability and utility theorems adopted directly from deterministic fractional backward equations, the existence and uniqueness of a quasi solution of the proposed problem are proved. The approximate solution is obtained by a numerical technique based on 2D Chebyshev wavelets. The numerical scheme leads to an ill-posed equivalent system of linear equations, and consequently the Levenberg-Marquardt regularization technique is used. The convergence analysis of the suggested numerical algorithm is investigated. A numerical example is presented to show the accuracy and efficiency of the Chebyshev wavelet method in solving the considered problem. The article is interesting and merits reading.
Reviewer: Abdallah Bradji (Annaba)Local knot method for solving inverse Cauchy problems of Helmholtz equations on complicated two- and three-dimensional domainshttps://zbmath.org/1527.651192024-02-28T19:32:02.718555Z"Wang, Fajie"https://zbmath.org/authors/?q=ai:wang.fajie"Chen, Zengtao"https://zbmath.org/authors/?q=ai:chen.zengtao"Gong, Yanpeng"https://zbmath.org/authors/?q=ai:gong.yanpengSummary: This article presents a local knot method (LKM) to solve inverse Cauchy problems of Helmholtz equations in arbitrary 2D and 3D domains. The Moore-Penrose pseudoinverse using the truncated singular value decomposition is employed in the local approximation of supporting domain instead of the moving least squares method. The developed approach is a semi-analytical and local radial basis function collocation method using the non-singular general solution as the basis function. Like the traditional boundary knot method, the LKM is simple, accurate and easy-to-program in solving inverse Cauchy problems associated with Helmholtz equations. Unlike the boundary knot method, the new scheme can directly reconstruct the unknowns both inside the physical domain and along its boundary by solving a sparse linear system, and can achieve a satisfactory solution. Numerical experiments, involving the complicated geometry and the high noise level, confirm the accuracy and reliability of the proposed method for solving inverse Cauchy problems of Helmholtz equations.
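The truncated-SVD Moore-Penrose pseudoinverse used in the local approximation can be sketched generically as follows; the tolerance choice and names are assumptions, not taken from the paper.

```python
import numpy as np

def tsvd_pinv(A, tol=1e-10):
    """Moore-Penrose pseudoinverse via truncated SVD: singular values below
    tol * s_max are discarded, which regularizes the ill-conditioned local
    collocation matrices that arise in inverse Cauchy problems."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    cutoff = tol * (s[0] if s.size else 0.0)
    keep = s > cutoff
    s_inv = np.zeros_like(s)
    s_inv[keep] = 1.0 / s[keep]
    # V @ diag(s_inv) @ U^T, written with broadcasting
    return (Vt.T * s_inv) @ U.T

# Rank-deficient example: the third column is the sum of the first two.
A = np.array([[1., 0., 1.],
              [0., 1., 1.],
              [1., 1., 2.],
              [2., 1., 3.]])
Ap = tsvd_pinv(A)
```

Raising `tol` trades fidelity for stability, which is the usual knob when the local matrices are contaminated by the high noise levels mentioned in the abstract.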
{{\copyright} 2022 John Wiley \& Sons Ltd.}Convergence of the Hiptmair-Xu preconditioner for \(H(\mathbf{curl})\)-elliptic problems with jump coefficients. II: Main resultshttps://zbmath.org/1527.651272024-02-28T19:32:02.718555Z"Hu, Qiya"https://zbmath.org/authors/?q=ai:hu.qiyaIn this paper, finite element discretization of an \(H(\mathbf{curl},\Omega)\)-elliptic boundary value problem
\[
\mathbf{curl}\,(\alpha\,\mathbf{curl}\,\mathbf{u}) + \beta\mathbf{u} = \mathbf{f} \quad \mbox{in } \Omega, \qquad \mathbf{u}\times\mathbf{n} = \mathbf{0} \quad \mbox{on } \partial\Omega,
\]
is considered on a bounded Lipschitz polyhedron \(\Omega\subset \mathbb{R}^3\), where \(\mathbf{f} \in (L^2(\Omega))^3\) is a given vector field, and \(\alpha =\alpha(x)\) and \(\beta=\beta(x)\) are spatially varying, scalar-valued, positive coefficients in \(L^{\infty}(\Omega)\) that are piecewise constant with respect to a partition of \(\Omega\) into polyhedral Lipschitz sub-domains.
A new concept, ``thorny vertex'', associated with the distribution of jumps of the coefficient \(\alpha\), is introduced in order to reveal the essence of the problem of dealing with discontinuous coefficients. New discrete regular decompositions are established for low-dimensional edge finite element functions in three dimensions, and it is shown that these decompositions, on quasi-uniform and shape regular sequences of meshes and in weighted norms involving the coefficients \(\alpha\) and \(\beta\), are nearly stable with respect to the mesh size and uniformly stable with respect to large jumps of the coefficients under suitable geometric constraints on the coefficients. As an application of the proposed regular decompositions, it is shown that the preconditioned conjugate gradient (PCG) method with the Hiptmair-Xu (HX) preconditioner [\textit{R. Hiptmair} and \textit{J. Xu}, SIAM J. Numer. Anal. 45, No. 6, 2483--2509 (2007; Zbl 1153.78006)] for solving the considered system has a nearly optimal convergence rate that does not suffer much from large jumps of the coefficients \(\alpha\) and \(\beta\) across the interface between any two neighboring subdomains.
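The PCG iteration itself is standard and can be sketched as follows. A Jacobi (diagonal) preconditioner stands in for the HX auxiliary-space preconditioner, whose setup requires nodal \(H^1\) subproblem solvers not reproduced here; all names and the 1D jump-coefficient test matrix are illustrative assumptions.

```python
import numpy as np

def pcg(A, b, M_inv, tol=1e-10, maxit=4000):
    """Preconditioned conjugate gradients for an SPD matrix A.  M_inv applies
    the preconditioner inverse to a residual vector."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv(r)
    p = z.copy()
    rz = r @ z
    for it in range(maxit):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) <= tol * np.linalg.norm(b):
            return x, it + 1
        z = M_inv(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x, maxit

# SPD test matrix: 1D diffusion stiffness with a large coefficient jump,
# loosely mimicking the jump-coefficient setting of the paper.
n = 100
coeff = np.where(np.arange(n + 1) < n // 2, 1.0, 1e4)   # jumping coefficient
main = coeff[:-1] + coeff[1:]
A = np.diag(main) - np.diag(coeff[1:-1], 1) - np.diag(coeff[1:-1], -1)
b = np.ones(n)
x, iters = pcg(A, b, lambda r: r / main)                # Jacobi preconditioning
```

The point of the HX preconditioner is precisely to replace the Jacobi step above with auxiliary nodal solves so that the iteration count stays bounded as the mesh is refined and the coefficient jumps grow.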
Reviewer: Bülent Karasözen (Ankara)Convergence analysis of BDDC preconditioners for composite DG discretizations of the cardiac cell-by-cell modelhttps://zbmath.org/1527.651382024-02-28T19:32:02.718555Z"Huynh, Ngoc Mai Monica"https://zbmath.org/authors/?q=ai:monica-huynh.ngoc-mai"Chegini, Fatemeh"https://zbmath.org/authors/?q=ai:chegini.fatemeh"Pavarino, Luca Franco"https://zbmath.org/authors/?q=ai:pavarino.luca-franco"Weiser, Martin"https://zbmath.org/authors/?q=ai:weiser.martin"Scacchi, Simone"https://zbmath.org/authors/?q=ai:scacchi.simoneSummary: A balancing domain decomposition by constraints (BDDC) preconditioner is constructed and analyzed for the solution of composite discontinuous Galerkin discretizations of reaction-diffusion systems of ordinary and partial differential equations arising in cardiac cell-by-cell models. The latter are different from the classical bidomain and monodomain cardiac models based on homogenized descriptions of the cardiac tissue at the macroscopic level, and therefore they allow the representation of individual cardiac cells, cell aggregates, damaged tissues, and nonuniform distributions of ion channels on the cell membrane. The resulting discrete cell-by-cell models have discontinuous global solutions across the cell boundaries, and hence the proposed BDDC preconditioner is based on appropriate dual and primal spaces with additional constraints which transfer information between cells (subdomains) without influencing the overall discontinuity of the global solution. 
A scalable convergence rate bound is proved for the resulting BDDC cell-by-cell preconditioned operator, while numerical tests validate this bound and investigate its dependence on the discretization parameters.FMM-\(\boldsymbol{\mathsf{LU}}\): a fast direct solver for multiscale boundary integral equations in three dimensionshttps://zbmath.org/1527.651422024-02-28T19:32:02.718555Z"Sushnikova, Daria"https://zbmath.org/authors/?q=ai:sushnikova.daria-a"Greengard, Leslie"https://zbmath.org/authors/?q=ai:greengard.leslie-f"O'Neil, Michael"https://zbmath.org/authors/?q=ai:oneil.michael"Rachh, Manas"https://zbmath.org/authors/?q=ai:rachh.manasSummary: We present a fast direct solver for boundary integral equations on complex surfaces in three dimensions using an extension of the recently introduced recursive strong skeletonization scheme. For problems that are not highly oscillatory, our algorithm computes an \(\boldsymbol{\mathsf{LU}}\)-like hierarchical factorization of the dense system matrix, permitting application of the inverse in \(O(n)\) time, where \(n\) is the number of unknowns on the surface. The factorization itself also scales linearly with the system size, albeit with a somewhat larger constant. The scheme is built on a level-restricted adaptive octree data structure, and therefore, it is compatible with highly nonuniform discretizations. Furthermore, the scheme is coupled with high-order accurate locally-corrected Nyström quadrature methods to integrate the singular and weakly-singular Green's functions used in the integral representations. Our method has immediate applications to a variety of problems in computational physics. 
We concentrate here on studying its performance in acoustic scattering (governed by the Helmholtz equation) at low to moderate frequencies, and provide rigorous justification for compression of submatrices via proxy surfaces.Connecting optimization with spectral analysis of tri-diagonal matriceshttps://zbmath.org/1527.901482024-02-28T19:32:02.718555Z"Lasserre, Jean B."https://zbmath.org/authors/?q=ai:lasserre.jean-bernardSummary: We show that the global minimum (resp. maximum) of a continuous function on a compact set can be approximated from above (resp. from below) by computing the smallest (resp. largest) eigenvalue of a hierarchy of \((r\times r)\) tri-diagonal matrices of increasing size. Equivalently it reduces to computing the smallest (resp. largest) root of a certain univariate degree-\(r\) orthonormal polynomial. This provides a strong connection between the fields of optimization, orthogonal polynomials, numerical analysis and linear algebra, via asymptotic spectral analysis of tri-diagonal symmetric matrices.
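The simplest instance of this mechanism is \(f(x)=x\) on \([-1,1]\) with the Chebyshev measure: the tri-diagonal matrix is then the Jacobi matrix of the orthonormal Chebyshev polynomials, its eigenvalues are the Gauss-Chebyshev nodes (the roots of the degree-\(r\) polynomial), and the smallest eigenvalue decreases to \(\min f = -1\) from above as \(r\) grows. A NumPy sketch of this special case follows; the general construction for arbitrary \(f\) is the subject of the paper, and the names below are assumptions.

```python
import numpy as np

def chebyshev_jacobi_matrix(r):
    """Tridiagonal Jacobi matrix of the orthonormal Chebyshev polynomials
    (first kind) on [-1, 1].  Its eigenvalues are the Gauss-Chebyshev nodes
    cos((2k - 1) pi / (2r)), k = 1, ..., r."""
    if r == 1:
        return np.zeros((1, 1))
    off = np.full(r - 1, 0.5)
    off[0] = 1.0 / np.sqrt(2.0)   # the p_0 -> p_1 coupling coefficient differs
    return np.diag(off, 1) + np.diag(off, -1)

# For f(x) = x the multiplication operator is represented by the Jacobi matrix
# itself, so its smallest eigenvalue approximates min f = -1 from above.
mins = [np.linalg.eigvalsh(chebyshev_jacobi_matrix(r)).min() for r in (5, 20, 80)]
```

The decrease of `mins` toward \(-1\) as \(r\) grows is exactly the "approximation from above by smallest eigenvalues" in the abstract.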