Recent zbMATH articles in MSC 65Fhttps://zbmath.org/atom/cc/65F2021-05-28T16:06:00+00:00WerkzeugVerified error bounds for real eigenvalues of real symmetric and persymmetric matrices.https://zbmath.org/1459.650472021-05-28T16:06:00+00:00"Li, Zhe"https://zbmath.org/authors/?q=ai:li.zhe"Wang, Xueqing"https://zbmath.org/authors/?q=ai:wang.xueqingSummary: This paper mainly investigates the verification of real eigenvalues of real symmetric and persymmetric matrices. For a real symmetric or persymmetric matrix, we use the \textbf{eig} function in Matlab to obtain its real eigenvalues numerically and provide an algorithm to compute a verified error bound such that there exists a perturbation matrix of the same type within the computed error bound whose exact real eigenvalues are the computed real eigenvalues.\(LU\) decomposition scheme for solving \(m\)-polar fuzzy system of linear equations.https://zbmath.org/1459.150322021-05-28T16:06:00+00:00"Koam, Ali N. A."https://zbmath.org/authors/?q=ai:koam.ali-n-a"Akram, Muhammad"https://zbmath.org/authors/?q=ai:akram.muhammad"Muhammad, Ghulam"https://zbmath.org/authors/?q=ai:muhammad.ghulam"Hussain, Nawab"https://zbmath.org/authors/?q=ai:hussain.nawabSummary: This paper presents a new scheme for solving an \(m\)-polar fuzzy system of linear equations \((m\)-PFSLEs) by using the \(LU\) decomposition method. We assume the coefficient matrix of the system is symmetric positive definite, and we discuss this point in detail with some numerical examples. Furthermore, we investigate the inconsistent \(m\times n\) \(m\)-polar fuzzy matrix equation \((m\)-PFME) and find the least squares solution (LSS) of this system by using generalized inverse matrix theory. Moreover, we discuss the strong solution of the \(m\)-polar fuzzy LSS of the inconsistent \(m\)-PFME.
In the end, we present a numerical example to illustrate our approach.Column-oriented algebraic iterative methods for nonnegative constrained least squares problems.https://zbmath.org/1459.650432021-05-28T16:06:00+00:00"Nikazad, T."https://zbmath.org/authors/?q=ai:nikazad.touraj"Karimpour, M."https://zbmath.org/authors/?q=ai:karimpour.mehdiSummary: This paper considers different versions of block-column iterative (BCI) methods for solving nonnegative constrained linear least squares problems. We present the convergence analysis for a family of stationary BCI methods with nonnegativity constraints (BCI-NC), which is applicable to linear complementarity problems (LCP). We also consider the flagging idea for BCI methods, which saves computational work by skipping small updates. Furthermore, we combine the BCI-NC algorithm and the flagging version of a nonstationary BCI method with nonnegativity constraints to derive a convergence analysis for the resulting method (BCI-NF). The performance of our algorithms is shown on ill-posed inverse problems taken from tomographic imaging. We compare the BCI-NF and BCI-NC algorithms with three recent algorithms: the inner-outer modulus method (Modulus-CG method), the modulus-based iterative method for Tikhonov regularization with nonnegativity constraint (Mod-TRN method), and the nonnegative flexible CGLS (NN-FCGLS) method.
Our algorithms are able to produce more stable results than the mentioned methods with competitive computational times.Fast hyperbolic mapping based on the hierarchical community structure in complex networks.https://zbmath.org/1459.650592021-05-28T16:06:00+00:00"Wang, Zuxi"https://zbmath.org/authors/?q=ai:wang.zuxi"Sun, Lingjie"https://zbmath.org/authors/?q=ai:sun.lingjie"Cai, Menglin"https://zbmath.org/authors/?q=ai:cai.menglin"Xie, Pengcheng"https://zbmath.org/authors/?q=ai:xie.pengchengHierarchical interpolative factorization preconditioner for parabolic equations.https://zbmath.org/1459.650382021-05-28T16:06:00+00:00"Feliu-Fabà, Jordi"https://zbmath.org/authors/?q=ai:feliu-faba.jordi"Ying, Lexing"https://zbmath.org/authors/?q=ai:ying.lexingSummary: This note proposes an efficient preconditioner for solving linear and semi-linear parabolic equations. With the Crank-Nicolson time stepping method, the algebraic system of equations at each time step is solved with the conjugate gradient method, preconditioned with hierarchical interpolative factorization. Stiffness matrices arising in the discretization of parabolic equations typically have large condition numbers, and therefore preconditioning becomes essential, especially for large time steps. We propose to use the hierarchical interpolative factorization as the preconditioner for the conjugate gradient iteration. Computed only once, the hierarchical interpolative factorization offers an efficient and accurate approximate inverse of the linear system. As a result, the preconditioned conjugate gradient iteration converges in a small number of iterations. Compared to other classical exact and approximate factorizations such as Cholesky or incomplete Cholesky, the hierarchical interpolative factorization can be computed in linear time and the application of its inverse has linear complexity.
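The preconditioned conjugate gradient iteration at the heart of this approach can be sketched as follows. The hierarchical interpolative factorization itself is beyond a few lines, so this illustration substitutes a simple Jacobi (diagonal) preconditioner on a Crank-Nicolson step for the 1-D heat equation; the matrix, step sizes, and right-hand side are assumptions for illustration only, not data from the paper.

```python
import numpy as np

def pcg(A, b, M_solve, tol=1e-10, maxit=500):
    """Preconditioned conjugate gradients for an SPD matrix A.

    M_solve(r) applies the inverse of the preconditioner. In the paper
    this role is played by the hierarchical interpolative factorization,
    computed once and reused at every time step; here a stand-in is used.
    """
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_solve(r)
    p = z.copy()
    rz = r @ z
    for k in range(maxit):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            return x, k + 1
        z = M_solve(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x, maxit

# 1-D heat equation, one Crank-Nicolson step: (I + (dt/(2 h^2)) L) x = rhs,
# with L the tridiagonal stiffness matrix of the Laplacian.
n, dt, h = 200, 1e-2, 1.0 / 201
L = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
A = np.eye(n) + (dt / (2 * h * h)) * L
b = np.ones(n)
d = np.diag(A)
x, iters = pcg(A, b, lambda r: r / d)  # Jacobi preconditioner as stand-in
```

In the method described above, `M_solve` would instead apply the approximate inverse given by the hierarchical interpolative factorization, which is what makes the iteration count small even for large time steps.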
Numerical experiments demonstrate the performance of the method and the reduction of conjugate gradient iterations.Legendre decomposition for tensors.https://zbmath.org/1459.650552021-05-28T16:06:00+00:00"Sugiyama, Mahito"https://zbmath.org/authors/?q=ai:sugiyama.mahito"Nakahara, Hiroyuki"https://zbmath.org/authors/?q=ai:nakahara.hiroyuki"Tsuda, Koji"https://zbmath.org/authors/?q=ai:tsuda.kojiFinding a nonnegative solution to an M-tensor equation.https://zbmath.org/1459.150182021-05-28T16:06:00+00:00"Li, Dong-Hui"https://zbmath.org/authors/?q=ai:li.donghui|li.dong-hui"Guan, Hong-Bo"https://zbmath.org/authors/?q=ai:guan.hongbo"Wang, Xiao-Zhou"https://zbmath.org/authors/?q=ai:wang.xiaozhouSummary: We are concerned with the tensor equation with an M-tensor, which we call the M-tensor equation. We first derive a necessary and sufficient condition for an M-tensor equation to have nonnegative solutions. We then develop a monotone iterative method to find a nonnegative solution to an M-tensor equation. The method can be regarded as an approximation to Newton's method for solving the equation. At each iteration, we solve a system of linear equations. An advantage of the proposed method is that the coefficient matrices of the linear systems are independent of the iteration. We show that if the initial point is appropriately chosen, then the sequence of iterates generated by the method converges to a nonnegative solution of the M-tensor equation monotonically and linearly. At last, we do numerical experiments to test the proposed methods. 
The results show the efficiency of the proposed methods.Norm and trace estimation with random rank-one vectors.https://zbmath.org/1459.650532021-05-28T16:06:00+00:00"Bujanovic, Zvonimir"https://zbmath.org/authors/?q=ai:bujanovic.zvonimir"Kressner, Daniel"https://zbmath.org/authors/?q=ai:kressner.danielRandom matrices generating large growth in LU factorization with pivoting.https://zbmath.org/1459.650362021-05-28T16:06:00+00:00"Higham, Desmond J."https://zbmath.org/authors/?q=ai:higham.desmond-j"Higham, Nicholas J."https://zbmath.org/authors/?q=ai:higham.nicholas-j"Pranesh, Srikara"https://zbmath.org/authors/?q=ai:pranesh.srikaraFast approximation of the Gauss-Newton Hessian matrix for the multilayer perceptron.https://zbmath.org/1459.650372021-05-28T16:06:00+00:00"Chen, Chao"https://zbmath.org/authors/?q=ai:chen.chao"Reiz, Severin"https://zbmath.org/authors/?q=ai:reiz.severin"Yu, Chenhan D."https://zbmath.org/authors/?q=ai:yu.chenhan-d"Bungartz, Hans-Joachim"https://zbmath.org/authors/?q=ai:bungartz.hans-joachim"Biros, George"https://zbmath.org/authors/?q=ai:biros.georgeComponentwise perturbation analysis of the Schur decomposition of a matrix.https://zbmath.org/1459.650512021-05-28T16:06:00+00:00"Petkov, Petko H."https://zbmath.org/authors/?q=ai:petkov.petko-hrA comparison of limited-memory Krylov methods for Stieltjes functions of Hermitian matrices.https://zbmath.org/1459.650572021-05-28T16:06:00+00:00"Güttel, Stefan"https://zbmath.org/authors/?q=ai:guttel.stefan"Schweitzer, Marcel"https://zbmath.org/authors/?q=ai:schweitzer.marcelA new algorithm for solving large-scale generalized eigenvalue problem based on projection methods.https://zbmath.org/1459.650502021-05-28T16:06:00+00:00"Nedamani, F. Abbasi"https://zbmath.org/authors/?q=ai:nedamani.f-abbasi"Sheikhani, A. H. Refahi"https://zbmath.org/authors/?q=ai:sheikhani.amir-hosein-refahi|sheikhani.amirhossein-refahi"Najafi, H. 
Saberi"https://zbmath.org/authors/?q=ai:najafi.h-saberiSummary: In this paper, we consider four methods for determining certain eigenvalues and corresponding eigenvectors of large-scale generalized eigenvalue problems which are located in a certain region. In these methods, a small pencil that contains only the desired eigenvalues is derived using moments that are obtained via numerical integration. Our purpose is to improve the numerical stability of the moment-based method and compare its stability with that of three other methods. Numerical examples show that the block version of the moment-based (SS) method with the Rayleigh-Ritz procedure has higher numerical stability than the other methods.Numerical algorithms of the discrete coupled algebraic Riccati equation arising in optimal control systems.https://zbmath.org/1459.652372021-05-28T16:06:00+00:00"Wang, Li"https://zbmath.org/authors/?q=ai:wang.li.5|wang.li|wang.li.6|wang.li.2|wang.li.1|wang.li.3|wang.li.4Summary: The discrete coupled algebraic Riccati equation (DCARE) has wide applications in robust control, optimal control, and so on. In this paper, we present two iterative algorithms for solving the DCARE. Both algorithms use the iterate from the previous step together with the iterate being formed in the current step. For different initial values, the iterative sequence is increasing and bounded in one algorithm and decreasing and bounded in the other; in both cases it is monotone and convergent.
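The coupled equations and the paper's precise iterations are not reproduced here; as a hedged illustration of the same fixed-point idea, in which each new iterate is formed from the previous one, the following sketch applies the analogous iteration to a single (uncoupled) discrete algebraic Riccati equation. The system data are made up, with a stable matrix so that the iteration from the zero matrix converges monotonically.

```python
import numpy as np

# Hypothetical small LQR data (not from the paper): A is stable, so the
# fixed-point iteration for the uncoupled DARE
#     X = A'XA - A'XB (R + B'XB)^{-1} B'XA + Q
# converges monotonically from X_0 = 0.
A = np.array([[0.9, 0.1], [0.0, 0.8]])
B = np.array([[0.0], [1.0]])
Q = np.eye(2)
R = np.array([[1.0]])

def dare_step(X):
    """One fixed-point step: the new iterate is built from the old X."""
    BtXA = B.T @ X @ A
    G = np.linalg.solve(R + B.T @ X @ B, BtXA)
    return A.T @ X @ A - BtXA.T @ G + Q

X = np.zeros((2, 2))
for _ in range(200):
    X = dare_step(X)

residual = np.linalg.norm(dare_step(X) - X)
```

In the coupled (DCARE) setting there is one such equation per mode, and the update of each unknown additionally mixes in the other iterates from the last and current steps, which is what the paper's two algorithms organize differently.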
Numerical examples demonstrate the convergence effect of the presented algorithms.Cholesky decomposition of matrices over commutative semirings.https://zbmath.org/1459.160432021-05-28T16:06:00+00:00"Dolžan, David"https://zbmath.org/authors/?q=ai:dolzan.david"Oblak, Polona"https://zbmath.org/authors/?q=ai:oblak.polonaSummary: We prove that over a commutative semiring every symmetric strongly invertible matrix with nonnegative numerical range has a Cholesky decomposition.Generalized SOR-like iteration method for linear complementarity problem.https://zbmath.org/1459.902112021-05-28T16:06:00+00:00"Li, Cui-Xia"https://zbmath.org/authors/?q=ai:li.cuixia"Wu, Shi-Liang"https://zbmath.org/authors/?q=ai:wu.shiliangSummary: In this paper, we present a generalized SOR-like iteration method to solve the non-Hermitian positive definite linear complementarity problem (LCP), which is obtained by reformulating equivalently the implicit fixed-point equation of the LCP as a two-by-two block nonlinear equation. The convergence properties of the generalized SOR-like iteration method are discussed under certain conditions. Numerical experiments show that the generalized SOR-like method is efficient, compared with the SOR-like method and the modulus-based SOR method.Functions and eigenvectors of partially known matrices with applications to network analysis.https://zbmath.org/1459.650462021-05-28T16:06:00+00:00"Al Mugahwi, Mohammed"https://zbmath.org/authors/?q=ai:al-mugahwi.mohammed"De la Cruz Cabrera, Omar"https://zbmath.org/authors/?q=ai:de-la-cruz-cabrera.omar"Noschese, Silvia"https://zbmath.org/authors/?q=ai:noschese.silvia"Reichel, Lothar"https://zbmath.org/authors/?q=ai:reichel.lotharSummary: Matrix functions play an important role in applied mathematics. In network analysis, in particular, the exponential of the adjacency matrix associated with a network provides valuable information about connectivity, as well as about the relative importance or centrality of nodes. 
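For a small undirected graph this exponential-based centrality can be computed directly; the graph below (a triangle with a pendant vertex) is an assumption for illustration, and the eigendecomposition route for \(\exp(A)\) relies on the adjacency matrix being symmetric.

```python
import numpy as np

# Adjacency matrix of a made-up undirected graph: triangle 0-1-2 with a
# pendant node 3 attached to node 2. The diagonal of exp(A) gives the
# subgraph centrality of each node; for symmetric A we evaluate exp(A)
# from the eigendecomposition A = V diag(w) V'.
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)

w, V = np.linalg.eigh(A)
expA = (V * np.exp(w)) @ V.T      # V @ diag(exp(w)) @ V'
centrality = np.diag(expA)
ranking = np.argsort(-centrality)  # most central node first
```

Here node 2, which participates in the most closed walks, comes out as the most central, matching the intuition that \([\exp(A)]_{ii}\) counts weighted closed walks through node \(i\).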
Another popular approach to rank the nodes of a network is to compute the left Perron vector of the adjacency matrix for the network. The present article addresses the problem of evaluating matrix functions, as well as computing an approximation to the left Perron vector, when only some of the columns and/or some of the rows of the adjacency matrix are known. Applications to network analysis are considered, when only some sampled columns and/or rows of the adjacency matrix that defines the network are available. A sampling scheme that takes the connectivity of the network into account is described. Computed examples illustrate the performance of the methods discussed.A generalization of Rohn's theorem on full-rank interval matrices.https://zbmath.org/1459.150032021-05-28T16:06:00+00:00"Rubei, Elena"https://zbmath.org/authors/?q=ai:rubei.elenaA \(p\times q\) interval matrix is a \(p\times q\) matrix whose entries are closed bounded nonempty intervals in \(\mathbb{R}\). A general (closed) interval matrix is a matrix whose entries are (closed) connected nonempty subsets of \(\mathbb{R}\). Let \(\mu\) be a \(p\times q\) general interval matrix with entries \(\mu _{i,j}\), \(1\leq i\leq p\), \(1\leq j\leq q\). We say that a \(p\times q\) real matrix \(A=(a_{i,j})\) is contained in \(\mu\) if \(a_{i,j}\in \mu _{i,j}\) for any pair \(i,j\). The matrix \(\mu\) has full rank when all matrices contained in \(\mu\) have rank equal to the minimum of \(\{p,q\}\).
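The full-rank definition can be illustrated with a naive randomized check: sample matrices contained in an interval matrix (here with closed bounded intervals) and test their rank. This is only a necessary test, not the exact finite characterization given by Rohn's theorem discussed below; the interval bounds are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# A 3x2 interval matrix given by entrywise lower and upper bounds.
lo = np.array([[1.0, 0.0], [0.0, 1.0], [2.0, 2.0]])
hi = np.array([[2.0, 0.5], [0.5, 2.0], [3.0, 3.0]])

def contained_sample():
    """A random real matrix A with lo <= A <= hi entrywise,
    i.e. a matrix contained in the interval matrix."""
    return lo + (hi - lo) * rng.random(lo.shape)

# If some contained matrix is rank deficient, the interval matrix is
# certainly not of full rank; passing this sampled test does not prove
# full rank (that is what the exact criterion decides).
full = all(np.linalg.matrix_rank(contained_sample()) == 2
           for _ in range(1000))
```

For these bounds the two columns can never be proportional (the first entry forces a ratio at most 1/2 while the second forces a ratio at least 2), so every contained matrix has rank 2 and the sampled check passes.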
\textit{J. Rohn} [Electron. J. Linear Algebra 18, 500--512 (2009; Zbl 1189.65088); Linear Algebra Appl. 126, 39--78 (1989; Zbl 0712.65029); Reliab. Comput. 2, No. 2, 167--171 (1996; Zbl 0855.65037)]
characterized the full-rank \(p\times q\) interval matrices with \(p\geq q\).
The main result of this paper is a generalization of Rohn's results [loc. cit.] to general closed interval matrices. In the first part of the paper, the author presents definitions that are required for the statement of the main result, which is quite technical (Theorem 3.6). In the second part of the paper, some preliminary lemmas are first given and then the main theorem is stated and proved. To better illustrate the main result, the author gives three examples of general closed interval matrices where one of them is of full rank (it satisfies the conditions of Theorem 3.6) and two of them are not.
The author concludes the paper with an open problem, i.e., a characterization of full-rank interval matrices whose entries are (not necessarily closed) connected subsets of \(\mathbb{R}\).
Reviewer: Janko Marovt (Maribor)A parallel bio-inspired shortest path algorithm.https://zbmath.org/1459.680632021-05-28T16:06:00+00:00"Arslan, Hilal"https://zbmath.org/authors/?q=ai:arslan.hilal"Manguoglu, Murat"https://zbmath.org/authors/?q=ai:manguoglu.muratSummary: \textit{Physarum polycephalum} is an amoeba-like organism and is able to find the shortest path in a labyrinth. Inspired by \textit{P. polycephalum}, a mathematical model and an algorithm (\textit{Physarum Solver}) were recently developed. There are, however, only sequential implementations of this algorithm. In this paper, a fast and efficient \textit{parallel Physarum Solver} is proposed. The proposed algorithm requires the solution of linear systems whose coefficient matrix is a symmetric M-matrix. The solution of the linear system is the most time-consuming step of the Physarum Solver, which is classically handled by direct solvers without taking advantage of the fact that the coefficient matrix is an M-matrix. However, direct solvers are infeasible for solving large real-world problems. In the proposed parallel Physarum Solver, an effective parallel iterative linear solver with a parallel preconditioner for M-matrices is used. The parallel scalability, solution time, and accuracy of the proposed algorithm are presented and compared to a state-of-the-art parallel implementation of the \(\varDelta\)-stepping shortest path algorithm in the Parallel Boost Graph Library. Our implementation exhibits a remarkable parallel speedup with comparable accuracy for synthetic and real-world applications.
In our formulation, we use only the concept of pseudo-unitary (or pseudo-orthogonal) matrices. We show that computing the HSVD in the general case is reduced to calculation of eigenvalues, eigenvectors, and generalized eigenvectors of some auxiliary matrices. The new formulation is more natural and useful for some applications. It naturally includes the ordinary singular value decomposition.Acceleration of implicit schemes for large linear systems of differential-algebraic equations.https://zbmath.org/1459.651302021-05-28T16:06:00+00:00"Al Sayed Ali, Mouhamad"https://zbmath.org/authors/?q=ai:ali.mouhamad-al-sayed"Sadkane, Miloud"https://zbmath.org/authors/?q=ai:sadkane.miloudSummary: Implicit schemes for solving large-scale linear differential-algebraic systems with constant coefficients necessitate at each integration step the solution of a linear system, typically obtained by a Krylov subspace method such as GMRES. To accelerate the convergence, an approach is proposed that computes good initial guesses for each linear system to be solved in the implicit scheme. This approach requires, at each integration step, a small dimensional subspace where a good initial guess is found using the Petrov-Galerkin process. It is shown that the residual associated with the computed initial guess depends on the dimension of the subspace, the order of the implicit scheme, and the discretization stepsize. Several numerical illustrations are reported.An inverse eigenvalue problem for modified pseudo-Jacobi matrices.https://zbmath.org/1459.650522021-05-28T16:06:00+00:00"Xu, Wei-Ru"https://zbmath.org/authors/?q=ai:xu.weiru"Bebiano, Natália"https://zbmath.org/authors/?q=ai:bebiano.natalia"Chen, Guo-Liang"https://zbmath.org/authors/?q=ai:chen.guoliangSummary: In this paper, we investigate an inverse eigenvalue problem for matrices that are obtained from pseudo-Jacobi matrices by only modifying the \(( 1 , r )\)-th and \(( r , 1 )\)-th entries, \( 3 \leq r \leq n\). 
Necessary and sufficient conditions under which the problem is solvable are derived. Uniqueness results are presented and an algorithm to reconstruct the matrices from the given spectral data is proposed. Illustrative examples are provided.Exponential integrators for large-scale stiff Riccati differential equations.https://zbmath.org/1459.651002021-05-28T16:06:00+00:00"Li, Dongping"https://zbmath.org/authors/?q=ai:li.dongping"Zhang, Xiuying"https://zbmath.org/authors/?q=ai:zhang.xiuying"Liu, Renyun"https://zbmath.org/authors/?q=ai:liu.renyunSummary: Riccati differential equations arise in many different areas and are particularly important within the field of control theory. In this paper we consider numerical integration for large-scale systems of stiff Riccati differential equations. We show how to apply exponential Rosenbrock-type integrators to get approximate solutions. Two typical exponential integration schemes are considered. The implementation issues are addressed and some low-rank approximations are exploited based on high quality numerical algebra codes. Numerical comparisons demonstrate that the exponential integrators can obtain high accuracy and efficiency for solving large-scale systems of stiff Riccati differential equations.Worst-case complexity of cyclic coordinate descent: \(O(n^2)\) gap with randomized version.https://zbmath.org/1459.650442021-05-28T16:06:00+00:00"Sun, Ruoyu"https://zbmath.org/authors/?q=ai:sun.ruoyu"Ye, Yinyu"https://zbmath.org/authors/?q=ai:ye.yinyuSummary: This paper concerns the worst-case complexity of cyclic coordinate descent (C-CD) for minimizing a convex quadratic function, which is equivalent to Gauss-Seidel method, Kaczmarz method and projection onto convex sets (POCS) in this simple setting. We observe that the known provable complexity of C-CD can be \(\mathcal{O} (n^2)\) times slower than randomized coordinate descent (R-CD), but no example was proven to exhibit such a large gap. 
In this paper we show that the gap indeed exists. We prove that there exists an example for which C-CD takes at least \(\mathcal{O} (n^4 \kappa_\text{CD} \log \frac{1}{\epsilon})\) operations, where \(\kappa_\text{CD}\) is related to Demmel's condition number and it determines the convergence rate of R-CD. This implies that in the worst case C-CD can indeed be \(\mathcal{O}(n^2)\) times slower than R-CD, which has complexity \(\mathcal{O}(n^2 \kappa_\text{CD} \log \frac{1}{\epsilon})\). Note that for this example, the gap exists for any fixed update order, not just a particular order. An immediate consequence is that for the Gauss-Seidel method, the Kaczmarz method, and POCS, there is also an \(\mathcal{O}(n^2)\) gap between the cyclic versions and randomized versions (for solving linear systems). One difficulty with the analysis is that the spectral radius of a non-symmetric iteration matrix does not necessarily constitute a lower bound for the convergence rate. Finally, we design some numerical experiments to show that the size of the off-diagonal entries is an important indicator of the practical performance of C-CD.Iterative filtering as a direct method for the decomposition of nonstationary signals.https://zbmath.org/1459.940312021-05-28T16:06:00+00:00"Cicone, Antonio"https://zbmath.org/authors/?q=ai:cicone.antonioSummary: The Iterative Filtering method is a technique developed recently for the decomposition and analysis of nonstationary and nonlinear signals. In this work, we propose two alternative formulations of the original algorithm which allow us to transform the iterative filtering method into a direct technique, making the algorithm closer to an online algorithm.
We present a few numerical examples to show the effectiveness of the proposed approaches.On the eigenvalues of spectral gaps of matrix-valued Schrödinger operators.https://zbmath.org/1459.650452021-05-28T16:06:00+00:00"Aljawi, Salma"https://zbmath.org/authors/?q=ai:aljawi.salma"Marletta, Marco"https://zbmath.org/authors/?q=ai:marletta.marcoSummary: This paper presents a method for calculating eigenvalues lying in the gaps of the essential spectrum of matrix-valued Schrödinger operators. The technique of dissipative perturbation allows eigenvalues of interest to move up the real axis in order to achieve approximations free from spectral pollution. Some results of the behaviour of the corresponding eigenvalues are obtained. The effectiveness of this procedure is illustrated by several numerical examples.An upwind finite volume method on non-orthogonal quadrilateral meshes for the convection diffusion equation in porous media.https://zbmath.org/1459.652132021-05-28T16:06:00+00:00"Buitrago Boret, Saúl E."https://zbmath.org/authors/?q=ai:buitrago.saul"Jones, Giselle G. Sosa"https://zbmath.org/authors/?q=ai:jones.giselle-g-sosa"Jiménez P., Oswaldo J."https://zbmath.org/authors/?q=ai:jimenez-p.oswaldo-jThe authors develop a numerical model based on the finite volume methods for the solution of the convection diffusion equation in 2D. The discretization of the physical domain is performed using non-rectangular grids formed only by quadrilaterals honoring the internal structures of a reservoir. Some interesting numerical tests are presented for steady and unsteady state cases. Different scenarios were considered varying boundary conditions (Dirichlet and Neumann type), source term, and diffusion constant fluid velocity. The results are consistent with the physical interpretation of each configuration. A comparison with an analytical solution shows a reasonable result based on the relative error in the \(L^2\)-norm.
Reviewer: Abdallah Bradji (Annaba)Fast multiscale contrast independent preconditioners for linear elastic topology optimization problems.https://zbmath.org/1459.741822021-05-28T16:06:00+00:00"Zambrano, Miguel"https://zbmath.org/authors/?q=ai:zambrano.miguel"Serrano, Sintya"https://zbmath.org/authors/?q=ai:serrano.sintya"Lazarov, Boyan S."https://zbmath.org/authors/?q=ai:lazarov.boyan-stefanov"Galvis, Juan"https://zbmath.org/authors/?q=ai:galvis.juanSummary: The goal of this work is to present a fast and viable approach for the numerical solution of the high-contrast state problems arising in topology optimization. The optimization process is iterative, and the gradients are obtained by an adjoint analysis, which requires the numerical solution of large high-contrast linear elastic problems with features spanning several length scales. The size of the discretized problems forces the utilization of iterative linear solvers with solution time dependent on the quality of the preconditioner. The lack of clear separation between the scales, as well as the high-contrast, imposes severe challenges on the standard preconditioning techniques. Thus, here we propose new methods for the high-contrast elasticity equation with performance independent of the high-contrast and the multi-scale structure of the elasticity problem. The solvers are based on two-levels domain decomposition techniques with a carefully constructed coarse level to deal with the high-contrast and multi-scale nature of the problem. The construction utilizes spectral equivalence between scalar diffusion and each displacement block of the elasticity problems and, in contrast to previous solutions proposed in the literature, is able to select the appropriate dimension of the coarse space automatically. The new methods inherit the advantages of domain decomposition techniques, such as easy parallelization and scalability. 
The presented numerical experiments demonstrate the excellent performance of the proposed methods.Krylov type methods for linear systems exploiting properties of the quadratic numerical range.https://zbmath.org/1459.650412021-05-28T16:06:00+00:00"Frommer, Andreas"https://zbmath.org/authors/?q=ai:frommer.andreas"Jacob, Birgit"https://zbmath.org/authors/?q=ai:jacob.brigit"Kahl, Karsten"https://zbmath.org/authors/?q=ai:kahl.kartsen"Wyss, Christian"https://zbmath.org/authors/?q=ai:wyss.christian"Zwaan, Ian"https://zbmath.org/authors/?q=ai:zwaan.ian-nSummary: The quadratic numerical range \(W^2(A)\) is a subset of the standard numerical range of a linear operator, which still contains its spectrum. It arises naturally in operators that have a \(2 \times 2\) block structure, and it consists of at most two connected components, neither of which is necessarily convex. The quadratic numerical range can thus reveal spectral gaps, and it can in particular indicate that the spectrum of an operator is bounded away from \(0\).
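For a matrix with a \(2\times 2\) block structure, points of \(W^2(A)\) arise as eigenvalues of \(2\times 2\) compressions built from unit vectors in the two block components. A sampling sketch of this is below; the example matrix is an assumption, chosen so that \(0\) lies in the convex hull of the spectrum while \(W^2(A)\) stays away from \(0\).

```python
import numpy as np

rng = np.random.default_rng(1)

def quadratic_numerical_range(A, n1, samples=2000):
    """Sample points of W^2(A) for A = [[A11, A12], [A21, A22]],
    where A11 is n1 x n1: eigenvalues of the 2x2 compressions
    [[x*A11x, x*A12y], [y*A21x, y*A22y]] over unit vectors x, y."""
    A11, A12 = A[:n1, :n1], A[:n1, n1:]
    A21, A22 = A[n1:, :n1], A[n1:, n1:]
    pts = []
    for _ in range(samples):
        x = rng.standard_normal(n1); x /= np.linalg.norm(x)
        y = rng.standard_normal(A.shape[0] - n1); y /= np.linalg.norm(y)
        M = np.array([[x @ A11 @ x, x @ A12 @ y],
                      [y @ A21 @ x, y @ A22 @ y]])
        pts.extend(np.linalg.eigvals(M))
    return np.array(pts)

# Made-up example: diagonal blocks near 2 and -3 with weak coupling, so
# the spectrum (and hence W^2(A)) splits into two components away from 0,
# although 0 lies between them in the convex hull of the spectrum.
A = np.block([[2 * np.eye(3), 0.1 * np.ones((3, 3))],
              [0.1 * np.ones((3, 3)), -3 * np.eye(3)]])
pts = quadratic_numerical_range(A, 3)
gap = np.min(np.abs(pts))  # distance of the sampled W^2(A) from 0
```

The sampled points cluster in two groups near \(2\) and \(-3\), illustrating how \(W^2(A)\) can certify that the spectrum is bounded away from \(0\) even when the numerical range contains \(0\).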
We exploit this property in the finite-dimensional setting to derive Krylov subspace-type methods to solve the system \(Ax = b\), in which the iterates arise as solutions of low-dimensional models of the operator whose quadratic numerical range is contained in \(W^2(A)\). This implies that the iterates are always well-defined and that, as opposed to standard FOM, large variations in the approximation quality of consecutive iterates are avoided, although \(0\) lies within the convex hull of the spectrum. We also consider GMRES variants that are obtained in a similar spirit. We derive theoretical results on basic properties of these methods, review methods on how to compute the required bases in a stable manner, and present results of several numerical experiments illustrating improvements over standard FOM and GMRES.An efficient iterative method for solving large linear systems.https://zbmath.org/1459.650422021-05-28T16:06:00+00:00"Jamalian, Ali"https://zbmath.org/authors/?q=ai:jamalian.ali"Aminikhah, Hossein"https://zbmath.org/authors/?q=ai:aminikhah.hosseinSummary: This paper presents a new powerful iterative method for solving large and sparse linear systems. Using the idea of the Jaya method to the restarted generalized minimum residual (GMRES) method, we propose the Jaya-GMRES method. The Jaya-GMRES is an efficient solver, being based mainly on matrix-vector multiplications. 
Numerical results show that the Jaya-GMRES method finds more accurate solutions and converges more regularly than the GMRES method.Hybrid matrix compression for high-frequency problems.https://zbmath.org/1459.350922021-05-28T16:06:00+00:00"Börm, Steffen"https://zbmath.org/authors/?q=ai:borm.steffen"Börst, Christina"https://zbmath.org/authors/?q=ai:borst.christinaComputing enclosures for the matrix exponential.https://zbmath.org/1459.650562021-05-28T16:06:00+00:00"Frommer, Andreas"https://zbmath.org/authors/?q=ai:frommer.andreas"Hashemi, Behnam"https://zbmath.org/authors/?q=ai:hashemi.behnamOn convergence rate of the randomized Gauss-Seidel method.https://zbmath.org/1459.650402021-05-28T16:06:00+00:00"Bai, Zhong-Zhi"https://zbmath.org/authors/?q=ai:bai.zhongzhi"Wang, Lu"https://zbmath.org/authors/?q=ai:wang.lu|wang.lu.1|wang.lu.3|wang.lu.4|wang.lu.2"Wu, Wen-Ting"https://zbmath.org/authors/?q=ai:wu.wentingSummary: The Gauss-Seidel and Kaczmarz methods are two classic iteration methods for solving systems of linear equations, which operate in column and row spaces, respectively. Utilizing the connections between these two methods and imitating the exact analysis of the mean-squared error for the randomized Kaczmarz method, we derive an exact closed-form formula for the mean-squared residual of the iterate generated by the randomized Gauss-Seidel method. Based on this new formula, we further estimate an upper bound for the convergence rate of the randomized Gauss-Seidel method. Theoretical analysis and numerical experiments show that this bound measurably improves the existing ones.
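A minimal sketch of the randomized Gauss-Seidel iteration analyzed here, with the standard column sampling proportional to squared column norms (coordinate descent on \(\|b - Ax\|_2^2\)); the test problem is a made-up consistent Gaussian system, not an example from the paper.

```python
import numpy as np

rng = np.random.default_rng(2)

# Consistent overdetermined system with a Gaussian matrix (assumption
# for illustration). Column j is chosen with probability proportional
# to ||A e_j||^2, the standard sampling for randomized Gauss-Seidel.
m, n = 60, 20
A = rng.standard_normal((m, n))
x_true = rng.standard_normal(n)
b = A @ x_true

col_norms2 = np.sum(A * A, axis=0)
prob = col_norms2 / col_norms2.sum()

x = np.zeros(n)
r = b.copy()                      # residual b - A x, updated incrementally
for _ in range(20000):
    j = rng.choice(n, p=prob)
    delta = A[:, j] @ r / col_norms2[j]   # exact minimization along e_j
    x[j] += delta
    r -= delta * A[:, j]

err = np.linalg.norm(A @ x - b) / np.linalg.norm(b)
```

Each step works in the column space of \(A\) (updating one coordinate of \(x\)), in contrast to the randomized Kaczmarz method, whose steps work row by row; this is the column/row duality the summary refers to.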
Moreover, these theoretical results are also extended to the more general extrapolated randomized Gauss-Seidel method.Least squares solution of the quaternion Sylvester tensor equation.https://zbmath.org/1459.150202021-05-28T16:06:00+00:00"Wang, Qing-Wen"https://zbmath.org/authors/?q=ai:wang.qingwen"Xu, Xiangjian"https://zbmath.org/authors/?q=ai:xu.xiangjian"Duan, Xuefeng"https://zbmath.org/authors/?q=ai:duan.xuefengSummary: This paper is concerned with the solution to the least squares problem for the quaternion Sylvester tensor equation. An iterative algorithm based on tensor format is presented to solve this problem. The convergence properties of the proposed iterative method are studied. We also consider the best approximate problem related to quaternion Sylvester tensor equation. Numerical examples are provided to confirm the theoretical results, which demonstrate that the proposed algorithm is effective and feasible for solving the least squares problem of the quaternion Sylvester tensor equation.Efficient preconditioning for noisy separable nonnegative matrix factorization problems by successive projection based low-rank approximations.https://zbmath.org/1459.650392021-05-28T16:06:00+00:00"Mizutani, Tomohiko"https://zbmath.org/authors/?q=ai:mizutani.tomohiko"Tanaka, Mirai"https://zbmath.org/authors/?q=ai:tanaka.miraiSummary: The successive projection algorithm (SPA) can quickly solve a nonnegative matrix factorization problem under a separability assumption. Even if noise is added to the problem, SPA is robust as long as the perturbations caused by the noise are small. In particular, robustness against noise should be high when handling the problems arising from real applications. The preconditioner proposed by \textit{N. Gillis} and \textit{S. A. Vavasis} [SIAM J. Optim. 25, No. 1, 677--698 (2015; Zbl 1316.15015)] makes it possible to enhance the noise robustness of SPA. Meanwhile, an additional computational cost is required. 
The construction of the preconditioner contains a step to compute the top-\(k\) truncated singular value decomposition of an input matrix. It is known that the decomposition provides the best rank-\(k\) approximation to the input matrix; in other words, a matrix with the smallest approximation error among all matrices of rank at most \(k\). This step is an obstacle to an efficient implementation of the preconditioned SPA. To address the cost issue, we propose a modification of the algorithm for constructing the preconditioner. Whereas the original algorithm uses the best rank-\(k\) approximation, our modification uses an alternative. Ideally, this alternative should have high approximation accuracy and low computational cost. To ensure this, our modification employs a rank-\(k\) approximation produced by an SPA-based algorithm. We analyze the accuracy of the approximation and evaluate the computational cost of the algorithm. We then present an empirical study revealing the actual performance of the SPA-based rank-\(k\) approximation algorithm and the modified preconditioned SPA.Single snapshot DOA estimation by minimizing the fraction function in sparse recovery.https://zbmath.org/1459.940482021-05-28T16:06:00+00:00"Wang, Changlong"https://zbmath.org/authors/?q=ai:wang.changlong"Che, Jibin"https://zbmath.org/authors/?q=ai:che.jibin"Zhou, Feng"https://zbmath.org/authors/?q=ai:zhou.feng"Hou, Jinyong"https://zbmath.org/authors/?q=ai:hou.jinyong"Li, Chen"https://zbmath.org/authors/?q=ai:li.chen.1|li.chenSummary: Sparse recovery is one of the most important methods for single snapshot DOA estimation. Due to the fact that the original \(l_0\)-minimization problem is NP-hard, we design a new alternative fraction function to solve the DOA estimation problem. First, we discuss the theoretical guarantee of the new alternative model for solving the DOA estimation problem. The equivalence between the alternative model and the original model is proved.
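For reference, the top-\(k\) truncated SVD step discussed in the preconditioning summary above can be sketched as follows (a minimal NumPy sketch, not the authors' implementation; by the Eckart-Young theorem the result is the best rank-\(k\) approximation in both the spectral and Frobenius norms):

```python
import numpy as np

def best_rank_k(M, k):
    """Top-k truncated SVD of M: keep the k largest singular values and
    the corresponding singular vectors. The product is the best
    approximation to M among all matrices of rank at most k."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return (U[:, :k] * s[:k]) @ Vt[:k, :]
```

The full SVD shown here costs \(O(mn\min(m,n))\), which is precisely the expense the modified preconditioner avoids by substituting a cheaper SPA-based rank-\(k\) approximation.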
Second, we present the optimality properties of this new model and give a fixed-point algorithm together with a convergence result. Finally, some simulation experiments are provided to demonstrate the effectiveness of the new algorithm compared with the classic sparse recovery method.An iterative algorithm for solving a class of generalized coupled Sylvester-transpose matrix equations over bisymmetric or skew-anti-symmetric matrices.https://zbmath.org/1459.650542021-05-28T16:06:00+00:00"Yan, Tongxin"https://zbmath.org/authors/?q=ai:yan.tongxin"Ma, Changfeng"https://zbmath.org/authors/?q=ai:ma.changfengSummary: This paper presents an iterative algorithm to solve a class of generalized coupled Sylvester-transpose matrix equations over bisymmetric or skew-anti-symmetric matrices. When the matrix equations are consistent, the bisymmetric or skew-anti-symmetric solutions can be obtained within finite iteration steps in the absence of round-off errors for any initial bisymmetric or skew-anti-symmetric matrices by the proposed iterative algorithm. In addition, we can obtain the least norm solution by choosing the special initial matrices. Finally, numerical examples are given to demonstrate that the iterative algorithm is quite efficient. The merit of our method is that it is easy to implement.An SDP method for copositivity of partially symmetric tensors.https://zbmath.org/1459.650582021-05-28T16:06:00+00:00"Wang, Chunyan"https://zbmath.org/authors/?q=ai:wang.chunyan"Chen, Haibin"https://zbmath.org/authors/?q=ai:chen.haibin"Che, Haitao"https://zbmath.org/authors/?q=ai:che.haitaoSummary: In this paper, we consider the problem of detecting the copositivity of partially symmetric rectangular tensors. We first propose a semidefinite relaxation algorithm for detecting the copositivity of partially symmetric rectangular tensors. Then, the convergence of the proposed algorithm is established, showing that we can always detect the copositivity of given partially symmetric tensors.
Several preliminary numerical results confirm our theoretical findings.On the solution of the nonsymmetric T-Riccati equation.https://zbmath.org/1459.150152021-05-28T16:06:00+00:00"Benner, Peter"https://zbmath.org/authors/?q=ai:benner.peter"Palitta, Davide"https://zbmath.org/authors/?q=ai:palitta.davideSummary: The nonsymmetric T-Riccati equation is a quadratic matrix equation where the linear part corresponds to the so-called T-Sylvester or T-Lyapunov operator that has previously been studied in the literature. It has applications in macroeconomics and policy dynamics. So far, it has remained an unexplored problem in numerical analysis, and both theoretical results and computational methods are lacking in the literature. In this paper we provide some sufficient conditions for the existence and uniqueness of a nonnegative minimal solution, namely the solution with component-wise minimal entries. Moreover, the efficient computation of such a solution is analyzed. Both the small-scale and large-scale settings are addressed, and Newton-Kleinman-like methods are derived. The convergence of these procedures to the minimal solution is proven, and several numerical results illustrate the computational efficiency of the proposed methods.Properties of the nonnegative solution set of multi-linear equations.https://zbmath.org/1459.150052021-05-28T16:06:00+00:00"Xu, Yang"https://zbmath.org/authors/?q=ai:xu.yang|xu.yang.3|xu.yang.2|xu.yang.1"Gu, Weizhe"https://zbmath.org/authors/?q=ai:gu.weizhe"Huang, Zheng-Hai"https://zbmath.org/authors/?q=ai:huang.zheng-haiSummary: Multi-linear equations and tensor complementarity problems have been two hot topics in recent years. It is known that the nonnegative solution set of multi-linear equations is a subset of the solution set of the corresponding tensor complementarity problem.
In this paper, we first investigate the existence and uniqueness of the nonnegative (positive) solution to multi-linear equations induced by some triangular tensors; then, we discuss the non-existence of nonnegative solutions to multi-linear equations induced by \(B\) (\(B_0\)) tensors or strictly diagonally dominant tensors. In addition, we also investigate the boundedness of the nonnegative solution set of multi-linear equations with some structured tensors. The obtained properties of the nonnegative solution set of multi-linear equations give some characterizations of the solution set of tensor complementarity problems.Numerical determination for solving the symmetric eigenvector problem using genetic algorithm.https://zbmath.org/1459.650492021-05-28T16:06:00+00:00"Navarro-González, F. J."https://zbmath.org/authors/?q=ai:navarro-gonzalez.f-j"Compañ, P."https://zbmath.org/authors/?q=ai:compan.patricia"Satorre, R."https://zbmath.org/authors/?q=ai:satorre.r"Villacampa, Yolanda"https://zbmath.org/authors/?q=ai:villacampa.yolandaSummary: The eigenvalues and eigenvectors of a matrix have many applications in engineering and science. For example they are important in studying and solving structural problems, in the treatment of signal or image processing, in the study of quantum mechanics and in certain physical problems. It is therefore essential to analyze methodologies to obtain the eigenvectors and eigenvalues of symmetric and Hermitian matrices. In this paper the authors present a methodology for obtaining the eigenvectors and eigenvalues of a symmetric or Hermitian matrix using a genetic algorithm. Unlike other methodologies, the process is centred on searching for the eigenvectors and calculating the eigenvalues afterwards. In the search for the eigenvectors a genetic-based algorithm is used. Genetic algorithms are indicated when the search space is large, unknown, or has an intricate geometry.
Also, the target vector space can be either real or complex, thereby allowing a wider field of application for the proposed method. The algorithm is tested by comparing the results with those obtained by other methods or with previously known values. Seven applications are included: a real symmetric matrix corresponding to a vibrating system, a complex Hermitian matrix and an important application of the diagonalization problem (Coope matrix) corresponding to quantum mechanics examples, a physical problem in which data are analysed to reduce the number of variables, a comparison with the power method, and studies of a degenerate and an ill-conditioned matrix.On eigenstructure of \(q\)-Bernstein operators.https://zbmath.org/1459.650482021-05-28T16:06:00+00:00"Naaz, Ambreen"https://zbmath.org/authors/?q=ai:naaz.ambreen"Mursaleen, M."https://zbmath.org/authors/?q=ai:mursaleen.mohammad|mursaleen.mohammad-ayman|mursaleen.momammadSummary: The quantum analogue of Bernstein operators \(\mathcal{B}_{m,q}\) reproduces the linear polynomials, which are therefore eigenfunctions corresponding to the eigenvalue \(1\), \(\forall q>0\). In this article the rest of the eigenstructure of \(q\)-Bernstein operators and the distinct behaviour of the zeros of eigenfunctions for the cases (i) \(1>q>0\) and (ii) \(q>1\) are discussed. Graphical analysis of some eigenfunctions and their roots is presented with the help of MATLAB. Also, the matrix representation for the diagonalisation of \(q\)-Bernstein operators is discussed.
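The power method, used as a baseline in the genetic-algorithm comparison above, can be sketched as follows (a generic textbook sketch, not the code used in any of the papers; the iteration count and random starting vector are arbitrary illustrative choices):

```python
import numpy as np

def power_method(A, iters=200, seed=0):
    """Classical power iteration for a symmetric matrix A: repeatedly
    apply A and normalize. It converges to an eigenvector of the
    eigenvalue of largest magnitude, provided that eigenvalue is simple
    and the start vector is not orthogonal to its eigenvector."""
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(A.shape[0])
    v /= np.linalg.norm(v)
    for _ in range(iters):
        w = A @ v
        v = w / np.linalg.norm(w)
    lam = v @ A @ v  # Rayleigh quotient estimate of the eigenvalue
    return lam, v
```

Unlike the genetic-algorithm approach above, which searches for eigenvectors directly over the whole spectrum, plain power iteration recovers only the dominant eigenpair and slows down when the two largest eigenvalues are close in magnitude.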