FIG 7 - uploaded by Ronald B. Morgan
Restarting deflated BiCGSTAB at different points.

Source publication
Article
A new approach is discussed for solving large nonsymmetric systems of linear equations with multiple right-hand sides. The first system is solved with a deflated GMRES method that generates eigenvector information at the same time that the linear equations are solved. Subsequent systems are solved by combining an iterative method with a projection...
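
A sketch may help make the projection approach in this abstract concrete: approximate eigenvectors retained from the first solve are used in a minimum-residual projection that alternates with cycles of restarted GMRES for the later right-hand sides. The following is a minimal NumPy illustration, not the authors' code; V stands for the stored eigenvector estimates, and the function names and the parameters m, cycles, and tol are assumptions made for the example.

```python
import numpy as np

def minres_projection(A, V, x, b):
    # Minimum-residual projection over span(V): choose y to minimize
    # ||b - A(x + V y)||_2, then update x. (A @ V could be precomputed.)
    r = b - A @ x
    y, *_ = np.linalg.lstsq(A @ V, r, rcond=None)
    return x + V @ y

def gmres_cycle(A, b, x, m):
    # One cycle of GMRES(m): Arnoldi followed by a small least-squares solve.
    r = b - A @ x
    beta = np.linalg.norm(r)
    if beta == 0.0:
        return x
    n = b.size
    Q = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    Q[:, 0] = r / beta
    for j in range(m):
        w = A @ Q[:, j]
        for i in range(j + 1):
            H[i, j] = Q[:, i] @ w
            w -= H[i, j] * Q[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] < 1e-14:      # lucky breakdown: exact solve in j+1 steps
            m = j + 1
            break
        Q[:, j + 1] = w / H[j + 1, j]
    e1 = np.zeros(m + 1)
    e1[0] = beta
    y, *_ = np.linalg.lstsq(H[:m + 1, :m], e1, rcond=None)
    return x + Q[:, :m] @ y

def solve_next_rhs(A, b, V, m=30, cycles=100, tol=1e-8):
    # For a subsequent right-hand side, alternate the projection over the
    # stored eigenvector estimates V with cycles of restarted GMRES(m).
    x = np.zeros_like(b)
    for _ in range(cycles):
        x = minres_projection(A, V, x, b)
        x = gmres_cycle(A, b, x, m)
        if np.linalg.norm(b - A @ x) <= tol * np.linalg.norm(b):
            break
    return x
```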

Similar publications

Article
A new approach is discussed for solving large nonsymmetric systems of linear equations with multiple right-hand sides. The first system is solved with a deflated GMRES method that generates eigenvector information at the same time that the linear equations are solved. Subsequent systems are solved by combining restarted GMRES with a projection over...

Citations

... In the method, basis vectors generated by the Arnoldi process during the restart cycle are used to determine the subspace used for deflation [14]. Morgan et al. also introduced variants of the GMRES-DR method, including an application to the flexible GMRES method [15,16]. Carpenter described five major ways to specify the subspace (enrichment vectors) in the context of GMRES-based solvers [17]. ...
... Moreover, for simplicity, we assume that two of the parameters are set to 20 and the third to 30. From (15) and (23), for these settings, the computational cost of the setup is comparable with that of ten ICCG iterations. Because the number of iterations for a practical engineering problem is typically not small and often exceeds several hundred, the setup cost can be amortized over the subsequent solution steps. ...
Article
In this article, we focus on solving a sequence of linear systems that have identical (or similar) coefficient matrices. For this type of problem, we investigate subspace correction (SC) and deflation methods, which use an auxiliary matrix (subspace) to accelerate the convergence of the iterative method. In practical simulations, these acceleration methods typically work well when the range of the auxiliary matrix contains eigenspaces corresponding to small eigenvalues of the coefficient matrix. We develop a new algebraic auxiliary matrix construction method based on error vector sampling in which eigenvectors with small eigenvalues are efficiently identified in the solution process. We use the generated auxiliary matrix for convergence acceleration in the following solution step. Numerical tests confirm that both SC and deflation methods with the auxiliary matrix can accelerate the solution process of the iterative solver. Furthermore, we examine the applicability of our technique to the estimation of the condition number of the coefficient matrix. We also present the algorithm of the preconditioned conjugate gradient method with condition number estimation.
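
Because the abstract describes an auxiliary matrix whose range should approximate the small-eigenvalue eigenspace, a generic deflated-CG sketch with such a matrix may be useful for orientation. This is a textbook-style NumPy formulation rather than the authors' algorithm; W plays the role of the auxiliary matrix, A is assumed symmetric positive definite, and the stopping parameters are illustrative.

```python
import numpy as np

def deflated_cg(A, b, W, tol=1e-10, maxiter=500):
    # W: n x k auxiliary matrix whose range should approximate the
    # eigenspace of the small eigenvalues of the SPD matrix A.
    AW = A @ W
    E = W.T @ AW                       # small k x k coarse matrix
    Einv = np.linalg.inv(E)
    def P(v):                          # deflation projector P = I - A W E^{-1} W^T
        return v - AW @ (Einv @ (W.T @ v))
    x_coarse = W @ (Einv @ (W.T @ b))  # coarse (deflated-subspace) part of x
    # CG on the deflated, consistent system P A x_hat = P b
    x_hat = np.zeros_like(b)
    r = P(b)
    p = r.copy()
    rs = r @ r
    for _ in range(maxiter):
        Ap = P(A @ p)
        alpha = rs / (p @ Ap)
        x_hat += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) <= tol * np.linalg.norm(b):
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    # recombine: x = W E^{-1} W^T b + P^T x_hat
    return x_coarse + x_hat - W @ (Einv @ (AW.T @ x_hat))
```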
... The small local volume on coarser grids already cripples performance on GPUs. Our approach to addressing this issue is to reformulate the problem so that a large number of degrees of freedom are processed in parallel, i.e., we solve multiple right-hand sides (MRHS) simultaneously [6,7]. ...
Preprint
In this paper, we focus on solving a sequence of linear systems with an identical (or similar) coefficient matrix. For this type of problem, we investigate the subspace correction and deflation methods, which use an auxiliary matrix (subspace) to accelerate the convergence of the iterative method. In practical simulations, these acceleration methods typically work well when the range of the auxiliary matrix contains eigenspaces corresponding to small eigenvalues of the coefficient matrix. We have developed a new algebraic auxiliary matrix construction method based on error vector sampling, in which eigenvectors with small eigenvalues are efficiently identified during the solution process. The generated auxiliary matrix is used for convergence acceleration in the following solution step. Numerical tests confirm that both the subspace correction and deflation methods with the auxiliary matrix can accelerate the solution process of the iterative solver. Furthermore, we examine the applicability of our technique to the estimation of the condition number of the coefficient matrix. The algorithm of the preconditioned conjugate gradient (PCG) method with condition number estimation is also shown.
... The preferred method of MG in LQCD is to use it as a preconditioner for an outer Krylov solver [8]. Because every iteration of the outer Krylov solver represents a new right-hand side for the MG preconditioner, deflation with projection methods [9,10] can be efficiently employed on the coarsest level. We demonstrate the effect of deflation on the coarsest level by comparing to MG without coarse-grid deflation, and we show the effect this deflation has for multiple right-hand sides. ...
Preprint
Lattice QCD solvers encounter critical slowing down for fine lattice spacings and small quark mass. Traditional matrix eigenvalue deflation is one approach to mitigating this problem. However, to improve scaling, we study the effects of deflating on the coarse grid in a hierarchy of three grids for adaptive multigrid applications of the two-dimensional Schwinger model. We compare deflation at the fine and coarse levels with other non-deflated methods. We find that the inclusion of a partial solve on the intermediate grid allows for a low-tolerance deflated solve on the coarse grid. We find very good scaling in lattice size near the critical mass when we deflate at the coarse level using the GMRES-DR and GMRES-Proj algorithms.
... It would have been desirable to compute the deflation space from the search spaces built by the iterative methods for solving these linear systems. This idea has been explored effectively for Lattice QCD in the past [16,22,1]. However, such methods are not suitable for our current problem for the following reasons. ...
Article
Many fields require computing the trace of the inverse of a large, sparse matrix. The typical method used for such computations is the Hutchinson method, which is a Monte Carlo (MC) averaging over matrix quadratures. To improve its convergence, several variance reduction techniques have been proposed. In this paper, we study the effects of deflating the near-null singular value space. We make two main contributions. First, we analyze the variance of the Hutchinson method as a function of the deflated singular values and vectors. Although this provides good intuition in general, by assuming additionally that the singular vectors are random unitary matrices, we arrive at concise formulas for the deflated variance that include only the variance and mean of the singular values. We make the remarkable observation that deflation may increase variance for Hermitian matrices but not for non-Hermitian ones. This is a rare, if not unique, property where non-Hermitian matrices outperform Hermitian ones. The theory can be used as a model for predicting the benefits of deflation. Second, we use deflation in the context of a large-scale application of "disconnected diagrams" in Lattice QCD. On lattices, Hierarchical Probing (HP) has previously provided an order of magnitude of variance reduction over MC by removing "error" from neighboring nodes of increasing distance in the lattice. Although deflation used directly on MC yields a limited improvement of 30% in our problem, when combined with HP it reduces the variance by a factor of over 150 compared to MC. For this, we pre-computed the 1000 smallest singular values of an ill-conditioned matrix of size 25 million. Using PRIMME and a domain-specific Algebraic Multigrid preconditioner, we perform one of the largest eigenvalue computations in Lattice QCD at a fraction of the cost of our trace computation.
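
The deflated Hutchinson estimator described here can be sketched in a few lines. The snippet below is an illustrative NumPy version, not the paper's implementation: U, s, V are assumed to hold precomputed smallest singular triplets of A, a dense solve stands in for the iterative solver used in practice, and n_samples is an arbitrary choice.

```python
import numpy as np

def deflated_hutchinson_trace_inv(A, U, s, V, n_samples=100, rng=None):
    # Estimate trace(A^{-1}) after deflating the k smallest singular
    # triplets (U, s, V), i.e. A^{-1} = V diag(1/s) U^T + remainder.
    rng = np.random.default_rng() if rng is None else rng
    n = A.shape[0]
    # exact contribution of the deflated part: trace(V diag(1/s) U^T)
    t_defl = np.sum(np.einsum('ij,ij->j', U, V) / s)
    # Hutchinson (Rademacher) Monte Carlo estimate of the remainder
    est = 0.0
    for _ in range(n_samples):
        z = rng.choice([-1.0, 1.0], size=n)
        y = np.linalg.solve(A, z)          # in practice: an iterative solver
        y_defl = V @ ((U.T @ z) / s)       # action of the deflated part on z
        est += z @ (y - y_defl)
    return t_defl + est / n_samples
```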
... The first one is based on reusing information from the first solve to speed up convergence of the subsequent systems. Using a projector, spectral information from the first solve can spectrally deflate a GMRes method at lower cost [10]. It is expected to attain near-asymptotic convergence right from the beginning, removing transient plateaus. ...
... Instead, a simpler method was tried. As explained in [10], a projection onto the small harmonic Ritz vectors at each restart of a standard (i.e., non-deflated) GMRes could be sufficient to attain near-asymptotic convergence right from the start, and also exhibits good deflation properties at restart (no significant change of the convergence rate). ...
... For the projection and the GMRes method to interweave properly, the projection has to be carefully crafted. The projection, dubbed MinRes in [10], is explained in Algorithm 3. The MinRes projection should be used in the following way. A first right-hand side is solved using deflated GMRes. ...
... The process is referred to as deflation. Variants of deflated Krylov solvers for the primary linear systems have been extensively studied in the literature [2,3,6-14]. However, deflation for dual linear systems has not yet been fully investigated [15,16]. ...
Article
Most calculations in model reduction involve the solutions of a sequence of dual linear systems with multiple right-hand sides. To solve such systems efficiently, a new deflated BiCG method is explored in this paper. The proposed algorithm uses harmonic Ritz vectors to approximate left and right invariant subspaces inexpensively via small descent direction vectors found by subsequent runs of deflated BiCG, and then derives the deflated subspaces for the next pair of dual linear systems. This process leads to faster convergence for the next pair of systems. Numerical examples illustrate the effectiveness of the proposed method.
... Example 15. We experiment with a lattice quantum chromodynamics (QCD) [14,1,28,2] matrix of size n = 2,654,208 from a 24³ × 32 lattice. It is generated using a quenched Wilson gauge configuration at β = 6.0. ...
Article
We look at solving large nonsymmetric systems of linear equations using polynomial preconditioned Krylov methods. We give a simple way to find the polynomial. It is shown that polynomial preconditioning can significantly improve restarted GMRES for difficult problems, and the reasons for this are examined. Stability is discussed, and algorithms are given for increased stability. Next, we apply polynomial preconditioning to GMRES with deflated restarting. It is shown that this is worthwhile for sparse matrices and for problems with many small eigenvalues. Multiple right-hand sides are also considered.
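
A rough sense of polynomial preconditioning can be given with a short sketch. The version below uses ordinary Ritz values from a short Arnoldi run as the roots of a residual polynomial, whereas the paper works with harmonic Ritz values and adds stabilization, so this is only an approximation of the idea; the degree d and the function names are assumptions. One would then solve the right-preconditioned system A p(A) y = b with GMRES and recover x = p(A) y.

```python
import numpy as np

def ritz_roots(A, d, rng=None):
    # Ritz values from a short Arnoldi run on a random vector; the paper
    # uses harmonic Ritz values plus safeguards for stability.
    rng = np.random.default_rng() if rng is None else rng
    n = A.shape[0]
    Q = np.zeros((n, d + 1), dtype=complex)
    H = np.zeros((d + 1, d), dtype=complex)
    v = rng.standard_normal(n)
    Q[:, 0] = v / np.linalg.norm(v)
    for j in range(d):
        w = A @ Q[:, j]
        for i in range(j + 1):
            H[i, j] = np.vdot(Q[:, i], w)
            w = w - H[i, j] * Q[:, i]
        H[j + 1, j] = np.linalg.norm(w)    # assumes no breakdown, for simplicity
        Q[:, j + 1] = w / H[j + 1, j]
    return np.linalg.eigvals(H[:d, :d])

def apply_poly_prec(A, v, roots):
    # Apply p(A)v with p(a) = (1 - prod_i (1 - a/theta_i)) / a, i.e. the
    # residual-polynomial form of a degree-d polynomial preconditioner.
    # Assumes real A and b; root ordering affects numerical stability.
    r = v.astype(complex)
    out = np.zeros_like(r)
    for theta in roots:
        out += r / theta
        r = r - (A @ r) / theta
    return out.real
```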
... There are other algorithms in the literature for solving systems with multiple right-hand sides using deflation. We mention in particular Lanczos with deflated restarting (Lan-DR) [23,37], GMRes with deflated restarting (GMRes-DR and GMRes-Proj) for the nonsymmetric case [17,38,24], and Recycled Krylov methods [18,19]. The algorithms we propose are different in several ways. ...
... For systems with multiple right-hand sides, the computed eigenvectors from the first system are used to deflate restarted GMRes for the following systems. Because it is expensive to deflate these k vectors at every step of GMRes-DR(m,k), they are used in the GMRes-Proj method [38]. In GMRes-Proj, cycles of GMRes(m′) are alternated with a minimum residual projection over these k eigenvectors. ...
Article
The technique that was used to build the EigCG algorithm for sparse symmetric linear systems is extended to the nonsymmetric case using the BiCG algorithm. We show that, similarly to the symmetric case, we can build an algorithm that is capable of computing a few smallest magnitude eigenvalues and their corresponding left and right eigenvectors of a nonsymmetric matrix using only a small window of the BiCG residuals while simultaneously solving a linear system with that matrix. For a system with multiple right-hand sides, we give an algorithm that computes incrementally more eigenvalues while solving the first few systems and then uses the computed eigenvectors to deflate BiCGStab for the remaining systems. Our experiments on various test problems, including Lattice QCD, show the remarkable ability of EigBiCG to compute spectral approximations with accuracy comparable to that of the unrestarted, nonsymmetric Lanczos. Furthermore, our incremental EigBiCG followed by appropriately restarted and deflated BiCGStab provides a competitive method for systems with multiple right-hand sides.
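
As a hedged illustration of how computed left and right eigenvector estimates can deflate BiCGStab, the sketch below applies an oblique (Petrov-Galerkin) projection to form an initial guess and then calls SciPy's BiCGStab. The paper's method also restarts and re-applies deflation, which is omitted here; the names V, U, and deflate_then_bicgstab are illustrative.

```python
import numpy as np
from scipy.sparse.linalg import bicgstab

def deflate_then_bicgstab(A, b, V, U):
    # V, U: right and left eigenvector estimates (n x k) for the smallest
    # magnitude eigenvalues, e.g. accumulated over the first few systems.
    # Oblique projection for the initial guess: x0 = V (U^H A V)^{-1} U^H b.
    small = U.conj().T @ (A @ V)                 # k x k projected matrix
    x0 = V @ np.linalg.solve(small, U.conj().T @ b)
    # Hand the deflated initial guess to BiCGStab (tolerances left at defaults).
    x, info = bicgstab(A, b, x0=x0)
    return x, info
```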
... Some approaches involve the computation of the low-lying eigenvectors of γ₅D and use these to deflate the Dirac operator, see e.g. refs. [1,2]. These eigenvectors can also be used to reduce the noise of the signal with a technique known in the literature [3,4] as low mode averaging (LMA). ...
... refs. [1,2]. In this case the overhead of LMA is negligible. ...