Siegfried M. Rump
Technische Universität Hamburg | TUHH · Institute for Reliable Computing

About

241
Publications
19,968
Reads
5,424
Citations

Publications (241)
Article
Full-text available
Verification methods compute intervals which contain the solution of a given problem with mathematical rigour. In order to compare the quality of intervals some measure is desirable. We identify some anticipated properties and propose a method avoiding drawbacks of previous definitions.
Article
Full-text available
Recently Oishi published a paper allowing lower bounds for the minimum singular value of coefficient matrices of linearized Galerkin equations, which in turn arise in the computation of periodic solutions of nonlinear delay differential equations with some smooth nonlinearity. The coefficient matrix of linearized Galerkin equations may be large, so...
Article
We show how an IEEE-754 conformant precision-p base-β arithmetic can be implemented based on some binary floating-point and/or integer arithmetic. This includes the four basic operations and square root subject to the five IEEE-754 rounding modes, namely the nearest roundings with roundTiesToEven and roundTiesToAway, the directed roundings downwa...
Article
Full-text available
The numerical computation of the Euclidean norm of a vector is perfectly well conditioned with favorable a priori error estimates. Recently there has been interest in computing a faithfully rounded approximation, which means that there is no other floating-point number between the computed and the true real result. Hence the result is either the rounded to...
Article
Full-text available
There are simple algorithms to compute the predecessor, successor, unit in the first place, unit in the last place etc. in binary arithmetic. In this note equally simple algorithms for computing the unit in the first place and the unit in the last place in precision-p base-β arithmetic with p ⩾ 1 and with β ≥...
Article
Let an irreducible nonnegative matrix A and a positive vector x be given. Assume αx ≤ Ax ≤ βx for some 0 < α ≤ β ∈ ℝ. Then, by Perron-Frobenius theory, α and β are lower and upper bounds for the Perron root of A. As for the Perron vector x*, only bounds for the ratio γ := max_{i,j} x*_i/x*_j are known, but no error bounds against some given vector x. In this note w...
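The α and β above are the classical Collatz–Wielandt bounds. A minimal numpy illustration (matrix and trial vector invented for this sketch, not taken from the paper):

```python
# Collatz-Wielandt-type bounds for the Perron root: for nonnegative
# irreducible A and positive x, min_i (Ax)_i/x_i <= rho(A) <= max_i (Ax)_i/x_i.
import numpy as np

A = np.array([[2.0, 1.0, 0.5],
              [1.0, 3.0, 1.0],
              [0.5, 1.0, 2.0]])   # nonnegative and irreducible
x = np.array([1.0, 1.2, 1.0])     # an arbitrary positive trial vector

ratios = (A @ x) / x
alpha, beta = ratios.min(), ratios.max()
rho = max(abs(np.linalg.eigvals(A)))   # Perron root, for comparison only
print(alpha <= rho <= beta)            # True
```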
Article
Full-text available
Let A be a real n×n matrix and z, b ∈ ℝ^n...
Article
Let a strongly stable norm ∥⋅∥ on the set M_n of complex n-by-n matrices be given, which means that ∥A^k∥ ≤ ∥A∥^k for all A ∈ M_n and all k = 1, 2, …. Furthermore, let f(x) = ∑_{k=0}^∞ c_k x^k be a power series with nonnegative coefficients c_k ≥ 0 and radius of convergence R > 0. If ∥I∥ > 1, we additionally suppose that c_0 = f(0) = 0. We aim to characterize those A with ∥A∥ < R, wh...
Article
Let a norm on the set M_n of real or complex n-by-n matrices be given. We investigate the question of finding the largest constants α_n and β_n such that for each A ∈ M_n the average of the norms of its (n−1)-by-(n−1) principal submatrices is at least α_n times the norm of A, and such that the maximum of the norms of those principal submatrices is at leas...
Preprint
Let $A$ be a real $n\times n$ matrix and $z,b\in \mathbb R^n$. The piecewise linear equation system $z-A\vert z\vert = b$ is called an \textit{absolute value equation}. We consider two solvers for this problem, one direct, one semi-iterative, and extend their previously known ranges of convergence.
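For orientation, a hedged sketch of the simplest fixed-point iteration for the absolute value equation; this is not one of the paper's two solvers, and it converges only under elementary conditions such as ‖A‖ < 1:

```python
# Picard-type iteration z_{k+1} = A|z_k| + b for z - A|z| = b.
# Illustrative only; the paper's direct and semi-iterative solvers and their
# extended convergence ranges are not reproduced here.
import numpy as np

def ave_picard(A, b, iters=200):
    z = b.copy()
    for _ in range(iters):
        z = A @ np.abs(z) + b
    return z

A = np.array([[0.2, -0.1],
              [0.05, 0.3]])        # ||A||_inf < 1, so the iteration converges
b = np.array([1.0, -2.0])
z = ave_picard(A, b)
print(np.allclose(z - A @ np.abs(z), b))   # True: z solves the AVE
```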
Preprint
Full-text available
Electronic structure calculations, in particular the computation of the ground state energy, lead to challenging problems in optimization. These problems are of enormous importance in quantum chemistry for calculations of properties of solids and molecules. Minimization methods for computing the ground state energy can be developed by employing...
Article
For an \(m \times n\) matrix A, the mathematical property that the rank of A is equal to r for \(0< r < \min (m,n)\) is an ill-posed problem. In this note we show that, regardless of this circumstance, it is possible to solve the strongly related problem of computing a nearby matrix with at least rank deficiency k in a mathematically rigorous way a...
Article
We present a pair arithmetic for the four basic operations and square root. It can be regarded as a simplified, more-efficient double-double arithmetic. The central assumption on the underlying arithmetic is the first standard model for error analysis for operations on a discrete set of real numbers. Neither do we require a floating-point grid nor...
Article
We discuss several methods to compute a verified inclusion of the determinant of a real or complex, point or interval matrix. For point matrices, large condition number 10^15, and large dimension (n = 1000), still highly accurate inclusions are computed. For real interval matrices we show that any vertex may be a unique extreme point. For wide radii we...
Article
We derive verified error bounds for approximate solutions of dense linear systems. There are verification methods using an approximate inverse of a coefficient matrix as a preconditioner, where the preconditioned coefficient matrix is likely to be anH-matrix (also known as a generalized diagonally dominant matrix). We focus on two inclusion methods...
Article
Each connected component of the Gershgorin circles of a matrix contains exactly as many eigenvalues as circles are involved. Thus, the Minkowski (set) product of all circles contains the determinant if all circles are disjoint. In [S.M. Rump. Bounds for the determinant by Gershgorin circles. Linear Algebra and its Applications, 563:215--219, 2019.]...
Article
Full-text available
Let L be a lower triangular \(n\times n\)-Toeplitz matrix with first column \((\mu ,\alpha ,\beta ,\alpha ,\beta ,\ldots )^T\), where \(\mu ,\alpha ,\beta \ge 0\) fulfill \(\alpha -\beta \in [0,1)\) and \(\alpha \in [1, \mu + 3]\). Furthermore let D be the diagonal matrix with diagonal entries \(1,2,\ldots ,n\). We prove that the smallest singular...
Article
Full-text available
Let \(D_R\), \(D_r\), \(D_S\), \(D_s\) be complex disks with common center 1 and radii R, r, S, s, respectively. We consider the Minkowski products \(A := D_R D_r\) and \(B := D_S D_s\) and give necessary and sufficient conditions for A being a subset or superset of B. Partially, this extends to n-fold disk products \(D_1\ldots D_n\), \(n>2\). It i...
Article
Each connected component of the Gershgorin circles of a matrix contains exactly as many eigenvalues as circles are involved. Thus, the Minkowski (set) product of all circles contains the determinant if all circles are disjoint. In [S.M. Rump. Bounds for the determinant by Gershgorin circles. Linear Algebra and its Applications, 563:215--219, 2019.]...
Article
Many bounds for the determinant det(I+E) of a perturbed identity matrix are known. Mostly, the upper bound is inferior to the classical Hadamard bound. In this note we give simple and efficiently computable relative bounds differing by ‖E‖_F^3, where the upper bound is usually better than Hadamard's bound.
Article
Each connected component of the Gershgorin circles of a matrix contains exactly as many eigenvalues as circles are involved. Thus the power set product of all circles is an inclusion of the determinant if all circles are disjoint. We prove that statement to be true for real matrices, even if their circles overlap.
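A small numpy illustration of the classical Gershgorin containment underlying this statement (example matrix invented; the determinant inclusion itself is not re-derived here):

```python
# Gershgorin circles: centers are the diagonal entries, radii are the
# off-diagonal absolute row sums; every eigenvalue lies in some circle.
import numpy as np

A = np.array([[ 4.0, 1.0, 0.5],
              [ 0.3,-2.0, 0.2],
              [ 0.1, 0.4, 7.0]])

centers = np.diag(A)
radii = np.abs(A).sum(axis=1) - np.abs(centers)
eigs = np.linalg.eigvals(A)
print(all(any(abs(lam - c) <= r for c, r in zip(centers, radii)) for lam in eigs))
```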
Article
Recently Brent et al. presented new estimates for the determinant of a real perturbation I+E of the identity matrix. They give a lower and an upper bound depending on the maximum absolute value of the diagonal and the off-diagonal elements of E, and show that either bound is sharp. Their bounds will always include 1, and the difference of the bound...
Article
Standard Wilkinson-type error estimates of floating-point algorithms that are solely based on the first or second standard model typically involve a factor γ_k := ku/(1 − ku), where u denotes the relative rounding error unit of a floating-point number system. Using specific properties of floating-point grids it was shown that often γ_k can be replace...
Article
This paper gives details on how to obtain mathematically rigorous results for global unconstrained and equality constrained optimization problems, as well as for finding all roots of a nonlinear function within a box. When trying to produce mathematically rigorous results for such problems of global nature, the main issue is to mathematically verif...
Article
Full-text available
Let A be a real n × n matrix and z, b ∈ ℝ^n. The piecewise linear equation system z − A|z| = b is called an absolute value equation. It is equivalent to the general linear complementarity problem, and thus NP-hard in general. Concerning the latter problem, three solvers are presented: one direct, one semi-iterative and one discrete variant of dampe...
Article
Standard Wilkinson-type error estimates of floating-point algorithms involve a factor \(\gamma _k:=k\mathbf {u}/(1-k\mathbf {u})\) for \(\mathbf {u}\) denoting the relative rounding error unit of a floating-point number system. Recently, it was shown that, for many standard algorithms such as matrix multiplication, LU- or Cholesky decomposition, \(...
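An illustrative comparison of the two factors for binary64 (u = 2⁻⁵³):

```python
# Size of the classical factor gamma_k = k*u/(1 - k*u) versus the improved
# factor k*u for a few values of k (illustration only).
u = 2.0**-53
for k in (10, 10_000, 10_000_000):
    gamma_k = k * u / (1 - k * u)
    print(k, gamma_k, k * u)
```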
Technical Report
Full-text available
Electronic structure calculations, in particular the computation of the ground state energy, lead to challenging problems in optimization. These problems are of enormous importance in quantum chemistry for calculations of properties of solids and molecules. Minimization methods for computing the ground state energy can be developed by employing a v...
Article
Suppose an m-digit floating-point arithmetic in base β ≥ 2 following the IEEE 754 arithmetic standard is available. We show how a k-digit arithmetic with k < m can be inherited solely using m-digit operations. This includes the rounding into k digits, the four basic operations and the square root, all for even or odd base β. In particular, we characte...
Article
It seems to be of recurring interest in the literature to give alternative proofs for the fact that the determinant of a symplectic matrix is one. We state four short and elementary proofs for symplectic matrices over general fields. Two of them seem to be new.
Article
We discuss several methods to simulate interval arithmetic operations using floating-point operations with fixed rounding mode. In particular we present formulas using only rounding to nearest and using only chop rounding (towards zero). The latter was the default and only rounding on GPU (Graphics Processing Unit) and cell processors, which in tur...
Article
To state it up front: inaccurate numerical results are rare; too rare to always have to worry about them, but not rare enough to ignore them. In the following, the limits and possibilities of floating-point arithmetic and of numerical methods are examined, and properties and facts are illustrated, in particular by means of some...
Book
This book constitutes the thoroughly refereed post-conference proceedings of the 6th International Conference on Mathematical Aspects of Computer and Information Sciences, MACIS 2015, held in Berlin, Germany, in November 2015. The 48 revised papers presented together with 7 invited papers were carefully reviewed and selected from numerous submissio...
Article
This paper is concerned with floating-point filters for a two-dimensional orientation problem which is a basic problem in the field of computational geometry. If this problem is only approximately solved by floating-point arithmetic, then an incorrect result may be obtained due to accumulation of rounding errors. A floating-point filter can quick...
Article
Standard error estimates in numerical linear algebra are often of the form γ_k|R||S| where R, S are known matrices and γ_k := ku/(1 − ku) with u denoting the relative rounding error unit. Recently we showed that for a number of standard problems γ_k can be replaced by ku for any order of computation and without restriction on the dimension. Such problems in...
Article
Full-text available
Improved componentwise error bounds for approximate solutions of linear systems are derived in the case where the coefficient matrix of a given linear system is an H-matrix. One of the error bounds presented in this paper proves to be tighter than the existing error bound, which is effective especially for ill-conditioned cases. Numerical experiments are...
Article
Affine arithmetic is a well-known tool to reduce the wrapping effect of ordinary interval arithmetic. We discuss several improvements both in theory and in terms of practical implementation. In particular details of INTLAB's affine arithmetic toolbox are presented. Computational examples demonstrate advantages and weaknesses of the approach.
Article
Given a linear system Ax = b and some vector x, the backward error characterizes the smallest relative perturbation of A, b such that x is a solution of the perturbed system. If the input matrix has some structure such as being symmetric or Toeplitz, perturbations may be restricted to perturbations within the same class of structured matrices. For...
Article
The result of a floating-point operation is usually defined to be the floating-point number nearest to the exact real result together with a tie-breaking rule. This is called the first standard model of floating-point arithmetic, and the analysis of numerical algorithms is often solely based on that. In addition, a second standard model is used spe...
Article
Let \(\mathbf{u}\) denote the relative rounding error of some floating-point format. Recently it has been shown that for a number of standard Wilkinson-type bounds the typical factors \(\gamma _k:=k\mathbf{u}/(1-k\mathbf{u})\) can be improved into \(k\mathbf{u}\), and that the bounds are valid without restriction on \(k\). Problems include summatio...
Article
An algorithm is presented for computing verified and accurate bounds for the value of the gamma function over the entire real double precision floating-point range. It means that for every double precision floating-point number x except the poles -k for 0 ≤ k ∈ N the true value of Γ(x) is included within an almost maximally accurate interval with f...
Article
Recently Miyajima presented algorithms to compute componentwise verified error bounds for the solution of full-rank least squares problems and underdetermined linear systems. In this paper we derive simpler and improved componentwise error bounds which are based on equalities for the error of a given approximate solution. Equalities are not improva...
Article
Assuming standard floating-point arithmetic (in base β, precision p) and barring underflow and overflow, classical rounding error analysis of the LU or Cholesky factorization of an n × n matrix A provides backward error bounds of the form |ΔA| ≤ γ_n|L̂||Û|...
Article
Rounding error analyses of numerical algorithms are most often carried out via repeated applications of the so-called standard models of floating-point arithmetic. Given a round-to-nearest function fl and barring underflow and overflow, such models bound the relative errors E_1(t) = |t − fl(t)|/|t| and E_2(t) = |t − fl(t)|/|fl(t)| by the unit...
Conference Paper
As for a matrix A we examine two problems: (a) to find the upper and the lower bound of Cond_2(A) in terms of two coefficients p_1 and p_{n−1} (see Section 2) of the characteristic polynomial of AA^T, and (b) proof of existence of a matrix A having considerably larger condition number than that obtained in the previous papers. The connection between (a)...
Article
Given two floating-point vectors x, y of dimension n and assuming rounding to nearest, we show that if no underflow or overflow occurs, any evaluation order for an inner product returns a floating-point number r̂ such that |r̂ − x^T y| ≤ nu|x|^T|y| with u...
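A hedged numerical check of this bound using exact rational arithmetic as reference (random data; an illustration, not a proof):

```python
# Check |r_hat - x^T y| <= n*u*|x|^T*|y| for one random example, with the
# exact dot product and the bound evaluated in exact rational arithmetic.
from fractions import Fraction
import numpy as np

rng = np.random.default_rng(0)
n = 100
x = rng.standard_normal(n)
y = rng.standard_normal(n)

r_hat = float(x @ y)                                    # some evaluation order
exact = sum(Fraction(float(a)) * Fraction(float(b)) for a, b in zip(x, y))
err = abs(Fraction(r_hat) - exact)
u = Fraction(1, 2**53)
bound = n * u * sum(abs(Fraction(float(a)) * Fraction(float(b)))
                    for a, b in zip(x, y))
print(float(err), float(bound), err <= bound)           # ... True
```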
Article
We investigate how extra-precise accumulation of dot products can be used to solve ill-conditioned linear systems accurately. For a given p-bit working precision, extra-precise evaluation of a dot product means that the products and summation are executed in 2p-bit precision, and that the final result is rounded into the p-bit working precision...
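A hedged sketch of the mechanism: iterative refinement in which only the residual is accumulated extra-precisely. Exact rational arithmetic stands in for the 2p-bit dot products, and the function refine below is an invented name, not the paper's algorithm, which additionally delivers rigorous results:

```python
# Iterative refinement with an extra-precise residual b - A*x.
from fractions import Fraction
import numpy as np

def refine(A, b, steps=3):
    x = np.linalg.solve(A, b)                 # working-precision solution
    for _ in range(steps):
        # residual accumulated exactly, then rounded back to working precision
        r = [Fraction(float(bi)) - sum(Fraction(float(aij)) * Fraction(float(xj))
                                       for aij, xj in zip(row, x))
             for row, bi in zip(A, b)]
        d = np.linalg.solve(A, np.array([float(ri) for ri in r]))
        x = x + d
    return x

A = np.array([[1.0, 1.0], [1.0, 1.0 + 1e-12]])   # mildly ill-conditioned
print(refine(A, np.array([2.0, 2.0 + 1e-12])))   # close to [1, 1]
```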
Article
In Part I and this Part II of our paper we investigate how extra-precise evaluation of dot products can be used to solve ill-conditioned linear systems rigorously and accurately. In Part I only rounding to nearest is used. In this Part II we improve the results significantly by permitting directed rounding. Linear systems with tolerances in the dat...
Article
Full-text available
This paper is concerned with accurate numerical algorithms for matrix multiplication. Recently, an error-free transformation from a product of two floating-point matrices into an unevaluated sum of floating-point matrices has been developed by the authors. Combining this technique and accurate summation algorithms, new algorithms for accurate matri...
Article
To my knowledge all definitions of interval arithmetic start with real endpoints and prove properties. Then, for practical use, the definition is specialized to finitely many endpoints, where many of the mathematical properties are no longer valid. There seems to be no treatment of how to choose this finite set of endpoints to preserve as many mathematical...
Article
Several methods for the multiplication of point and/or interval matrices with interval result are discussed. Some are based on new a priori estimates of the error of floating-point matrix products. The amount of overestimation including all rounding errors is analyzed. In particular, algorithms for conversion of infimum-supremum to midpoint-radius re...
Article
Full-text available
We improve the well-known Wilkinson-type estimates for the error of standard floating-point recursive summation and dot product by up to a factor 2. The bounds are valid when computed in rounding to nearest, no higher order terms are necessary, and they are best possible. For summation there is no restriction on the number of summands. The proofs a...
Article
We discuss several methods for real interval matrix multiplication. First, earlier studies of fast algorithms for interval matrix multiplication are introduced: naive interval arithmetic, interval arithmetic by midpoint–radius form by Oishi–Rump and its fast variant by Ogita–Oishi. Next, three new and fast algorithms are developed. The proposed alg...
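A hedged sketch of midpoint–radius interval matrix multiplication in the spirit of the Oishi–Rump form mentioned above; the radius formula covers the interval part only, while the actual algorithms also account for the rounding errors of the floating-point products (e.g. via directed rounding):

```python
# Midpoint-radius product: [mA +/- rA] * [mB +/- rB] is enclosed by
# mC +/- rC with mC = mA*mB and rC = |mA|*rB + rA*(|mB| + rB),
# up to the neglected rounding errors of the products themselves.
import numpy as np

def mr_matmul(mA, rA, mB, rB):
    """Intervals given entrywise as midpoint +/- radius (radii >= 0)."""
    mC = mA @ mB
    rC = np.abs(mA) @ rB + rA @ (np.abs(mB) + rB)
    return mC, rC

mC, rC = mr_matmul(np.eye(2), 0.01 * np.ones((2, 2)),
                   np.eye(2), 0.01 * np.ones((2, 2)))
print(mC, rC, sep="\n")
```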
Article
New algorithms are presented for computing verified error bounds for least squares problems and underdetermined linear systems. In contrast to previous approaches the new methods do not rely on normal equations and are applicable to sparse matrices. Computational results demonstrate that the new methods are faster than existing ones.
Article
This paper is concerned with accurate matrix multiplication in floating-point arithmetic. Recently, an accurate summation algorithm was developed by Rump et al. (SIAM J Sci Comput 31(1):189–224, 2008). The key technique of their method is a fast error-free splitting of floating-point numbers. Using this technique, we first develop an error-free tra...
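For context, the classical Veltkamp splitting and Dekker error-free product illustrate the kind of error-free splitting referred to here (binary64 constants; not the paper's optimized matrix-level transformation):

```python
# Veltkamp split and Dekker's TwoProduct: a*b = p + err exactly,
# barring underflow and overflow.
def split(a: float) -> tuple[float, float]:
    factor = 2.0**27 + 1.0            # for binary64 (53-bit significand)
    c = factor * a
    hi = c - (c - a)
    return hi, a - hi                 # a = hi + lo exactly

def two_product(a: float, b: float) -> tuple[float, float]:
    p = a * b
    ah, al = split(a)
    bh, bl = split(b)
    err = ((ah * bh - p) + ah * bl + al * bh) + al * bl
    return p, err

p, e = two_product(1.0 / 3.0, 3.0)
print(p, e)                           # 1.0 and a tiny nonzero correction
```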
Article
Full-text available
The singular value decomposition and spectral norm of a matrix are ubiquitous in numerical analysis. They are extensively used in proofs, but usually it is not necessary to compute them. However, there are some important applications in the realm of verified error bounds for the solution of ordinary and partial differential equations where reasonab...
Article
In this paper we study the generation of an ill-conditioned integer matrix A = [a_ij] with |a_ij| ≤ µ for some given constant µ. Let n be the order of A. We first give some upper bounds of the condition number of A in terms of n and µ. We next propose new methods to generate extremely ill-conditioned integer matrices. These methods are superior to the we...
Article
Full-text available
We present a model problem for global optimization in a specified number of unknowns. We give constrained and unconstrained formulations. The problem arose from structured condition numbers for linear systems of equations with Toeplitz matrix. We present a simple algorithm using additional information on the problem to find local minimizers which pr...
Article
Full-text available
Mathematical analysis of nonlinear problems encounters various difficulties. Thus, computer-assisted proofs for nonlinear problems have attracted the attention of many researchers. In particular, numerical computation with result verification has been shown to be quite useful for such computer-assisted proofs. The special section focuses on this topic. This se...
Article
Full-text available
Given a vector p_i of floating-point numbers with exact sum s, we present a new algorithm with the following property: either the result is a faithful rounding of s, or otherwise the result has a relative error not larger than eps K cond(p_i) for K to be specified. The statements are also true in the presence of underflow, the computing time does...
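The error-free transformation TwoSum and the compensated sum Sum2 of Ogita–Rump–Oishi illustrate the underlying technique (this is not the faithful-rounding algorithm of the abstract itself):

```python
# Knuth's TwoSum: a + b = s + err exactly; Sum2 accumulates the errors and
# adds them back, giving a compensated result.
def two_sum(a: float, b: float) -> tuple[float, float]:
    s = a + b
    bb = s - a
    err = (a - bb) + (b - (s - bb))
    return s, err

def sum2(p):
    s, e = p[0], 0.0
    for x in p[1:]:
        s, err = two_sum(s, x)
        e += err
    return s + e

print(sum(10 * [0.1]), sum2(10 * [0.1]))   # 0.9999999999999999 vs 1.0
```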
Article
Full-text available
It is well known that it is an ill-posed problem to decide whether a function has a multiple root. For example, an arbitrarily small perturbation of a real polynomial may change a double real root into two distinct real or complex roots. In this paper we describe a computational method for the verified computation of a complex disc to contain e...
Conference Paper
Methods will be discussed on how to compute accurate and reliable results in pure floating-point arithmetic. In particular, verification methods with INTLAB and error-free transformations will be presented in some detail.
Article
It is well known that it is an ill-posed problem to decide whether a function has a multiple root. Even for a univariate polynomial an arbitrarily small perturbation of a polynomial coefficient may change the answer from yes to no. Let a system of nonlinear equations be given. In this paper we describe an algorithm for computing verified and narrow err...
Article
Full-text available
A fast method for enclosing all eigenpairs in symmetric positive definite generalized eigenvalue problems is proposed. Firstly theorems on verifying all eigenvalues are presented. Next a theorem on verifying all eigenvectors is presented. The proposed method is developed based on these theorems. Numerical results are presented showing the efficien...
Conference Paper
A classical mathematical proof is constructed using pencil and paper. However, there are many ways in which computers may be used in a mathematical proof. But ‘proof by computer’, or even the use of computers in the course of a proof, is not so readily accepted (the December 2008 issue of the Notices of the American Mathematical Society is devoted...
Chapter
Full-text available
Recently it was shown that the ratio between the normwise Toeplitz structured condition number of a linear system and the general unstructured condition number has a finite lower bound. However, the bound was not explicit, and nothing was known about the quality of the bound. In this note we derive an explicit lower bound only depending on the dime...
Article
Full-text available
A standard for the notation of the most used quantities and operators in interval analysis is proposed.
Article
Full-text available
Let an n×n matrix A of floating-point numbers in some format be given. Denote the relative rounding error unit of the given format by eps. Assume A to be extremely ill-conditioned, that is cond(A) ≫ eps^{-1}. In about 1984 I developed an algorithm to calculate an approximate inverse of A solely using the given floating-point format. The key is a multipli...
Article
Full-text available
This paper treats a linear equation Av = b, where A ∈ 𝔽^{n×n} and b ∈ 𝔽^n. Here, 𝔽 is a set of floating-point numbers. Let u be the unit round-off of the working precision and κ(A) = ‖A‖_∞‖A^{−1}‖_∞ be the condition number of the problem. In this paper, ill-cond...
Article
Full-text available
This paper is concerned with a robust geometric predicate for the 2D orientation problem. Recently, a fast and accurate floating-point summation algorithm was presented by Rump, Ogita and Oishi, which provably outputs a result faithfully rounded from the exact value of the summation of floating-point numbers. We optimize their algorithm for apply...
Article
We give simple and efficient methods to compute and/or estimate the predecessor and successor of a floating-point number using only floating-point operations in rounding to nearest. This may be used to simulate interval operations, in which case the quality in terms of the diameter of the result is significantly improved compared to existing approa...
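A hedged illustration of the stated use case: enclosing the exact result of a correctly rounded operation between the predecessor and successor of the computed value. Here math.nextafter stands in for the paper's pure floating-point formulas:

```python
# The exact real result of a correctly rounded operation lies between the
# floating-point neighbors of the computed value.
import math
from fractions import Fraction

def enclose(computed: float) -> tuple[float, float]:
    return math.nextafter(computed, -math.inf), math.nextafter(computed, math.inf)

lo, hi = enclose(0.1 + 0.2)                             # computed: 0.30000000000000004
print(Fraction(lo) < Fraction(3, 10) < Fraction(hi))    # True: 3/10 is enclosed
```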
Article
We present two new algorithms FastAccSum and FastPrecSum, one to compute a faithful rounding of the sum of floating-point numbers and the other for a result "as if" computed in K-fold precision. Faithful rounding means the computed result either is one of the immediate floating-point neighbors of the exact result or is equal to the exact sum if thi...
Article
The concept of error-free transformations is rather old, but gained much attention in recent years. Quite a number of algorithms, most prominently algorithms to compute the sum or dot product of vectors of floating-point numbers, were developed recently using error-free transformations. The appeal is that problems such as the sum of floating-point nu...
Article
Parallel algorithms for accurate summation and dot product are proposed. They are parallelized versions of fast and accurate algorithms of calculating sum and dot product using error-free transformations which are recently proposed by Ogita et al. [T. Ogita, S.M. Rump, S. Oishi, Accurate sum and dot product, SIAM J. Sci. Comput. 26 (6) (2005) 1955–...
Article
Full-text available
We first refine the analysis of error-free vector transformations presented in Part I [ibid. 31, No. 1, 189–224 (2008; Zbl 1185.65082)]. Based on that we present an algorithm for calculating the rounded-to-nearest result of s := ∑p_i for a given vector of floating-point numbers p_i, as well as algorithms for directed rounding. A special algorithm fo...
Article
Full-text available
This paper is concerned with an accurate computation of matrix multiplication. Recently, an accurate summation algorithm was developed by Rump, Ogita and Oishi. One of the key techniques of their method is a new type of error-free splitting. To use this strategy, we investigate a method of obtaining an accurate result of matrix multiplication by...
Article
Full-text available
Given a vector of floating-point numbers with exact sum s, we present an algorithm for calculating a faithful rounding of s, i.e. the result is one of the immediate floating-point neighbors of s. If the sum s is a floating-point number, we prove that this is the result of our algorithm. The algorithm adapts to the condition number of the sum, i.e....
Article
Full-text available
This paper shows a generalization of Rump's method, which generates a class of matrices with extremely large condition number.
Article
Full-text available
This paper treats a linear equation Av = b, where A ∈ 𝔽^{n×n} and b ∈ 𝔽^n. Let u be the unit round-off of the working precision and κ(A) = ‖A‖_∞‖A^{−1}‖_∞ be the condition number of the problem. In this paper, ill-conditioned problems with 1 < u·κ(A) < ∞ are considered and an iterative refinement algorithm for the problems is proposed. The forward and backward stability will be shown for this iterative refinement alg...
Article
In this paper, the problem of inverting regular matrices with arbitrarily large condition number is treated in double precision defined by IEEE 754 floating point standard. In about 1984, Rump derived a method for inverting arbitrarily ill-conditioned matrices. The method requires the possibility to calculate a dot product in higher precision....
Article
Recent development of Java's optimization techniques makes Java one of the most useful programming languages for numerical computations. This paper proposes a numerical method of obtaining verified approximate solutions of linear systems. Usual methods for verified computations use switches of rounding modes defined in IEEE standard 754. However, s...
Article
Validated solution of a problem means to compute error bounds for a solution in finite precision. This includes the proof of existence of a solution. The computed error bounds are to be correct including all possible effects of rounding errors. The fastest known validation algorithm for the solution of a system of linear equations requires twice th...
Article
Full-text available
This paper is concerned with an accurate computation of matrix multiplication, where components of matrices are represented by summation of floating-point numbers. Recently, an accurate summation algorithm was developed by the latter three of the authors. In this paper, it is specialized to dot product. Using this, a fast implementation of accurat...
Article
Full-text available
If standard-precision computations do not lead to the desired accuracy, then it is reasonable to increase precision until we reach this accuracy. What is the optimal way of increasing precision? One possibility is to choose a constant q > 1, so that if the precision which requires the time t did not lead to a success, we select the next precision t...
Article
Full-text available
We present a computational, simple and fast sufficient criterion to verify positive definiteness of a symmetric or Hermitian matrix. The criterion uses only standard floating-point operations in rounding to nearest, it is rigorous, it takes into account all possible computational and rounding errors, and is also valid in the presence of underflow....
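A hedged sketch of the idea behind such criteria: if the floating-point Cholesky factorization of A − αI succeeds for a shift α dominating all rounding errors of the factorization, then A is positive definite. The shift and the function name below are illustrative only; the paper derives a rigorous, computable criterion:

```python
# Shifted Cholesky test for positive definiteness (illustrative sketch).
import numpy as np

def probably_positive_definite(A, alpha):
    try:
        np.linalg.cholesky(A - alpha * np.eye(A.shape[0]))
        return True        # rigorous only if alpha covers all rounding errors
    except np.linalg.LinAlgError:
        return False       # no statement possible

A = np.array([[4.0, 1.0],
              [1.0, 3.0]])
u = 2.0**-53
alpha = 10 * A.shape[0] * u * np.linalg.norm(A, np.inf)   # illustrative shift
print(probably_positive_definite(A, alpha))               # True
```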
