Figure 1 - uploaded by Jie Sun
Nonzero block structure of A.


Source publication
Article
Full-text available
The predictor–corrector interior-point path-following algorithm is promising for solving multistage convex programming problems. Among the many other good features of this algorithm, especially attractive is that it allows the major computations to be parallelised. The dynamic structure of the multistage problems specifies a b...

Context in source publication

Context 1
... In Figure 1 the nonzero structure of the matrix A is displayed. Each of the blocks P_i, X_i and W_i corresponds to a single dot, whereas Y_i and S_i each correspond to two diagonal dots in a 2-by-2 block, and K_i corresponds to two vertical dots. ...
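The block structure described above can be sketched in code. The following is an illustrative construction only (the number of blocks, block sizes, and the coupling-column layout are assumed, not taken from the paper): it builds a block-angular sparse matrix from independent diagonal blocks plus a set of shared coupling columns, the kind of pattern Figure 1 depicts.

```python
import numpy as np
from scipy.sparse import block_diag, hstack, csr_matrix

# Hypothetical illustration of a block-angular nonzero pattern:
# independent diagonal scenario blocks plus shared coupling columns.
n_blocks, b = 4, 2                                 # assumed block count and size
diag_blocks = [np.ones((b, b)) for _ in range(n_blocks)]
D = block_diag(diag_blocks, format="csr")          # block-diagonal part
coupling = csr_matrix(np.ones((n_blocks * b, b)))  # shared first-stage columns
A = hstack([coupling, D], format="csr")

# Each row block touches only its own diagonal block plus the coupling
# columns, so the number of nonzeros grows linearly with n_blocks.
print(A.shape)   # (8, 10)
print(A.nnz)     # 8*2 coupling + 4*(2*2) diagonal = 32
```

Because the diagonal blocks never overlap, factorisations of this pattern decompose into independent per-block work, which is what makes the structure attractive for parallel interior-point solvers.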

Similar publications

Article
Full-text available
We review the continuous and fully-discrete problem setting of an elasto-plastic Cosserat model, which is an enhanced continuum model with independent rotational degrees of freedom. The continuous model is defined by a variational inequality, and for the discrete model a corresponding non-smooth nonlinear variational problem is derived which can be...
Article
Full-text available
Recently a new parallel ODE solver implementing a “parallelism across the steps” has been proposed (Amodio and Brugnano, 1997; Brugnano and Trigiante, 1998). In the mentioned references, the attention was devoted to some essential features of the parallel method, which are already present in the case where it is used to approximate linear continuou...
Article
Full-text available
For Hamiltonian systems with non-canonical structure matrices, a new family of fourth-order energy-preserving integrators is presented. The integrators take a form of a combination of Runge–Kutta methods and continuous-stage Runge–Kutta methods and feature a set of free parameters that offer greater flexibility and efficiency. Specifically, we demo...
Preprint
Full-text available
In this paper we use a Modified Newton's method based on the Continuous analog of Newton's method and high precision arithmetic for a general numerical search of periodic orbits for the planar three-body problem. We consider relatively short periods and a relatively coarse search-grid. As a result, we found 123 periodic solutions belonging to 105 n...
Article
Full-text available
This paper presents the Newton–Raphson approach using new equations for steady-state power-flow analysis of multiterminal DC–AC systems. A flexible and practical choice of per-unit system is used to formulate the DC network and converter equations. A converter is represented by Norton's equations of a current source in parallel with the comm...

Citations

... The third formulation is the deterministic equivalent, which is the extensive formulation of a stochastic program that forms an equivalent large one-stage problem containing all constraints and all scenarios. The methods proposed in [3,6,7,11,19,21,25,36] are based on this formulation. Despite widespread applications of different classes of nonsymmetric optimization problems [13,14,16,18,24,26,28,29,31,34,35,39,40,42], they have been studied narrowly in comparison to symmetric optimization problems [5,33], such as linear programming, second-order cone programming (see for example [1,22]) and semidefinite programming (see for example [23,38]). ...
... Let dx_γ := ∂x_γ/∂γ, and similarly for all other variables. By differentiating the left-hand side equation in (19) with respect to γ, we get ...
... which, by using the right-hand side equation in (19) and putting µ(z) := γµ_0 (here µ_0 is obtained from (16) by setting (x, s) = (x_0, s_0)), can be written as ...
Article
Full-text available
We consider a stochastic convex optimization problem over nonsymmetric cones with discrete support. This class of optimization problems has not been studied yet. By using a logarithmically homogeneous self-concordant barrier function, we present a homogeneous predictor-corrector interior-point algorithm for solving stochastic nonsymmetric conic optimization problems. We also derive an iteration bound for the proposed algorithm. Our main result is that we uniquely combine a nonsymmetric algorithm with efficient methods for computing the predictor and corrector directions. Finally, we describe a realistic application and present computational results for instances of the stochastic facility location problem formulated as a stochastic nonsymmetric convex conic optimization problem.
... In the linear case this leads to a block-angular constraint matrix. Interior point methods have been specialised to solve this formulation directly by exploiting the linear algebra resulting from the block-angular structure (see [3], [6], [11], [13], [21]). While the block-angular structure of the extensive formulation has been successfully exploited, a Benders decomposition method has some advantages. ...
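The specialised linear algebra mentioned above can be sketched as follows. This is a generic illustration (the block sizes and matrices are invented, not taken from any of the cited papers): for a symmetric system with a block-diagonal leading part, the independent per-block solves can run in parallel, and only a small Schur complement couples the blocks.

```python
import numpy as np
from scipy.linalg import block_diag

# Assumed toy system: B x + E y = r, E^T x + C y = s, with B block-diagonal.
rng = np.random.default_rng(0)
k, b, m = 3, 4, 2                      # blocks, block size, coupling size
Bs = []
for _ in range(k):
    M = rng.standard_normal((b, b))
    Bs.append(M @ M.T + b * np.eye(b))  # SPD diagonal blocks
Es = [rng.standard_normal((b, m)) for _ in range(k)]
C = 10.0 * np.eye(m)
r = [rng.standard_normal(b) for _ in range(k)]
s = rng.standard_normal(m)

# Per-block solves are independent, hence parallelisable.
BinvE = [np.linalg.solve(Bi, Ei) for Bi, Ei in zip(Bs, Es)]
Binvr = [np.linalg.solve(Bi, ri) for Bi, ri in zip(Bs, r)]

# Only the small m-by-m Schur complement S = C - sum E_i^T B_i^{-1} E_i
# couples the blocks.
S = C - sum(Ei.T @ X for Ei, X in zip(Es, BinvE))
y = np.linalg.solve(S, s - sum(Ei.T @ x for Ei, x in zip(Es, Binvr)))
xs = [xi - Xi @ y for xi, Xi in zip(Binvr, BinvE)]

# Sanity check against a direct solve of the assembled system.
K = np.block([[block_diag(*Bs), np.vstack(Es)],
              [np.vstack(Es).T, C]])
rhs = np.concatenate(r + [s])
assert np.allclose(np.concatenate(xs + [y]), np.linalg.solve(K, rhs))
```

The direct solve touches the full assembled matrix, while the Schur-complement route never forms it: each block is factored on its own, which is the source of the parallelism these methods exploit.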
Article
We study the two-stage stochastic convex optimization problem whose first- and second-stage feasible regions admit a self-concordant barrier. We show that the barrier recourse functions and the composite barrier functions for this problem form self-concordant families. These results are used to develop prototype primal interior point decomposition algorithms that are more suitable for a heterogeneous distributed computing environment. We show that the worst case iteration complexity of the proposed algorithms is the same as that for the short- and long-step primal interior algorithms applied to the extensive formulation of this problem. The generality of our results allows the possibility of using barriers other than the standard log-barrier in an algorithmic framework.
... The first possibility is to formulate its deterministic equivalent and subsequently exploit the structure of the problem to perform the linear algebra in parallel (Hegland et al. [6]). The second approach is to formulate the problem using nonanticipativity constraints and then relax these constraints via the Lagrangian dual, as in the progressive hedging method of Rockafellar and Wets [14] and the methods proposed in Ruszczyński [16] and Mulvey and Ruszczyński [10]. ...
Book
We consider barrier problems associated with two and multistage stochastic convex optimization problems. We show that the barrier recourse functions at any stage form a self-concordant family with respect to the barrier parameter. We also show that the complexity value of the first stage problem increases additively with the number of stages and scenarios. We use these results to propose a prototype primal interior point decomposition algorithm for the two-stage and multistage stochastic convex optimization problems admitting self-concordant barriers.
... However, it turns out that the number of floating-point operations in this approach is higher than in the original factorisation. For more details on this, see [43,3,40]. ...
Article
Full-text available
Parallel computing enables the analysis of very large data sets using large collections of flexible models with many variables. The computational methods are based on ideas from computational linear algebra and can draw on the extensive research on parallel algorithms in this area. Many algorithms for the direct and iterative solution of penalised least squares problems and for updating can be applied. Both methods for dense and sparse problems are applicable. An important property of the algorithms is their scalability, i.e., their ability to solve larger problems in the same time using hardware which grows linearly with the problem size. While in most cases large granularity parallelism is to be preferred, it turns out that even smaller granularity parallelism can be exploited effectively in the problems considered.
Article
Full-text available
In this paper, we propose a distributed algorithm for solving large-scale separable convex problems using Lagrangian dual decomposition and the interior-point framework. By adding self-concordant barrier terms to the ordinary Lagrangian, we prove under mild assumptions that the corresponding family of augmented dual functions is self-concordant. This makes it possible to efficiently use the Newton method for tracing the central path. We show that the new algorithm is globally convergent and highly parallelizable and thus it is suitable for solving large-scale separable convex problems.
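The idea of tracing the central path with Newton's method can be illustrated on a toy problem. The example below is an assumed one-dimensional sketch, not the cited paper's algorithm: it minimises x^2 subject to x >= 1 by centering the log-barrier function f_µ(x) = x^2 - µ·log(x - 1) with damped Newton steps while µ shrinks.

```python
# Hedged sketch: follow the central path of min x^2 s.t. x >= 1
# using the log barrier f_mu(x) = x^2 - mu*log(x - 1).
def newton_center(mu, x0, iters=50):
    x = x0
    for _ in range(iters):
        g = 2 * x - mu / (x - 1)       # gradient of the barrier function
        h = 2 + mu / (x - 1) ** 2      # Hessian (positive, so f_mu is convex)
        step = g / h
        # Damp the step so the iterate stays strictly feasible (x > 1).
        while x - step <= 1:
            step *= 0.5
        x -= step
    return x

x = 2.0
for mu in [1.0, 0.1, 0.01, 0.001]:     # shrink the barrier parameter
    x = newton_center(mu, x)           # warm-start from the previous center
print(x)                               # close to the true minimiser x* = 1
```

The warm start mirrors the path-following idea: each center is a good starting point for the next, smaller µ, which is what keeps the Newton iteration counts low in interior-point methods.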
Article
Full-text available
Many practical large-scale optimization problems are not only sparse, but also display some form of block-structure such as primal or dual block angular structure. Often these structures are nested: each block of the coarse top level structure is block-structured itself. Problems with these characteristics appear frequently in stochastic programming but also in other areas such as telecommunication network modelling. We present a linear algebra library tailored for problems with such structure that is used inside an interior point solver for convex quadratic programming problems. Due to its object-oriented design it can be used to exploit virtually any nested block structure arising in practical problems, eliminating the need for highly specialised linear algebra modules needing to be written for every type of problem separately. Through a careful implementation we achieve almost automatic parallelisation of the linear algebra. The efficiency of the approach is illustrated on several problems arising in the financial planning, namely in the asset and liability management. The problems are modelled as multistage decision processes and by nature lead to nested block-structured problems. By taking the variance of the random variables into account the problems become non-separable quadratic programs. A reformulation of the problem is proposed which reduces density of matrices involved and by these means significantly simplifies its solution by an interior point method. The object-oriented parallel solver achieves high efficiency by careful exploitation of the block sparsity of these problems. As a result a problem with over 50 million decision variables is solved in just over 2 hours on a parallel computer with 16 processors. The approach is by nature scalable and the parallel implementation achieves nearly perfect speed-ups on a range of problems.