Fig 2 - uploaded by F.-Javier Heredia
Positive and negative planes.

Source publication
Article
Full-text available
One of the main drawbacks of the subgradient method is the tuning process to determine the sequence of steplengths. In this paper, the radar subgradient method, a heuristic method designed to compute a tuning-free subgradient steplength, is geometrically motivated and algebraically deduced. The unit commitment problem, which arises in the electrica...

Contexts in source publication

Context 1
... 2. Positive and Negative Planes. In Figure 2, we repeat the computation of a radar step for the one-dimensional example; i.e., we compute the intersection of all the supporting planes with SP_n, we project these intersections on the subgradient direction, and we take λ_{n+1} as the closest projection to λ_n. Now, the difference is that we do not take into account SP_{n−1} because its slope has the same sign as the slope of SP_n. ...
Context 2
... by Figure 2 and the above proposition, we say that the supporting plane SP_k defined by the point (λ_k, q_k) and the subgradient s_k, k < n, is a positive plane relative to λ_n if r_nk has a positive slope, that is, if m_nk > 0. If m_nk is not greater than zero, we say that SP_k is a negative plane relative to λ_n. ...
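In the one-dimensional case, the radar step described in these contexts can be illustrated in code. The following Python sketch is not the authors' implementation: it assumes each earlier supporting plane SP_k is stored as a triple (λ_k, q_k, s_k), uses a simple sign test on s_k·s_n as a stand-in for the positive/negative-plane criterion on m_nk, and returns the candidate λ_{n+1} closest to λ_n:

```python
def radar_step_1d(lam_n, q_n, s_n, history):
    """One radar step in 1-D: intersect each earlier supporting plane
    SP_k(lam) = q_k + s_k (lam - lam_k) with the current plane SP_n,
    keep the intersections lying ahead of lam_n along the subgradient
    direction, and return the one closest to lam_n (candidate for
    lam_{n+1})."""
    candidates = []
    for lam_k, q_k, s_k in history:
        if s_k == s_n:
            continue  # parallel planes never intersect
        if s_k * s_n > 0:
            continue  # slope has the same sign as s_n: excluded plane
        # solve q_k + s_k (lam - lam_k) = q_n + s_n (lam - lam_n)
        lam_star = (q_n - q_k + s_k * lam_k - s_n * lam_n) / (s_k - s_n)
        if (lam_star - lam_n) * s_n > 0:  # intersection lies ahead of lam_n
            candidates.append(lam_star)
    if not candidates:
        return None
    return min(candidates, key=lambda l: abs(l - lam_n))
```

For instance, with the current plane 2λ at λ_n = 0 and one earlier plane 4 − 2λ, the two planes intersect at λ = 1, which the step returns; a plane whose slope shares the sign of s_n is skipped.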

Similar publications

Article
Full-text available
This paper presents an innovative expert usability evaluation method called DEPTH (usability evaluation method based on DEsign PaTterns & Heuristics criteria). DEPTH is a method for performing expert-based, scenario-driven heuristic usability evaluation of e-sites. DEPTH focuses on the functionality of e-sites and emphasizes usability characteris...

Citations

... where g_i is the gradient or subgradient of the function at position x_i, and h_i is the current step length. For convergence, h_i must satisfy several rules when the function is non-differentiable [15]. ...
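The step-length rules referred to here are the classical diminishing-step conditions (Σ h_i = ∞ while Σ h_i² < ∞). A minimal generic Python sketch of the update x_{i+1} = x_i − h_i g_i, not tied to any cited implementation, with the step rule h_i = c/(i+1):

```python
def subgradient_descent(subgrad, x0, steps=2000, c=1.0):
    """Minimise a convex nonsmooth function via x_{i+1} = x_i - h_i g_i.
    The diminishing step lengths h_i = c/(i+1) satisfy the classical
    convergence conditions: their sum diverges, their squared sum is
    finite."""
    x = x0
    for i in range(steps):
        g = subgrad(x)        # any subgradient at the current point
        h = c / (i + 1)       # diminishing step-length rule
        x = x - h * g
    return x

# f(x) = |x - 3| has subgradient sign(x - 3) away from the kink
x = subgradient_descent(lambda x: (x > 3) - (x < 3), x0=0.0)
```

After enough iterations the iterate oscillates around the minimiser x = 3 with shrinking amplitude, the typical behaviour of the plain subgradient method.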
... Different ways to select the current point are given by Nesterov (2004, p. 149) or Beltran and Heredia (2005). Since not only the contribution to the overarching constraint but also value, gradient, and Hessian information about the objective and active constraints are used, the scheme converges very fast, provably even for non-convex problems. ...
Article
Full-text available
Chemical production sites usually consist of plants that are owned by different companies or business units but are tightly connected by streams of materials and carriers of energy. Distributed optimization, where each entity optimizes its objective and the transfer prices of energy and materials are adapted by a coordinator, is a promising approach to such problems, as confidentiality of internal data can be preserved. In this contribution, we propose an extension of the widely used subgradient methods for inequality-constrained distributed QPs, which we call analytical extrapolation (AE). Therein, the analytical structure of the dual function is exploited to speed up convergence. Two strategies for handling changing sets of active constraints are presented. We investigate the performance of our algorithm on test problems, where different problem parameters are varied, and show that the performance of our algorithm is in most cases significantly better than that of other methods.
... There is a factor α in the data that can be used as a multiplier in the objective function; it is usually taken as 5, but was tested for our approach in Table 1 with both values 1 and 5. OV is the optimal value and Gap (%) is the relative percentage error between the lower bound LB and OV. The results in the column RLT1 + LR + ILP are taken from (Park 2014), using the subgradient method proposed by Beltran and Heredia in 2005, and the other results from (Pessoa et al. 2010), using the volume algorithm. As can be seen, the duality gap for RLT1 + LR + ILP is 0; the lower bound is exact and proves the optimality of the best feasible solution found. ...
Article
Full-text available
The Reformulation Linearization Technique (RLT) of Sherali and Adams (Manag Sci 32(10):1274–1290, 1986; SIAM J Discrete Math 3(3):411–430, 1990), when applied to a pure 0–1 quadratic optimization problem with linear constraints (P), constructs a hierarchy of LP (i.e., continuous and linear) models of increasing sizes. These provide monotonically improving continuous bounds on the optimal value of (P) as the level, i.e., the stage in the process, increases. When the level reaches the dimension of the original solution space, the last model provides an LP bound equal to the IP optimum. In practice, unfortunately, the problem size increases so rapidly that for large instances, even computing bounds for RLT models of level k (called RLTk) for small k may be challenging. Their size and their complexity increase drastically with k. To our knowledge, only results for bounds of levels 1, 2, and 3 have been reported in the literature. We are proposing, for certain quadratic problem types, a way of producing stronger bounds than continuous RLT1 bounds in a fraction of the time it would take to compute continuous RLT2 bounds. The approach consists in applying a specific decomposable Lagrangean relaxation to a specially constructed RLT1-type linear 0–1 model. If the overall Lagrangean problem does not have the integrality property, and if it can be solved as a 0–1 rather than a continuous problem, one may be able to obtain 0–1 RLT1 bounds of roughly the same quality as standard continuous RLT2 bounds, but in a fraction of the time and with much smaller storage requirements. If one actually decomposes the Lagrangean relaxation model, this two-step procedure, reformulation plus decomposed Lagrangean relaxation, will produce linear 0–1 Lagrangean subproblems with a dimension no larger than that of the original model. We first present numerical results for the Crossdock Door Assignment Problem, a special case of the Generalized Quadratic Assignment Problem. 
These show that just solving one Lagrangean relaxation problem in 0–1 variables produces a bound close to or better than the standard continuous RLT2 bound (when available) but much faster, especially for larger instances, even if one does not actually decompose the Lagrangean problem. We then present numerical results for the 0–1 quadratic knapsack problem, for which no RLT2 bounds are available to our knowledge, but we show that solving an initial Lagrangean relaxation of a specific 0–1 RLT1 decomposable model drastically improves the quality of the bounds. In both cases, solving the fully decomposed rather than the decomposable Lagrangean problem to optimality will make it feasible to compute such bounds for instances much too large for computing the standard continuous RLT2 bounds.
... Some of the significant and widely used differentiability properties of the subdifferential can be found in [1][2][3]. The subdifferential is widely used for solving non-smooth optimization problems; see, for example, [3][4][5][6][7][8][9][10][11][12][13][14][15]. Of course, in addition to the subgradient, there are several other concepts, such as the quasidifferential, the discrete gradient, and the codifferential, as well as strategies, namely smoothing and scalarization techniques, used to develop optimization methods in the literature. ...
Article
Full-text available
In this study, *-directional derivative and *-subgradient are defined using the multiplicative derivative, making a new contribution to non-Newtonian calculus for use in non-smooth analysis. As for directional derivative and subgradient, which are used in the non-smooth optimization theory, basic definitions and preliminary facts related to optimization theory are stated and proved, and the *-subgradient concept is illustrated by providing some examples, such as absolute value and exponential functions. In addition, necessary and sufficient optimality conditions are obtained for convex problems.
... In addition, it is designed for single-core computing; it is not useful in multi-core computing. Beltran and Heredia (2005) propose the radar subgradient algorithm, a variant of the subgradient algorithm that includes a procedure for finding an effective learning rate by using a line search at each iteration. The line search method used in Beltran and Heredia (2005) is inspired by the cutting-plane method and works out a learning rate with first-order information. ...
... Beltran and Heredia (2005) propose the radar subgradient algorithm, a variant of the subgradient algorithm that includes a procedure for finding an effective learning rate by using a line search at each iteration. The line search method used in Beltran and Heredia (2005) is inspired by the cutting-plane method and works out a learning rate with first-order information. However, this algorithm deals with the whole objective function and cannot use a part of the objective function at each iteration. ...
... This implies that it cannot be used in applications that feed information to the algorithm through a data stream. In addition, the line search method used in Beltran and Heredia (2005) may fail and is distinct from the line search proposed in this paper. Hence, combining that line search method with the one we propose may have a complementary effect when the properties of the optimization problem are disadvantageous to one of the algorithms. ...
Article
Full-text available
The existing machine learning algorithms for minimizing the convex function over a closed convex set suffer from slow convergence because their learning rates must be determined before running them. This paper proposes two machine learning algorithms incorporating the line search method, which automatically and algorithmically finds appropriate learning rates at run-time. One algorithm is based on the incremental subgradient algorithm, which sequentially and cyclically uses each of the parts of the objective function; the other is based on the parallel subgradient algorithm, which uses parts independently in parallel. These algorithms can be applied to constrained nonsmooth convex optimization problems appearing in tasks of learning support vector machines without adjusting the learning rates precisely. The proposed line search method can determine learning rates to satisfy weaker conditions than the ones used in the existing machine learning algorithms. This implies that the two algorithms are generalizations of the existing incremental and parallel subgradient algorithms for solving constrained nonsmooth convex optimization problems. We show that they generate sequences that converge to a solution of the constrained nonsmooth convex optimization problem under certain conditions. The main contribution of this paper is the provision of three kinds of experiment showing that the two algorithms can solve concrete experimental problems faster than the existing algorithms. First, we show that the proposed algorithms have performance advantages over the existing ones in solving a test problem. Second, we compare the proposed algorithms with a different algorithm Pegasos, which is designed to learn with a support vector machine efficiently, in terms of prediction accuracy, value of the objective function, and computational time. Finally, we use one of our algorithms to train a multilayer neural network and discuss its applicability to deep learning.
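The incremental scheme described in this abstract, sweeping sequentially and cyclically through the parts of the objective, can be sketched as follows. This is the plain incremental subgradient update with a diminishing step length, not the paper's line-search variant, and the component functions below are purely illustrative:

```python
def incremental_subgradient(subgrads, x0, epochs=500, c=1.0):
    """Incremental subgradient for f(x) = sum_j f_j(x): each cycle sweeps
    through the component subgradients one at a time, reusing the same
    diminishing step length within the cycle."""
    x = x0
    for epoch in range(epochs):
        h = c / (epoch + 1)      # diminishing step length per cycle
        for g_j in subgrads:     # cyclic pass over the components
            x = x - h * g_j(x)
    return x

# f(x) = |x - 1| + |x - 3| is minimised on the whole interval [1, 3]
components = [lambda x: (x > 1) - (x < 1),
              lambda x: (x > 3) - (x < 3)]
x = incremental_subgradient(components, x0=10.0)
```

Starting far outside the solution set, the iterate drifts into the interval [1, 3] and then stays there, since the component subgradients cancel over a full cycle inside the interval.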
... The subgradient method is one of the simplest methods for solving such problems. It was originally developed by N. Shor and later modified by many authors (see [1][2][3] and the more recent papers [4][5][6][7][8]). Its convergence has been proved only for convex problems. ...
Article
In this paper, we introduce a new method for solving nonconvex nonsmooth optimization problems. It uses quasisecants, which are subgradients computed in some neighborhood of a point. The proposed method contains simple procedures for finding descent directions and for solving line search subproblems. The convergence of the method is studied and preliminary results of numerical experiments are presented. The comparison of the proposed method with the subgradient and the proximal bundle methods is demonstrated using results of numerical experiments.
... In this paper we study the convergence and performance of an improved version of the so-called radar method [BH05], a procedure intended to maximize a one-dimensional piecewise linear concave (OPLC) function. ...
... Our starting point is the radar method developed in [BH05]. In that paper an approximate line search is performed by optimizing an approximation to the OPLC function. ...
... See [BH05] for more details. The radar method can be summarized as follows: ...
Article
The maximization of one-dimensional piecewise linear concave (OPLC) functions arises in the line search associated with the maximization of piecewise linear concave functions (e.g., the Kelley cutting-plane method). The OPLC line search is usually done by the next-break-point method, where one goes from break point to break point up to the optimum. If the number of break points is large, this method will be computationally expensive. One can also use a classical derivative-free line search method, such as the golden section method. Such methods do not take advantage of the OPLC geometry. As an alternative, we propose an improved version of the so-called radar method, which maximizes an OPLC function by maximizing successive outer approximations. We prove superlinear and finite convergence of the radar method. Furthermore, our computational tests show that the radar method is highly effective independently of the number of break points.
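The next-break-point method mentioned in this abstract admits a compact sketch for f(λ) = min_k (a_k + b_k λ): follow the currently active piece uphill and hop to the intersection with the nearest flatter line. The Python version below is an illustration of this baseline method, not of the radar method itself, and its representation of the lines as (a_k, b_k) pairs is an assumption:

```python
def oplc_max(lines, lam=0.0, tol=1e-12):
    """Maximise the OPLC function f(lam) = min_k (a_k + b_k lam) by the
    next-break-point walk: follow the active (minimal) piece uphill and
    hop to the break point where a flatter line takes over, stopping
    once the active slope is no longer positive."""
    # active piece at the start: attains the min; ties go to the flatter line
    a, b = min(lines, key=lambda ab: (ab[0] + ab[1] * lam, ab[1]))
    while b > tol:
        # candidate break points ahead: intersections with flatter lines
        ahead = [((a - a2) / (b2 - b), b2, a2)
                 for a2, b2 in lines
                 if b2 < b - tol and (a - a2) / (b2 - b) > lam + tol]
        if not ahead:
            raise ValueError("f is unbounded above")
        lam, b, a = min(ahead)   # nearest break point; new active piece
    return lam, a + b * lam
```

For f(λ) = min(λ, 4 − λ) the walk starts on the rising piece, hops to the single break point at λ = 2, finds the active slope negative, and stops with the maximum value 2. As the abstract notes, the cost of this walk grows with the number of break points crossed.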
Article
Full-text available
The aggregate subgradient method is developed for solving unconstrained nonsmooth difference of convex (DC) optimization problems. The proposed method shares some similarities with both the subgradient and the bundle methods. Aggregate subgradients are defined as a convex combination of subgradients computed at null steps between two serious steps. At each iteration search directions are found using only two subgradients: the aggregate subgradient and a subgradient computed at the current null step. It is proved that the proposed method converges to a critical point of the DC optimization problem and also that the number of null steps between two serious steps is finite. The new method is tested using some academic test problems and compared with several other nonsmooth DC optimization solvers.
Article
The Kelley cutting-plane method is one of the methods commonly used to optimize the dual function in the Lagrangian relaxation scheme. Usually the Kelley cutting-plane method uses the simplex method as the optimization engine. It is well known that the simplex method leaves the current vertex, follows an ascending edge, and stops at the nearest vertex. What would happen if one continued the line search up to the best point instead? As a possible answer, we propose the face simplex method, which freely explores the polyhedral surface by following Rosen's gradient projection combined with a global line search on the whole surface. Furthermore, to avoid the zig-zagging of the gradient projection, we propose a conjugate gradient version of the face simplex method. For our preliminary numerical tests we have implemented this method in Matlab. This implementation clearly outperforms basic Matlab implementations of the simplex method. Against state-of-the-art simplex implementations in C or similar, our Matlab implementation is only competitive in the case of many cutting planes.