Article

A two-stage stochastic programming model for transportation network protection


Abstract

Network protection against natural and human-caused hazards has become a topical research theme in engineering and the social sciences. This paper focuses on the problem of allocating limited retrofit resources over multiple highway bridges to improve the resilience and robustness of the entire transportation system in question. The main modeling challenges in network retrofit problems are to capture the interdependencies among individual transportation facilities and to cope with the extremely high uncertainty in the decision environment. In this paper, we model the network retrofit problem as a two-stage stochastic programming problem that optimizes a mean-risk objective of the system loss. This formulation hedges well against uncertainty but also imposes computational challenges due to the involvement of integer decision variables and the increased dimension of the problem. An efficient algorithm is developed, by extending the well-known L-shaped method using generalized Benders decomposition, to handle the binary integer variables in the first stage and the nonlinear recourse in the second stage of the model formulation. The proposed modeling and solution methods are general and can be applied to other network design problems as well.
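The abstract's mean-risk idea can be illustrated with a deliberately tiny sketch (not the paper's actual model or data): binary retrofit decisions in the first stage, scenario-dependent system losses in the second, and a risk term taken here as the worst-case loss for simplicity. All bridge costs, losses, and probabilities below are invented, and brute-force enumeration stands in for the paper's decomposition algorithm.

```python
from itertools import product

# Toy illustration (not the paper's actual model): choose which of 3 bridges
# to retrofit (binary first stage) under a budget, then pay a scenario-dependent
# system loss (second stage). All numbers below are made up for the example.
RETROFIT_COST = [4.0, 3.0, 5.0]   # hypothetical cost of retrofitting each bridge
BUDGET = 8.0
# LOSS[s][i] = loss contributed by bridge i in scenario s if it is NOT retrofitted
LOSS = [[6.0, 2.0, 9.0],
        [1.0, 7.0, 3.0],
        [5.0, 5.0, 5.0]]
PROB = [0.2, 0.5, 0.3]            # scenario probabilities
LAMBDA = 0.5                      # weight on the risk term in the mean-risk objective

def evaluate(x):
    """Mean-risk objective: expected loss + LAMBDA * worst-case loss."""
    scenario_loss = [sum(l for l, xi in zip(row, x) if not xi) for row in LOSS]
    expected = sum(p * q for p, q in zip(PROB, scenario_loss))
    worst = max(scenario_loss)
    return expected + LAMBDA * worst

best = min(
    (x for x in product([0, 1], repeat=3)
     if sum(c * xi for c, xi in zip(RETROFIT_COST, x)) <= BUDGET),
    key=evaluate,
)
print(best, round(evaluate(best), 3))
```

For three bridges the eight first-stage choices can be enumerated directly; the L-shaped/generalized Benders machinery described in the abstract matters precisely when such enumeration becomes intractable.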


... One stream of work focuses on pre-event planning or immediate post-event actions for the best performance of emergency responses or recoveries (Chen, Miller-Hooks 2012; Dixit et al. 2016; Huang et al. 2007; Miller-Hooks et al. 2012; Mohaymany, Pirnazar 2007). Another stream focuses on hardening the in-operation network by implementing protective measures ahead of disruptions (Liu et al. 2009; Lu et al. 2018). ...
... It quantifies the magnitude to which downside returns exceeding the VaR can be reduced. Planning an uncertain transportation network with CVaR embedded provides a hedge against the costs of extreme scenarios (Lei et al. 2018; Liu et al. 2009; Lu et al. 2018). ...
... Network users are treated independently and, in equilibrium, cannot benefit from merely changing their own routing decisions. Since theories modelling individual routing behaviour still require further validation in practice, this paper adopts the system-optimization principle, which holds that all flow assignments are under centralized direction and control; with it we can estimate a lower bound on the cargo flow cost achievable at the system level (Liu et al. 2009). When elements such as freight demands and network capacity remain stable, the planning of the HPFN is a standard facility location problem minimizing the overall sum of the fixed cost and the variable cost. ...
Article
Full-text available
Many previous cases have shown that port operations are susceptible to disruptive events. This paper proposes a 2-stage Stochastic Programming (SP) model for port users to reliably plan the hinterland-port intermodal freight network with consideration of risk aversion in cost. Probabilistic disruptions of intermodal terminals are treated as scenario-specific. In the 1st stage, intermodal paths are selected to obtain proper network capacities. In the 2nd stage, cargo flows are assigned for each disruption scenario on the planned network. The 2-stage model is first formulated in a risk-neutral environment to achieve the minimum expectation of total cost. Then, the Mean-Risk (MR) framework is adopted by incorporating a risk measure called Conditional Value-at-Risk (CVaR) into the expectation model, so as to reduce the cost of worst-case disruption scenarios. Benders’ Decomposition (BD) is introduced to efficiently solve the problem with its exponentially many scenarios. Numerical experiments are performed under different risk aversion parameters. With this study, network planners can decide network capacities with reasonable redundancies to improve freight reliability in a cost-effective way. The proposed method provides a simple approach for planners to quantify their risk appetites in cost and to impose them in the planning process, hence trading off the Expected Cost (EC) and the worst-case cost.
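CVaR on a discrete scenario set, as used in the mean-risk framework above, can be computed directly: it is the expected cost in the worst (1 − α) probability tail. A minimal sketch with invented scenario costs and probabilities:

```python
def cvar(costs, probs, alpha=0.9):
    """Conditional Value-at-Risk: expected cost over the worst (1 - alpha)
    probability tail of a discrete scenario distribution."""
    tail = 1.0 - alpha
    # examine scenarios from most to least costly
    order = sorted(range(len(costs)), key=lambda i: -costs[i])
    remaining, acc = tail, 0.0
    for i in order:
        take = min(probs[i], remaining)   # probability mass taken from this scenario
        acc += take * costs[i]
        remaining -= take
        if remaining <= 1e-12:
            break
    return acc / tail

# Hypothetical disruption-scenario costs and probabilities
costs = [100.0, 40.0, 10.0, 5.0]
probs = [0.05, 0.10, 0.35, 0.50]
print(cvar(costs, probs, alpha=0.9))   # average cost of the worst 10% tail
```

Here the expected cost is 15, while the 90%-CVaR is 70: the risk measure focuses exactly on the extreme scenarios that a risk-averse planner wants to hedge.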
... Thus, we can say that a stochastic programming problem is multi-state if the decisions depend on realizations that occur over time. Since its introduction in the seminal papers by Dantzig [2] and Beale [3], the stochastic framework has been widely studied and applied in diverse areas, such as transportation [4], inventory management [5], drug supply [6], and industrial processes [7], among others. ...
... Equalities presented in (3) ensure that customer i is supplied by exactly one facility j; these are called assignment constraints. Constraints given in (4) ensure that if customer i is supplied by facility j, then j must be opened; these are often called variable upper bounds. Inequalities stated in (5) model the customers' orders of preference [26]. ...
... 3: Store the instance in the database. 4: Detect the instance of the 2S-SPLPO with a sensor in the database and dynamically build the mathematical model. 5: Call the ADA algorithm to solve the mathematical model with the sensor. ...
Article
Full-text available
Healthcare service centers must be sited in strategic locations that meet the immediate needs of patients. The current situation due to the COVID-19 pandemic makes this problem particularly relevant. Assume that each center corresponds to an assigned place for vaccination and that each center uses one or more vaccine brands/laboratories. Then, each patient could choose a center instead of another, because she/he may prefer the vaccine from a more reliable laboratory. This defines an order of preference that might depend on each patient who may not want to be vaccinated in a center where there are only her/his non-preferred vaccine brands. In countries where the vaccination process is considered successful, the order assigned by each patient to the vaccination centers is defined by incentives that local governments give to their population. These same incentives for foreign citizens are seen as a strategic decision to generate income from tourism. The simple plant/center location problem (SPLP) is a combinatorial approach that has been extensively studied. However, a less-known natural extension of it with order (SPLPO) has not been explored in the same depth. In this case, the size of the instances that can be solved is limited. The SPLPO considers an order of preference that patients have over a set of facilities to meet their demands. This order adds a new set of constraints in its formulation that increases the complexity of the problem to obtain an optimal solution. In this paper, we propose a new two-stage stochastic formulation for the SPLPO (2S-SPLPO) that mimics the mentioned pandemic situation, where the order of preference is treated as a random vector. We carry out computational experiments on simulated 2S-SPLPO instances to evaluate the performance of the new proposal. We apply an algorithm based on Lagrangian relaxation that has been shown to be efficient for large instances of the SPLPO. 
A potential application of this new algorithm to COVID-19 vaccination is discussed and explored based on sensor-related data. Two further algorithms are proposed to store the patient’s records in a data warehouse and generate 2S-SPLPO instances using sensors.
... In the disaster management literature that incorporates both pre-and post-disaster operations, some papers investigate pre-disaster transportation network mitigation, mainly from the perspective of retrofitting network components. Assuming that a network link can be retrofitted to endure all disaster scenarios, Liu et al. [22] propose a two-stage mean-risk stochastic program where repair and travel delay costs are taken into consideration. Instead of assuming system-optimal traffic flows, Fan and Liu [23] extend Liu et al. [22] to consider user equilibrium traffic flows. ...
... Assuming that a network link can be retrofitted to endure all disaster scenarios, Liu et al. [22] propose a two-stage mean-risk stochastic program where repair and travel delay costs are taken into consideration. Instead of assuming system-optimal traffic flows, Fan and Liu [23] extend Liu et al. [22] to consider user equilibrium traffic flows. Both Liu et al. [22] and Fan and Liu [23] incorporate post-disaster traffic congestion impacts through using the Bureau of Public Roads (BPR) function. ...
... Instead of assuming system-optimal traffic flows, Fan and Liu [23] extend Liu et al. [22] to consider user equilibrium traffic flows. Both Liu et al. [22] and Fan and Liu [23] incorporate post-disaster traffic congestion impacts through using the Bureau of Public Roads (BPR) function. Considering that a link's survival probability can be increased through pre-disaster investment, Peeta et al. [24] propose a two-stage stochastic programming model with the objective of minimizing the expected total traversal cost for multiple origin-destination pairs. ...
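The BPR link performance function mentioned in these excerpts has a simple closed form, t = t0 · (1 + α(v/c)^β), with α = 0.15 and β = 4 as the commonly used default parameters:

```python
def bpr_travel_time(t0, volume, capacity, alpha=0.15, beta=4):
    """Bureau of Public Roads (BPR) link performance function:
    travel time grows with the volume-to-capacity ratio."""
    return t0 * (1.0 + alpha * (volume / capacity) ** beta)

# A link with a 10-minute free-flow time (illustrative numbers):
print(bpr_travel_time(10.0, volume=1200, capacity=1200))  # at capacity: t0 * 1.15
print(bpr_travel_time(10.0, volume=1800, capacity=1200))  # 50% over capacity
```

Because β = 4, congestion cost rises steeply once flow exceeds capacity, which is what makes post-disaster congestion effects matter in these models.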
Article
How to conduct effective and efficient emergency supply planning is a challenging task. In this paper, we tackle a general emergency supply planning problem. The problem not only integrates the decisions of transportation network mitigation and emergency supply pre-positioning before disasters, but also considers post-disaster dynamic transportation planning with traffic congestion effects incorporated. We formulate this problem as a two-stage stochastic programming model, which aims to minimize the expected total cost related to various disaster mitigation, preparedness, and response decisions. A variant of the model is optimally solved by applying a generalized Benders decomposition algorithm, which significantly outperforms state-of-the-art global optimization solvers. Finally, a case study for a hurricane threat in the southeastern U.S. is conducted to demonstrate the advantages of our model and to illustrate insights on the optimal network mitigation and pre-positioning plan as well as the transportation plan. It is shown that considering traffic congestion effects and dynamic transportation plans brings about spatial and temporal flexibility for achieving better emergency supply plans.
... In fact, the reconstruction of nations post-conflict is now recognized as a key element in achieving global stability, security and eradication of poverty in the 21st century (Barakat, 2005). Therefore, transportation network protection against natural and human-caused hazards has become a topical research theme in engineering and social sciences (Liu et al., 2008). ...
... Mitigation of the adverse impacts of natural disasters/armed conflicts has been investigated in a number of research studies. Existing research in this important area focused on: (1) measuring the performance of damaged transportation networks in post-disaster environments (Chang and Nojima, 1998; Chang and Nojima, 2001; Chen and Eguchi, 2003; Nojima and Sugito, 2000); (2) analyzing recovery planning strategies and developing post-event recovery planning models (Farris and Wilkerson, 2001; Kozin and Zhou, 1990; Lambert et al., 1999; Opricovic and Tzeng, 2002); (3) evaluating pre-disaster mitigation policies and developing pre-event mitigation planning models (Gunes and Kovel, 2000; Masri and Moore, 1995); (4) investigating the role of public agencies in post-disaster environments (Kovel and Kangari, 1995; Lambert and Patterson, 2002; Wallied et al., 2009); (5) analyzing post-disaster procurement methods for reconstruction (Le Masurier et al., 2006); (6) developing loss estimation tools for disaster response (Huyck et al., 2006); (7) allocating available funds to transportation network recovery projects (Kaarlaftis et al., 2007); and (8) analyzing the link between structural damage and economic impact (Chang and Nojima, 1998; Wallied et al., 2009). Despite the significant contributions of the above studies, no reported research has focused on weighting the affecting factors, i.e., determining the level of importance of each factor (the percentage effect of each factor on the road recovery priority). ...
... Such a network design problem is commonly formulated as a bi-level optimization problem, where the upper-level and lower-level sub-problems optimize the network design objective and traffic flow conditions, respectively. Network design under uncertainty (Sumalee et al., 2009) further utilizes bi-level stochastic programs (Liu et al., 2009; Unnikrishnan & Waller, 2009) to take into account stochastic demand and/or capacity. On the other hand, network design for UAM operations has become an active research stream in recent years. ...
... Robust approaches are introduced to handle decision making under environments of extreme uncertainty and focus on the worst-case scenario, which is usually more conservative than techniques focusing on expectation. Common formulations for network design under uncertainty include the bi-level stochastic program (Liu et al., 2009; Miller-Hooks et al., 2012; Unnikrishnan & Waller, 2009), the expected value model, the mean-variance model, the chance-constrained model, and the minimax model. ...
Preprint
Full-text available
Urban Air Mobility (UAM), as envisioned by researchers and practitioners, will be achieved through the use of highly automated aircraft that operate and transport passengers and cargo at low altitudes within urban and suburban areas. To operate in complex urban environments, precise air traffic management, in particular the management of traffic overflows due to operational disruptions, will be critical to ensuring system safety and efficiency. To this end, we propose a methodology for the design of UAM networks with reserve capacity, i.e., a design where alternative landing options and flight corridors are explicitly considered as a means of improving contingency management and reducing risk. Similar redundancy considerations are incorporated in the design of many critical infrastructures, yet remain unexploited in the air transportation literature. In our methodology, we first model how disruptions to a given on-demand UAM network might impact the nominal traffic flow and how this flow might be re-accommodated on an extended network with reserve capacity. Then, through an optimization problem, we select the locations and capacities for the backup vertiports that maximize the expected throughput of the extended network over all possible disruption scenarios, where the throughput is the maximal number of flights that the network can accommodate per unit of time. We show that we can obtain the solution for the corresponding bi-level and bi-linear optimization problem by solving a mixed-integer linear program. We demonstrate our methodology in case studies using networks from the Milwaukee, Atlanta, and Dallas--Fort Worth metropolitan areas and show how the throughput and flexibility of UAM networks with reserve capacity can outcompete those without.
... The multicommodity flow problem was first used to model disaster relief planning in Haghani and Oh (1996) as a large-scale deterministic time-space network, which shares common methodologies with transit disruption planning. The two-stage stochastic programming approach has been used extensively in pre-disaster relief network planning (Barbarosoǧlu and Arda, 2004; Liu et al., 2009; Rawls and Turnquist, 2010; Peeta et al., 2010; Noyan, 2012; Hong et al., 2015; Klibi et al., 2018; Elçi and Noyan, 2018). A more comprehensive account of two-stage stochastic problems in disaster relief network planning can be found in Grass and Fischer (2016). ...
... Common solution methods used in these problems include the L-shaped method for two-stage stochastic programming (e.g. Liu et al., 2009; Rawls and Turnquist, 2010; Miller-Hooks et al., 2012) as well as Monte Carlo-based sample average approximation of the disruption scenarios (e.g. Chen and Yang, 2004; Peeta et al., 2010; Miller-Hooks et al., 2012; Chow and Regan, 2014). ...
Article
Full-text available
We propose a new mechanism to design risk-pooling contracts between operators to facilitate horizontal cooperation, mitigate disruption costs, and improve service resilience during disruptions. We formulate a novel two-stage stochastic multicommodity flow model to determine the cost savings of a coalition under different disruption scenarios and solve it using the L-shaped method along with sample average approximation (SAA). Computational tests of the L-shaped method against the deterministic equivalent method with SAA are conducted for network instances with up to 64 nodes, 10 OD pairs, and 1024 scenarios. The results demonstrate that the solution algorithm only becomes computationally effective for larger instances (above 64 nodes) and that SAA maintains a close approximation. The proposed model is applied to a regional multi-operator network in the Randstad area of the Netherlands, with four operators, 40 origin-destination pairs, and over 1400 links for which disruption data is available. Using the proposed method, we identify stable cost allocations among the four operating agencies that could yield a 66% improvement in overall network performance over not having any risk-pooling contract in place. Furthermore, the model allows policymakers to evaluate the sensitivity of any one operator's bargaining power to different network structures and disruption scenario distributions, as we illustrate for the HTM operator in the Randstad.
... Consequently, the expectation is computed as the product of the probability P(s, d) and the cost Q(q, x, s, d) for each disaster scenario (s ∈ S and d ∈ D). It is noteworthy that, in some situations where the occurrence of disasters follows continuous probability distributions, Monte Carlo sampling techniques may be employed to generate a finite collection of discrete randomized disaster scenarios (Liu et al., 2009; Birge and Louveaux, 2011). Drawing on insights from disaster studies, quantifying the probabilities associated with the occurrence of specific disaster sizes is feasible (Field, 2012). ...
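The Monte Carlo discretization described above can be sketched as follows; the lognormal magnitude distribution and the linear penalty recourse are invented placeholders, not the cited papers' models:

```python
import random

# Sketch: discretize a continuous disaster-magnitude distribution into a
# finite scenario set by Monte Carlo sampling, then weight each sampled
# scenario equally (1/N) when approximating the expected recourse cost.
random.seed(42)
N = 1000
scenarios = [random.lognormvariate(0.0, 1.0) for _ in range(N)]  # hypothetical magnitudes

def recourse_cost(x, magnitude):
    """Illustrative recourse: disaster impact beyond the protection level x
    is penalized linearly (penalty rate chosen arbitrarily)."""
    return max(0.0, magnitude - x) * 10.0

x = 1.0  # some fixed first-stage protection level
expected = sum(recourse_cost(x, m) for m in scenarios) / N
print(round(expected, 3))
```

Increasing N tightens the sample average approximation of the true expectation, at the price of a larger second-stage problem in the resulting two-stage program.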
Preprint
Full-text available
Emergency Mobility Facilities (EMFs) possess the capability to dynamically relocate, providing effective responses to fluctuations in emergent demand patterns across temporal and spatial dimensions. This study proposes a two-stage stochastic programming model that integrates the EMF allocation problem and the road network design problem for disaster preparedness. The model takes into account uncertainties arising from emergency demand and road network congestion levels under various sizes and timings of disaster occurrences. The first-stage decision involves determining the fleet size of EMFs and identifying which road links' travel time to reduce. The second-stage decision pertains to the routing and schedule of each EMF for each disaster scenario. Due to the intricate nature of this problem, the resulting model takes the form of a non-convex mixed-integer nonlinear program (MINLP). This poses computational challenges due to the inclusion of bilinear terms, implicit expressions, and the double-layered structure in the second-stage submodel, along with integer decision variables. To efficiently solve the model, a comprehensive set of techniques is applied. This includes employing linearization techniques, converting the second-stage submodel into a single-stage equivalent, transforming an integer variable into multiple binary variables, and utilizing other methods to equivalently reformulate the model into a mixed-integer linear programming problem (MILP). These transformations render the model amenable to solution by the integer L-shaped method. A simplified example clarifies the solution procedures of the model and algorithm, establishing the theoretical foundation for their practical implementation. Subsequently, to empirically demonstrate the practicality of the proposed model and algorithm, a real-world case study is conducted, effectively validating their utility.
... A stochastic model yields an optimal solution to the underlying LP regardless of the variables' level of uncertainty. Different formulations deal with uncertainty, such as two-stage and multi-stage stochastic programming models [10]. This study focuses specifically on the two-stage stochastic programming approach. ...
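A minimal two-stage example makes the structure concrete: a here-and-now decision is fixed before the uncertainty resolves, and a scenario-dependent recourse cost is paid afterwards. All costs and demands below are illustrative, and brute-force search over the first-stage decision stands in for a real solver:

```python
# Minimal two-stage recourse sketch: pick a first-stage quantity x, then in
# each scenario pay for shortage or surplus (the "recourse" cost).
demand = [80, 100, 120]                 # scenario demands
prob = [0.25, 0.50, 0.25]               # scenario probabilities
c, shortage, surplus = 1.0, 4.0, 0.5    # unit costs (invented)

def total_cost(x):
    """First-stage cost plus expected second-stage (recourse) cost."""
    recourse = sum(p * (shortage * max(0, d - x) + surplus * max(0, x - d))
                   for p, d in zip(prob, demand))
    return c * x + recourse

best_x = min(range(0, 201), key=total_cost)
print(best_x, total_cost(best_x))
```

The optimal first-stage choice balances expected shortage against expected surplus across all scenarios at once, which is exactly what distinguishes a recourse model from solving each scenario separately.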
Article
Full-text available
The solar salt mining process involves collecting salt water from the sea and trapping it in an interlinked network of shallow ponds where the sun evaporates most of the water. As soon as the brine reaches its saturation point, salt crystallisation occurs. Several factors influence this mining process, each with a degree of variability that results in a risk factor owing to uncertainty. This study proposes a two-stage stochastic programming model, called ‘a recourse model’, that maximises salt crystallisation while providing solutions that are hedged against uncertainty. First, the theoretical background on optimisation theory is provided, followed by an overview of the mining processes. Second, the recourse model is verified and validated using historical data and by comparing the results with its deterministic counterpart. The main contribution of this study is the formulation of the recourse model and the value that this approach adds when dealing with uncertainty in any decision-making process.
... So-called feasibility cuts are implemented within decomposition algorithms such as the L-shaped method to exclude those solutions from the feasible set. See Angulo et al. (2016) and Dentcheva and Martinez (2012) for theoretical developments, and Liu et al. (2009) and Noyan (2012) for applications of those concepts. ...
... In this regard, extensive resilience-enhancing studies have been conducted to identify optimal resilience investment strategies with budgetary limitations (Mera and Balijepalli, 2020). These strategies include expanding the capacity of critical links in preparation phases (Miller-Hooks et al., 2012), retrofitting vulnerable components of traffic networks in mitigation phases (Chang et al., 2012; Fotuhi and Huynh, 2017; Liu et al., 2009; Zhang and Wang, 2016), reconfiguring relief routes in response phases (Abadi and Ioannou, 2014; Donovan and Work, 2017; Dunn and Wilkinson, 2016; Jin et al., 2014; Wang et al., 2010), and optimizing the recovery schedule for affected links (Bocchini and Frangopol, 2012; Chen and Miller-Hooks, 2012; Nair et al., 2010; Zhang et al., 2017; Zhang and Miller-Hooks, 2015). ...
... The goal of two-stage stochastic programming in the form of (1) is to optimize the combination of objective functions from both stages. This type of problem has numerous practical applications, such as portfolio selection (Shapiro et al. (2021); Dantzig and Infanger (1993)), manufacturing resource allocation (Birge and Louveaux (2011)), inventory control (Dillon et al. (2017)), supply chain management (Marufuzzaman et al. (2014)), disaster management (Noyan (2012)), water resource management (Huang and Loucks (2000)), and transportation network protection (Liu et al. (2009)), to name a few. In many applications, the objective function in the first stage, F(x), is a linear function, and the problem in the second stage, Q(x, y(ξ); ξ), can be formulated as a linear program, in which case the problem is called two-stage stochastic linear programming (two-stage SLP). ...
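In the notation of the excerpt above, the generic two-stage program has the standard textbook form (this is a generic statement, not a formula taken from the cited paper):

```latex
\min_{x \in X} \; F(x) + \mathbb{E}_{\xi}\!\left[ Q(x, \xi) \right],
\qquad \text{where} \quad
Q(x, \xi) \;=\; \min_{y(\xi) \in Y(x,\xi)} \; q\big(y(\xi); \xi\big).
```

When F is linear and the second-stage problem is a linear program for every realization of ξ, this is exactly the two-stage SLP case the excerpt refers to.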
Preprint
Full-text available
In this paper, we design, analyze, and implement a variant of the two-loop L-shaped algorithms for solving two-stage stochastic programming problems that arise from important application areas including revenue management and power systems. We consider the setting in which it is intractable to compute exact objective function and (sub)gradient information, and instead, only estimates of objective function and (sub)gradient values are available. Under common assumptions including fixed recourse and bounded (sub)gradients, the algorithm generates a sequence of iterates that converge to a neighborhood of optimality, where the radius of the convergence neighborhood depends on the level of the inexactness of objective function estimates. The number of outer and inner iterations needed to find an approximate optimal iterate is provided. Finally, we show a sample complexity result for the algorithm with a Polyak-type step-size policy that can be extended to analyze other situations. We also present a numerical study that verifies our theoretical results and demonstrates the superior empirical performance of our proposed algorithms over classic solvers.
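The outer-approximation idea behind the L-shaped method can be demonstrated on a one-dimensional toy problem; here the master problem is "solved" by a grid scan rather than an LP, and all data are invented:

```python
# Toy L-shaped (Benders) iteration, for illustration only:
#   min_x  x + E[Q(x)]  with  Q_s(x) = 5 * max(0, d_s - x),  x in [0, 10].
# The master problem keeps an outer approximation of E[Q] built from
# optimality cuts; a fine grid scan stands in for the master LP.
D = [2.0, 6.0, 9.0]      # hypothetical scenario "demands"
P = [0.3, 0.4, 0.3]      # scenario probabilities

def expected_Q(x):
    """Expected recourse cost: penalty of 5 per unit of unmet demand."""
    return sum(p * 5.0 * max(0.0, d - x) for p, d in zip(P, D))

def subgradient(x):
    """A subgradient of expected_Q at x (piecewise linear and convex)."""
    return sum(-5.0 * p for p, d in zip(P, D) if d > x)

cuts = []                                  # optimality cuts: theta >= a + b*x
grid = [i / 1000.0 for i in range(10001)]  # candidate x values in [0, 10]
x_k = 0.0
for _ in range(20):
    q, g = expected_Q(x_k), subgradient(x_k)
    cuts.append((q - g * x_k, g))          # cut: theta >= q + g*(x - x_k)

    def master_obj(x):
        # theta >= 0 is valid since the recourse cost is nonnegative
        return x + max(0.0, max(a + b * x for a, b in cuts))

    x_k = min(grid, key=master_obj)
    if abs(master_obj(x_k) - (x_k + expected_Q(x_k))) < 1e-6:
        break                              # outer approximation is tight

print(round(x_k, 3), round(x_k + expected_Q(x_k), 3))
```

Each iteration adds one cut where the approximation was loose; the loop stops once the master's lower bound matches the true objective at the candidate, which is the L-shaped convergence criterion in miniature.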
... Optimization models are primarily used to address two issues in traffic network resilience: solving traffic assignment problems, such as user equilibrium (UE) or system optimum (SO) problems, and optimizing the utility of resources for mitigation, preparedness, response, and recovery. Liu et al. constructed a stochastic optimization model of traffic networks based on resilience, using traffic flow, link capacity, time constraints, and recovery speed ranges as constraints; repair start time and progress as decision variables; and minimum maintenance and flow costs and maximum resilience as objectives [43]. Many researchers also use topological indicators corresponding to the topology model to measure resilience. ...
Article
Full-text available
With the continuous development of public transportation, the impact of unexpected events on the operation of bus networks has become increasingly severe due to the growing demand for public transportation and passenger volume. To accurately assess the impact of unexpected events on the operation of bus networks and scientifically evaluate their resilience, this paper proposes a framework for analyzing the resilience of bus networks. With the aim of providing scientific evidence to enhance the reliability of public transportation networks, this framework can be used to determine the resilience of bus networks to unexpected events. The main contributions of this framework cover three aspects: 1. construction of a CRITIC–entropy weighting model for screening and calculating key indicators of bus network resilience; 2. use of resilience cycle theory to construct a model for analyzing the resilience of bus routes, together with a set of resilience quantification factors to calculate route resilience; 3. use of complex network theory to construct a model for analyzing the resilience of the bus network, taking the bus route resilience obtained in the second step as the edge weight. This paper takes the Beijing public transit system as an example and uses real data to verify the accuracy, scientific validity, and feasibility of the proposed framework for analyzing the resilience of public transit networks to sudden events. The resilience analysis framework constructed in this paper improves on existing research on transportation network resilience in theoretical terms. Furthermore, the results output by this framework can provide a decision-making basis for network adjustment and disaster recovery for the management departments of public transportation networks in practical applications.
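The entropy half of a CRITIC–entropy weighting scheme can be sketched as follows (the CRITIC part, which additionally uses contrast intensity and inter-indicator correlation, is omitted here, and the indicator matrix is invented):

```python
import math

# Entropy weighting sketch: rows are alternatives, columns are candidate
# resilience indicators; all values are made up for the example.
X = [[0.6, 0.2, 0.9],
     [0.4, 0.8, 0.7],
     [0.9, 0.5, 0.8]]

def entropy_weights(X):
    """Weight each indicator by its degree of divergence 1 - e_j, where e_j
    is the normalized Shannon entropy of the indicator's column."""
    m, n = len(X), len(X[0])
    divergence = []
    for j in range(n):
        col = [row[j] for row in X]
        s = sum(col)
        p = [v / s for v in col]
        e = -sum(pi * math.log(pi) for pi in p if pi > 0) / math.log(m)
        divergence.append(1.0 - e)
    total = sum(divergence)
    return [d / total for d in divergence]

w = entropy_weights(X)
print([round(x, 4) for x in w])
```

Indicators whose values differ more across alternatives carry more information (lower entropy) and therefore receive larger weights.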
... Diwekar, 2005; U. M. Diwekar, 2003; Kheawhom & Hirao, 2004; Widyasari & Mawengkang, 2012) and financial domains (Bastin et al., 2010; Simanjuntak et al., 2006), fisheries (Agustin et al., 2018; Albornoz & Canales, 2006) and transportation network domains (Liu et al., 2009). The S-MINLP problem model, as suggested in this research, can be expressed as follows. ...
Article
Full-text available
Stochastic programming is a methodology for achieving optimal planning and decision-making outcomes when faced with uncertain data. The subject of investigation is a stochastic optimization problem wherein the outcomes of stochastic data are not disclosed at runtime, and optimizing the decision does not require foresight into forthcoming outcomes. This corresponds closely to the need for immediate optimization in uncertain data settings, enabling effective decision-making in the present moment. The present study introduces a novel methodology for achieving global optimization of a nonlinear mixed-integer stochastic programming model. It centers on two-stage stochastic problems that entail non-linearities in both the objective function and the constraints. The first-stage variables are discrete in nature, whereas the second-stage variables are a combination of continuous and mixed types. Scenario-based representations are used to formulate the problems. The fundamental approach to the nonlinear mixed-integer stochastic programming problem is to convert the model into an equivalent deterministic nonlinear mixed-integer program. The feasibility of this proposition stems from the discrete distribution assumption on the uncertainty, which can be represented by a limited set of scenarios. The model size increases significantly with the number of scenarios and time horizons involved. Filtered probability space, in conjunction with data mining techniques, is employed for scenario generation. The methodology for addressing large-scale nonlinear mixed-integer programming problems involves raising the value of a nonbasic variable beyond its bounds in order to compel a basic variable to attain an integer value. Subsequently, the problem is simplified by holding the integer variable constant and adjusting it incrementally in discrete steps to achieve a globally optimal solution.
... Another direction is to study the RUFLP by assuming correlated disruptions. Liu et al. (2009) and Shen et al. (2011) formulate scenario-based models, which incorporate correlated disruptions through properly defined scenarios. However, their models suffer from poor numerical efficiency, especially when the number of scenarios increases. ...
Article
Full-text available
This paper studies the reliable uncapacitated facility location problem in which facilities are subject to uncertain disruptions. A two-stage distributionally robust model is formulated, which optimizes the facility location decisions so as to minimize the fixed facility location cost and the expected transportation cost of serving customers under the worst-case disruption distribution. The model is formulated in a general form, where the uncertain joint distribution of disruptions is partially characterized and is allowed to have any prespecified dependency structure. This model extends several related models in the literature, including the stochastic one with explicitly given disruption distribution and the robust one with moment information on disruptions. An efficient cutting plane algorithm is proposed to solve this model, where the separation problem is solved respectively by a polynomial-time algorithm in the stochastic case and by a column generation approach in the robust case. Extensive numerical study shows that the proposed cutting plane algorithm not only outperforms the best-known algorithm in the literature for the stochastic problem under independent disruptions but also efficiently solves the robust problem under correlated disruptions. The practical performance of the robust models is verified in a simulation based on historical typhoon data in China. The numerical results further indicate that the robust model with even a small amount of information on disruption correlation can mitigate the conservativeness and improve the location decision significantly. Summary of Contribution: In this paper, we study the reliable uncapacitated facility location problem under uncertain facility disruptions. 
The problem is formulated as a two-stage distributionally robust model, which generalizes several related models in the literature, including the stochastic one with explicitly given disruption distribution and the robust one with moment information on disruptions. To solve this generalized model, we propose a cutting plane algorithm, where the separation problem is solved respectively by a polynomial-time algorithm in the stochastic case and by a column generation approach in the robust case. The efficiency and effectiveness of the proposed algorithm are validated through extensive numerical experiments. We also conduct a data-driven simulation based on historical typhoon data in China to verify the practical performance of the proposed robust model. The numerical results further reveal insights into the value of information on disruption correlation in improving the robust location decisions.
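The scenario-based stochastic special case described above can be illustrated with a tiny brute-force sketch: open a subset of facilities, then evaluate the expected second-stage serving cost over disruption scenarios. All facilities, costs, and scenarios below are invented for illustration, not taken from the paper:

```python
from itertools import combinations

# Illustrative data: 3 candidate facilities, 2 customers.
fixed_cost = [10.0, 12.0, 8.0]           # cost of opening each facility
transport = [[4.0, 9.0, 7.0],            # transport[c][f]: cost of serving
             [8.0, 3.0, 6.0]]            # customer c from facility f
penalty = 100.0                          # cost if no open facility survives

# Scenario-based disruptions: each scenario lists which facilities survive,
# with a probability (correlated failures are encoded directly in scenarios).
scenarios = [({0, 1, 2}, 0.7), ({1, 2}, 0.2), ({2}, 0.1)]

def expected_second_stage(open_set):
    """Expected serving cost: each customer uses the cheapest surviving facility."""
    total = 0.0
    for survive, prob in scenarios:
        alive = open_set & survive
        cost = sum(min((transport[c][f] for f in alive), default=penalty)
                   for c in range(len(transport)))
        total += prob * cost
    return total

# Enumerate all nonempty location decisions (stage one) and pick the cheapest.
best = min(
    (sum(fixed_cost[f] for f in s) + expected_second_stage(set(s)), s)
    for r in range(1, 4) for s in combinations(range(3), r)
)
print(best)
```

Enumeration only works at toy scale; the cutting plane algorithm in the paper exists precisely because realistic instances cannot be solved this way.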
... The optimal solutions are obtained using the Gurobi solver. This modeling approach can be found in other applications, such as transportation network protection problems (Liu et al., 2009), biodiesel supply chain problems (Marufuzzaman et al., 2014), and emergency system management problems (Moreno et al., 2018; Wang, 2020). However, the multi-vehicle multi-compartment IRP mentioned above focuses on the situation where the demands of the customers are deterministic. ...
Article
Full-text available
The inventory routing problem (IRP) arises in the joint practices of vendor-managed inventory (VMI) and vehicle routing problem (VRP), aiming to simultaneously optimize the distribution, inventory and vehicle routes. This paper studies the multi-vehicle multi-compartment inventory routing problem with stochastic demands (MCIRPSD) in the context of fuel delivery. The problem with maximum-to-level (ML) replenishment policy is modeled as a two-stage stochastic programming model with the purpose of minimizing the total cost, in which the inventory management and routing decisions are made in the first stage while the corresponding resource actions are implemented in the second stage. An acceleration strategy is incorporated into the exact single-cut Benders decomposition algorithm and its multi-cut version respectively to solve the MCIRPSD on the small instances. Two-phase heuristic approaches based on the single-cut decomposition algorithm and its multi-cut version are developed to deal with the MCIRPSD on the medium and large-scale instances. Comparing the performance of the proposed algorithms with the Gurobi solver within limited time, the average objective value obtained by the proposed algorithm has decreased more than 7.30% for the medium and large instances, which demonstrates the effectiveness of our algorithms. The impacts of the instance features on the results are further analyzed, and some managerial insights are concluded for the manager.
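The two-stage structure described above (commit to a replenishment plan, then pay recourse once demand is revealed) can be sketched on a single-product toy instance. Brute force over the order quantity stands in for the Benders decomposition the paper develops, and all numbers are illustrative:

```python
# Toy two-stage replenishment: choose a delivery quantity q up front (stage 1),
# then pay holding/shortage recourse once demand is revealed (stage 2).
demand_scenarios = [(20, 0.3), (35, 0.5), (50, 0.2)]  # (demand, probability)
unit_cost, holding, shortage = 2.0, 1.0, 6.0

def total_cost(q):
    # Expected recourse: holding cost on leftovers, shortage cost on unmet demand.
    recourse = sum(p * (holding * max(q - d, 0) + shortage * max(d - q, 0))
                   for d, p in demand_scenarios)
    return unit_cost * q + recourse

best_q = min(range(0, 61), key=total_cost)
print(best_q, total_cost(best_q))
```

The single-cut and multi-cut Benders variants in the paper approximate exactly this kind of expected recourse function by cutting planes instead of enumerating first-stage decisions.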
... To improve pre-disaster resilience, some articles focus on the allocation of limited renovation resources over transport networks with the objective of improving their resilience and robustness in such a way that potential disruptions incur minimal negative impact (Jin et al., 2014). Liu et al. (2009) study how to improve the resilience and robustness of transportation systems by retrofitting multiple highway bridges while minimizing future seismic losses. The renovation goal of Du and Peeta (2014) is to enhance network survivability and minimize the expected post-disaster response time. ...
Thesis
Full-text available
In addition to operating close to their maximum capacity, transport networks, and especially urban ones, are subject to various disruptions induced by human, technical or natural factors, which often generate loss of performance, damage and high maintenance costs. Introduced in the 1970s, the notion of resilience represents the ability of a system to maintain an acceptable level of performance in the presence of a disruption. Modeling and quantifying the resilience of multimodal, large-scale, urban transport networks is expected to allow cities to guarantee higher quality of service and seamless mobility, even in the presence of disruptions and major, predictable events. The research presented in this dissertation is motivated by the need to properly define the resilience of transport networks in order to understand their vulnerabilities. Such a definition aims at improving the functioning of the network under disruption and anticipating the loss of performance by means of a resilience-oriented transport network design. In the literature, two major approaches aim at quantifying network resilience. On the one hand, the topological approach, based on graph theory, characterizes the static components of transport resilience, as issued from the redundancy of the network and its connectivity. On the other hand, the dynamic approach takes into account traffic dynamics and leverages traffic theory to quantify the resilience induced by the network users' behaviors and the transport network's performance. The combination of the static and the dynamic approaches for resilience characterization is promising and provides deeper insights into the properties of a network, both in terms of its topology and performance. Centrality measures, which aim at ranking the importance of graph components and are issued from graph theory, are mainly analyzed to characterize transport networks in static settings.
By computing them on dynamic weighted graphs that capture traffic conditions and by adapting their formulation to consider the users' demand, we are able to jointly consider network topology and traffic dynamics in resilience characterization. To emulate the impact of disruptions, both simulated and real data are considered. A stress test methodology, mostly used in the banking and nuclear sectors, which aims at simulating the worst scenarios in order to analyze the impact and the reaction of the network, is developed to observe the transport network behavior. Finally, we develop a quick-to-compute methodology that aims at prioritizing the construction of new transport mode lines by maximizing the performance improvement in a resilience context. We also propose an algorithm for the optimal deployment of a disruption-adapted park-and-ride system.
... For a general introduction to the L-shaped method and (generalized) Benders decomposition, see, e.g., Benders (1962). Recent advances in Benders decomposition for solving mixed-integer (stochastic) optimization problems are presented in Carøe and Tind (1998), Liu et al. (2009), and Fischetti et al. (2016). The concept has been extended to robust optimization (Takriti and Ahmed 2004). ...
Article
Full-text available
Problem definition: International humanitarian organizations (IHOs) prepare a detailed annual allocation plan for operations that are conducted in the countries they serve. The annual plan is strongly affected by the available financial budget. The budget of IHOs is derived from donations, which are typically limited, uncertain, and to a large extent earmarked for specific countries or programs. These factors, together with the specific utility function of IHOs, render budgeting for IHOs a challenging managerial problem. In this paper, we develop an approach to optimize budget allocation plans for each country of operations. Academic/practical relevance: The current research provides a better understanding of the budgeting problem in IHOs given the increasing interest of the operations management community for nonprofit operations. Methodology: We model the problem as a two-stage stochastic optimization model with a concave utility function and identify a number of analytical properties for the problem. We develop an efficient generalized Benders decomposition algorithm as well as a fast heuristic. Results: Using data from the International Committee of the Red Cross, our results indicate 21.3% improvement in the IHO’s utility by adopting stochastic programming instead of the expected value solution. Moreover, our solution approach is computationally more efficient than other approaches. Managerial implications: Our analysis highlights the importance of nonearmarked donations for the overall performance of IHOs. We also find that putting pressure on IHOs to fulfill the targeted missions (e.g., by donors or media) results in lower beneficiaries’ welfare. Moreover, the IHOs benefit from negative correlation among donations. Finally, our findings indicate that, if donors allow the IHO to allocate unused earmarked donations to other delegations, the performance of the IHO improves significantly.
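A toy version of the concave-utility allocation above: with a separable concave objective, repeatedly handing the next budget unit to the delegation with the largest marginal utility gain is optimal. The square-root utilities and country weights below are invented for illustration and are not the IHO utility function from the paper:

```python
import math

# Illustrative budget allocation with concave (sqrt) country utilities:
# give each budget unit to the country with the highest marginal gain,
# which is optimal for separable concave objectives with unit steps.
weights = {"A": 4.0, "B": 1.0, "C": 2.25}   # hypothetical country weights
budget, step = 30, 1

alloc = {c: 0 for c in weights}
for _ in range(budget // step):
    best = max(weights, key=lambda c: weights[c] * (math.sqrt(alloc[c] + step)
                                                    - math.sqrt(alloc[c])))
    alloc[best] += step
print(alloc)
```

The resulting split is roughly proportional to the squared weights, mirroring how concave utilities spread resources across delegations rather than concentrating them; earmarking would add constraints on top of this picture.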
... Barbarosoğlu and Arda [8] introduced for the first time a two-stage SP model for both the pre-disaster and post-disaster stage where the supply capacities and demands are considered as random variables captured by a set of scenarios. Liu et al. [15] modeled the network retrofit problem as a two-stage SP to optimize a mean-risk objective of the system losses. Fan and Liu [16] formulated a two-stage SP with equilibrium constraints on pre-disaster transportation network protection problems against uncertain future disasters. ...
Article
The storage and distribution of medical supplies are important parts of epidemic prevention and control. This paper first proposes a new nonsmooth two-stage stochastic equilibrium model of medical supplies in epidemic management. The first stage addresses the storage in the pre-disaster phase, and the second stage focuses on the dynamic distribution by incorporating competition among multiple hospitals over a period of time in the post-disaster phase. The uncertainties are the numbers of infected people treated in multiple hospitals during the period of time, which are time-varying around a nominal distribution predicted from historical experience. The two-stage stochastic equilibrium model is further approximated and transformed into a monotone two-stage stochastic variational inequality (SVI) model that is computationally tractable, with the aid of a smooth approximation technique. We employ the progressive hedging method (PHM) to solve a case study in the city of Wuhan, China, which suffered from the COVID-19 pandemic. Numerical results are presented to demonstrate the effectiveness of the proposed model in planning the storage and dynamic distribution of medical supplies in epidemic management.
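The progressive hedging method mentioned above can be demonstrated on a deliberately simple scenario problem whose subproblems have closed-form solutions; the targets, probabilities, and penalty parameter are illustrative only:

```python
# Progressive hedging on a toy problem: min over x of E[(x - t)^2], where
# the target t varies by scenario. Each scenario subproblem is solved in
# closed form; the penalty rho and multipliers w enforce nonanticipativity.
targets = [(10.0, 0.5), (20.0, 0.3), (40.0, 0.2)]   # (t_s, probability)
rho = 1.0
w = [0.0] * len(targets)
x_bar = 0.0
for _ in range(200):
    # Scenario solves: argmin (x - t)^2 + w*x + (rho/2)*(x - x_bar)^2.
    xs = [(2 * t - w[s] + rho * x_bar) / (2 + rho)
          for s, (t, _) in enumerate(targets)]
    x_bar = sum(p * xs[s] for s, (_, p) in enumerate(targets))
    w = [w[s] + rho * (xs[s] - x_bar) for s in range(len(targets))]
print(x_bar)   # converges to the expected-value minimizer
```

For this convex quadratic the iterates contract geometrically to the implementable solution (here the probability-weighted mean of the targets); the paper applies the same mechanism to a far richer nonsmooth equilibrium model.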
... The early OR papers discussing protection models in a probabilistic environment consider investment decisions to strengthen infrastructure links in transportation networks. As an example, Liu et al. (2009) propose a two-stage stochastic programming problem to minimise the total expected physical and social losses caused by potential disruptions. Fan and Liu (2010) analyse the problem of allocating limited retrofit resources to highway bridges so as to reduce structural losses and travel delays. ...
Article
Supply chains, as vital systems to the well-being of countries and economies, require systematic approaches to reduce their vulnerability. In this paper, we propose a nonlinear optimisation model to determine an effective distribution of protective resources among facilities in service and supply systems so as to reduce the probability of failure to which facilities are exposed in case of external disruptions. The failure probability of protected assets depends on the level of protection investments and the ultimate goal is to minimise the expected facility-customer transport or travel costs to provide goods and services. A linear version of the model is obtained by exploiting a specialised network flow structure. Furthermore, an efficient GRASP solution algorithm is developed to benchmark the linearised model and resolve numerical difficulties. The applicability of the proposed model is demonstrated using the Toronto hospital network. Protection measures within this context correspond to capacity expansion investments and reduce the likelihood that hospitals are unable to satisfy patient demand during periods of high hospitalisation (e.g. during a pandemic). Managerial insights on the protection resource distribution are discussed and a comparison between probabilistic and worst-case disruptions is provided.
... A common first-stage decision is the in-advance storage of relief items (Davis et al., 2013; Lodree Jr et al., 2012) or locating facilities (Elçi and Noyan, 2018; Li et al., 2011). A few authors use other first-stage decisions such as the retrofitting of roads (Peeta et al., 2010), buildings (Zolfaghari and Peyghaleh, 2015) or bridges (Liu et al., 2009). Prevalent second-stage decisions are the transport of commodities in the aftermath of a disaster (Rezaei-Malek et al., 2016; Tofighi et al., 2016) or an evacuation plan (Li et al., 2011, 2012). ...
Article
Full-text available
In this paper, we will shed light on when to pack and use 3D-printers in disaster response operations. For that, we introduce a new type of problem, which we call the two-stage stochastic 3D-printing knapsack problem. We provide a two-stage stochastic programming formulation for this problem, for which both the first and the second stage are NP-hard integer linear programs. We reformulate this formulation into an equivalent integer linear program, which can be efficiently solved by standard solvers. Our numerical results illustrate that in most situations using a 3D-printer is beneficial. Packing no 3D-printer is the best option only in extreme circumstances: when the quality of printed items is extremely low, when the 3D-printer is extremely large compared to the knapsack, when there is no time to print the items, or when demand for items is low.
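The pack-or-print trade-off above can be sketched with a deterministic toy: either fill the knapsack with physical items, or sacrifice capacity for a printer that produces unpacked items at a quality discount. All weights, values, and the quality factor are invented, and the brute-force enumeration stands in for the paper's integer linear program:

```python
from itertools import combinations

# Toy pack-or-print decision: pack a 3D-printer (losing capacity) and print
# unpacked items at a quality discount, or use the full knapsack for items.
items = [(3, 10.0), (4, 12.0), (2, 7.0), (5, 14.0)]   # (weight, value)
capacity, printer_weight, print_quality = 8, 5, 0.6

def best_value(cap, can_print):
    """Enumerate packed subsets; unpacked items yield printed value if allowed."""
    best = 0.0
    n = len(items)
    for r in range(n + 1):
        for s in combinations(range(n), r):
            if sum(items[i][0] for i in s) > cap:
                continue
            packed = sum(items[i][1] for i in s)
            printed = (print_quality * sum(items[i][1] for i in range(n)
                                           if i not in s)) if can_print else 0.0
            best = max(best, packed + printed)
    return best

no_printer = best_value(capacity, can_print=False)
with_printer = best_value(capacity - printer_weight, can_print=True)
print(no_printer, with_printer)
```

Here packing the printer wins even at a 40% quality discount, echoing the paper's finding that the printer is beneficial in most situations; the stochastic formulation additionally averages the second stage over demand scenarios.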
... Calvert and Snelder (2018) base their methodology on volatility in traffic flow. Liu et al. (2009) compute travel costs from the total travel time incurred by all the vehicles in the system, with the stipulation that all demand must be met in each disruption scenario. Fan and Liu (2010) compute travel costs assuming that user equilibrium is fully achieved, and only penalize solutions containing unsatisfied demand. ...
Article
In this study, we measure the resilience of the road networks in two Mediterranean regions: Valencia (Spain) and Sardinia (Italy). We apply a framework that is able to monitor the deterioration in territorial accessibility of the two systems in response to the cumulative elimination of sections. Road sections are removed according to different elimination types: random order, deterministic order of criticality, and deterministic order in areas at high risk of flooding. The results show that the Sardinian network is more resilient than the Valencian network, despite its poorer quality. We demonstrate that the framework can integrate climate change considerations in the resilience assessment. As the framework identifies the most critical sections, the method can be adopted as a support system by transport planners and policy makers.
... Most of these studies assume the condition F(x) < +∞ for x ∈ X, which is referred to as the relatively complete recourse condition in the context of two-stage stochastic programming. As an important class of (1.1), two-stage stochastic programming has applications in transportation planning [4,25], disaster management [28], water resource management [19] and inventory management [15]. However, in many real-world applications, the relatively complete recourse assumption becomes restrictive and there has been a growing literature studying two-stage stochastic programming without this assumption, i.e. ...
Preprint
We investigate the feasibility of sample average approximation (SAA) for general stochastic optimization problems, including two-stage stochastic programming without the relatively complete recourse assumption. Instead of analyzing problems with specific structures, we utilize results from the Vapnik-Chervonenkis (VC) dimension and Probably Approximately Correct (PAC) learning to provide a general framework that offers explicit feasibility bounds for SAA solutions under minimal structural or distributional assumptions. We show that, as long as the hypothesis class formed by the feasible region has a finite VC dimension, the infeasibility of SAA solutions decreases exponentially with computable rates and explicitly identifiable accompanying constants. We demonstrate how our bounds apply more generally and competitively compared to existing results.
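The SAA idea above can be seen in miniature on a chance constraint: pick the smallest decision x with P(demand ≤ x) ≥ 0.9, estimated from samples. The exponential demand and the 0.9 level are illustrative choices, not from the preprint:

```python
import math
import random

# Sketch of sample average approximation (SAA) for a chance constraint:
# choose the smallest x with P(demand <= x) >= 0.9 from n samples. The true
# 0.9-quantile of Exp(1) demand is -ln(0.1) ~= 2.303; as the sample grows,
# the SAA decision concentrates around the truly feasible choice.
random.seed(42)

def saa_solution(n, level=0.9):
    sample = sorted(random.expovariate(1.0) for _ in range(n))
    return sample[int(level * n)]     # empirical quantile as the SAA decision

true_q = -math.log(0.1)
small, large = saa_solution(100), saa_solution(10_000)
print(round(small, 3), round(large, 3), round(true_q, 3))
```

The preprint's contribution is to bound how fast the infeasibility of such sample-based decisions vanishes, for general feasible regions of finite VC dimension rather than this one-dimensional quantile example.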
Article
Full-text available
This paper tackles a complex logistics challenge of disaster management, encompassing warehouse location, pre-disaster inventory planning, routing, and post-disaster relief supply delivery. We establish an iterative process for optimizing relief distribution to shelters. Adaptable warehouse inventory reallocation responds to fluctuating demands, guided by a two-phase mathematical programming approach. In the first phase, a two-stage stochastic programming (TSSP) model determines optimal warehouse and shelter locations and inventory levels. In the subsequent phase, we introduce a mixed-integer programming (MIP) model to minimize the overall delivery time by making routing decisions. To streamline the process, we introduce a novel enumeration algorithm that trims down route options by considering unavailable links, effectively transforming the MIP model into an assignment-based model. This innovation results in a noticeable 74% reduction in solution time. Further efficiency is achieved by developing a branch-and-cut algorithm for swift MIP resolution. A real-world case study confirms the practicality of our approach.
Article
In a cloud, protection through backup of virtual machines contained in physical machines (PMs) reduces damage to users due to failure of PMs, such as hardware malfunctions. However, from a resource cost perspective, it is necessary to reduce the amount of capacity for backup while limiting the probability of unsuccessful protection. Existing studies suggest sharing backup capacity among primary resources, but the achievable reduction in backup capacity is limited. This paper proposes a backup resource allocation model with two-stage probabilistic protection to minimize the total required backup capacity for multiple simultaneous failures of PMs. In probabilistic protection, backup resources are allocated so that the probability of backup failure does not exceed a given survivability parameter which represents the acceptable probability of backup failure. Probabilistic protection, which achieves efficient sharing of backup capacity, enables flexible allocation and reduces the required backup capacity. In order to increase the flexibility of backup capacity allocation, the proposed model extends the probabilistic protection to two stages. By dividing the protection into two stages, the weight of probability between stages can be adjusted, enabling more effective capacity sharing. Since it is uncertain which primary PMs fail, we apply robust optimization to the probabilistic protection. By using a table that takes into account the survivability parameter and the failure probability of PMs, the proposed model is formulated as a mixed integer linear programming problem. We prove the NP-hardness of the considered problem. A heuristic is introduced to solve the optimization problem. The proposed model can reduce the total required backup capacity compared to the models with dedicated protection and one-stage probabilistic protection.
The model can also provide protection in a range of survivability parameters that the model with one-stage probabilistic protection cannot satisfy.
Article
As an emerging concept of a system's ability to resist disasters, transportation system resilience (TSR) has attracted the attention of scholars. In this study, a bibliometric analysis of 303 academic publications from the Web of Science related to TSR is presented to portray the intellectual landscape by visualizing the evolution of the collaboration network, the co-citation network, and keyword co-occurrence. The results showed that the number of publications in the field of TSR exploded after 2014 and attracted multi-disciplinary and cross-disciplinary attention from all over the world. Collaborative efforts among authors in this field tend to be confined to small groups, and a large-scale collaborative network has yet to be established. This study also identifies the influential journals, institutions, scholars, and literature and summarizes the hot research topics and future research directions. In addition, it offers valuable references for researchers interested in TSR, and puts forward recommendations on the emphasis and orientations of future studies.
Article
Full-text available
Seismology is among the intrinsic sciences that strictly affect human lives. Many research efforts are presented in the literature aiming at achieving risk mitigation and disaster management. More particularly, modern technologies have been employed to this end. However, the day-to-day challenges and complexities of this natural science that face stakeholders still need more reliable and intelligent solutions. The solution can depend on a partial or integrated system of modern technologies. In this paper, we extensively survey the co-related modern technologies aiming to gather the major efforts exerted in this regard. It also outlines the desirability of seismology to modern technologies. Then, we present a detailed analysis of remote sensing and data communication networks (DCNs), which are considered the backend of seismic networks. Furthermore, for seismology, we depict both classical and non-classical approaches based on DCN principles, such as optical fiber-based acoustic sensors, social media, and the internet of things (IoT). Following that, a comprehensive description of the various optimization techniques utilized for seismic wave analysis and for prolonging network lifetime is offered. A description of the important roles that artificial intelligence (AI) can play in different fields of seismology is also included. Finally, we present some recommendations for stakeholders to prevent natural calamities and preserve human lives.
Article
Transportation network resiliency plays a significant role in reducing the devastating impacts of a disaster. This paper develops a mathematical model to improve the resilience of road–bridge transportation networks in the recovery phase. The model consists of two objective functions. The first objective function minimises total recovery time, and the second one prioritises damaged bridges for restoration to increase network performance level in an early stage by maximising the skewness of the recovery trajectory. This paper considers disrupted bridges as repair projects and schedules a network of recovery tasks with precedence relationships to restore each bridge while considering resource constraints. Thus, it is similar to the resource-constrained multi-project scheduling problem. A genetic algorithm is applied to solve the model for a large-scale hypothetical road–bridge transport network, and the highway network of Shelby County, TN, USA, following a seismic hazard. Different recovery task networks are considered to restore disrupted bridges. Three communities with rich, average and poor resources are compared to evaluate the impact of different community investment structures on the recovery process and network recovery time. The results show that the proposed restoration plan leads to achieving the best recovery trajectory and shortest total recovery time.
Article
The communication network is the critical infrastructure and backbone of modern society. On the other hand, regional failures, including natural disasters and malicious attacks, pose a major threat to its capacity and quality of service. Therefore, it is important to design a communication network that is resilient enough to withstand disasters. A key factor of network resilience is diversity, as multiple unrelated elements prevent them from sharing the same fate. This paper focuses on the geodiversity of the network and takes it as the resilience metric for the network under geographically highly correlated regional failures. The communication networks with high geodiversity are designed by shielding critical components, where the shielded ones can survive disasters. One of the attractive advantages of the approach is its implementability, more than just a theoretical technique. Meanwhile, we develop an integer nonlinear programming (INLP) model to obtain the minimum cost shielding for communication networks with desired geodiversity. Since the problem is difficult to solve in polynomial time, a genetic-based heuristic algorithm is applied to get a near-optimal solution in a shorter time. Finally, simulation experiments are performed to illustrate our methodology and its effectiveness, and the results highlight the collective role of network elements.
Chapter
This work is devoted to an analysis of exact penalty functions and optimality conditions for nonsmooth two-stage stochastic programming problems. To this end, we first study the co/quasidifferentiability of the expectation of nonsmooth random integrands and obtain explicit formulae for its co and quasidifferential under some natural assumptions on the integrand. Then, we analyse exact penalty functions for a variational reformulation of two-stage stochastic programming problems and obtain sufficient conditions for the global exactness of these functions with two different penalty terms. In the end of the chapter, we combine our results on the co/quasidifferentiability of the expectation of nonsmooth random integrands and exact penalty functions to derive optimality conditions for nonsmooth two-stage stochastic programming problems in terms of codifferentials.
Article
Pre-disaster planning and management activities may have significant effects on reducing post-disaster damages. In this article, a two-stage stochastic programming model is provided to design a resilient rescue network assuming that the demands for relief items and the network functionality after the disaster are affected by uncertainty. Locations and capacities of relief centers, the inventory of relief items, and strengthening vulnerable arcs of the network are among the main decisions that must be taken before the disaster. Servicing the affected points is decided after the disaster, and the risk of not satisfying demands is controlled by using the conditional-value-at-risk measure. Since the direct resolution of the model is intractable and time-consuming over actual large-sized instances, an improved Benders decomposition algorithm based on the problem structure is proposed to overcome this difficulty. Computational results highlight the effectiveness of the proposed method compared to the existing approaches.
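The conditional value-at-risk measure used above to control the risk of unmet demand has a simple sample form for equally likely scenarios: the mean of the worst (1 − α) fraction of losses. A minimal sketch, assuming (1 − α)·n works out to a whole number of scenarios; the losses are illustrative:

```python
# Conditional value-at-risk (CVaR) of scenario losses: the expected loss in
# the worst (1 - alpha) tail of equally likely scenarios. For discrete
# equiprobable scenarios this tail mean coincides with CVaR whenever
# (1 - alpha) * n is an integer.
def cvar(losses, alpha=0.8):
    tail = sorted(losses, reverse=True)
    k = max(1, round(len(losses) * (1 - alpha)))
    return sum(tail[:k]) / k

losses = [5, 7, 3, 12, 40, 8, 6, 4, 30, 9]   # illustrative scenario losses
print(cvar(losses, alpha=0.8))               # mean of the two worst scenarios
```

In the optimization model itself CVaR is embedded through the usual linear reformulation with an auxiliary value-at-risk variable, so that the risk of demand shortfall is penalized inside the two-stage program rather than computed after the fact.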
Article
Since disruptive events can cause negative impacts on a city's regular traffic order and economic activities, it is crucial that a transport network is resilient against disaster to prevent significant economic losses and ensure regular social, economic, and traffic order. However, using the transport metric for resilience improvement can only provide a limited view of transport pre-investments. This study develops an optimization framework to tackle the problem of resilient road pre-investment with the aim of resilience enhancement of traffic systems from an economic perspective by applying the integrated computable general equilibrium (CGE) model. First, we use the Shapley value, which considers road links’ interact cooperation, to determine critical candidate links that need to be upgraded. Second, we propose the Economic-based Network Resilience Measure (ENRM) as a performance indicator to evaluate network-level resilience from the economic perspective. Third, a bi-level multi-objective optimization model is formulated to identify the optimal capacity improvement for candidate critical links, where the objectives of the upper-level model are to minimize the ENRM and pre-enhancement budget. The lower-level model is built on the integrated CGE model. The genetic algorithm approach is used to solve the proposed bi-level model. A case study of the optimization framework is presented using a simplified Sydney network. Results suggest that a higher budget can help promote people's social welfare and improve transportation resilience. However, the Pareto-optimality is observed, and the marginal utility decreases with an increase in the investment budget. Further, the results also show that investment returns are higher in severe disasters. 
This study will help transport planners and practitioners optimize resilience pre-event investment strategies by capturing a wider range of project impacts and evaluating their economic impacts under general equilibrium rather than partial economic equilibrium, which is often assumed in traditional four-step transport planning.
Article
Full-text available
In the related literature, conventional approaches to assessing security risk and prioritising bridges have focused on unique characteristics. Although the unique characteristics appropriately reflect the economic and social consequences of failure, they neglect the consequences of a bridge failure at the network level. If network owners and operators prioritise bridges solely based on their unique characteristics, bridges with low object-level importance and high network-level importance have a very low chance of getting priority. In this paper, a bridge importance measurement index α(e) has been presented, prioritising bridges based on their unique characteristics, location and network topology. To describe how to use this index α(e), three numerical examples were provided. While the first example was related to a simple hypothetical network, the second and third examples were real networks related to the bridges of Wroclaw city. Using these examples, the results of bridge prioritisation obtained in the unique-characteristics-only state were compared to the state in which α(e) had been used. Results showed that considering the location of the bridge and the topological characteristics of the network changes the bridge prioritisation. For instance, in the second example, it was observed that the use of α(e) made bridge Bolesława Krzywoustego the essential bridge, while bridge Grunwaldzki was the essential bridge under the previous prioritisation made by researchers. However, the results of the third example showed that bridge Milenijny, which was considered the essential network bridge as stated in the previous prioritisation made by researchers, was again selected as the most critical bridge based on α(e).
Article
Transportation is key to a city's prosperity; however, the development and expansion of a city may make its transportation system complicated, uncertain and vulnerable, especially in the face of damage. Although resilience is a critical factor for understanding and managing the damage and recovery of the transportation system, studies on transportation resilience models that can effectively utilize actual data and are based on real cases are still scarce. Therefore, this paper proposes a resilience model and investigates the optimization of the recovery strategy according to this model. It constructs a modeling framework based on the OD-grid network and provides a new performance metric for transportation network resilience analysis based on grid capacity. It also develops two resilience assessment models and compares their characteristics. Focusing on the recovery of the transportation resilience process, it presents two recovery strategies considering recovery sequence and resource allocation respectively and uses a genetic algorithm (GA) to optimize them and solve the problem. It then demonstrates the effectiveness of the proposed model and the recovery algorithm through the resilience analysis and recovery strategy optimization of a city's transportation network model, which is built from real GPS data.
Article
Seismic risk assessment of road systems involves computationally expensive traffic simulations to evaluate the performance of the system. To accelerate this process, this paper develops a neural network surrogate model that allows rapid and accurate estimation of changes in traffic performance metrics due to bridge damage. Some of the methodological aspects explored when calibrating this neural network are defining sampling protocols, selecting hyperparameters, and evaluating practical considerations of the model. In addition to the neural network, a modified version of the local interpretable model-agnostic explanation (LIME) is proposed as a retrofitting strategy that minimizes earthquakes' impact on the system. The modified version (LIME-TI) uses traffic impacts (TI) and rates of occurrence to aggregate the importance of individual damage realizations during the computation of variable importance. This study uses the San Francisco Bay Area road network as a testbed. As a conclusion of this study, the neural network accurately predicts the system's performance while taking five orders of magnitude less time to compute traffic metrics, allowing decision-makers to evaluate the impact of retrofitting bridges in the system quickly. Moreover, the proposed LIME-TI metric is superior to others (such as traffic volume or vulnerability) in identifying bridges whose retrofit effectively improves network performance.
Article
Reducing vulnerability and enhancing resilience of infrastructure networks subject to uncertain disruptions is a challenging task as these networks become increasingly interconnected and interdependent. This article devotes to addressing the preparedness planning problem of interdependent infrastructure networks. The interdependency between infrastructure networks is characterized by a two-way physical interdependency, that is, the state of one infrastructure network is dependent on that of another infrastructure network, and vice versa. Furthermore, the multistate characteristics of infrastructure networks are taken into account and accommodated to the studied physical interdependency. A tailored two-stage stochastic programming framework, which is able to cope with the uncertainty associated with disruption scenarios, is put forth to facilitate an effective resilience enhancement strategy under a limited budget. In this framework, the first-stage problem selects the optimal protection levels of network components in advance, whereas the second-stage problem optimizes the network operation after a disruption scenario occurs. The proposed framework is implemented to an illustrative system composed of interdependent power–water distribution networks to demonstrate the effectiveness of the resilience enhancement strategy.
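The two-stage structure described above — first-stage protection decisions made before uncertainty is resolved, second-stage operation evaluated per disruption scenario — can be sketched on a toy instance. The components, costs and scenarios below are hypothetical, and the model is small enough to solve by brute-force enumeration; a realistic instance would require a MIP solver or a decomposition method:

```python
from itertools import product

# Toy two-stage stochastic program: choose which of 3 components to
# protect (first stage, binary, subject to a budget), then incur a
# scenario-dependent loss on unprotected components (second stage).
# All numbers are hypothetical.
cost = [4, 3, 2]          # protection cost per component
budget = 6
scenarios = [             # (probability, loss per component if unprotected)
    (0.5, [10, 2, 1]),
    (0.3, [1, 12, 3]),
    (0.2, [2, 2, 15]),
]

def expected_loss(x):
    """Second-stage expected loss given protection vector x."""
    return sum(p * sum(l for xi, l in zip(x, losses) if xi == 0)
               for p, losses in scenarios)

# First stage: enumerate all budget-feasible protection plans.
best = min((x for x in product([0, 1], repeat=3)
            if sum(c * xi for c, xi in zip(cost, x)) <= budget),
           key=expected_loss)
print(best, expected_loss(best))
```

On this instance the optimal plan protects components 0 and 2, which carry the largest scenario losses per unit of protection cost.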
Article
This paper proposes a two-tier last-mile delivery model that optimally selects mobile depot locations in advance of full information about the availability of crowd-shippers and then transfers packages to crowd-shippers for the final shipment to the customers. Uncertainty in crowd-shipper availability is incorporated by modeling the problem as a two-stage stochastic integer program. Enhanced decomposition solution algorithms including branch-and-cut and cut-and-project frameworks are developed. A risk-averse approach is compared against a risk-neutral approach by assessing conditional-value-at-risk. A detailed computational study based on the City of Toronto is conducted. The deterministic version of the model outperforms a capacitated vehicle routing problem on average by 20%. For the stochastic model, decomposition algorithms usually discover near-optimal solutions within two hours for instances up to a size of 30 mobile depot locations, 40 customers, and 120 crowd-shippers. The cut-and-project approach outperforms the branch-and-cut approach by up to 85% in the risk-averse setting in certain instances. The stochastic model provides solutions that are 3.35%–6.08% better than the deterministic model, and the improvements are magnified with increased uncertainty in crowd-shipper availability. A risk-averse approach leads the operator to send more mobile depots or postpone customer deliveries to reduce the risk of high penalties for nondelivery.
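Conditional value-at-risk, the risk measure behind the risk-averse comparison above, can be evaluated for a discrete loss distribution with the Rockafellar–Uryasev minimization formula; the probabilities, losses and confidence level in this sketch are hypothetical:

```python
# CVaR_a(L) = min_t  t + E[max(L - t, 0)] / (1 - a)
# For a discrete distribution the objective is piecewise linear and
# convex in t, so the minimum is attained at one of the loss values.
losses = [(0.6, 1.0), (0.3, 5.0), (0.1, 30.0)]   # (probability, loss)

def cvar(losses, alpha):
    def obj(t):
        return t + sum(p * max(l - t, 0.0) for p, l in losses) / (1 - alpha)
    return min(obj(l) for _, l in losses)

print(cvar(losses, 0.9))   # expectation over the worst 10% of outcomes
```

Here the worst 10% of the distribution is exactly the loss of 30, so CVaR at level 0.9 equals 30, while CVaR at level 0.5 averages over the worst half.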
Article
This paper presents a novel method for prioritising bridge retrofits within a regional road network subject to uncertain seismic hazard, using a technique that accounts for network performance while avoiding the combinatoric computational costs of exhaustive searches. Using global variance-based sensitivity analysis, a probabilistic ranking of bridges is determined according to how much their retrofit statuses influence the expected cost of the road network disruption. Bridges’ total-order sensitivity (Sobol’) indices are estimated with respect to the expected cost using the hybrid-point Monte Carlo approximation method. A bridge’s total-order Sobol’ index measures how much its retrofit status influences the variance of the expected cost of the road network performance and accounts for the effect of its interactions with other bridges’ retrofit states. For 71 highway bridges in San Francisco, a retrofit strategy based on bridges’ total-order Sobol’ indices outperforms other heuristic strategies. The proposed method remains computationally tractable while accounting for the probabilistic nature of the seismic hazard, the uniqueness of individual bridges, network effects, and decision-makers’ priorities. Because this method leverages existing risk assessment tools and models without imposing further assumptions, it should be extensible to other types of networks under different types of hazards and to other decision variables.
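A total-order Sobol' index can be computed by brute force for a small toy function, which may help make the idea concrete. The three binary "retrofit status" inputs, the cost function and its interaction term below are hypothetical, not taken from the paper; the interaction between inputs 0 and 2 is exactly what a total-order (as opposed to first-order) index picks up:

```python
from itertools import product

# Brute-force total-order Sobol' indices for a toy cost function of
# three independent, equiprobable binary inputs:
#   S_Ti = E[Var(f | all inputs except i)] / Var(f)
def f(x):
    return 10 * x[0] + 2 * x[1] + 5 * x[0] * x[2]

points = list(product([0, 1], repeat=3))
mean = sum(f(x) for x in points) / 8
var = sum((f(x) - mean) ** 2 for x in points) / 8

def total_index(i):
    others = [j for j in range(3) if j != i]
    acc = 0.0
    for vals in product([0, 1], repeat=2):       # fix the other two inputs
        def point(b):
            x = [0, 0, 0]
            for j, v in zip(others, vals):
                x[j] = v
            x[i] = b
            return tuple(x)
        y0, y1 = f(point(0)), f(point(1))
        acc += (y1 - y0) ** 2 / 4                # Var over x_i in {0, 1}
    return (acc / 4) / var                       # average over fixed inputs

print([round(total_index(i), 3) for i in range(3)])
```

Note that the three indices sum to more than one because the interaction term is counted in the total-order index of both inputs 0 and 2.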
Article
Retrofitting critical components of a critical infrastructure system to improve its seismic performance has been considered as the most frequently used mitigation strategy in both literature and practice. This article mainly studies this mitigation strategy and formulates the seismic retrofit optimization problem for critical infrastructure systems under a limited retrofit budget in a general form, and then proposes a heuristic method to solve the problem efficiently in terms of the optimality gap and the computational cost. The proposed method mainly includes three steps: (1) generates a limited number of component damage scenarios to reformulate the problem as an approximated model; (2) adopts a component importance‐based method to reduce the solution space and applies the integer L‐shaped method to solve the approximated model; (3) employs the sample average approximation method to enhance the solution quality. To demonstrate the performance of the proposed method, it is applied to identify the optimal retrofit strategies for the Shelby power transmission system, the IEEE 14‐bus test system, and the IEEE 118‐bus test system and also compared with several existing methods. Results show that the proposed method is significantly more efficient than those existing methods. For the IEEE 14‐bus test system, the proposed method gets almost exact solutions, with errors less than 0.29%; for the other two systems, it returns the best solutions among all methods under various retrofit budgets.
Book
Full-text available
1 Introduction.- I: Conceptual and Modeling Issues.- 2 Economic Principles, Issues, and Research Priorities in Hazard Loss Estimation.- 3 Indirect Losses from Natural Disasters: Measurement and Myth.- 4 Has September 11 Affected New York City's Growth Potential?.- II: Economic Models.- 5 Measuring Economic Impacts of Disasters: Interregional Input-Output Analysis Using Sequential Interindustry Model.- 6 Geohazards in Social Systems: An Insurance Matrix Approach.- 7 Computable General Equilibrium Modeling of Electric Utility Lifeline Losses from Earthquakes.- 8 The Fall of the Iron Curtain and the Evolution of German Regional Labour Markets: A Self-Organized Criticality Perspective.- 9 Risk Perception, Location Choice and Land-use Patterns under Disaster Risk: Long-term Consequences of Information Provision in a Spatial Economy.- III: Integrative Models.- 10 The Dynamics of Recovery: A Framework.- 11 Earthquake Disaster Mitigation for Urban Transportation Systems: An Integrated Methodology That Builds on the Kobe and Northridge Experiences.- 12 Analysis of Economic Impacts of an Earthquake on Transportation Network.- 13 Benefit Cost Analysis for Renewal Planning of Existing Electric Power Equipment.- 14 Evaluating the Disaster Resilience of Power Networks and Grids.
Article
Full-text available
J. F. Benders devised a clever approach for exploiting the structure of mathematical programming problems with complicating variables (variables which, when temporarily fixed, render the remaining optimization problem considerably more tractable). For the class of problems specifically considered by Benders, fixing the values of the complicating variables reduces the given problem to an ordinary linear program, parameterized, of course, by the value of the complicating variables vector. The algorithm he proposed for finding the optimal value of this vector employs a cutting-plane approach for building up adequate representations of (i) the extremal value of the linear program as a function of the parameterizing vector and (ii) the set of values of the parameterizing vector for which the linear program is feasible. Linear programming duality theory was employed to derive the natural families of cuts characterizing these representations, and the parameterized linear program itself is used to generate what are usually deepest cuts for building up the representations. In this paper, Benders' approach is generalized to a broader class of programs in which the parametrized subproblem need no longer be a linear program. Nonlinear convex duality theory is employed to derive the natural families of cuts corresponding to those in Benders' case. The conditions under which such a generalization is possible and appropriate are examined in detail. An illustrative specialization is made to the variable factor programming problem introduced by R. Wilson, where it offers an especially attractive approach. Preliminary computational experience is given.
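The cutting-plane mechanics that Benders' approach (and the L-shaped method built on it) relies on can be illustrated in one dimension. In this sketch the subproblem value function Q and its subgradient are given in closed form as stand-ins for solving a parameterized subproblem and reading off its dual multipliers, and the master minimization is a coarse grid search rather than an LP/MIP solve; everything here is illustrative:

```python
# Outer approximation of a convex "recourse" function Q(y) by
# supporting cuts, alternating with a master problem that minimizes
# the current cut model over y in [0, 6].
def Q(y):                       # subproblem value (stand-in)
    return (y - 3.0) ** 2 + 1.0

def subgrad(y):                 # subgradient of Q at y (from duals, in general)
    return 2.0 * (y - 3.0)

cuts, y, ub = [], 0.0, float("inf")
for _ in range(25):
    ub = min(ub, Q(y))                          # best true value visited
    g = subgrad(y)
    cuts.append((g, Q(y) - g * y))              # optimality cut: Q(v) >= g*v + b
    y = min((i * 6.0 / 600 for i in range(601)),    # master: grid search
            key=lambda v: max(a * v + b for a, b in cuts))
lb = max(a * y + b for a, b in cuts)            # cut-model value = lower bound
print(round(lb, 4), round(ub, 4))               # bounds meet at the optimum
```

The upper bound (best subproblem value visited) and the lower bound (master value of the cut model) meet at the true optimum Q(3) = 1, which is the standard convergence certificate in Benders-type schemes.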
Article
Full-text available
This paper addresses a multi-period investment model for capacity expansion in an uncertain environment. Using a scenario tree approach to model the evolution of uncertain demand and cost parameters, and fixed-charge cost functions to model the economies of scale in expansion costs, we develop a multi-stage stochastic integer programming formulation for the problem. A reformulation of the problem is proposed using variable disaggregation to exploit the lot-sizing substructure of the problem. The reformulation significantly reduces the LP relaxation gap of this large scale integer program. A heuristic scheme is presented to perturb the LP relaxation solutions to produce good quality integer solutions. Finally, we outline a branch and bound algorithm that makes use of the reformulation strategy as a lower bounding scheme, and the heuristic as an upper bounding scheme, to solve the problem to global optimality. Our preliminary computational results indicate that the proposed strategy has significant advantages over straightforward use of commercial solvers.
Article
Full-text available
The sample average approximation (SAA) method is an approach for solving stochastic optimization problems by using Monte Carlo simulation. In this technique the expected objective function of the stochastic problem is approximated by a sample average estimate derived from a random sample. The resulting sample average approximating problem is then solved by deterministic optimization techniques. The process is repeated with different samples to obtain candidate solutions along with statistical estimates of their optimality gaps. We present a detailed computational study of the application of the SAA method to solve three classes of stochastic routing problems. These stochastic problems involve an extremely large number of scenarios and first-stage integer variables. For each of the three problem classes, we use decomposition and branch-and-cut to solve the approximating problem within the SAA scheme. Our computational results indicate that the proposed method is successful in solving problems with up to 2^1694 scenarios to within an estimated 1.0% of optimality. Furthermore, a surprising observation is that the number of optimality cuts required to solve the approximating problem to optimality does not significantly increase with the size of the sample. Therefore, the observed computation times needed to find optimal solutions to the approximating problems grow only linearly with the sample size. As a result, we are able to find provably near-optimal solutions to these difficult stochastic programs using only a moderate amount of computation time.
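The SAA loop described above — solve several independent sample-average replications, then screen the candidates on a larger evaluation sample — can be sketched on a toy problem whose true optimum is known. The objective (minimize E[(x − D)²] for uniform demand D, true optimum x* = E[D] = 5) and all parameters are hypothetical:

```python
import random

random.seed(0)

def saa_solve(n):
    """Solve one SAA replication with sample size n.

    The sample-average objective (1/n) * sum((x - d_i)^2) is minimized
    in closed form by the sample mean, so no solver is needed here.
    """
    sample = [random.uniform(0, 10) for _ in range(n)]
    return sum(sample) / n

# Independent replications give candidate solutions; a larger
# evaluation sample screens them (the optimality-gap estimation idea).
candidates = [saa_solve(200) for _ in range(5)]
eval_sample = [random.uniform(0, 10) for _ in range(20000)]

def eval_obj(x):
    return sum((x - d) ** 2 for d in eval_sample) / len(eval_sample)

best = min(candidates, key=eval_obj)
print(round(best, 2))   # close to the true optimum x* = 5
```

In a genuine stochastic integer program each replication would itself require decomposition and branch-and-cut, as in the paper; the closed-form subproblem here only keeps the sketch self-contained.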
Article
Full-text available
Traditional stochastic programming is risk neutral in the sense that it is concerned with the optimization of an expectation criterion. A common approach to addressing risk in decision making problems is to consider a weighted mean-risk objective, where some dispersion statistic is used as a measure of risk. We investigate the computational suitability of various mean-risk objective functions in addressing risk in stochastic programming models. We prove that the classical mean-variance criterion leads to computational intractability even in the simplest stochastic programs. On the other hand, a number of alternative mean-risk functions are shown to be computationally tractable using slight variants of existing stochastic programming decomposition algorithms. We propose decomposition-based parametric cutting plane algorithms to generate mean-risk efficient frontiers for two particular classes of mean-risk objectives. Key words: stochastic programming, mean-risk objectives, computational complexity, decomposition.
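A weighted mean-risk objective of the kind investigated here can be illustrated with upper semideviation as the dispersion statistic (one of the computationally tractable choices); the discrete loss distribution and risk weight below are hypothetical:

```python
# obj = E[L] + lam * E[max(L - E[L], 0)]
# lam = 0 recovers the risk-neutral expectation; larger lam penalizes
# outcomes worse than the mean.
losses = [(0.5, 2.0), (0.3, 8.0), (0.2, 20.0)]   # (probability, loss)

def mean_risk(losses, lam):
    mean = sum(p * l for p, l in losses)
    semidev = sum(p * max(l - mean, 0.0) for p, l in losses)
    return mean + lam * semidev

print(mean_risk(losses, 0.0), mean_risk(losses, 1.0))
```

Because the semideviation term is a convex piecewise-linear function of the losses, it preserves the decomposable structure that the paper's cutting plane algorithms exploit, unlike the variance.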
Article
Full-text available
Routing problems appear frequently when dealing with the operation of communication or transportation networks. Among them, the message routing problem plays a determinant role in the optimization of network performance. Much of the motivation for this work comes from this problem which is shown to belong to the class of nonlinear convex multicommodity flow problems. This paper emphasizes the message routing problem in data networks, but it includes a broader literature overview of convex multicommodity flow problems. We present and discuss the main solution techniques proposed for solving this class of large-scale convex optimization problems. We conduct some numerical experiments on the message routing problem with four different techniques.
Article
The purpose of this review article is to provide a summary of the scientific literature on stochastic vehicle routing problems. The main problems are described within a broad classification scheme and the most important contributions are summarized in table form.
Article
The Network Design Problem (NDP) has long been recognized to be one of the most difficult and challenging problems in transport. In the past two decades, we have witnessed the development of a vast, growing body of research focused on formulations and solution procedures for the NDPs, which deal with the selection of either link improvements or link additions to an existing road network, with given demand from each origin to each destination. The objective is to make an optimal investment decision in order to minimize the total travel cost in the network, while accounting for the route choice behaviour of network users. In this paper, we present a general survey of existing literature in this area, and present some new developments in model formulations. We incorporate the elasticity of travel demand into the NDP and seek the economic‐based objective function for optimization. We also pose the mixed network design problem involving simultaneous choice of link addition and capacity improvement which is considered more sensible for road networks. In addition, we introduce the network reserve capacity concept for a capacity improvement plan, and raise and clarify some interesting issues relating to NDP and Braess's paradoxes. Finally, from the survey and the new proposal made herein, we offer some perspectives on future research.
Article
After the 1989 Loma Prieta earthquake, California's Department of Transportation (Caltrans) began an ambitious program to screen, prioritize, and retrofit 2,000 seismically deficient California bridges. It was unfortunate that before this program was completed the Northridge, Calif., earthquake occurred. This paper summarizes how bridges with a post-1989 seismic retrofit performed during the 1994 Northridge earthquake. A description of typical retrofit details is given. A table provides data on all retrofitted bridges where ground motion exceeded 0.25g (where g = acceleration due to gravity) during the earthquake. Case studies of two retrofitted bridges that experienced ground motion of 0.50g are presented. There was no serious damage to any of these retrofitted bridges, although some were within 100 m of collapsed structures. Caltrans retrofits are designed to prevent collapse during the maximum credible earthquake. Bridge retrofit performance during the Northridge earthquake provides compelling evidence that these retrofits can achieve that goal.
Article
To each stochastic program corresponds an equivalent deterministic program. The purpose of this paper is to compile and extend the known properties for the equivalent deterministic program of a stochastic program with fixed recourse. After a brief discussion of the place of stochastic programming in the realm of stochastic optimization, the definition of the problem at hand, and the derivation of the deterministic equivalent problem, the question of feasibility is treated in Section 4, with a description of algorithmic procedures for finding feasible points in Section 5 and a characterization of a special but important class of problems in Section 6. Section 7 deals with the properties of the objective function of the deterministic equivalent problem, in particular with continuity, differentiability and convexity. Finally, in Section 8, we view the equivalent deterministic program in terms of its stability, dualizability and solvability properties.
Article
This paper gives an algorithm for L-shaped linear programs which arise naturally in optimal control problems with state constraints and stochastic linear programs (which can be represented in this form with an infinite number of linear constraints). The first section describes a cutting hyperplane algorithm which is shown to be equivalent to a partial decomposition algorithm of the dual program. The two last sections are devoted to applications of the cutting hyperplane algorithm to a linear optimal control problem and stochastic programming problems.
Article
This paper is an attempt to describe and characterize the equivalent convex program of a two-stage linear program under uncertainty. The study has been divided into two parts. In the first one, we examine the properties of the solution set of the problem and derive explicit expressions for some particular cases. The second section is devoted to the derivation of the objective function of the equivalent convex program. We show that it is convex and continuous. We also give a necessary condition for its differentiability and establish necessary and sufficient conditions for the solvability of the problem. Finally, we give the equivalent convex program of certain classes of programming under uncertainty problems, i.e., when the constraints and the probability space have particular structures.
Article
Numerous transportation applications as diverse as capital investment decision-making, vehicle fleet planning, and traffic light signal setting all involve some form of (discrete choice) network design. The authors review some of the uses and limitations of integer programming-based approaches to network design, and describe several discrete and continuous choice models and algorithms. The objectives are threefold - to provide a unifying view for synthesizing many network design models, to propose a unifying framework for deriving many network design algorithms, and to summarize computational experience in solving design problems. The authors also show that many of the most celebrated combinatorial problems that arise in transportation planning are specializations and variations of a generic design model.
Article
This paper addresses the problem of determining which links should be improved in an urban road network so that total congestion in the city is minimized. A nonlinear mixed integer programming model is developed, and strategies for a branch-and-bound algorithm are presented. Particular attention is paid to the computational aspects of large-scale problems, and numerical results are reported.
Article
In the “standard” formulation of a stochastic program with recourse, the distribution of the random parameters is independent of the decisions. When this is not the case, the problem is significantly more difficult to solve. This paper identifies a class of problems that are “manageable” and proposes an algorithmic procedure for solving problems of this type. We give bounds and algorithms for the case where the distributions and the variables controlling information discovery are discrete. Computational experience is reported.
Article
We consider a class of stochastic mathematical programs with equilibrium constraints (SMPECs), in which all decisions are required to be made here-and-now, before a random event is observed. We show that this kind of SMPEC plays a very important role in practice. In order to develop effective algorithms, we first give some reformulations of the SMPEC and then, based on these reformulations, we propose a smoothed penalty approach for solving the problem. A comprehensive convergence theory is also included.
Article
In this paper, we discuss here-and-now type stochastic programs with equilibrium constraints. We give a general formulation of such problems and study their basic properties, such as measurability and continuity of the corresponding integrand functions. We also discuss the consistency and rate of convergence of sample average approximations of such stochastic problems.
Chapter
This chapter originally appeared in Management Science, April–July 1955, Vol. 1, Nos. 3 and 4, pp. 197–206, published by The Institute of Management Sciences. This article was also reprinted in a special issue of Management Science, edited by Wallace Hopp, featuring the “Ten Most Influential Papers of Management Science's First Fifty Years,” Vol. 50, No. 12, Dec. 2004, pp. 1764–1769. For this special issue George B. Dantzig provided the following commentary: “I am very pleased that my first paper on planning under uncertainty is being republished after all these years. It is a fundamental paper in a growing field.”
Article
We generalize stochastic mathematical programs with equilibrium constraints (SMPEC) introduced by Patriksson and Wynter (Ref. 1) to allow for the inclusion of joint upper-level constraints and to change the continuity assumptions with respect to the uncertainty parameters assumed before by measurability assumptions. For this problem, we prove the existence of solutions. We discuss also algorithmic aspects of the problem, in particular the construction of an inexact penalty function for the SMPEC problem. We apply the theory to the problem of structural topology optimization.