Figure 44
Binomial nomograph for determining the sample size, n, and the permitted number of defectives, c, for a contractor's risk α and an owner's risk β (Montgomery 1991). To design a sampling plan with the nomograph: (1) draw a line connecting α on the right-hand scale with the corresponding p1 on the left-hand scale, (2) draw a similar line connecting (1 − β) and p2, and (3) read off the required sample size, n, and the maximum number of defectives, c, permitted within the sample for acceptance at the point of intersection of the two lines.
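The nomograph's graphical solution can be cross-checked numerically. A minimal sketch (plain Python standard library; the function names are ours, not from the source) that searches for the smallest plan (n, c) meeting both risk constraints:

```python
from math import comb

def binom_cdf(c, n, p):
    """P(X <= c) for X ~ Binomial(n, p): probability of accepting a lot
    with defective fraction p when at most c defectives are allowed."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(c + 1))

def design_sampling_plan(p1, p2, alpha, beta, n_max=500):
    """Smallest (n, c) such that quality p1 is accepted with probability
    >= 1 - alpha (contractor's risk) and quality p2 is accepted with
    probability <= beta (owner's risk)."""
    for n in range(1, n_max + 1):
        for c in range(n + 1):
            if binom_cdf(c, n, p2) > beta:
                break  # larger c only raises the owner's risk further
            if binom_cdf(c, n, p1) >= 1 - alpha:
                return n, c
    return None  # no plan exists within n_max samples

n, c = design_sampling_plan(p1=0.01, p2=0.06, alpha=0.05, beta=0.10)
```

Unlike the nomograph reading, the search enforces both risk constraints exactly rather than graphically.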

Source publication
Book
Load and Resistance Factor Design (LRFD) for Deep Foundations http://140.112.12.21/issmge/missing/NCHRP507.pdf http://www.nap.edu/openbook.php?record_id=13758&page=R3

Citations

... Although the COV_R;c was assumed to be 0.35 for soils whose characteristics are well known, it should be larger for unfamiliar rocks with high spatial variability. Figure 15 illustrates the boundary lines if the COV_R;c increases to 50% with reference to [65]. The boundary lines move to the upper right, contrary to those in Figure 14, and the applicability of the proposed method increases significantly. ...
Article
The use of data in the construction industry is growing rapidly. However, projects without multiple stages, such as pile foundation and cantilever wall construction, are difficult to reinforce on the basis of observational data, so the design–build construction process is not readily optimized by piling data and active learning. In this paper, a new data-driven framework is proposed that can be used even for construction under single-stage conditions. The proposed method adopts a lower safety factor (SF) in the preliminary design than conventional methods do, and checks the performance after construction using piling data. Countermeasures are taken to satisfy the target reliability, if necessary. Focusing on the expected total cost, the parametric studies reveal that the proposed method can reduce the expected total cost under specific conditions, such as lower countermeasure cost, higher failure cost, and higher relative costs of safety measures. Furthermore, the method is robust: even with low initial safety factors, the expected total cost does not become excessively large compared with the conventional methods. The findings highlight the potential benefits of piling data for optimizing construction projects under single-stage conditions.
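The expected-total-cost reasoning in this abstract can be illustrated with a toy comparison. All numbers below are hypothetical placeholders, not values from the paper: a lower initial safety factor cuts construction cost but may trigger a countermeasure, and the preferred strategy is the one with the lower expected total.

```python
def expected_total_cost(c_construction, p_failure, c_failure,
                        p_countermeasure=0.0, c_countermeasure=0.0):
    """E[C] = initial cost + expected countermeasure cost + expected failure cost."""
    return (c_construction
            + p_countermeasure * c_countermeasure
            + p_failure * c_failure)

# Hypothetical numbers: conventional design (high SF) vs. a data-driven
# design (lower SF, countermeasure triggered by piling-data checks).
conventional = expected_total_cost(c_construction=100.0,
                                   p_failure=1e-4, c_failure=10_000.0)
data_driven = expected_total_cost(c_construction=80.0,
                                  p_failure=1e-4, c_failure=10_000.0,
                                  p_countermeasure=0.15, c_countermeasure=30.0)
```

Under these assumed inputs the data-driven strategy has the lower expected total, matching the qualitative conclusion of the abstract (lower countermeasure cost, higher failure cost favor the proposed method).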
... For example, P(C > c) is the probability that the cost, C, exceeds a threshold c, and P(Dr > d) is the probability that the drift, Dr, exceeds a threshold d. These probabilistic estimates allow us to generalize the deterministic "triangle" approach in Figs. 2 and 3 by using performance-based engineering design (PBED) (Paikowsky, 2004). PBED primarily considers performance at the overall system level in terms of risk of collapse, fatalities, repair costs, and post-disaster loss of function, and is thus preferred over alternatives such as load-and-resistance-factor design (LRFD) for the analysis of new and existing structures for disaster adequacy (Porter, 2003), particularly for earthquakes. ...
Article
The resilience of communities has emerged as a major goal in policy and practice. Cities, states, and counties within the United States and around the world are passing laws requiring the incorporation of climate-related hazard vulnerability assessments within their master plan updates for resilience planning and design. The resilience of communities under present and future scenarios is thus becoming a cornerstone of decision making and actions. Decisions that would enhance resilience, however, span multiple sectors and involve various stakeholders. Quantifying community resilience is a key step in describing the preparedness level of communities, and subsequently locating non-resilient areas to further enhance their capacity to endure disasters. Two main approaches are currently being pursued to evaluate resilience. The first approach is "community resilience," developed mainly by social scientists and planners; it captures social resilience using numerous pre-disaster attributes to describe the functioning of a community. This approach assumes that pre-disaster attributes can predict a community's resilience to a disaster. The second approach is adopted for infrastructure resilience, mostly used by engineers, and it focuses on robustness, redundancy, resourcefulness, and rapidity. This approach is appropriate for systems that are operated by highly skilled personnel and where the actions are of an engineering type. In this paper, we provide an overview of the two approaches, and we leverage their limitations to propose a hybrid approach that combines community and infrastructure capitals into an Area Resilience metric, called ARez. ARez captures the role and impact of both infrastructure and community and combines five sectors: energy, public health, natural ecosystem, socio-economic, and transportation. We present a proof of concept for the ARez metric, showing its practicality and applicability as a direct measure of resilience over various time scales.
... The random variables involved in the limit state function for this study are pile resistance (R), dead load (DL), and live load (LL). The statistics of the bias corresponding to DL and LL were adopted from literature (Paikowsky et al. 2004) while the statistics of resistance bias were calculated using the pile data (Table 1). ...
... A global Lognormal distribution is the most appropriate distribution as it maintains conservatism in the lower tail regions of the Cumulative Distribution Functions (CDFs) (Motamed et al. 2016). Furthermore, past studies on pile foundations showed that the Lognormal distribution was the best-fitting distribution for resistance biases (Paikowsky et al. 2004; Abdelsalam 2010; Motamed et al. 2016). The Shapiro–Wilk (SW) (1965) test and the Anderson–Darling (AD) (1952) test were conducted to assess the Lognormal distribution assumption. ...
... They are also useful for gaining insights on skewness, tail behavior, multi-modal behavior, and outliers (Ricci 2005). After confirming the Lognormal distribution of the resistance biases, resistance factors were calculated using the First Order Reliability Method (FORM) and Monte Carlo Simulation (MCS) for target reliability indices (β_T) of 2.33 and 3.00 to account for redundant and non-redundant pile groups, respectively (Paikowsky et al. 2004). These biases and resistance factors are presented in Table 4. ...
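Calibrations like the one described above are often summarized by the closed-form first-order second-moment (FOSM) approximation for lognormal resistance and load biases, the form popularized by the NCHRP calibration reports. The sketch below implements that textbook formula with illustrative AASHTO-style inputs (load factors, bias statistics, and the dead-to-live ratio are assumptions for the example, not the statistics of the cited study):

```python
from math import sqrt, log, exp

def fosm_resistance_factor(lam_R, cov_R, beta_T,
                           gamma_D=1.25, gamma_L=1.75, qd_ql=2.0,
                           lam_D=1.05, cov_D=0.1, lam_L=1.15, cov_L=0.2):
    """Closed-form FOSM resistance factor for lognormal resistance bias
    (mean lam_R, COV cov_R) and lognormal dead/live load biases."""
    cov_Q2 = cov_D**2 + cov_L**2  # combined load COV (squared)
    num = lam_R * (gamma_D * qd_ql + gamma_L) \
        * sqrt((1 + cov_Q2) / (1 + cov_R**2))
    den = (lam_D * qd_ql + lam_L) \
        * exp(beta_T * sqrt(log((1 + cov_R**2) * (1 + cov_Q2))))
    return num / den

# e.g. a dynamic-analysis method with bias mean 1.0 and COV 0.4,
# calibrated for redundant pile groups (beta_T = 2.33)
phi = fosm_resistance_factor(lam_R=1.0, cov_R=0.4, beta_T=2.33)
```

As expected, a larger resistance COV or a higher target reliability index drives the resistance factor down.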
Article
Existing static analysis methods and their resistance factors recommended in the AASHTO LRFD Bridge Design Specifications (2020) were developed for pile resistance predictions in soils. The performance of the existing α- and β-methods for the LRFD design of steel H-piles in rock-based Intermediate GeoMaterials (IGMs) was evaluated in terms of both resistance and efficiency factors. Compared with the AASHTO (2020) recommendations, the lower resistance factors calibrated for IGM-rocks confirmed that the prescribed reliability level cannot be attained using the existing α- and β-methods. Furthermore, lower efficiency factors were determined for the α- and β-methods for shaft resistance and end bearing predictions. To improve the efficiency of pile designs in rock-based IGMs, calibrated α- and β-methods consisting of new equations for design coefficients were developed from an electronic database (WyoPile) of pile load test data in IGM-rocks of Wyoming. The proposed LRFD procedure consists of probability-based resistance factors determined using the First Order Reliability Method and Monte Carlo Simulation for target reliability indices of 2.33 and 3.00. Higher resistance and efficiency factors were achieved for the calibrated α- and β-methods. The uncertainties of the calibrated resistance factors, described by the mean, standard deviation, and 95% confidence interval, are also presented to address the effect of the sample sizes used in this study.
... Load and resistance factor design (LRFD), as a rational probabilistic approach based on reliability to manage uncertainties in the building process, is widely employed in structural engineering [1–7]. In the past three decades, several methods for determining load and resistance factors have been developed, of which the first order reliability method (FORM) [8] has frequently been used to account for stochastic fracture problems [9–11]. ...
... The contribution of this paper is mainly reflected in the following points: (1) the calculation processes of three existing methods are summarized. (2) A new model is proposed to avoid the mathematical limitation and to overcome the shortcoming of iterative computation. (3) The feasibility of calculating target mean resistance without iteration of the proposed method is investigated. ...
... Moreover, the existing method cannot be universally deployed due to the limitation of Equation (11). Although the reliability computed with the proposed method might be slightly smaller than β_T in some cases, it has two intrinsic advantages: (1) no mathematical limitation; (2) no iterative calculation. In particular, higher efficiency can be achieved when employing the proposed method for system reliability, and the error is also acceptable. ...
Article
Load and resistance factor design (LRFD) is widely used in building codes for reliability design. In the calculation of load and resistance factors, the third-moment method (3M) has been proposed to overcome the shortcomings (e.g., inevitable iterative computation, requirement of the probability density functions (PDFs) of random variables) of other methods. With the existing 3M method, the iteration is reduced to a single computation, and the PDFs of the random variables are not required. In this paper, the computation of load and resistance factors is further simplified to require no iterations. Furthermore, the accuracy of the proposed method is shown to be higher than that of the existing 3M methods. Additionally, with the proposed method, the limitations on the applicable range of the existing 3M methods are avoided. Through several examples, the existing 3M method, the ASCE method, the Mori method, and the proposed method are compared. The results show that the proposed method is accurate, simple, safe, and saves material.
... Many design methods for pile foundations do not provide provisions for pile set-up, an increase in the pile axial capacity with time after driving, due to a lack of comprehensive load test programs to study this phenomenon. However, with the emergence of Pile Driving Analyzer (PDA) tests, the static pile capacity and resistance distribution at various set-up times can be obtained with reasonable accuracy (Paikowsky 2004). PDA testing is a non-destructive test approach to estimate the pile capacity immediately after pile installation, which is called End of Initial Drive (EOID), and several days after the pile is installed, which is called Beginning of Restrike (BOR). ...
... PDA testing is a non-destructive test approach to estimate the pile capacity immediately after pile installation, which is called End of Initial Drive (EOID), and several days after the pile is installed, which is called Beginning of Restrike (BOR). These tests are conducted by applying a large mass onto the top of a pile while measuring the applied force and velocity by use of accelerometers and strain transducers attached to the pile (Paikowsky 2004). The pile resistances and load-displacement responses are then determined by matching simulated stress waves to the measured stress waves (Paikowsky 2004). ...
... These tests are conducted by applying a large mass onto the top of a pile while measuring the applied force and velocity by use of accelerometers and strain transducers attached to the pile (Paikowsky 2004). The pile resistances and load-displacement responses are then determined by matching simulated stress waves to the measured stress waves (Paikowsky 2004). Several researchers (Haque and Abu-Farsakh 2018;Ng 2011;Yang and Liang 2009) have studied full-scale pile tests and recognized potential savings by incorporating set-up into the designs. ...
Conference Paper
This paper presents a reliability-based design (RBD) for the serviceability limit state (SLS) of driven steel piles that incorporates pile set-up, which is the time-dependent increase in pile capacity. A database of Pile Driving Analyzer (PDA) tests was used to statistically characterize the uncertainties by a design method based on the cone penetration test (CPT). A hyperbolic tangent model was adopted to represent the PDA-derived load-displacement response of a pile, and the load component of the model was normalized with the capacity obtained from the PDA test. A new CPT-based design method was developed by correlating the pile resistances to the soil conditions. The capacity bias, a ratio of the measured to predicted capacity, was obtained from each pile test and characterizes the model uncertainty of predictions by the design method. Based on the PDA tests, probability distributions were fitted for the capacity biases and hyperbolic-tangent-model parameter. Lastly, Monte Carlo simulations were used with the probability distributions to calibrate resistance factors in the RBD. In the end, a series of resistance factors were developed to determine the allowable load at a SLS with consideration of different reliability indices and set-up periods. This research is aimed to manage risks and deliver cost-savings for infrastructure development.
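The Monte Carlo calibration step described here, fitting distributions to the biases and then simulating to find resistance factors, can be sketched generically. The distribution parameters below are illustrative assumptions, not the statistics fitted in the paper:

```python
import math
import random
from statistics import NormalDist

def lognormal_sample(rng, mean, cov):
    """Draw from a lognormal distribution with a given mean and COV."""
    sigma = math.sqrt(math.log(1.0 + cov**2))
    mu = math.log(mean) - 0.5 * sigma**2
    return math.exp(rng.gauss(mu, sigma))

def simulated_beta(phi, lam_R=1.0, cov_R=0.4, n_sim=200_000, seed=1,
                   gamma_D=1.25, gamma_L=1.75, qd_ql=2.0,
                   lam_D=1.05, cov_D=0.1, lam_L=1.15, cov_L=0.2):
    """Reliability index implied by a trial resistance factor phi when the
    nominal design just satisfies phi*R_n = gamma_D*Q_Dn + gamma_L*Q_Ln."""
    rng = random.Random(seed)
    q_dn, q_ln = qd_ql, 1.0                        # nominal loads, ratio 2:1
    r_n = (gamma_D * q_dn + gamma_L * q_ln) / phi  # nominal resistance
    failures = sum(
        r_n * lognormal_sample(rng, lam_R, cov_R)
        < q_dn * lognormal_sample(rng, lam_D, cov_D)
        + q_ln * lognormal_sample(rng, lam_L, cov_L)
        for _ in range(n_sim))
    p_f = max(failures, 1) / n_sim
    return -NormalDist().inv_cdf(p_f)  # beta = -Phi^-1(p_f)

# Sweeping phi until simulated_beta(phi) reaches 2.33 (redundant groups)
# or 3.00 (non-redundant) reproduces the calibration loop.
beta_hat = simulated_beta(0.44)
```

A closed-form FOSM estimate makes a good starting point for the sweep; here the trial value phi = 0.44 should land in the neighborhood of the 2.33 target for these assumed statistics.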
... Accounting for these uncertainties and ensuring a prescribed reliability, reliability theory is used to calibrate LRFD resistance factors. Statistical parameters for many approaches are summarized in Table 2.4 (Paikowsky 2004). Since the bridge spans were less than 118 ft, a dead-to-live load ratio of 2 was taken for the LRFD calibration (Ng and Sritharan 2016). ...
... Resistance bias was defined as the ratio of measured pile resistance in CAPWAP to predicted pile resistance in WEAP. Other required statistical parameters were taken from Paikowsky (2004) as shown in Table 3.7. Since the bridge spans were less than 118 ft, a dead to live load ratio of two was taken for LRFD calibration (Ng and Sritharan 2016). ...
... RBD methods can be broadly classified into two categories: simplified RBD methods and full RBD methods. Simplified RBD methods include the load and resistance factor design (LRFD) methods [7,10–12] and multiple resistance factor design (MRFD) methods [13,14]. These methods adopt partial factors or resistance factors calibrated by reliability analyses. ...
Article
The quantile-based first-order second-moment method is a novel reliability analysis method proposed by the authors whose simplicity and efficiency approach those of the first-order second-moment method, and whose accuracy and robustness approach those of the first-order reliability method. However, it is inefficient if applied directly to reliability-based design. The novelty of the current short communication is to re-formulate the quantile-based first-order second-moment method into inverse-reliability methods to enhance computational efficiency. Two inverse-reliability methods are proposed based on this re-formulation. One is more practical, whereas the other is often more efficient. The effectiveness of the proposed methods is illustrated by a friction pile design example and a spread footing design example for the National Geotechnical Experimentation Sites at Texas A&M University.
... Additionally, recent studies suggest a change in the intensity and frequency of hurricanes in a warming climate [9,10,11,12]. To characterize extreme wind and wave loads on offshore wind turbines (e.g., substructure and foundation), wind engineering standards such as IEC 61400-3 and DNV-OS-J101 [13] implement the load and resistance factor design method (LRFD; [14]). A wind and/or wave field with a specific return period is usually used to evaluate a loading combination (e.g., a 50-year return period according to the IEC Standard [15]). ...
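Return-period loads of this kind are typically obtained from an extreme-value fit to annual maxima. A minimal sketch, assuming a Gumbel distribution fitted by the method of moments (a common simplification, not the COAWST-based procedure of the cited paper; the data are hypothetical):

```python
import math
from statistics import mean, stdev

def gumbel_return_level(annual_maxima, return_period_years):
    """Method-of-moments Gumbel fit to annual maxima, then the level
    exceeded on average once every return_period_years."""
    m, s = mean(annual_maxima), stdev(annual_maxima)
    beta = s * math.sqrt(6) / math.pi       # scale parameter
    mu = m - 0.5772156649 * beta            # location (Euler-Mascheroni const.)
    p = 1.0 - 1.0 / return_period_years     # annual non-exceedance probability
    return mu - beta * math.log(-math.log(p))

# Hypothetical annual-maximum wind speeds (m/s) at a site
speeds = [28.1, 31.4, 25.9, 35.2, 29.8, 27.3, 33.0, 30.5, 26.7, 32.1]
v50 = gumbel_return_level(speeds, 50)
```

The 50-year level necessarily extrapolates beyond the observed maxima, which is why the abstract cautions against naive measure-correlate-predict shortcuts for extreme conditions.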
Article
The development of offshore wind projects off the US northeast coast requires a comprehensive assessment of extreme loads generated by hurricanes. In this study, we demonstrated that using simplified methods based on observed data at nearby stations (e.g., measure-correlate-predict algorithms) to assess wind and wave loads during extreme conditions may lead to significant errors. We used an advanced ocean modeling system (COAWST: Coupled Ocean Atmosphere Wave Sediment Transport) to assess environmental loading at the proposed wind farm sites offshore of Rhode Island and Massachusetts. After validation of the COAWST model using historical hurricanes (e.g., Hurricane Sandy in 2012), a number of synthetic tropical storms that represented wind with various probabilities (e.g., a 50-year return period) were simulated. The spatial and temporal variability of the wind and wave loads within the proposed sites was assessed. Nearly 40% variability in wave loads was found across the proposed sites. The results indicated the advantage of more advanced modeling systems for extreme load characterization, in which wind and wave fields in offshore wind farms can be resolved.
... As a result, a reliability index β = 3.04 was obtained, for which the probability of failure is P_f = 0.0012. The result is at the lower limit of the target β values typically sought for deep foundations, which are 2.33 and 3.00 for redundant and non-redundant systems, respectively (Paikowsky et al. 2004). ...
Article
Aquaculture in Maine is an important industry, with growth expected in the coming years to provide food in an ecologically and environmentally sustainable way. To accommodate such growth, farmers need more reliable engineering solutions, such as improved anchoring systems. Current anchoring methods include deadweights (concrete blocks) or drag embedment anchors, which are of relatively simple construction and installation. However, to accommodate larger loads, farmers have used larger sizes of the current anchors, raising safety issues and costs during installation and decommissioning. Helical anchors are a foundation type extensively used onshore with the potential to meet the demands of aquaculture growth, though research on their lateral and inclined capacity is needed first. This study addresses that topic by performing 3D finite element simulations of helical anchors and studying their reliability for offshore aquaculture farming. The results indicate that helical anchor capacity can be related to either pure vertical or pure horizontal resistance, depending on the load inclination angle. A reliability evaluation of helical anchors under the inclined loading demand from an oyster aquaculture farm, using the Hasofer–Lind method, indicated that these anchors are feasible for operational aquaculture loads.
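For a linear limit state g = R − Q with independent normal variables, the Hasofer–Lind index used in this abstract reduces to a closed form (nonlinear states require the iterative HL-RF algorithm). A minimal sketch with hypothetical anchor-capacity and load statistics, not values from the study:

```python
import math

def hasofer_lind_linear(mu_R, sigma_R, mu_Q, sigma_Q):
    """Hasofer-Lind reliability index for g = R - Q with independent
    normal capacity R and load Q: the distance from the mean point to
    the failure surface in standard-normal space."""
    return (mu_R - mu_Q) / math.sqrt(sigma_R**2 + sigma_Q**2)

# Hypothetical values: anchor capacity mean 120 kN (COV 0.25),
# operational mooring load mean 45 kN (COV 0.20)
beta = hasofer_lind_linear(120.0, 0.25 * 120.0, 45.0, 0.20 * 45.0)
# beta comes out near 2.4, close to the 2.33 target for redundant systems
```

Because β is invariant to the formulation of the limit state, this index is preferred over the simple mean-value safety-factor check.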
... where ϕ is the resistance factor, γ_D and γ_L are the dead and live load factors, and Q_D and Q_L are the dead and live loads. The LRFD method, which is still developing and expanding in different parts of the world, has been used to develop several codes, such as Eurocode 7 in Europe (BS EN 1997-1 2004); codes for highway structure foundations (Barker and Duncan 1992; Nowak 1999; Paikowsky et al. 2004) and transmission line structure foundations (Phoon, Kulhawy, and Grigoriu 1995, 2003) in the United States; Limit State Design (LSD) codes (Becker 1996; Green and Becker 2001; Phoon, Kulhawy, and Grigoriu 2003), the National Building Code of Canada (2010), and the Canadian Highway Bridge Design Code (2006) in Canada; the Building Code of Australia (AS 2159, 1995); and the Geo-Code 21 in Japan (Honjo and Kusakabe 2002; Honjo and Nagao 2007). ...
... Barker and Duncan (1992), in the NCHRP 343 report, proposed a new parameter to capture the effect of the construction control method. Similarly, in NCHRP 507, Paikowsky et al. (2004) reported this effect using the following equation: ...
... Controlling the construction of a pile, or quality control (QC), involves several tests at the site; the first step in this process is to check and verify the pile design capacity (Paikowsky et al. 2004). Design validation is typically done using a static load test (SLT), dynamic analysis (e.g., CAPWAP or PDA), or dynamic formulas (e.g., the Gates or FHWA-modified Gates formula). ...
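The dynamic formulas mentioned here estimate capacity directly from the driving record. A sketch of the FHWA-modified Gates formula in its commonly quoted US-unit form; treat the coefficients as an assumption to verify against the governing specification before any real use:

```python
import math

def fhwa_gates_capacity(hammer_energy_ftlb, blows_per_inch):
    """FHWA-modified Gates formula (US units, commonly quoted form):
    ultimate capacity in kips from the developed hammer energy (ft-lb)
    and the final blow count (blows per inch)."""
    return (1.75 * math.sqrt(hammer_energy_ftlb)
            * math.log10(10 * blows_per_inch) - 100)

# Hypothetical driving record: 40,000 ft-lb hammer, 10 blows per inch
r_u = fhwa_gates_capacity(40_000, 10)  # 600 kips under these assumed inputs
```

Such formulas are cheap enough to apply to every pile on a site, which is why their (lower) resistance factors still matter for construction control.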
Article
In recent years, the precise design of pile foundations has gained considerable attention due to its important role in the cost management of civil projects. While piles are mostly designed using static analysis methods, their resistance during the construction process is controlled by other methods. This inconsistency increases the cost of, and delays in, the construction plan. To address these issues, new approaches for a more reliable and economically efficient design of piles incorporate the effects of construction control techniques in the initial design. In this paper, we use the results of the dynamic load test (DLT), the static load test (SLT), and the dynamic formulas to improve the resistance factors of the LRFD method for the initial design. The results show that the DLT at BOR and the SLT have the greatest effect in increasing the resistance factors of the static analysis methods.