Table 3 - uploaded by Upmanu Lall
Statistics From the Historical Precipitation Record, Silver Lake, Utah, 1948-1992

Source publication
Article
Full-text available
A nonparametric wet/dry spell model is developed for resampling daily precipitation at a site. The model considers alternating sequences of wet and dry days in a given season of the year. All marginal, joint, and conditional probability densities of interest (e.g., dry spell length, wet spell length, precipitation amount, and wet spell length given...
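The alternating wet/dry spell resampling idea in this abstract can be sketched with a kernel-smoothed bootstrap. This is a toy illustration, not the paper's fitted estimator: the spell lengths, precipitation amounts, and bandwidths below are invented for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

def kde_resample(sample, size, bandwidth):
    """Smoothed resampling: draw from the data, then jitter with a Gaussian kernel."""
    draws = rng.choice(sample, size=size, replace=True)
    return draws + bandwidth * rng.standard_normal(size)

# Hypothetical historical spell lengths (days) and wet-day amounts (mm).
dry_spells = np.array([2, 3, 5, 1, 4, 7, 2, 3], dtype=float)
wet_spells = np.array([1, 2, 1, 3, 2, 1, 4, 2], dtype=float)
amounts = np.array([1.2, 5.0, 0.4, 8.3, 2.1, 3.3, 0.9, 6.7])

# Generate one synthetic season as alternating dry and wet spells.
season = []
for d, w in zip(kde_resample(dry_spells, 5, 0.5), kde_resample(wet_spells, 5, 0.5)):
    season += [0.0] * max(1, int(round(d)))            # dry days carry zero rain
    n_wet = max(1, int(round(w)))
    season += list(np.abs(kde_resample(amounts, n_wet, 0.3)))  # wet-day amounts
```

Because the kernel jitter can produce non-integer or negative draws, the sketch rounds spell lengths and reflects amounts to stay non-negative; the paper's actual boundary treatment may differ.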

Context in source publication

Context 1
... of the precipitation comes in the form of winter snow and season 4 rainfall. We see from Table 3 that season 4 (fall) has the highest mean wet day precipitation and maximum wet day precipitation, while season 1 (winter) has the highest percentage of yearly precipitation. Season 1 (winter) has the highest average wet spell length and the longest wet spell length. ...

Similar publications

Article
Full-text available
The thermodynamic conditions of the University of Utah's TRIGA Reactor were simulated using SolidWorks Flow Simulation, Ansys, Fluent and PARET-ANL. The models are developed for the reactor's current maximum operating power of 90 kW, and for a few higher power levels, to analyze the thermohydraulic and heat transfer aspects in determining a design basis...
Article
Full-text available
There is strong evidence of a generalized tendency toward earlier flowering in many tree species, particularly fruit trees. Phenological modelling may be helpful in estimating bloom dates in periods with missing surveys, both in the past and in the future, relying on climatic scenarios; this allows frost risk to be estimated by crossing...
Article
Full-text available
Current practice in modeling network traffic for planning applications is a four-step travel demand forecasting model (trip generation, trip distribution, mode choice, and traffic assignment) that requires travel surveys and specialized technical staff to operate. Although such a modeling approach has been used in practice in major metropolitan pla...

Citations

... Najafi et al. (2011) show that, if proper predictors are chosen, MLR techniques can be an efficient method for downscaling. Non-parametric methods such as k-nearest neighbors (Gangopadhyay et al., 2005), kernel density estimators (Lall et al., 1996), kernel regression (Kannan & Ghosh, 2013), the non-homogeneous Markov model (Mehrotra & Sharma, 2005), and Bayesian model averaging (Zhang & Yan, 2015) have also been widely used for rainfall downscaling. Mannshardt-Shamseldin et al. (2010) used Generalized Extreme Value theory with regression methods to downscale extreme precipitation. ...
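The k-nearest-neighbor resampling mentioned in this excerpt can be sketched as follows. The predictor/predictand arrays are invented for illustration, and the 1/rank weighting is one common choice in the k-NN resampling literature, not necessarily that of the cited studies.

```python
import numpy as np

rng = np.random.default_rng(42)

def knn_resample(predictor_hist, predictand_hist, query, k=3):
    """Pick one of the k nearest historical analogs of `query`,
    weighted by 1/rank so the closest analog is most likely."""
    dist = np.abs(predictor_hist - query)
    nearest = np.argsort(dist)[:k]          # indices of k closest analogs
    weights = 1.0 / np.arange(1, k + 1)
    weights /= weights.sum()
    return predictand_hist[rng.choice(nearest, p=weights)]

coarse = np.array([10., 12., 15., 11., 20., 18., 13.])   # e.g. coarse-scale predictor
local  = np.array([0.5, 2.0, 6.0, 1.0, 9.5, 7.0, 3.0])   # co-observed local rainfall
sample = knn_resample(coarse, local, query=14.0, k=3)
```

The resampled value is always one of the historically observed local amounts, which is what makes k-NN attractive when continuity with the observed record matters.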
Article
Full-text available
Quantifying the risk from extreme weather events in a changing climate is essential for developing effective adaptation and mitigation strategies. Climate models capturing different scenarios are often the starting point for physical risk. However, accurate risk assessment for mitigation and adaptation often demands a level of detail they typically cannot resolve. Here, we develop a dynamic data‐driven downscaling (super‐resolution) method that incorporates physics and statistics in a generative framework to learn the fine‐scale spatial details of rainfall. Our approach transforms coarse‐resolution (0.25°) climate model outputs into high‐resolution (0.01°) rainfall fields while efficaciously quantifying the hazard and its uncertainty. The downscaled rainfall fields closely match observed spatial fields and their distributions. Contrary to conventional thinking, our results suggest that coupling simple statistics and physics to learning improves the efficacy of downscaling midlatitude rainfall extremes from climate models.
... For nonstationary random series with periodic variation, such as monthly, ten-daily, and daily streamflow series, the seasonal autoregressive (SAR(p)) model is more suitable. Lall et al. (1996) proposed a nonparametric decomposition method based on kernel density estimation, which avoids artificial assumptions about the probability distribution. Subsequently, nonparametric models such as the moving block bootstrap method (Srinivas and Srinivasan 2005) and the k-nearest neighbor method (Prairie et al. 2007) have been applied to the stochastic simulation of hydrological series. ...
Article
Full-text available
The streamflow process is a crucial information resource for the joint optimal operation of reservoirs. As the length and representativeness of historical streamflow samples are insufficient for practical projects, streamflow stochastic generation approaches are usually used to expand the streamflow series. For the joint operation and management of a multi-reservoir system, multisite streamflow stochastic generation (MSSG) with high-dimensional temporal-spatial correlation poses a challenge. This paper develops the generative adversarial network as a novel MSSG model. In contrast to the existing literature on MSSG, which solely focuses on specific case studies and provides comparatively one-sided assessments, this paper evaluates multiple characteristics of streamflow at various time scales from three MSSG models in two instances. Specifically, three MSSG models, namely the seasonal autoregression (SAR) model coupled with the master station method, the Copula model coupled with the master station method, and the deep convolutional generative adversarial network (DCGAN) model, are employed to generate monthly, ten-daily, and daily streamflow series of two-reservoir and eight-reservoir systems. This study aims to examine the performance of the three models and provide recommendations for implementing MSSG approaches in practice. Results show that: (1) priority should be given to the maximum iterations of the DCGAN model at large time scales, while at smaller time scales the training of the model is directly linked to the setting of the batch size; (2) the Copula model is better at retaining the statistical characteristics of streamflow series; (3) the SAR model excels in simulating streamflow extremes; and (4) the DCGAN model possesses a significant advantage in capturing temporal-spatial higher-order correlation, especially in systems comprising more than two reservoirs and at small time scales (e.g., daily streamflow).
Furthermore, this study presents comprehensive and multi-scale recommendations for selecting MSSG approaches, thereby providing a theoretical foundation and practical value for MSSG in diverse scenarios.
... Transfer functions and regression are popular because of their simplicity, but they cannot model variability and extreme events very well. Generalized Linear Models (GLM) (Yang et al. 2005), Markov chain models (Hughes et al. 1999), hidden Markov chain models (Bellone et al. 2000), spell length models (Lall et al. 1996), conditional random fields (Raje & Mujumdar 2009), beta regression (Mandal et al. 2016), fuzzy logic-based methodologies (Ghosh & Mujumdar 2006), the Bayesian Joint Probability (BJP) modelling methodology (Robertson & Wang 2009), ANN-based methods (Crane & Hewitson 1998; Mondal & Mujumdar 2012), machine learning models (Kumar et al. 2023), and the Stochastic Space Random Cascade (SSRC) methodology for precipitation downscaling with the help of GCM data (Groppelli et al. 2011) are a few of the documented SD approaches used for climate variable projections. ...
Article
Full-text available
Statistical downscaling of General Circulation Model (GCM) simulations is widely used for assessing future climate change at different spatiotemporal scales. This study proposes a novel Statistical Downscaling (SD) model based on the Convolutional Long Short-Term Memory (ConvLSTM) network. The methodology is applied to obtain future projections of rainfall at 0.25° spatial resolution over the Indian sub-continental region. Traditional multisite downscaling models typically perform downscaling on a single homogeneous rainfall zone, predicting rainfall at only one grid point in a single model run. The proposed model captures spatiotemporal dependencies in multisite local rainfall and predicts rainfall for the entire zone in a single model run. The study proposes a Shared ConvLSTM model, a single end-to-end supervised model for predicting future precipitation over the whole of India. The model captures the regional variability in rainfall better than a region-wise trained model. The projected future rainfall for different climate change scenarios reveals an overall increase in the rainfall mean and spatially non-uniform changes in future rainfall extremes over India. The results highlight the importance of conducting in-depth hydrologic studies for the different river basins of the country for future water availability assessment and water resource policy making.
... In this way, climatological forecasts are generated by drawing samples from the ECDF. 3. The non-parametric KDE is used to characterize the distribution of hydroclimatic variables (Lall et al., 1996; Tijdeman et al., 2020); the kernel-smoothed estimate represents the probability density function (PDF) of the observations. Compared with resampling and the ECDF, the KDE uses Gaussian kernels to formulate a non-parametric PDF of the observations. ...
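The Gaussian-kernel KDE described in this excerpt corresponds to the standard estimator f_hat(x) = (1/(n h)) Σ_i φ((x − x_i)/h), with φ the standard normal density. A minimal sketch (the observations and bandwidth below are illustrative assumptions):

```python
import numpy as np

def gaussian_kde_pdf(x, obs, h):
    """Gaussian kernel density estimate at points x from observations obs,
    bandwidth h: f_hat(x) = (1/(n*h)) * sum_i phi((x - obs_i)/h)."""
    x = np.atleast_1d(x)[:, None]
    u = (x - obs[None, :]) / h
    phi = np.exp(-0.5 * u**2) / np.sqrt(2 * np.pi)  # standard normal pdf
    return phi.mean(axis=1) / h

obs = np.array([1.0, 2.0, 2.5, 3.0, 5.0])           # hypothetical observations
grid = np.linspace(-2, 8, 501)
pdf = gaussian_kde_pdf(grid, obs, h=0.6)
area = pdf.sum() * (grid[1] - grid[0])               # should be close to 1
```

The estimate is a proper density (non-negative, integrating to one), which is why KDE-based climatological forecasts can be sampled directly.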
Article
Full-text available
Ensemble climatological forecasts play a critical part in benchmarking the predictive performance of hydroclimatic forecasts. Accounting for the skewness and censoring characteristics of hydroclimatic variables, ensemble climatological forecasts can be generated by the log, Box‐Cox and log‐sinh transformations; by combinations of the Bernoulli distribution with the Gaussian, Gamma, log‐normal, generalized extreme value, generalized logistic and Pearson type III distributions; and by the non‐parametric resampling, empirical cumulative distribution function and kernel density estimation methods. This paper concentrates on the reliability of these twelve types of ensemble climatological forecasts. Specifically, mathematical formulations are presented and large‐sample tests are devised to verify forecast reliability for the Multi‐Source Weighted‐Ensemble Precipitation version 2 (MSWEP V2) across the globe. Climatological forecasts of monthly precipitation over 18,425 grid cells are generated for 30 years under leave‐one‐year‐out cross validation, leading to 6,633,000 (12×18,425×30) sets of ensemble climatological forecasts. The results point out that the reliability of climatological forecasts varies considerably across the twelve methods, particularly in regions with high hydroclimatic variability. One observation is that climatological forecasts tend to deviate from the distributions of observations when there is inadequate flexibility to fit the precipitation data. Another is that ensemble spreads can be overly wide when sample‐specific noise is overfitted in cross validation. Through the tests on global precipitation, the robustness of the log‐sinh transformation and the Bernoulli‐Gamma distribution is highlighted. Overall, the investigations can serve as guidance on the use of transformations, distributions and non‐parametric methods in generating climatological forecasts.
... In the field of daily rainfall generation, stochastic resampling methods have been successfully applied in many studies because continuity is not required [15-19]. Therefore, it could be assumed that no boundary continuity is required in disaggregated hourly rainfall. ...
Article
Full-text available
As infrastructure and populations are highly condensed in megacities, urban flood management has become a significant issue because of the potentially severe loss of lives and property. In megacities, rainfall from the catchment must be discharged through stormwater pipe networks whose travel time is less than one hour because of the high impervious rate. For a more accurate calculation of runoff from an urban catchment, hourly or even sub-hourly (minute) rainfall data must be applied. However, the available data often fail to meet the hydrologic system requirements. Many studies have been conducted to disaggregate time-series data while preserving the distributional statistics of the observed data. The K-nearest neighbor resampling (KNNR) method is a useful application of the nonparametric disaggregation technique. However, it is not easy to apply to the disaggregation of daily rainfall into hourly data while preserving statistical properties and boundary continuity. Therefore, in this study, three-day rainfall patterns were proposed to improve the reproducibility of statistics. Disaggregated rainfall was resampled only from a group having the same three-day rainfall pattern. To show the applicability of the proposed disaggregation method, probability distributions and L-moment statistics were compared. The proposed KNNR method with three-day rainfall patterns better reproduced the characteristics of rainfall events, such as event duration, inter-event time, and total rainfall amount. For calculating runoff from an urban catchment, the rainfall event is more important than the hourly rainfall depth itself. Therefore, the proposed stochastic disaggregation method is useful for hydrologic analysis, particularly rainfall disaggregation.
... Synthetic precipitation time series can be used in forecasting hydrological variables, particularly in producing likely scenarios that preserve the interchange of dry and wet frequencies. Statistical tools such as stochastic processes and resampling methods based on estimating a kernel density for the data of interest are often used in hydrological fields (Lall et al. 1996; Wang et al. 2005). These statistical tools are generally involved in forecasting hydrometeorological data. ...
Article
Full-text available
Providing useful inflow forecasts for the Manantali dam is critical for zonal consumption and agricultural water supply, power production, and flood and drought control and management (Shin et al., Meteorol Appl 27:e1827, 2019). Probabilistic approaches through ensemble forecasting systems are often used to provide more rational and useful hydrological information. This paper aims at implementing an ensemble forecasting system for the Senegal River upstream of the Manantali dam. A rainfall ensemble is obtained through harmonic analysis and an ARIMA stochastic process. Cyclical errors within the rainfall's cyclical behavior from the stochastic modeling are settled and processed using multivariate statistical tools to dress a rainfall ensemble forecast. The rainfall ensemble is used as input to the HBV-light model to produce streamflow ensemble forecasts. A set of 61 forecasted rainfall time series is then used to run the already-calibrated hydrological model to produce hydrological ensemble forecasts, called the raw ensemble. In addition, the affine kernel dressing method is applied to the raw ensemble to obtain another ensemble. Both ensembles are evaluated using, on the one hand, deterministic verifications such as the linear correlation, the mean error, the mean absolute error, and the root-mean-squared error, and, on the other hand, probabilistic scores (Brier score, rank probability score, and continuous rank probability score) and diagrams (attribute diagram and relative operating characteristics curve). Results are satisfactory at both the deterministic and probabilistic scales, particularly considering the reliability, resolution, and skill of the systems. For both ensembles, the correlation between the ensemble-member averages and the corresponding observations is about 0.871. In addition, the dressing method globally improved the performance of the ensemble forecasting system. Thus, both systems can help decision makers at the Manantali dam in water resources management.
... More especially, in the case of multimodal or skewed distributions, parametric functions might be incompatible and account for inconsistencies in the estimated quantiles. Therefore, over the last few decades, demonstrations such as Schwartz (1967), Duin (1976), Singh (1977), Bowman (1984), Silverman (1986), Scott (1992), Lall et al. (1993), Lall (1995), Wand and Jones (1995), Jones and Foster (1996), Lall et al. (1996), Adamowski (1996, 2000), Bowman and Azzalini (1997), Efromovich (1999), Duong and Hazelton (2003), Kim et al. (2003, 2006), Ghosh and Mujumdar (2007), and Santhosh and Srinivas (2013) have pointed out the flexibility of the non-parametric probability concept in the light of kernel density estimation (KDE). The kernel estimator is recognized as a stable data-smoothing procedure in the field of hydrologic and flood frequency analysis, and it yields a bona fide density. ...
Article
Basin-scale hydrologic and hydraulic water-related queries often demand an accurate estimation of flood exceedance probabilities or return periods for assessing hydrologic risk. Research on advancements in flood probability modelling has contributed to reducing the flood risk, property damage, and loss of human life associated with the occurrence of flood events. The high degree of uncertainty and complex flood dependence structure do not facilitate accurate prediction through deterministic approaches, which often demands a probability distribution framework. The unreliability of univariate frequency analysis under a parametric or non-parametric framework can contribute to underestimation or overestimation of flood risk. A multivariate distribution framework facilitates a comprehensive understanding of flood structure for the various possible occurrence combinations among the flood-related random vectors (i.e., flood peak flow, volume, and duration). In this literature, the copula function is recognized as a highly flexible tool for establishing multivariate joint dependency and the associated return periods in comparison with traditional multivariate functions. The incorporation of vine or pair-copula constructions (PCC) has further advanced higher-dimensional copula construction, in terms of the precision of the estimated quantiles, under the minimum information concept. This review explores the efficacy of copula-based methodology for tackling multivariate design problems and can be used as a guideline for water practitioners and hydrologists.
... As copula multivariate constructions eliminate restrictions in approximating the univariate flood marginals, the marginals need not come from the same parametric family; they may follow different distributions and can be modelled separately. On the other side, demonstrations such as Schwarz (1967), Duin (1976), Singh (1977), Bowman (1984), Silverman (1986), Adamowski et al. (1989), Scott (1992), Lall (1995), Wand and Jones (1995), Jones et al. (1996), Lall et al. (1996), Adamowski (1996), Bowman and Azzalini (1997), Efromovich (1999), Duong and Hazelton (2003), Kim et al. (2003, 2006), Ghosh and Mujumdar (2007), and Srinivas and Santhosh (2013) have pointed out the limitations of parametric distributions for asymmetric or multimodal distribution types and pointed towards the flexibility of the nonparametric probability framework. The applicability of nonparametric estimations based on kernel density functions is beyond the scope of this literature and will be tackled in a separate paper. ...
Article
Comprehensive understanding of flood risk assessment via frequency analysis often demands multivariate designs under different notions of return period. A flood is a tri-variate random consequence, which points to the unreliability of the univariate return period and demands the construction of a joint dependency accounting for its multiple intercorrelated flood vectors, i.e., flood peak, volume, and duration. Selecting the most parsimonious probability functions for demonstrating the univariate flood marginal distributions is often a mandatory pre-processing step before establishing the joint dependency, especially under the copula methodology, which allows the practitioner to model univariate marginals separately from their joint construction. Parametric density approximations often hypothesize that the random samples follow some specific or predefined probability density function, which usually yields different estimates, especially in the tails of the distributions. Concentration in the upper tail is often of interest during flood modelling; also, no evidence favours any fixed distribution, which is often characterized through a trial-and-error procedure based on goodness-of-fit measures. On the other side, model performance evaluation and selection of the best-fitted distributions often demand precise investigation via comparison of relative sample-reproducing capabilities; otherwise, inconsistencies might introduce uncertainty. Also, the strengths and weaknesses of different fitness statistics usually vary, having different extents when demonstrating gaps and discrepancies among fitted distributions. In this literature, the selection of marginal distributions of flood variables is undertaken by employing an interactive set of parametric functions for event-based (or block annual maxima) samples over 50 years of continuously-distributed streamflow characteristics for the Kelantan River basin at Guillemard Bridge, Malaysia. 
Model fitness criteria are examined based on the degree of agreement between cumulative empirical and theoretical probabilities. Both analytical and graphical visual inspections are undertaken to provide more decisive evidence in favour of the best-fitted probability density.
... More precisely, we used a simple but powerful nonparametric technique known as the smoothed bootstrap with variance correction [32], detailed in Alg. 1. This generator is used in hydroclimatology to improve the modeling of precipitation [18] or streamflow [29]. Unlike the traditional bootstrap [9], which simply draws with replacement from the initial set of observations, the smoothed bootstrap can generate values outside of the original range while still being faithful to the structure of the underlying data [28,34], as shown in Fig. 5. |V_A| is the number of nodes in the graph whose node embeddings are contained in A. Note that µ_A, σ²_A, and h_A, in lines 3, 4, 5 and 9, are all d-dimensional vectors (i.e., the functions are applied column-wise). ...
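The smoothed bootstrap with variance correction described in this excerpt can be sketched as follows. A Silverman-style variance rescaling is assumed here; the data and bandwidth are illustrative, not taken from the chapter.

```python
import numpy as np

rng = np.random.default_rng(7)

def smoothed_bootstrap(x, size, h):
    """Smoothed bootstrap with variance correction: resample with replacement,
    add Gaussian noise of bandwidth h, then rescale so the generated sample
    keeps the mean and variance of the original data."""
    xbar, var = x.mean(), x.var()
    draws = rng.choice(x, size=size, replace=True)
    eps = rng.standard_normal(size)
    return xbar + (draws - xbar + h * eps) / np.sqrt(1.0 + h**2 / var)

x = np.array([0.9, 1.1, 1.4, 2.0, 2.2, 3.1, 3.5, 4.0])  # hypothetical observations
y = smoothed_bootstrap(x, size=5000, h=0.4)
```

The division by sqrt(1 + h²/σ̂²) undoes the variance inflation caused by the kernel noise, while the noise itself lets the generator produce values outside the observed range.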
Chapter
Full-text available
Graph learning is currently dominated by graph kernels, which, while powerful, suffer some significant limitations. Convolutional Neural Networks (CNNs) offer a very appealing alternative, but processing graphs with CNNs is not trivial. To address this challenge, many sophisticated extensions of CNNs have recently been introduced. In this paper, we reverse the problem: rather than proposing yet another graph CNN model, we introduce a novel way to represent graphs as multi-channel image-like structures that allows them to be handled by vanilla 2D CNNs. Experiments reveal that our method is more accurate than state-of-the-art graph kernels and graph CNNs on 4 out of 6 real-world datasets (with and without continuous node attributes), and close elsewhere. Our approach is also preferable to graph kernels in terms of time complexity. Code and data are publicly available (https://github.com/Tixierae/graph_2D_CNN).
... Lall and Sharma [27] pioneered this method in modelling hydrological time series data. Subsequent studies include [5, 28-33]. Although non-parametric methods have been tested before, most existing non-parametric weather generators operate at daily time scales or above, and are of limited value in simulating Hortonian runoff, which is generated instantaneously whenever precipitation intensity exceeds the landscape's infiltration capacity. ...
Article
Full-text available
This paper presents a new non-parametric, synthetic rainfall generator for use in hourly water resource simulations. Historic continuous precipitation time series are discretized into sequences of dry and wet events separated by an inter-event dry period of at least four hours. A first-order Markov chain model is then used to generate synthetic sequences of alternating wet and dry events. Sequential events in the synthetic series are selected based on couplings of historic wet and dry events, using nearest-neighbor and moving-window methods. The new generator is used to generate synthetic sequences of rainfall for New York (NY), Syracuse (NY), and Miami (FL) using over 50 years of observations. Monthly precipitation differences (e.g., seasonality) are well represented in the synthetic series generated for all three cities. The synthetic New York results are also shown to reproduce realistic event sequences, as demonstrated by a detailed event-based analysis.
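The first-order Markov chain idea in this abstract can be sketched with a simple two-state daily wet/dry occurrence chain. Note this is a simplification: the paper's generator works on hourly wet/dry events with nearest-neighbor couplings, and the transition probabilities below are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def markov_wet_dry(n, p_wd=0.3, p_ww=0.6, start=0):
    """First-order two-state occurrence chain: state 1 = wet, 0 = dry.
    p_wd = P(wet | previous day dry), p_ww = P(wet | previous day wet)."""
    s = np.empty(n, dtype=int)
    s[0] = start
    for t in range(1, n):
        p = p_ww if s[t - 1] == 1 else p_wd
        s[t] = rng.random() < p
    return s

seq = markov_wet_dry(10_000)
wet_frac = seq.mean()
# Stationary wet probability: p_wd / (p_wd + (1 - p_ww)) = 0.3 / 0.7
```

Long simulated sequences should converge to the chain's stationary wet-day frequency, which is one quick sanity check for an occurrence model.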