Fig 1 - uploaded by Jun Zhang
The block flow diagram of extracting gold from ores in a gold treatment plant.

Contexts in source publication

Context 1
... this study, a GCLP plant with four pneumatic continuous stirred tank reactors (CSTRs) in a gold treatment plant will be investigated. The block flow diagram of extracting gold from ores in this plant is shown in Fig. 1. A photograph of the field device and the simplified plant flowsheet of the GCLP in this plant are shown in Figs. 2 and 3, respectively. The procedure for extracting gold from ores in this plant is mainly composed of flotation, washing and conditioning, gold cyanidation leaching, two-stage washing, and gold recovery by zinc. The sulfide ...
Context 2
... kinetic reaction rate estimates with the Tikhonov regularization (TR) method are shown in Fig. 10. To show the superiority of the proposed estimation method, the estimates with the traditional finite difference (FD) method are also given for comparison. For the noise-free case, the estimates from both methods are almost identical at steady state. However, in the dynamic or transient state, the TR method has better estimation accuracy ...
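The FD-versus-TR comparison described above can be reproduced in miniature. The sketch below (an illustration on assumed data, not the paper's implementation) differentiates a noisy signal both ways: finite differences amplify the noise by roughly 1/Δt, while Tikhonov regularization recasts differentiation as a penalized least-squares problem.

```python
import numpy as np

def fd_derivative(y, dt):
    # Plain finite differences (central inside, one-sided at the ends);
    # noise in y is amplified by roughly 1/dt.
    return np.gradient(y, dt)

def tr_derivative(y, dt, lam):
    # Tikhonov-regularized differentiation: find d minimizing
    #   ||A d - (y - y[0])||^2 + lam * ||D d||^2,
    # where A is a running-sum (rectangle-rule) integration operator and
    # D is a second-difference smoothness penalty on the derivative.
    n = len(y)
    A = dt * np.tril(np.ones((n, n)))
    D = np.diff(np.eye(n), n=2, axis=0)   # (n-2) x n second differences
    lhs = A.T @ A + lam * (D.T @ D)
    rhs = A.T @ (y - y[0])
    return np.linalg.solve(lhs, rhs)

rng = np.random.default_rng(0)
t = np.linspace(0.0, 2.0 * np.pi, 200)
dt = t[1] - t[0]
true_deriv = np.cos(t)                          # known ground truth
y_noisy = np.sin(t) + 0.05 * rng.standard_normal(t.size)

rmse = lambda est: float(np.sqrt(np.mean((est - true_deriv) ** 2)))
rmse_fd = rmse(fd_derivative(y_noisy, dt))
rmse_tr = rmse(tr_derivative(y_noisy, dt, lam=1.0))
```

With the noise removed, the two estimates nearly coincide, matching the steady-state observation in the excerpt; on noisy data the regularized estimate is markedly more accurate.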
Context 3
... demonstrate the effect of the regularization factor on the estimates, several simulations with regularization factors of different orders of magnitude are run for the case of 5% noise, and the corresponding estimates and RMSE (root mean square error) are shown in Fig. 11 and Table 2, respectively. It can be seen from Fig. 11 that the estimation curves become smoother as k increases. However, the estimation accuracy deteriorates when k is sufficiently large, which indicates that a proper k needs to be determined by taking the fitting error and the smoothness into account simultaneously. The ...
Context 4
... demonstrate the effect of the regularization factor on the estimates, several simulations with regularization factors of different orders of magnitude are run for the case of 5% noise, and the corresponding estimates and RMSE (root mean square error) are shown in Fig. 11 and Table 2, respectively. It can be seen from Fig. 11 that the estimation curves become smoother as k increases. However, the estimation accuracy deteriorates when k is sufficiently large, which indicates that a proper k needs to be determined by taking the fitting error and the smoothness into account simultaneously. The common approach to determining k is generalized cross ...
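The excerpt notes that a proper regularization factor must balance fitting error against smoothness, and that cross-validation is the common way to choose it. The sketch below scores a grid of factors with the generalized cross-validation (GCV) criterion for a roughness-penalty smoother; the smoother, test signal, and grid are assumptions, not the paper's setup.

```python
import numpy as np

def gcv_select(y, factors):
    # Generalized cross-validation for the ridge-type smoother
    #   y_smooth = H(k) y,  H(k) = (I + k * D'D)^(-1),
    # with D the second-difference operator.
    #   GCV(k) = n * ||(I - H) y||^2 / tr(I - H)^2
    # and the k minimizing it is chosen.
    n = len(y)
    D = np.diff(np.eye(n), n=2, axis=0)
    P = D.T @ D
    best = None
    for k in factors:
        H = np.linalg.inv(np.eye(n) + k * P)
        resid = y - H @ y
        score = n * float(resid @ resid) / (n - np.trace(H)) ** 2
        if best is None or score < best[0]:
            best = (score, k, H @ y)
    return best[1], best[2]

rng = np.random.default_rng(1)
t = np.linspace(0.0, 2.0 * np.pi, 200)
truth = np.sin(t)
y = truth + 0.1 * rng.standard_normal(t.size)

k_star, y_smooth = gcv_select(y, [10.0 ** p for p in range(-2, 7)])
rmse_raw = float(np.sqrt(np.mean((y - truth) ** 2)))
rmse_smooth = float(np.sqrt(np.mean((y_smooth - truth) ** 2)))
```

GCV approximates leave-one-out cross-validation without refitting, which is why it is the standard automatic choice for such factors.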
Context 5
... model are not an optimal operation for the actual plant, but only a suboptimal or even an infeasible point. To handle plant-model mismatch and process disturbances, real-time optimization via modifier adaptation is applied to the GCLP plant. The block diagram of the adaptive RTO approach based on the proposed serial hybrid model is shown in Fig. 14. The proposed adaptive RTO strategy has the following advantage: when the serial hybrid model matches with the ...
[Displaced captions: Fig. 11(c) legend: blue line with asterisk, λ = 10^5; purple dotted line with pentagram, λ = 10^6; black dot-dash line with square, λ = 10^7; red line with circle, simulated reality values. Table 2: the RMSE of the estimates with the TR and FD methods.]
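First-order modifier adaptation can be sketched on a scalar toy problem. The quadratic plant and model costs, starting point, and filter gain below are illustrative assumptions, not the plant's serial hybrid model; the point of the sketch is that a filtered gradient modifier shifts the model's optimum until it satisfies the plant's optimality conditions despite plant-model mismatch.

```python
def plant_cost_grad(u):
    # "Plant" (reality): J_p(u) = (u - 3)^2 + 1, optimum at u = 3
    # (unknown to the model).
    return 2.0 * (u - 3.0)

def model_cost_grad(u):
    # Nominal model: J_m(u) = (u - 2)^2, optimum at u = 2
    # (deliberate plant-model mismatch).
    return 2.0 * (u - 2.0)

def modifier_adaptation(u0, K=0.5, n_iter=30):
    # Add a gradient modifier lam*u to the model problem so that its KKT
    # point matches the plant's; the modifier is filtered with gain K to
    # keep the iteration stable.
    u, lam = u0, 0.0
    for _ in range(n_iter):
        # In practice this mismatch is estimated from plant measurements
        mismatch = plant_cost_grad(u) - model_cost_grad(u)
        lam = (1.0 - K) * lam + K * mismatch       # filtered modifier update
        # Modified model problem: min_u J_m(u) + lam*u  =>  2(u - 2) + lam = 0
        u = 2.0 - lam / 2.0
    return u

u_star = modifier_adaptation(u0=1.0)
```

At the fixed point, the modified model problem and the plant share the same stationarity condition, which is exactly the property the adaptive RTO scheme relies on.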
Context 6
... adaptation approach in Fig. 14 is applied to the GCLP in a gold treatment plant. The relevant fixed parameters are taken ...
Context 7
... ×10⁻³ (CNY/mg). Here the RTO period is 4 h to ensure that the actual process can reach a new steady state after the operating conditions change. A conservative filter parameter of 0.5 is used for all of the inputs to prevent the modifier adaptation algorithm from becoming unstable. The RTO result with the modifier adaptation approach (MAA) is shown in Fig. 15. In practice, the optimal feed rates of cyanide in each tank are unknown to the operators in the plant, and only a suboptimal or worse set point can be obtained from their experience. Usually, excessive cyanide is added into the leaching tanks to prevent the loss of unleached gold in the ore. However, this is not the case, and the ...
Context 8
... algorithm starts from a feasible operating point (Initial point 1 in Fig. 15) that is determined by operators according to the result of the model-based optimization and the operating conditions. Meanwhile, the adaptive RTO strategy with the modifier adaptation approach is implemented based on the plant measurements at steady state. The adaptive algorithm drives the plant toward better operating points ...
Context 9
... of production cost gradually. After about 14 iterations, the algorithm converges to a point at which the production cost decreases from 2601 CNY/h to 1996 CNY/h, a saving of 605 CNY/h. At the 44th iteration, the operating conditions change (Cs0 has a large disturbance, from 152 to 267) and higher feed rates of cyanide (Initial point 2 in Fig. 15) are chosen in order to leach more gold from the high-grade ore. After about 13 iterations, the algorithm converges to a point at which the production cost decreases from 3457 CNY/h to 2967 CNY/h, a saving of 490 CNY/h. It can be observed from Fig. 15 that, strictly speaking, the algorithm does not converge to a fixed point but ...
Context 10
... has a large disturbance, from 152 to 267) and higher feed rates of cyanide (Initial point 2 in Fig. 15) are chosen in order to leach more gold from the high-grade ore. After about 13 iterations, the algorithm converges to a point at which the production cost decreases from 3457 CNY/h to 2967 CNY/h, a saving of 490 CNY/h. It can be observed from Fig. 15 that, strictly speaking, the algorithm does not converge to a fixed point but fluctuates around it, which can be attributed to the effect of measurement noise on the estimates of the plant gradients. In practice, frequent small changes of the set points are usually not allowed. Before the RTO result is applied, result analysis (RA) ...
Context 11
... that if a considerable reduction of the production cost cannot be obtained by several successive changes of the set points, only the optimal one of those is chosen as the implemented set point. ...
[Displaced captions: Table: the comparison of the prediction results with the two models. Fig. 15 legend: production cost with only MAA; production cost with both MAA and RA; Initial point 1; Initial point 2. Fig. 15. The RTO result with the modifier adaptation approach for the GCLP ...]
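The result-analysis rule described here (implement a newly proposed set point only when it buys a considerable cost reduction) can be sketched as a simple filter over the RTO iterates. The threshold and the toy iterates below are hypothetical, not values from the paper.

```python
def result_analysis(iterates, min_saving=50.0):
    # Filter RTO iterates before implementation: a proposed set point is
    # applied only if it cuts the production cost by at least `min_saving`
    # (CNY/h) relative to the currently implemented one; otherwise the
    # best point seen so far stays in place. `min_saving` is a
    # hypothetical threshold for illustration.
    implemented = [iterates[0]]
    for setpoint, cost in iterates[1:]:
        if implemented[-1][1] - cost >= min_saving:
            implemented.append((setpoint, cost))
    return implemented

# Toy iterates (set point, cost in CNY/h): large early savings, then
# noise-driven fluctuation around the optimum.
iterates = [(10.0, 2601.0), (9.0, 2300.0), (8.5, 2100.0),
            (8.2, 2010.0), (8.1, 1996.0), (8.15, 2005.0), (8.05, 1999.0)]
applied = result_analysis(iterates)
```

The early, substantial cost reductions pass through, while the small noise-driven fluctuations near convergence are suppressed, so the operators see only a handful of meaningful set-point moves.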

Citations

... Once the working environment or production conditions change, the model needs to be updated, which limits the implementation of model-based controllers. 36,37 Besides, the model-based control strategy not only has greater computational complexity but also significant limitations in engineering applications. 38 The MFAC control algorithm based on CFDL has the advantages of a simple structure, good algorithm stability, and easy controller design for the actual process. ...
Article
Full-text available
Hydrometallurgy technology can directly deal with low-grade and complex materials, improve the comprehensive utilization rate of resources, and effectively adapt to the demand for low-carbon and cleaner production. A series of cascaded continuous stirred tank reactors is usually applied in the industrial gold leaching process. The equations of the leaching process mechanism model are mainly composed of gold conservation, cyanide ion conservation, and kinetic reaction rate equations. The derivation of the theoretical model involves many unknown parameters and some ideal assumptions, which makes it difficult to establish an accurate mechanism model of the leaching process. Imprecise mechanism models limit the application of model-based control algorithms in the leaching process. Due to the constraints and limitations on the input variables in the cascade leaching process, a novel model-free adaptive control algorithm based on compact-form dynamic linearization with an integration (ICFDL-MFAC) control factor is first constructed. The constraints between the input variables are realized by setting the initial value of the input to the pseudo-gradient and the weight of the integral coefficient. The proposed purely data-driven ICFDL-MFAC algorithm has anti-integral-saturation ability and can achieve a faster control rate and higher control precision. This control strategy can effectively improve the utilization efficiency of sodium cyanide and reduce environmental pollution. The consistent stability of the proposed control algorithm is also analyzed and proved. Compared with existing model-free control algorithms, the merit and practicability of the proposed algorithm are verified on a practical industrial leaching process test. The proposed model-free control strategy has the advantages of strong adaptive ability, robustness, and practicability. The MFAC algorithm can also be easily applied to the multi-input multi-output control of other industrial processes.
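A minimal single-loop sketch of MFAC with compact-form dynamic linearization (CFDL) is given below. The toy first-order plant, tuning gains, and set point are assumptions for illustration; the abstract's ICFDL-MFAC adds the integration control factor and input constraints on top of this basic scheme.

```python
def mfac_cfdl(y_ref, n_steps=300, eta=0.5, mu=1.0, rho=0.6, lam=1.0):
    # Model-free adaptive control with compact-form dynamic linearization:
    # the plant is treated as dy(k+1) = phi(k) * du(k), and the
    # pseudo-gradient phi is estimated online from I/O data only.
    y, u, phi = 0.0, 0.0, 1.0          # output, input, pseudo-gradient
    y_prev, u_prev = 0.0, 0.0
    for _ in range(n_steps):
        du_prev = u - u_prev
        dy = y - y_prev
        # Projection-style pseudo-gradient update
        phi += eta * du_prev * (dy - phi * du_prev) / (mu + du_prev ** 2)
        u_prev, y_prev = u, y
        # MFAC control law (integral form with step size rho)
        u = u + rho * phi * (y_ref - y) / (lam + phi ** 2)
        # Unknown "plant": assumed first-order linear dynamics
        y = 0.5 * y + 0.9 * u
    return y

y_final = mfac_cfdl(y_ref=1.0)
```

The controller never sees the plant equation; it only uses the measured input/output increments, which is the sense in which the strategy is purely data-driven.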
... The complicated kinetic reaction mechanism and process random factors increase the difficulty in modeling the leaching system [12]. Due to the different production technologies, equipment parameters, and operation modes, the structure and parameters of the kinetic model of the gold cyanidation leaching system are different and need to be determined by appropriate methods [13]. The achievements of researchers in determining the gold leaching reaction mechanism provide a good guide for the establishment of the leaching system model in this paper. ...
Article
Full-text available
In order to improve the leaching efficiency of gold ore and reduce the environmental treatment cost of residual sodium cyanide, continuous stirred tank reactors are often connected in cascade. A gold leaching system is a multiphase chemical reaction system, and its kinetic reaction mechanism is complex and affected by random factors. By using intelligent modeling technology to establish a hybrid prediction model of the leaching system, the dynamic performance of the process can be easily analyzed. According to the reaction principle and the theory of conservation of mass, a mechanism model is established to reflect the main dynamic performance of the leaching system. In order to improve the global convergence of the optimization, a particle swarm optimization (PSO) algorithm based on simulated annealing is used to optimize the adjustable parameters in the kinetic reaction rate model. A multilayer long short-term memory (LSTM) neural network is used to compensate for the prediction errors caused by the unmodeled dynamics, and a hybrid model is established. The hybrid prediction model can accurately predict the leaching rate, which provides a reliable basis for guiding production as well as a model basis for process optimization, controller design, and operation monitoring. Finally, the superiority and practicability of the hybrid model are verified on a practical industrial leaching system test. The prediction model of key variables in the leaching process is established for the first time using the latest time-series prediction and intelligent optimization techniques. The research results of this paper can provide a good reference and guidance for other research on hybrid modeling of complex systems.
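The hybrid idea (a mechanism model plus a data-driven compensator trained on its residuals) can be sketched compactly. In the snippet below, a polynomial ridge regression stands in for the multilayer LSTM, and the process, unmodeled term, and noise level are all invented for illustration.

```python
import numpy as np

def mechanism_model(x):
    # Simplified first-principles prediction (deliberately imperfect)
    return 1.0 - np.exp(-0.8 * x)

def true_process(x):
    # "Reality": the mechanism dynamics plus an unmodeled effect
    return 1.0 - np.exp(-0.8 * x) + 0.15 * np.sin(2.0 * x)

rng = np.random.default_rng(2)
x_train = rng.uniform(0.0, 5.0, 300)
y_train = true_process(x_train) + 0.01 * rng.standard_normal(300)

# Fit the compensator to the mechanism model's residuals
# (polynomial ridge regression as a stand-in for the LSTM).
z_train = (x_train - 2.5) / 2.5                  # rescale inputs to [-1, 1]
X = np.vander(z_train, 11)
residual = y_train - mechanism_model(x_train)
w = np.linalg.solve(X.T @ X + 1e-8 * np.eye(11), X.T @ residual)

x_test = np.linspace(0.0, 5.0, 101)
z_test = (x_test - 2.5) / 2.5
hybrid_pred = mechanism_model(x_test) + np.vander(z_test, 11) @ w

rmse = lambda pred: float(np.sqrt(np.mean((pred - true_process(x_test)) ** 2)))
rmse_mech = rmse(mechanism_model(x_test))
rmse_hybrid = rmse(hybrid_pred)
```

Because the compensator only has to learn the residual, not the whole process, the hybrid prediction is substantially more accurate than the mechanism model alone, which is the effect the abstract reports.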
... The serial hybrid model structure not only makes full use of the available process knowledge but also has the advantage of easily discovering the relationship between multiple datasets by data-driven models. This is expected to provide better prediction performance than the pure mechanistic model and the pure data-driven model [38]. ...
Article
Full-text available
Kinetic modeling of fermentation processes is difficult due to the use of micro-organisms that follow complex reaction mechanisms. Kinetic models are usually not perfect owing to incomplete knowledge of the system. Recently, there is a lot of interest towards data-driven modeling as the amount of data collected, stored, and utilized is growing tremendously due to the advent of super-computing power and data storage devices. Additionally, data-driven models are simple and easy to build but their utility is restricted by the amount and quality of data required. Therefore, hybrid modeling is an attractive alternative to purely data-based modeling, wherein it combines a kinetic model with a data-based model resulting in improved accuracy and robustness. In this work, a hybrid model is developed for an industry-scale fermentation process (>100,000 gallons) using a three-step process. The accuracy of the kinetic model is first improved utilizing process knowledge obtained from the literature. Sensitivity analyses are then utilized to identify sensitive parameters in the kinetic model that have considerable influence on its prediction capability. Finally, a deep neural network (DNN)-based hybrid model is developed by integrating the kinetic model with a DNN trained with time-series process data to predict sensitive and uncertain model parameters. The hybrid model is shown to be more accurate and robust than the kinetic model, providing a novel capability to capture unknown time-varying dependencies among parameters.
... The pulp was filtered following each trial, the phases were separated, and the liquid phase was examined for extraction. Various parameter values from the literature were used for cyanide leaching [28][29][30][31]. Based on the literature, five parameters were used in this work, i.e., potential of hydrogen (pH), solid content (in %), NaCN concentration (in ppm), leaching time (in h), and particle size (in µm), in order to predict the gold ore deposit. ...
Article
Full-text available
This paper elucidates a new idea and concept for the exploration of gold ore deposits. The cyanidation method is traditionally used for gold extraction. However, this method is laborious, time-consuming, and costly, and depends upon the availability of the processing units. In this work, an attempt is made to update the gold exploration method by Monte Carlo-based simulation. An excellent approach always requires high-quality datasets for a good model. A total of 48 incomplete datasets are collected from the Shoghore district, Chitral area of Khyber Pakhtunkhwa, Pakistan. The cyanidation leaching test is carried out in order to measure the percentage of the gold ore deposits. In this work, the mean, median, mode, and successive iteration substitution methods are employed to complete the datasets with missing attributes. Multiple regression analysis is used to find a correlation between the hydrogen ion concentration (pH), solid content (in %), NaCN concentration (in ppm), leaching time (in h), particle size (in µm), and the measured percentage of gold recovery (in %). Moreover, the normal Archimedes and exponential distributions are employed in order to forecast the uncertainty in the measured gold ore deposits. The performance of the model reveals that the Monte Carlo approach is more authentic for the probability estimation of gold ore recovery. The sensitivity analysis reveals that pH is the most influential parameter in the estimation of the gold ore deposits. This stochastic approach can be considered a foundation for predicting the probabilistic exploration of new gold deposits.
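The Monte Carlo step (propagating assumed distributions of the five leaching parameters through a fitted regression and reading off probabilities and percentiles of recovery) can be sketched as follows. The regression coefficients and the input distributions below are invented placeholders, not the paper's fitted values.

```python
import numpy as np

def recovery_model(ph, solid_pct, nacn_ppm, time_h, size_um):
    # Hypothetical multiple-regression model for gold recovery (%);
    # the coefficients are invented for illustration only.
    return (20.0 + 4.0 * ph - 0.2 * solid_pct + 0.02 * nacn_ppm
            + 1.5 * time_h - 0.05 * size_um)

rng = np.random.default_rng(3)
n = 10_000
# Assumed input distributions for the five leaching parameters
ph      = rng.normal(10.5, 0.3, n)     # potential of hydrogen
solid   = rng.normal(40.0, 3.0, n)     # solid content (%)
nacn    = rng.normal(300.0, 30.0, n)   # NaCN concentration (ppm)
time_h  = rng.normal(24.0, 2.0, n)     # leaching time (h)
size_um = rng.normal(75.0, 10.0, n)    # particle size (um)

recovery = np.clip(recovery_model(ph, solid, nacn, time_h, size_um), 0.0, 100.0)
p_above_85 = float(np.mean(recovery >= 85.0))   # probability estimate
lo, hi = np.percentile(recovery, [5, 95])       # 90% uncertainty band
```

The output is a distribution of recovery rather than a single number, which is what lets the stochastic approach quantify the uncertainty in the measured deposits.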
... Both approaches have their own advantages and disadvantages as summarized in [1,2] and a choice is made based on the prior understanding about the system and the availability of data. In chemical engineering [3][4][5][6][7][8][9] and biotechnology [1,[10][11][12][13][14][15][16], hybrid modeling is emerging as a pragmatic solution to mathematical modeling, exploring the synergy between the two paradigms. Hybrid models have been very successful in systems that are only partially understood, and the availability of data is limited or/and costly. ...
Preprint
Full-text available
Mathematical models used for the representation of (bio)chemical processes can be grouped into two broad paradigms: white-box or mechanistic models, based entirely on knowledge, and black-box data-driven models based on patterns observed in data. However, in the past two decades, hybrid modeling that explores the synergy between the two paradigms has emerged as a pragmatic compromise. The data-driven part of these models has largely been based on conventional machine learning algorithms (e.g., artificial neural networks, support vector regression), which prevents interpretation of the finally learnt model by domain experts. In this work, we present a novel hybrid modeling framework, the Functional-Hybrid model, that uses ranked domain-specific functional beliefs together with symbolic regression to develop dynamic models. We demonstrate the successful implementation of these hybrid models for four benchmark systems and a microbial fermentation reactor, all of which are systems of (bio)chemical relevance. We also demonstrate that, compared to a similar implementation with a conventional ANN, the performance of the Functional-Hybrid model is at least two times better in interpolation and extrapolation. Additionally, the proposed framework can learn the dynamics with 50% fewer experiments. This improved performance can be attributed to the structure imposed by the functional transformations introduced in the Functional-Hybrid model.
... In the RTO paradigm, this problem can be addressed by utilizing the last output measurements to modify the steady-state optimization problem. 2,5,6 For EMPC, the economic performances also strongly rely on the accuracy of the dynamic process model in the resulting optimization problem. However, the classic EMPC may not always guarantee the economic performances due to the plantmodel mismatch. ...
... In order to estimate the derivatives of the concentration measurements with respect to time, a function is fitted to the data and the derivatives are computed from the fitted function. The most commonly used method is the finite difference method, where the current sample as well as several nearby samples are used to approximate the current derivative. 2,3 To avoid amplification and propagation of the noise, smoothing techniques have been developed to compute the derivative estimates accurately from the noisy data. ...
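The two routes contrasted here (differencing the raw samples versus differentiating a function fitted to the data) can be compared on a synthetic concentration profile. The signal, noise level, and polynomial degree below are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(4)
t = np.linspace(0.0, 10.0, 200)
dt = t[1] - t[0]
conc = 5.0 * np.exp(-0.3 * t)                 # assumed "true" concentration
noisy = conc + 0.05 * rng.standard_normal(t.size)
true_rate = -1.5 * np.exp(-0.3 * t)           # d(conc)/dt, ground truth

# Naive route: finite differences on the raw samples amplify the noise
fd_rate = np.gradient(noisy, dt)

# Smoothing route: fit a function to the data, then differentiate the fit
coeffs = np.polynomial.polynomial.polyfit(t, noisy, deg=6)
fit_rate = np.polynomial.polynomial.polyval(
    t, np.polynomial.polynomial.polyder(coeffs))

rmse = lambda est: float(np.sqrt(np.mean((est - true_rate) ** 2)))
rmse_fd = rmse(fd_rate)
rmse_fit = rmse(fit_rate)
```

Differentiating the fitted function filters the noise before the derivative operation, so the rate estimate stays close to the truth even where the raw finite differences are dominated by noise.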
... Under each possible value of d_m, 100 samples of the residual gold concentration in the solid phase are generated according to Equation (35), and the unknown factor is normally distributed, that is, e_m ~ N(0, 0.02). Equations (2)-(4) are integrated with Equations (10)-(11) and Equation (35) to generate the historical data for estimating the uncertain parameter c_∞, and 5% noise is also added to the measurements of the states in Equations (2)-(4), as used in the literature. 2 The nominal parameters used in this work are illustrated in ...
Article
Full-text available
In order to dynamically operate the gold cyanidation leaching process (GCLP) under uncertainty, a multi-stage economic model predictive control (EMPC) scheme is proposed for transient and steady-state economic optimization of the GCLP. The proposed multi-stage EMPC is composed of two steps. In the first step, the unmeasurable uncertain parameters are estimated using a Tikhonov-regularization-based method, so as to avoid amplification and propagation of the measurement noise into the estimates. Based on the estimated results, the scenario tree for the multi-stage EMPC is generated from the historical data using a data-driven approach, and the control inputs are obtained by solving the resulting large nonlinear programming problem (NLP) at each sampling point. The resulting uncertainty model and the probability of each scenario are more consistent with the actual industrial GCLP, and the solutions are less conservative. The efficiency of the proposed multi-stage EMPC is verified through a simulated industrial GCLP. Compared with other EMPC methods, including classic EMPC and multi-stage EMPC with a box uncertainty region, the proposed method reduces the economic cost while accounting for the constraints at the same time.
... C_s∞ represents the ideal residual gold grade in the ore after the leaching operation, which is a function of the average particle diameter d̄ of the gold ore (Eq. (9)), where r_Au and r_CN are the kinetic reaction rates of the leaching process, that is, the gold dissolution rate and the cyanide consumption rate, respectively. They are affected by C_s, C_CN, C_o, and d̄, and are the key quantities in modeling the gold cyanidation leaching process. 9,10,17,19 Usually, only an approximate empirical kinetic reaction rate model can be obtained, and most of the model uncertainty essentially results from the kinetic model. 9,10 The above mechanism model can be easily solved using an iterative numerical method in Matlab. 37,38 Finally, the gold recovery can be calculated by the following formula ...
... 9,10,17,19 Usually, only an approximate empirical kinetic reaction rate model can be obtained, and most of the model uncertainty essentially results from the kinetic model. 9,10 The above mechanism model can be easily solved using an iterative numerical method in Matlab. 37,38 Finally, the gold recovery can be calculated by the following formula ...
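The recovery formula itself is truncated in the excerpt. A common textbook definition, the fraction of gold leached out of the solid phase between inlet and residual grade, reads as follows; the specific grades used in the example are hypothetical.

```python
def gold_recovery(cs_in, cs_out):
    # Standard leaching recovery (%): fraction of gold removed from the
    # solid phase, from inlet grade cs_in to residual grade cs_out (same
    # units, e.g. g/t). This is a common textbook definition; the paper's
    # exact formula is truncated in the excerpt.
    if cs_in <= 0:
        raise ValueError("inlet gold grade must be positive")
    return 100.0 * (cs_in - cs_out) / cs_in

r = gold_recovery(cs_in=6.0, cs_out=0.6)   # hypothetical grades in g/t
```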
Article
Full-text available
It is very important to establish an accurate process model for implementing further control and optimization of the gold cyanidation leaching process. Unfortunately, the important kinetic reaction rates affecting process operation are unmeasurable; moreover, their estimation is an ill-posed inverse problem because the noise in the concentration measurements is easily amplified and propagated into the rate estimates by the derivative operation. In this paper, alternative strategies for estimating the kinetic reaction rates of the gold cyanidation leaching process (finite difference, polynomial fitting, Savitzky-Golay filter, wavelet decomposition, and Tikhonov regularization) are investigated in detail. The simulation results show that the direct finite difference leads to poor estimates in the noisy case, while the other strategies are capable of avoiding the noise amplification and improving the estimates to some extent. Of all the investigated strategies, Tikhonov regularization leads to satisfactory and acceptable estimates in both the noiseless and noisy cases, which lays an important foundation for the subsequent model identification, production index prediction, and operation optimization.
... Meanwhile, machine learning methods have been widely applied in process modeling, such as Gaussian process regression (GPR) [11], [12], support vector regression (SVR) [13], and artificial neural networks (ANN) [14], [15]; their applications have been analyzed in the literature [8], [16]. To address the fact that the kinetic reaction rate expressions are difficult to obtain accurately in the actual GCLP, Zhang J et al. proposed a serial hybrid modeling method that combines a first-principles model (the mass conservation equations in the steady-state mechanistic model) with a data-driven model (two BP ANN models) [17]. Liu Y et al. proposed a state evaluation method for the whole gold hydrometallurgical process, using a total projection to latent structures method to evaluate the current production state [18]. ...
... The use of a blower to introduce compressed air into the leaching tank not only provides pneumatic agitation but also supplies the dissolved oxygen required for the reaction. For a detailed process description of the GCLP, please refer to [1]-[3], [17]; it will not be covered here. Recently, researchers have conducted in-depth studies of the reaction mechanism of the GCLP [1], [2], [20]. ...
... To summarize, the optimal model parameters can be obtained by alternately iterating Equations (16)-(17) and (18)-(19) until the model parameters converge. Once a query sample is given, the corresponding probability distribution of the latent variables z_q can be calculated by Equation (13). ...
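The alternate-iteration pattern (solve for one block of parameters with the other fixed, then swap, and repeat until convergence) can be sketched on a toy coupled quadratic. The objective below is illustrative only, not the model's Equations (16)-(19).

```python
def alternating_optimize(tol=1e-10, max_iter=100):
    # Block-coordinate (alternating) minimization of the toy coupled
    # objective f(a, b) = a^2 - a*b + b^2 - b: fix b and solve for a in
    # closed form, then fix a and solve for b, repeating until the
    # iterates converge -- the same alternate-iteration pattern used to
    # fit the model parameters.
    a, b = 0.0, 0.0
    for _ in range(max_iter):
        a_new = b / 2.0               # argmin_a f(a, b):  2a - b = 0
        b_new = (1.0 + a_new) / 2.0   # argmin_b f(a, b): -a + 2b - 1 = 0
        converged = abs(a_new - a) < tol and abs(b_new - b) < tol
        a, b = a_new, b_new
        if converged:
            break
    return a, b

a_opt, b_opt = alternating_optimize()
```

Each sub-problem is solved exactly, and because the objective decreases monotonically, the iterates converge geometrically to the joint minimizer (a, b) = (1/3, 2/3) for this toy case.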
Article
Full-text available
Gold cyanidation leaching process (GCLP) is the central unit operation in hydrometallurgy. It suffers from the problem that the model-based optimal set point fails to reach the optimal working point in the actual GCLP due to model error, which leads to lower economic benefit. Meanwhile, the process data contain noise and uncertainty on account of fluctuations in the raw material properties. Therefore, how to make the most of the data so that the production process runs in the economically optimal state, under the premise that the quality index meets the production requirements, is an urgent problem to be solved. In this paper, a data-driven iterative optimization compensation strategy is proposed to solve the aforementioned problems. Firstly, the probabilistic principal component analysis (PPCA) method is used to preprocess the process data to eliminate the effects of noise and uncertainty. Secondly, two relevant models are established between the operating variable increment and, respectively, the economic benefit increment and the quality index increment, based on the just-in-time (JIT) and partial least squares (PLS) methods. Finally, the optimal operating variable increment that maximizes the economic benefit increment is found under the condition that the quality index satisfies the production requirements, and the procedure is iterated at the new working point, constantly approaching the optimal working point to improve the economic benefit. Simulation studies have verified the validity of the proposed method.
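The iterative compensation idea (fit a local model between operating-variable increments and the economic-benefit increment, take a bounded step, and refit at the new working point) can be sketched as follows. Ordinary least squares stands in for the JIT+PLS regression, the hidden benefit function is invented, and the quality-index constraint is omitted for brevity.

```python
import numpy as np

def economic_benefit(u):
    # Hidden "plant" benefit with optimum at u = (2, -1);
    # unknown to the optimizer, which only sees sampled increments.
    return 10.0 - (u[0] - 2.0) ** 2 - (u[1] + 1.0) ** 2

rng = np.random.default_rng(5)
u = np.array([0.0, 0.0])     # initial working point
step = 0.3                   # bounded increment per iteration
for _ in range(40):
    # Probe small input increments around the current working point and
    # fit a local linear model dJ ~ g . du (least squares stands in for
    # the JIT + PLS regression between increments).
    dU = 0.05 * rng.standard_normal((8, 2))
    dJ = np.array([economic_benefit(u + d) - economic_benefit(u) for d in dU])
    g, *_ = np.linalg.lstsq(dU, dJ, rcond=None)
    norm = np.linalg.norm(g)
    if norm < 1e-8:
        break
    u = u + step * g / norm   # move toward higher benefit and refit there
final_benefit = economic_benefit(u)
```

Because the local model is refitted at every new working point, the iteration tracks the benefit surface without ever needing its global form, which is the compensation mechanism the abstract describes.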
... In bioprocess engineering this included general bioreactor modeling (Psichogios, 1992) and production processes for penicillin (Can, Braake, Hellinga, Luyben, & Heijnen, 1997; Montague et al., 2010; Thompson & Kramer, 1994), baker's yeast (Feyo de Azevedo, Dahm, & Oliveira, 1997; Oliveira, 2004; Schubert, Simutis, Dors, Havlik, & Lubbert, 1994a, 1994b), and beer (Zorzetto, Filho, & Wolf-Maciel, 2000). Hybrid models have also been reported for several applications in chemical engineering (Georgieva, Feyo de Azevedo, Gonçalves, & Ho, 2003; Hu, Mao, He, & Yang, 2011; Nagrath, Messac, Bequette, & Cramer, 2004; Tian, Zhang, & Morris, 2001; Zander & Dittmeyer, 1999; Zhang, Mao, Jia, & He, 2015). ...
Article
Full-text available
Due to the lack of complete understanding of metabolic networks and reaction pathways, establishing a universal mechanistic model for mammalian cell culture processes remains a challenge. Conversely, data-driven approaches for modeling these processes lack extrapolation capabilities. Hybrid modeling is a technique that exploits the synergy between the two modeling methods. Although mammalian cell cultures are among the most relevant processes in biotechnology and indeed look ideal for hybrid modeling, their application has only been proposed, never developed, in the literature. This study provides a quantitative assessment of the improvement brought by hybrid models over state-of-the-art statistical predictive models in the context of therapeutic protein production. This is illustrated using a dataset obtained from a 3.5 L fed-batch experiment. With the goal of robustly defining the process design space, hybrid models reveal a superior capability to predict the time evolution of different process variables using only the initial and process conditions, in comparison to the statistical models. Hybrid models not only feature more accurate prediction results but also demonstrate better robustness and extrapolation capabilities. For future applications, this study highlights the added value of hybrid modeling for model-based process optimization and design of experiments.