Article

Abnormal situation management: Challenges and opportunities in the Big Data Era

Abstract

Although modern chemical processes are highly automated, abnormal situation management (ASM) still relies heavily on human operators. Process fault detection and diagnosis (FDD) is one of the most important issues in ASM, yet few FDD systems have been satisfactorily applied in real chemical processes since the concept of FDD was proposed about 40 years ago. In this paper, developments in chemical process FDD are briefly reviewed, and the reasons why FDD has not been widely implemented in the chemical process industry are discussed. One insight gained is that some basic problems in FDD, such as how to define faults and how many faults to diagnose, have not been addressed well, even as researchers tirelessly try to invent new methods to diagnose faults. A new framework based on big data in the cloud computing environment of a large chemical corporation is proposed to address the challenging issues in ASM.

... According to statistics, abnormal situations cause USD 20 billion in economic losses to the US petrochemical industry every year [4]. In China, abnormal situations cause unplanned shutdowns of over 5 days per year in 600,000 t/a catalytic cracking units, resulting in economic losses of over USD 1 million per day [5]. Therefore, strengthening the management of abnormal situations is crucial for ensuring the normal operation of the plant and preventing losses caused by abnormalities. ...
... Eljack discussed the safety aspects of abnormal situations in industrial facilities and introduced current efforts to better manage them [13]. Shu et al. systematically reviewed the development of FDD, analyzed the challenges and opportunities faced by FDD in the era of big data, and proposed a new FDD framework based on big data [5]. Arunthavanathan et al. analyzed the relationship between FDD, ASM, and risk assessment (RA), providing a roadmap for subsequent research on process safety [6]. ...
... Based on the above definition, Venkatasubramanian et al. gave a more specific definition of ASM, defining it as the entire activity of timely identification of deviations or abnormalities, diagnosis of their root causes, and taking appropriate preventive and control measures to restore the chemical process to a normal state [17]. This definition further clarifies the content of ASM and has been widely accepted by researchers [5,13]. However, according to Dai et al., risk assessment is also an important aspect of ASM [12]. ...
Article
Full-text available
In chemical processes, abnormal situations are precursor events of incidents and accidents. Abnormal situation management (ASM) can effectively identify abnormalities and prevent them from evolving into incidents or accidents, ensuring the safe and smooth operation of chemical plants. In recent years, ASM has attracted extensive attention from the process industry and from academia, and a lot of research work has been conducted. However, the intelligence level of ASM in actual chemical plants is still relatively low, and industrial applications still face many difficulties and challenges. This review first summarizes the concepts and contents involved in ASM. Then, the latest research progress in various aspects of ASM is systematically reviewed. Finally, the challenges and future research directions of ASM are analyzed from the perspective of industrial application. This review aims to provide the most cutting-edge reference for follow-up research on ASM, and to promote the intelligent development and practical industrial application of ASM in the chemical process industry.
... The continuous chemical industry presents great potential for improvement from the new perspectives of Industry 4.0. For example, safe and efficient plant operation requires constant monitoring of thousands of process variables, a task that is nowadays still assigned to human operators [13]. Industry 4.0 technologies may help in reducing accidents and preserving the environment (Christofides et al., 2007) [14]. ...
... He et al. (2017) [30] argued that extracting useful information from Big Data is a significant challenge for process fault monitoring. [13] argued that although process fault diagnosis is an old research question, only a few systems have been satisfactorily applied in actual chemical processes. Create health, safety and environmental assessment models: according to Mohan et al. (2021) and Liu et al. (2017) [62], [70], online health, safety and environmental management (HSE) is one of the most important requirements of Industry 4.0, as consumers are increasingly interested in the industry's sustainability policies. ...
... The authors stated that system reliability is essential in the chemical industry. [13] reaffirmed that timely, reliable and automatic decision-making (which supports operations in abnormal situations in chemical processes) is an indispensable cognitive function for chemical industry 4.0. According to [33], system reliability is one of the most important factors in assessing the health, safety and environmental state of the chemical industry, as well as the likelihood of completing assigned tasks under certain conditions without failure. ...
Article
Full-text available
Industry 4.0 technologies may provide great improvements in the production environment of continuous chemical industries. For example, the availability of real-time data management improves unit operations integration in process intensification. These improvements provide increased profitability, safety, sustainability and fault prediction capability. However, the sector presents specific obstacles to deploying 4.0 technologies because of its intrinsic complexity. The objective of this paper is to identify the sector-specific difficulties and Critical Success Factors when implementing Industry 4.0 technologies. A comprehensive systematic literature review with content analysis was carried out. Among the emerging necessities identified, the need to simplify complex systems and intensify operations is highlighted. The literature also converges on the urgency of developing reliable systems for adverse event management and assessment models for health, safety and environmental management. This paper is therefore both a tool for managers who seek information when implementing 4.0 technologies and for researchers who may be looking for new topics in this area. Future research opportunities in the area are also presented.
... The management of such process deviations is termed abnormal situation management or ASM (Dai et al., 2016;Eljack and Kazi, 2016). Abnormal situation management provides early warning of atypical situations coupled with timely diagnosis of abnormality root causation, and offers decision-making support to process operators to facilitate reasonable actions to restore the process to normalcy (Shu et al., 2016). In other words, ASM identifies deviations from normal operation, which may lead to failure conditions. ...
... Automation of abnormal situation management has not, however, been fully realized. Economic loss of about 20 billion USD (2016) has been estimated to result from abnormal situations in the chemical and petroleum industries in the United States (Shu et al., 2016). ...
... Such data are often nonlinear, highly time-variant, and non-Gaussian in distribution. Descriptions of satisfactory application of fault diagnosis techniques in real industrial processes are rare (Shu et al., 2016). Although process safety in conjunction with abnormal situation management has undergone significant improvements globally over the past couple of decades, this has not translated into a significant reduction in major process accidents. ...
Article
While efforts to use digital solutions in process operations are gaining wider acceptance, there are serious safety concerns that need to be addressed when adopting digitalization. Process operations have evolved from batch operation to continuous operation, and from smaller plants to large-scale plants. Automation and digitalization of processes, especially in process monitoring, instrumentation, and control are becoming the norm. Safety issues have also evolved with these developments, from simple equipment failure to failure of process systems (equipment with electronic systems), monitoring and control systems, data encryption systems, and most recently, software systems. How these evolving process safety issues should be taught in the classroom to educate and train the next generation of chemical engineers is a challenge with an opportunity. If such issues are not taught in academia, this will create a gap between education and practice, which would have a negative impact on the overall safety of process facilities. Therefore, proactively converting this challenge to an educational opportunity and bringing digital process safety issues into the classroom are of paramount importance to help reinforce the concept of making process safety learning a conscious choice. This will hopefully lessen our reliance on learning from accidents. The current paper presents the need to incorporate digital process safety as part of the chemical engineering curriculum to adequately address the process industry’s emphasis on digital solutions in process operations.
... With the advancement of digitalization and automation in recent decades, various industrial processes have generated massive amounts of data, and the computing power of machines has reached an unimaginable level ( Rehman et al., 2019 ). Especially in the modern chemical industry, production processes can be effectively monitored and controlled by distributed control systems (DCS) and advanced process control (APC) ( Shu et al., 2016 ). Technological progress has laid the foundation for the gradual evolution of chemical processes into smart ones on the wave of Industry 4.0 ( Weyer et al., 2015 ). ...
... These tangible benefits made researchers show incremental interest in FDD. Over the past 20 years, the number of publications on chemical process FDD has gradually increased ( Shu et al., 2016 ). An optimal FDD system should be able to detect the presence of faults in a timely and accurate manner, and provide a reliable diagnosis of the root cause of faults, thus providing decision support to the operators to restore the process to normal production status ( Venkatasubramanian et al., 2003 ). ...
Article
Process fault detection and diagnosis (FDD) is an essential tool to ensure safe production in chemical industries. After decades of development, despite the promising performance of some FDD methods on specific tasks, most FDD methods are not smart enough to tackle the complex challenges in real industrial processes, which explains the absence of commercialized FDD tools. Therefore, the implementation of smart FDD becomes an ambitious goal for process safety. In this paper, we provide an overview of the concept and major challenges of smart FDD. Recent FDD methods are comprehensively evaluated with respect to the characteristics of smart FDD. We also present the research done by our group, which we believe is a step forward for smart FDD. A range of future opportunities and new perspectives are further discussed. This review aims to illuminate potential directions for process safety and to contribute to the realization of commercial FDD tools.
... Data-driven fault detection and diagnosis methods represent the next-generation facility management and maintenance techniques, adopting modern AI techniques such as sensor networks [15,16], data analytics [17], big data [18,19], machine learning (ML) [20,21], cybernetic intelligence (CI) [22,23] and the Internet of Things (IoT) [24,25], for different building infrastructures such as heating, ventilation and air-conditioning (HVAC), plumbing, fire safety, electrical and elevator systems. ...
... Regular inspection of room condition and proper housekeeping should be carried out. The room should not be used as storage; remove all non-elevator-related materials from the machine room. Adequate lighting should be provided in the elevator machine room to allow workers to conduct maintenance works safely and efficiently [17]. Specify corrosion-resistant materials and components in the elevator system to minimise damage by the presence of water or excessive moisture. Test waterproofing of the elevator pit before installation of elevator equipment in accordance with BS 5655-6, BS 5655-11, BS EN 8120, SS 550 or equivalent. ...
Article
Full-text available
Data-driven fault detection and diagnosis (FDD) methods, referring to the newer generation of artificial intelligence (AI) empowered classification methods, such as data science analysis, big data, Internet of things (IoT), industry 4.0, etc., have become increasingly important for facility management in smart building design and smart city construction. While data-driven FDD methods nowadays outperform the majority of traditional FDD approaches, such as the physically based models and mathematically based models, in terms of both efficiency and accuracy, the interpretability of those methods does not grow significantly. Instead, according to the literature survey, the interpretability of the data-driven FDD methods becomes the main concern and creates barriers for those methods to be adopted in real-world industrial applications. In this study, we reviewed the existing data-driven FDD approaches for building mechanical & electrical engineering (M&E) services faults and discussed the interpretability of the modern data-driven FDD methods. Two data-driven FDD strategies integrating the expert reasoning of the faults were proposed. Lists of expert rules, knowledge of maintainability, and international/local standards were compiled for various M&E services, including heating, ventilation air-conditioning (HVAC), plumbing, fire safety, electrical and elevator systems, based on surveys of 110 buildings in Singapore. The surveyed results significantly enhance the interpretability of data-driven FDD methods for M&E services, potentially enhance the FDD performance in terms of accuracy, and promote the data-driven FDD approaches to real-world facility management practices.
... The developed system, through the early detection of abnormal situations, helps the operator to reduce material, energy, and production time losses. Shu et al. (2016) briefly reviewed the developments of fault diagnosis in chemical processes over the previous two decades. They also presented a new framework for big chemical corporations in a cloud computing environment to address the challenging issues in ASM. ...
... Meanwhile, a main challenge in process condition monitoring is the lack of historical fault data. By using cloud computing, companies can now store and share their historical process data, which can be utilized by peer companies for fault detection and diagnosis purposes (Shu et al. 2016). (6) Best real-time analysis: In chemical industries, data are collected from multiple sources with different time scales, so a most important challenge is to choose the best real-time analysis for datasets (Piovoso and Kosanovich 1994). ...
Article
Full-text available
Big data is an expression for massive data sets consisting of both structured and unstructured data that are particularly difficult to store, analyze and visualize. Big data analytics has the potential to help companies or organizations improve operations as well as disclose hidden patterns and secret correlations to make faster and intelligent decisions. This article provides useful information on this emerging and promising field for companies, industries, and researchers to gain a richer and deeper insight into advancements. Initially, an overview of big data content, key characteristics, and related topics are presented. The paper also highlights a systematic review of available big data techniques and analytics. The available big data analytics tools and platforms are categorized. Besides, this article discusses recent applications of big data in chemical industries to increase understanding and encourage its implementation in their engineering processes as much as possible. Finally, by emphasizing the adoption of big data analytics in various areas of process engineering, the aim is to provide a practical vision of big data.
... Safe production is a continuing concern within modern industry. With the development of automation and digitization, industrial processes can be efficiently controlled by systems like distributed control systems (DCS) and advanced process control (APC) (Shu et al., 2016). However, despite advances in control systems that have made production more intelligent, real-world processes are often rather complicated and inevitably in a fault state, leading to shutdowns, economic losses, injuries, or even catastrophic accidents in severe cases (Venkatasubramanian et al., 2003). ...
... ASM is a centralized and integrated process that implies instant detection of abnormal conditions, timely diagnosis of the root causes, and decision support to operators for the elimination of the faults (Hu et al., 2015;Dai et al., 2016). It has become a consensus in academia and industry that process monitoring including FDD is one of the most critical issues of ASM (Shu et al., 2016). Therefore, building an efficient, robust, and application-worthy process monitoring framework is of supreme importance for process safety. ...
Article
Industrial processes are becoming increasingly large and complex, thus introducing potential safety risks and requiring an effective approach to maintain safe production. Intelligent process monitoring is critical to prevent losses and avoid casualties in modern industry. As the digitalization of the process industry deepens, data-driven methods offer an exciting avenue to address the demands for monitoring complex systems. Nevertheless, many of these methods still suffer from low accuracy and slow response. Besides, most black-box models based on deep learning can only predict the existence of faults, but cannot provide further interpretable analysis, which greatly confines their usage in decision-critical scenarios. In this paper, we propose a novel orthogonal self-attentive variational autoencoder (OSAVA) method for process monitoring, consisting of two components, orthogonal attention (OA) and variational self-attentive autoencoder (VSAE). Specifically, OA is utilized to extract the correlations between different variables and the temporal dependency among different timesteps; VSAE is trained to detect faults through a reconstruction-based method, which employs self-attention mechanisms to comprehensively consider information from all timesteps and enhance detection performance. By jointly leveraging these two models, our OSAVA method can effectively perform fault detection and identification tasks simultaneously and deliver interpretable results. Finally, through extensive evaluation on the Tennessee Eastman process (TEP), we demonstrate that our proposed OSAVA shows a promising fault detection rate as well as low detection delay and can correctly identify the abnormal variables, compared with representative statistical methods and state-of-the-art deep learning models.
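As a point of reference for the reconstruction-based detection idea that OSAVA builds on, the sketch below trains a plain autoencoder on sliding windows of normal data and flags samples whose reconstruction error exceeds a quantile threshold. The orthogonal attention and variational components of OSAVA are deliberately omitted; the layer sizes, the 99th-percentile limit, and the training loop are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of reconstruction-based fault detection on windowed process data.
# All hyperparameters below are illustrative assumptions.
import numpy as np
import torch
import torch.nn as nn

class WindowAE(nn.Module):
    def __init__(self, n_vars, window, hidden=32):
        super().__init__()
        d = n_vars * window
        self.enc = nn.Sequential(nn.Linear(d, hidden), nn.ReLU())
        self.dec = nn.Linear(hidden, d)

    def forward(self, x):                       # x: (batch, window, n_vars)
        flat = x.flatten(1)
        return self.dec(self.enc(flat)).view_as(x)

def train_and_threshold(model, normal_windows, epochs=50, lr=1e-3, q=0.99):
    """Train on normal data only; threshold = q-quantile of normal reconstruction error."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    x = torch.as_tensor(normal_windows, dtype=torch.float32)
    for _ in range(epochs):
        opt.zero_grad()
        loss = nn.functional.mse_loss(model(x), x)
        loss.backward()
        opt.step()
    with torch.no_grad():
        err = ((model(x) - x) ** 2).mean(dim=(1, 2)).numpy()
    return np.quantile(err, q)

def detect(model, windows, threshold):
    with torch.no_grad():
        x = torch.as_tensor(windows, dtype=torch.float32)
        err = ((model(x) - x) ** 2).mean(dim=(1, 2)).numpy()
    return err > threshold                      # True = flagged as abnormal
```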
... 10 Meanwhile, the AI-based diagnosis tool not only can make decisions and recommend remedial actions quickly, even when given insufficient information, but can also automatically correct results according to the actual situation. [10][11][12][13][14][15][16][17][18] Typically, AI-based intelligent diagnosis trains a mathematical decision algorithm by feeding in huge amounts of equipment data, including data from both normal and abnormal conditions, and failure mechanisms are then identified and classified. ...
... 14,[19][20][21][22] Owing to the fact that the equipment diagnostic algorithm is learned from features existing within abnormal situations, it is crucial that the algorithm be sufficiently robust to distinguish between deficiencies in equipment and patterns caused by normal operating conditions. 23 Many studies have reported diagnostic algorithms based on artificial neural networks (ANN). 13,15,16,21,[24][25][26][27][28] One such method involved placing the sensors at key positions where abnormalities could be directly revealed and setting them at a high sampling rate (12.8 kHz) to collect the vibration signals. After that, a wavelet neural network (WNN) was used to establish a model that diagnosed the causes of failure. ...
Article
Full-text available
Compressors in petrochemical plants are often crucial to process operations, and when a failure occurs, the outcome can be catastrophic. Many studies have attempted to detect failure modes as early as possible in order to plan repairs up front and conceivably reduce maintenance time. A reciprocating compressor was selected as the target of this study, and a few years of historical records of maintenance parameters and maintenance work orders were gathered for analysis. The time history was divided into 13 events, and each event started with normal operation and ended with a repair work order. Time-domain features and wavelet decomposition features of the parameters were extracted, and the patterns stored within each event were identified using an artificial neural network and a support vector machine. Moreover, a set of reasoning algorithms was developed to detect anomalies, and the responsible failure modes were identified. For a specific type of compressor, the vibration signal was found to be related to most of the anomalies and was thus used for evaluation. Results showed a >90% detection rate for failure mode diagnosis based on historical test data.
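A rough sketch of this kind of feature-extraction and classification pipeline is given below: time-domain statistics plus wavelet-decomposition energies of a vibration segment feed a support vector machine. The wavelet family, decomposition level, and SVM settings are assumptions for illustration, not the study's exact configuration.

```python
# Hedged sketch: vibration features (time-domain stats + wavelet sub-band energies)
# classified with an SVM; 'db4' and level=3 are illustrative choices.
import numpy as np
import pywt
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def vibration_features(signal, wavelet="db4", level=3):
    """signal: 1-D numpy array of one vibration segment."""
    time_feats = [signal.mean(), signal.std(), np.abs(signal).max(),
                  np.sqrt(np.mean(signal ** 2))]          # mean, std, peak, RMS
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    energies = np.array([np.sum(c ** 2) for c in coeffs])
    energies = energies / energies.sum()                  # relative sub-band energy
    return np.concatenate([time_feats, energies])

def train_fault_classifier(signals, labels):
    """signals: list of 1-D vibration segments; labels: failure mode per segment."""
    X = np.vstack([vibration_features(s) for s in signals])
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0))
    clf.fit(X, labels)
    return clf
```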
... This has led to the adoption of slow feature analysis (SFA) for modeling process data and effective monitoring and diagnosis (14,15). SFA offers distinct advantages over traditional MSPM methods, enabling distinct descriptions of steady states and temporal behaviors in industrial processes, unlike PCA, ICA, and CVA (16). By designing monitoring statistics tailored to process dynamics anomalies, SFA facilitates the distinction between nominal operating point switches and genuine faults that result in dynamic anomalies. ...
Article
Full-text available
This paper reviews the current state of research in data analytics and machine learning techniques, focusing on their applications in process industrial manufacturing, particularly in control and optimization. Key areas for future research include selection and transfer learning for process monitoring, addressing time-varying characteristics, and enhancing data-driven optimal control with domain-specific knowledge. Additionally, the paper explores reinforcement learning techniques and robust optimization, including distributional robust optimization, for high-level decision-making. Emphasizing the importance of historical knowledge of plants and processes, this paper aims to identify knowledge gaps and pave the way for future research in data-driven strategies for process industries, with a particular emphasis on energy efficiency and optimization.
... For example, Prof. J. Rawlings's group applied the state stability of stochastic inputs to verify the feasibility of nonlinear stochastic model predictive control systems in process monitoring [7]. Although analytical model-based methods are widely used and relatively mature, they require a large amount of prior knowledge and a large amount of work to obtain abnormal data and establish risk monitoring models [8,9]. ② The risk monitoring method based on mathematical statistics refers to extracting important characteristic information from data and constructing statistical indicators to measure data attributes. ...
Article
Full-text available
To ensure stable and safe operations, this paper presents a modeling framework of dynamic risk monitoring for chemical processes. Multi-source process data are first denoised by the Wavelet Transform (WT). The Spearman's rank correlation coefficient (SRCC) of these data is calculated based on an appropriate time step and time window. An optimal correlation threshold is further applied to transform the SRCC matrix into an adjacency matrix. Accordingly, a complex network (CN) model can be established to characterize massive, disordered, and nonlinear process data. Network structure entropy is then introduced to transform the process data into a single time series of relative risk. To illustrate its validity, a diesel hydrofining unit and the Tennessee Eastman Process (TEP) are selected as test cases. Results show that the proposed modeling framework can effectively and reasonably monitor the risks of chemical processes in real time.
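The SRCC-to-adjacency-to-entropy pipeline summarized above can be sketched as follows, using one common degree-based definition of network structure entropy. The correlation threshold, window handling, and normalization are illustrative assumptions rather than the paper's tuned settings.

```python
# Sketch: Spearman correlation matrix -> thresholded adjacency -> degree-based
# network structure entropy as a relative risk index for one time window.
import numpy as np
import networkx as nx
from scipy.stats import spearmanr

def relative_risk(window, threshold=0.7):
    """window: (n_samples, n_vars) denoised process data for one time window."""
    rho, _ = spearmanr(window)                    # SRCC matrix, (n_vars, n_vars)
    rho = np.atleast_2d(rho)
    adj = (np.abs(rho) >= threshold).astype(int)
    np.fill_diagonal(adj, 0)
    g = nx.from_numpy_array(adj)
    degrees = np.array([d for _, d in g.degree()], dtype=float)
    if degrees.sum() == 0:
        return 0.0                                # no correlated pairs in this window
    p = degrees / degrees.sum()
    p = p[p > 0]
    entropy = -np.sum(p * np.log(p))              # degree-based structure entropy
    return entropy / np.log(len(degrees))         # normalized for comparability
```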
... The nature of the operator's task has shifted from emphasizing perceptual-motor skills to involving cognitive activities such as monitoring, diagnosis, prognosis, decision-making, and problem-solving (Pascual et al., 2019). These cognitive activities need to be carried out efficiently and correctly by an operator to (i) understand the working of an automated system in achieving the design goals and (ii) intervene in case any process abnormalities arise; these are often outside the purview of automation systems (Shu et al., 2016;Zhang and Zhao, 2017). Inefficiencies in human cognitive performance can have catastrophic consequences in terms of compromising safety in process industries. ...
... As a result, data-driven process monitoring and fault diagnosis methods have been a growing hot spot in the past decades, such as multivariable statistical process monitoring (MSPM) [6] and neural network-based methods [7], [8], [9], [10]. However, with the increasing complexity of sensor networks, several sensors may release alarms together when a fault occurs and propagates in the system, resulting in an alarm flooding phenomenon [11], [12]. This phenomenon might hinder data-driven fault diagnosis methods from obtaining the true fault cause. ...
Article
Full-text available
Fault tracing technology, including root-cause diagnosis and propagation analysis, has become a growing hot spot in the field of industrial process monitoring. However, current methods are limited by their reliance on restricted alarm sequence data and by the absence of fault propagation analysis. To solve these problems, this article proposes a novel fault tracing method, namely the causal topology-based variable-wise generative model (CTVGM). The CTVGM is first established according to the topological order of the variable causal graph. It contains a series of causal functions that are trained with normal data. Then, fault samples can be restored by the CTVGM to build up a diagnosis index called the recovery ratio (RR), which is used to determine the root causes. Meanwhile, the fault propagation paths are inferred from the recovery routes. In addition, a hierarchical CTVGM-based fault tracing strategy is designed to reduce the computation burden and enhance the modeling efficiency for large-scale complicated processes. The effectiveness of the proposed fault tracing method is verified on a numerical example and the Tennessee Eastman process case. Compared with existing methods, the results show that the proposed method not only achieves more accurate root-cause diagnosis performance but also obtains fault tracing results that are highly consistent with the process mechanisms.
... Mining raw industrial big data is expected to provide a viable path to reveal the interactions between process variables and turn this information into knowledge (Reis and Gins, 2017). However, chemical process data often exhibit high dimensionality, nonlinearity, nonstationarity, presence of noise, and presence of control loops (Shu et al., 2016), which makes causal discovery a challenging task. Therefore, there is a pressing need to design suitable data-driven causal discovery methods to capture causality in large-scale processes (Kühnert and Beyerer, 2014). ...
... Lavasani et al. [154] discussed the latest applications of big data in the chemical industry and stressed the necessity of big data analysis in various fields of process engineering. Shu et al. [155] proposed a new abnormal situation management (ASM) framework to solve the big data problem in the cloud computing environment of a big chemical corporation. Onel et al. ...
Article
Full-text available
Process fault detection and diagnosis (FDD) is a predominant task to ensure product quality and process reliability in modern industrial systems. Traditional FDD techniques are largely based on diagnostic experience. These methods have met significant challenges with the immense expansion of plant scale and the large number of process variables. Recently, deep learning has become one of the newest trends in process control. The upsurge of deep neural networks (DNNs) in learning highly discriminative features from complicated process data has provided practitioners with effective process monitoring tools. This paper presents a review and a full development route of deep learning-based FDD in complex process industries. Firstly, the nature of traditional data projection-based and machine learning-based FDD methods is discussed. Secondly, the characteristics of deep learning and its applications in process FDD are illustrated. Thirdly, typical deep learning techniques, e.g., transfer learning, generative adversarial networks, capsule networks, and graph neural networks, are presented for process FDD. These DNNs can effectively solve the problems of fault detection, fault classification, and fault isolation in processes. Finally, the development route of DNN-based process FDD techniques is highlighted for future work.
... Abnormal situation management (ASM) provides an early warning for abnormal situations, timely diagnoses the causes, and provides decision support for technicians to take measures and restore the process to normal, which has made a great contribution to improving process safety. 1 Proper risk assessment (RA) helps to control the risks before they occur. Fault detection and fault diagnosis (FDD) means detecting whether faults have occurred and, if so, classifying the fault. ...
Article
Full-text available
Deep learning provides new ideas for chemical process fault diagnosis, reducing potential risks and ensuring safe process operation in recent years. To address the problem that existing methods have difficulty extracting the dynamic fault features of a chemical process, a fusion model (CS-IMLSTM) based on a convolutional neural network (CNN), squeeze-and-excitation (SE) attention mechanism, and improved long short-term memory network (IMLSTM) is developed for chemical process fault diagnosis in this paper. First, an extended sliding window is utilized to transform data into augmented dynamic data to enhance the dynamic features. Second, the SE is utilized to optimize the key fault features of augmented dynamic data extracted by CNN. Then, IMLSTM is used to balance fault information and further mine the dynamic features of time series data. Finally, the feasibility of the proposed method is verified in the Tennessee-Eastman process (TEP). The average accuracies of this method in two subdata sets of TEP are 98.29% and 97.74%, respectively. Compared with the traditional CNN-LSTM model, the proposed method improves the average accuracies by 5.18% and 2.10%, respectively. Experimental results confirm that the method developed in this paper is suitable for chemical process fault diagnosis.
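For orientation, the sketch below shows a plain CNN-LSTM baseline of the kind the CS-IMLSTM above is compared against: a 1-D convolutional feature extractor over sliding-window data followed by an LSTM and a classification head. The SE attention block and the improved LSTM are omitted, and all layer sizes are assumptions.

```python
# Minimal CNN-LSTM baseline for fault classification on sliding-window process data.
import torch
import torch.nn as nn

class CNNLSTM(nn.Module):
    def __init__(self, n_vars, n_classes, hidden=64):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(n_vars, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv1d(32, 32, kernel_size=3, padding=1), nn.ReLU())
        self.lstm = nn.LSTM(32, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):                          # x: (batch, window, n_vars)
        z = self.conv(x.transpose(1, 2))           # (batch, 32, window)
        out, _ = self.lstm(z.transpose(1, 2))      # (batch, window, hidden)
        return self.head(out[:, -1])               # logits per fault class
```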
... In other words, a process fault leads to deviations of process variables from the normal operating condition, which, if poorly managed, result in a rare event [6][7][8][9]. Hence, to avoid or reduce the impact of rare events, it is crucial to diagnose the root cause of a process fault in order to take an appropriate troubleshooting decision and bring the process back to the normal operating condition [10,11]. ... alarm data to diagnose the root cause of process faults. ...
Article
In chemical processes, root cause diagnosis of process faults is highly crucial for efficient troubleshooting, since if poorly managed, process faults can lead to high-consequence rare events. For this purpose, Bayesian-based probabilistic models have been widely used because of their capability to capture causality in processes and perform root cause diagnosis. However, due to the acyclic nature of Bayesian networks, the existing probabilistic models do not account for the presence of cyclic loops that are prevalent in chemical processes because of various control loops and the coupling of process variables. Consequently, failing to account for a high number of cyclic loops results in inaccurate root cause diagnosis. To improve the accuracy of root cause diagnosis, a modified Bayesian network (mBN) is proposed in this work that accounts for cyclic loops. Specifically, the mBN first identifies the weakest causal relation of a cyclic loop, and then converts it into a temporal relation. Because of this conversion, the mBN decomposes the cyclic network into an acyclic one over a time horizon, thereby handling cyclic loops. Accounting for cyclic loops provides an improved structure of the causal network that aids in identifying correct causality. Finally, the performance of the proposed methodology is demonstrated through a case study of the Tennessee Eastman process.
... Early data-driven FDD methods are mainly qualitative methods, including expert systems (ES), qualitative trend analysis (QTA) methods, and signed directed graphs (SDG), which have difficulty ensuring diagnostic accuracy and cannot efficiently process large amounts of data. Besides, due to the increasing scale and complexity of modern industrial processes, more and more historical data are available, so quantitative methods based on processing historical data have great advantages in applicability over qualitative methods [7]. Quantitative methods, such as principal component analysis (PCA), independent component analysis (ICA), and Gaussian mixture models (GMM), are used in FDD. ...
Article
Full-text available
Fault detection and diagnosis (FDD) has received considerable attention with the advent of big data. Many data-driven FDD procedures have been proposed, but most of them may not be accurate when data are missing. Therefore, this paper proposes an improved random forest (RF) based on decision paths, named DPRF, utilizing correction coefficients to compensate for the influence of incomplete data. In this DPRF model, intact training samples are first used to grow all the decision trees in the RF. Then, for each test sample that possibly contains missing values, the decision paths and the corresponding node importance scores are obtained, so that for each tree in the RF, a reliability score for the sample can be inferred. Thus, the prediction results of each decision tree for the sample are assigned certain reliability scores. The final prediction result is obtained according to the majority voting law, combining both the prediction results and the corresponding reliability scores. To prove the feasibility and effectiveness of the proposed method, the Tennessee Eastman (TE) process is tested. Compared with other FDD methods, the proposed DPRF model shows better performance on incomplete data.
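A rough sketch of the decision-path-weighted voting idea is shown below: each tree's probability vote on a test sample is down-weighted according to how many features along that tree's decision path were originally missing. The simple "fraction of observed path features" reliability score is an illustrative stand-in for the paper's correction coefficients, and missing entries are assumed to have been imputed before prediction.

```python
# Sketch of decision-path-weighted voting for incomplete test samples.
# The reliability score below is a simplistic illustration, not the DPRF formula.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def path_weighted_predict(rf: RandomForestClassifier, X_imputed, missing_mask):
    """X_imputed: test data with missing entries imputed; missing_mask: boolean
    array of the same shape marking which entries were originally missing."""
    n_samples = X_imputed.shape[0]
    votes = np.zeros((n_samples, len(rf.classes_)))
    for tree in rf.estimators_:
        paths = tree.decision_path(X_imputed)             # (n_samples, n_nodes), sparse
        node_feature = tree.tree_.feature                 # split feature per node, -2 at leaves
        proba = tree.predict_proba(X_imputed)             # columns align with rf.classes_
        for i in range(n_samples):
            nodes = paths.indices[paths.indptr[i]:paths.indptr[i + 1]]
            feats = [node_feature[n] for n in nodes if node_feature[n] >= 0]
            reliability = 1.0 - np.mean(missing_mask[i, feats]) if feats else 1.0
            votes[i] += reliability * proba[i]             # down-weight unreliable trees
    return rf.classes_[votes.argmax(axis=1)]
```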
... (7) Belaud et al. (2014) proposed an open platform for collaborative simulation and scientific big data analysis, and illustrated various situations and challenges of chemical process engineering and natural disaster management. (8) Shu et al. (2016) proposed a new framework based on the big data in a cloud computing environment of a big chemical corporation to solve the challenging problems in abnormal situation management. (9) Yao (2016) proposed the establishment of a big data management platform on the basis of a study on the health and traffic safety of professional logistics transportation drivers. ...
Article
Big data has an important influence on safety management in various fields where its applications are becoming more prevalent. The analysis results of big data have become an important reference influencing safety decision-making. Realizing the promising benefits of big data in safety management has motivated us to write a review on the influence of big data and its applications in safety management. This study also investigates the challenges faced by big data in safety management and provides insights to future directions for research and practice. We first briefly introduce the development history of big data and its influence on safety management. We then review the general theories and technologies of big data in safety management. Finally, we summarize the typical applications of big data in safety management according to different fields. Additional findings from the review process are also presented.
... Particularly, the big data era has paved the way for exploring data-driven methods for improving the performance of alarm systems. 16 The data-driven methods with applications in chemical systems typically use statistical analysis methods such as principal component analysis (PCA), [17][18][19][20][21][22] partial least squares (PLS), 21,23 independent component analysis (ICA), [24][25][26] hidden Markov models, 27 and Fisher discriminant analysis (FDA). Machine learning approaches for fault detection and diagnosis (FDD) with chemical process systems applications are primarily based on artificial neural networks, 28 neuro-fuzzy methods, 29,30 support vector machines (SVM), 31,32 k-nearest neighbors (kNN), 33,34 and Bayesian networks (BN). ...
Article
Full-text available
When a fault occurs in a process, it slowly propagates within the system and affects the measurements triggering a sequence of alarms in the control room. The operators are required to diagnose the cause of alarms and take necessary corrective measures. The idea of representing the alarm sequence as the fault propagation path and using the propagation path to diagnose the fault is explored. A diagnoser based on hidden Markov model is built to identify the cause of the alarm signals. The proposed approach is applied to an industrial case study: Tennessee Eastman process. The results show that the proposed approach is successful in determining the probable cause of alarms generated with high accuracy. The model was able to identify the cause accurately, even when tested with short alarm sub‐sequences. This allows for early identification of faults, providing more time to the operator to restore the system to normal operation.
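A minimal sketch of this alarm-sequence diagnoser idea follows: one discrete hidden Markov model is trained per fault class on historical alarm sequences (integer-encoded alarm tags), and a new sequence is attributed to the fault whose model scores it with the highest log-likelihood. It uses hmmlearn's CategoricalHMM; the number of hidden states and the iteration count are assumptions, and the alarm-tag encoding is assumed consistent between training and test data.

```python
# Sketch: per-fault discrete HMMs over alarm-tag sequences, classified by log-likelihood.
import numpy as np
from hmmlearn import hmm

def train_diagnoser(sequences_by_fault, n_states=3):
    """sequences_by_fault: {fault_name: [1-D integer alarm sequences]};
    alarm tags are assumed encoded as 0..K-1 consistently across all sequences."""
    models = {}
    for fault, seqs in sequences_by_fault.items():
        X = np.concatenate(seqs).reshape(-1, 1)           # stacked observations
        lengths = [len(s) for s in seqs]                   # sequence boundaries
        m = hmm.CategoricalHMM(n_components=n_states, n_iter=100, random_state=0)
        m.fit(X, lengths)
        models[fault] = m
    return models

def diagnose(models, alarm_sequence):
    seq = np.asarray(alarm_sequence).reshape(-1, 1)
    scores = {fault: m.score(seq) for fault, m in models.items()}
    return max(scores, key=scores.get)                     # most probable fault cause
```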
... process are often only labeled in a small proportion and mostly unlabeled. 5 The reasons are as follows. (a) The occurrence frequency of abnormal states is low. ...
Article
Full-text available
High repeatability of similar information but a lack of typical fault features in the monitoring data of distillation processes for continuous production leads to a small proportion of data with labels. Therefore, the requirement for a large number of labeled samples in conventional deep learning models cannot be met, resulting in significant performance degradation in their anomaly identification. In this paper, an intelligent anomaly identification method for small samples is proposed, based on semisupervised deep learning. Specifically, on the basis of a deep denoising autoencoder (DAE), a semisupervised ladder network (SSLN) is constructed to use a large number of unlabeled process data to assist the supervised learning process, thus improving the performance of the anomaly identification model. In order to construct the optimum SSLN model, the influences of parameters such as the number of deep network layers, the proportion of labeled samples, and the noise intensity on identification accuracy are analyzed while making the information flow in the network more efficient. Experimental results of anomaly identification in the depropanization distillation process show that, compared with conventional multilayer perceptron (MLP) and convolutional neural network (CNN)-DAE models, the proposed method can obtain a higher diagnostic accuracy in the case of limited labeled process data.
... FOPAM 2019 and several technical sessions in recent AIChE Annual Meetings, and for a good reason. Most companies continuously collect data from sensors that are stored for a certain time but never actually used, unless there is a need for post-analytics as part of troubleshooting 3. The currently employed classical mathematical optimization models for scheduling 4 are typically based on fixed parameter sets, which are commonly maintained and updated off-line by a few domain experts and represent mainly statistical averages. ...
Article
Data science has become an important research topic across scientific disciplines. In Process Systems Engineering, one attempt to create true value from process data is to use it proactively to improve the quality and accuracy of production planning, as a schedule based on statistical average data is often already outdated when it reaches the plant floor. Thus, due to hierarchical planning structures, it is difficult to quickly adapt a schedule to changing conditions. This challenge has also been investigated in studies on the integration of scheduling and control 1. The project SINGPRO investigated the merging of big data platforms, machine learning and data analytics with process planning and scheduling optimization. The goal was to create online, reactive and anticipative tools for more sustainable and efficient operation. In this article, we discuss selected outcomes of the project and reflect on the topic of combining optimization and data science in a broader scope.
... Big data analytics (BDA) has been increasingly applied in the management of SCs [23], for procurement management (e.g., supplier selection [24], sourcing cost improvement [25], sourcing risk management [26]), product research and development [27], production planning and control [28], quality management [29], maintenance and diagnosis [30], warehousing [31], order picking [32], inventory control [33], logistics/transportation (e.g., intelligent transportation systems [34], logistics planning [35], in-transit inventory management [36]), and demand management (e.g., demand forecasting [37], demand sensing [38], and demand shaping [39]). A key application of BDA in SCM is to provide accurate forecasting, especially demand forecasting, with the aim of reducing the bullwhip effect [14,[40][41][42]]. ...
Article
Full-text available
Big data analytics (BDA) in supply chain management (SCM) is receiving growing attention. This is due to the fact that BDA has a wide range of applications in SCM, including customer behavior analysis, trend analysis, and demand prediction. In this survey, we investigate the predictive BDA applications in supply chain demand forecasting to propose a classification of these applications, identify the gaps, and provide insights for future research. We classify these algorithms and their applications in supply chain management into time-series forecasting, clustering, K-nearest-neighbors, neural networks, regression analysis, support vector machines, and support vector regression. This survey also points to the fact that the literature is particularly lacking on the applications of BDA for demand forecasting in the case of closed-loop supply chains (CLSCs) and accordingly highlights avenues for future research.
Article
Process monitoring is pivotal in process system engineering for abnormal situation management and ensuring process safety. This paper presents a review of Professor Khan's works on process monitoring. It examines (i) the number of publications, (ii) the type of publications, (iii) key sources, (iv) focused areas and their evolvement, and (v) the research impact by Professor Khan in process monitoring. The results suggest that journals are the primary sources he has used to disseminate research results. Over the years, his research focus evolved from detection to root cause diagnosis, fault propagation pathway analysis, and failure prognosis. Professor Khan has immensely impacted his peers, evidenced by his theoretical contributions, a higher number of recognitions by other researchers, and diversified workforce development.
Article
Full-text available
This paper presents a comprehensive review of the historical development, the current state of the art, and prospects of data-driven approaches for industrial process monitoring. The subject covers a vast and diverse range of works, which are compiled and critically evaluated based on the different perspectives they provide. Data-driven modeling techniques are surveyed and categorized into two main groups: multivariate statistics and machine learning. Representative models, namely principal component analysis, partial least squares and artificial neural networks, are detailed in a didactic manner. Topics not typically covered by other reviews, such as process data exploration and treatment, software and benchmarks availability, and real-world industrial implementations, are thoroughly analyzed. Finally, future research perspectives are discussed, covering aspects related to system performance, the significance and usefulness of the approaches, and the development environment. This work aims to be a reference for practitioners and researchers navigating the extensive literature on data-driven industrial process monitoring.
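As a didactic companion to the multivariate-statistics branch surveyed above, the sketch below implements the classic PCA monitoring scheme: fit PCA on normal operating data and flag samples whose Hotelling T² or squared prediction error (SPE/Q) exceeds a control limit. Percentile-based limits are used here for simplicity in place of the usual F- and chi-square-based limits.

```python
# Sketch of PCA-based process monitoring with T^2 and SPE statistics.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

def fit_pca_monitor(X_normal, n_components=0.9, alpha=0.99):
    """Fit scaler + PCA on normal data; return empirical control limits."""
    scaler = StandardScaler().fit(X_normal)
    Xs = scaler.transform(X_normal)
    pca = PCA(n_components=n_components).fit(Xs)       # keep ~90% of variance
    scores = pca.transform(Xs)
    t2 = np.sum(scores ** 2 / pca.explained_variance_, axis=1)
    spe = np.sum((Xs - pca.inverse_transform(scores)) ** 2, axis=1)
    limits = (np.quantile(t2, alpha), np.quantile(spe, alpha))
    return scaler, pca, limits

def monitor(scaler, pca, limits, X_new):
    Xs = scaler.transform(X_new)
    scores = pca.transform(Xs)
    t2 = np.sum(scores ** 2 / pca.explained_variance_, axis=1)
    spe = np.sum((Xs - pca.inverse_transform(scores)) ** 2, axis=1)
    return (t2 > limits[0]) | (spe > limits[1])        # True = abnormal sample
```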
Article
Purpose This study considers the potential of logistics 4.0 for supply chain (SC) optimization in French retail. The authors investigate the implementation of Industry 4.0 technologies to optimize SC performance in the retail sector and SC's role in the digital transformation in supply chain management (SCM). Design/methodology/approach The authors first carry out a comprehensive bibliographic taxonomy to highlight the different existing digital tools. Based on this, the authors posed three research questions (RQs) and hypotheses to examine the contribution of logistics 4.0 in improving the performance of retail logistics. Then, the authors considered a case study of retail in France based on qualitative and quantitative analysis to answer all the RQs and examine the hypotheses. Findings The results showed that digital tools such as Cyber Security Systems (CSS), Big Data Analytics (BDA) and Blockchain (BC) technology are the most effective and appropriate tools to optimize the SC performance in retail. Practical implications This research work showed that the implementation of these tools in retail can offer several benefits such as improved productivity, optimized delivery times, improved inventory management and secure real-time communication, which leads to improved profitability of the SC. Originality/value The study opens a door to develop practical roadmaps for companies that enable smart deliveries based on logistics 4.0.
Article
Full-text available
The study and development of fault detection and diagnosis (FDD) systems are relevant tasks for industrial processes. Another prominent field is applying deep learning (DL) models to solve engineering problems, such as FDD systems’ design. Often, the preliminary tests are conducted using simulated datasets to verify the chosen methodology and avoid unnecessarily disturbing the real process. Even if the data used come from a computer simulation, it must remain as realistic as possible. In several studies, researchers have used the Tennessee Eastman Process (TEP) benchmark for addressing the application of DL models to build effective FDD frameworks. However, most of them use preexisting datasets, and this presents some drawbacks that can negatively impact the DL model’s training stage. In addition, none of them have evaluated how to adjust the existing FDD model when the process control strategy is changed. This paper presents various topologies of convolutional neural networks (CNNs) to model a FDD system for the TEP benchmark using new datasets. For the first time, we investigate the performance of fully convolutional networks (FCNs) in the TEP study case. Additionally, we apply transfer learning (TL) to surpass the model inadequacy when the data distribution changes due to an alteration in the process’ closed-loop system.
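The transfer-learning step mentioned in the abstract can be sketched under simple assumptions: a CNN trained on data from the previous control strategy is reused, its convolutional feature extractor is frozen, and only a new classification head is fine-tuned on the (typically scarce) data from the altered closed loop. The helper below is illustrative; `pretrained_cnn` and `feat_dim` are hypothetical placeholders, not the paper's topology.

```python
# Sketch: freeze a pretrained feature extractor, fine-tune only a new head.
import torch
import torch.nn as nn

def build_finetune_model(pretrained_cnn: nn.Module, feat_dim: int, n_classes: int):
    """pretrained_cnn is assumed to map an input batch to (batch, feat_dim) features."""
    for p in pretrained_cnn.parameters():
        p.requires_grad = False                     # freeze the feature extractor
    head = nn.Linear(feat_dim, n_classes)           # new head for the new regime's faults
    model = nn.Sequential(pretrained_cnn, head)
    optimizer = torch.optim.Adam(head.parameters(), lr=1e-3)   # train head only
    return model, optimizer
```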
Article
As the digitalization of process industry deepens, process fault detection and diagnosis (FDD) is an essential tool to ensure safe production in chemical industries. However, FDD may have a long detection delay for some chemical faults. Process fault prognosis methods could predict the occurrence of faults in advance, which would give operators more time and reduce the impact of faults. Nevertheless, many fault prognosis methods still suffer from fixed or insufficient prediction time ahead, which greatly confines their usage in critical scenarios. In this paper, we propose a novel Transformer-based multi-variable multi-step (TMM) prediction method for chemical process fault prognosis. Specifically, Transformer models are trained to predict the change of process variables at the next step, and iterative forecasting is used to predict multi-step changes of process variables. Finally, extensive evaluation of applications in a continuous stirred tank heater (CSTH) system and the Tennessee Eastman process (TEP) demonstrates that the proposed TMM prediction method shows high prediction accuracy and early fault prognosis, compared with representative statistical methods and other advanced deep learning methods.
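The iterative multi-step forecasting loop described above can be sketched generically: a one-step model predicts the next values of all process variables, the prediction is appended to the input window, and the procedure repeats for the desired horizon. Here `one_step_model` stands in for any trained predictor (for example, a Transformer); its internals are not reproduced.

```python
# Sketch of iterative multi-step forecasting with a generic one-step predictor.
import numpy as np

def iterative_forecast(one_step_model, history, horizon, window):
    """history: (t, n_vars) observed data; returns (horizon, n_vars) predictions.
    one_step_model maps a (window, n_vars) array to the next (n_vars,) vector."""
    buf = list(history[-window:])
    preds = []
    for _ in range(horizon):
        x = np.asarray(buf[-window:])               # most recent window
        next_step = one_step_model(x)               # predict all variables at t+1
        preds.append(next_step)
        buf.append(next_step)                       # feed the prediction back in
    return np.asarray(preds)
```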
Article
Deep learning is a powerful tool for feature representation, and many methods based on convolutional neural networks (CNNs) and recurrent neural networks (RNNs) have been applied on fault diagnoses for chemical processes. However, unlike attention mechanisms, these networks are inefficient when extracting features of long-term dependencies. The transformer method employs a self-attention mechanism and sequence-to-sequence model originally designed for natural language processing (NLP). This approach has attracted significant attention in recent years due to its great success in NLP fields. The fault diagnosis of a chemical process is a task based on multi-variable time series, which are similar to text sequences with a greater focus on long-term dependencies. This paper proposes a modified transformer model called Target Transformer, which includes not only a self-attention mechanism, but also a target-attention mechanism for chemical process fault diagnoses. The Tennessee Eastman (TE) process was used to evaluate our method’s performance.
Article
Process safety is still an issue in modern chemical industries. Accidents in chemical processes are still frequent and cause great losses for chemical industries. In this context, there is a demand for the development of intelligent fault detection and diagnosis (FDD) methods that can help operators manage chemical process faults. Since a large amount of process data has become available to monitoring systems as a result of the huge deployment of computer systems and information technologies in chemical industries, the study of data-based FDD methods has become the focus of this research area. Therefore, this work investigates the performance of a promising Bayesian recurrent neural network-based method in the detection of faults in a real chemical process. The case study is related to the detection of a specific type of fault in a real fluid catalytic cracking process. The method presented satisfactory performance during testing experiments, with good detection accuracy and a very small number of false-negative cases.
Article
Fault Detection and Diagnosis (FDD) is a Process System Engineering (PSE) area of great importance, especially with increased process automation. It is one of the chemical engineering fields considered promising for Artificial Intelligence (AI) application. FDD systems can be useful to supervise Sour Water Treatment Unit (SWTU) behavior, as these are chemical processes that present operational difficulties when disturbances occur. SWTUs remove contaminants from sour water (SW) streams generated through petroleum processing, consisting mainly of small amounts of H2S and NH3. They are considered one of the primary aqueous wastes of refineries and cannot be disposed of due to environmental regulations. However, no previous studies focused on the development of FDD systems for SWTUs exist, and works on their dynamics are scarce. Hence, the present work proposes to study the dynamic simulated behavior of an SWTU and develop an FDD system applying AI techniques with hyperparameter optimization. The simulation was performed in Aspen Plus Dynamics® and ran to create normal operation and six relevant faults, including occurrences in the process (e.g., inundation and fouling) and sensors. FDD was performed through data classification, and results were evaluated mainly by accuracy and confusion matrices. Even after variable reduction, FDD was satisfactory with over 87.50% accuracy in all AI techniques. RF and SVM with linear and Gaussian kernels presented the best results, with over 93% accuracy in training and testing, and had the shortest computing times. The second column's sump level proved to be the most relevant variable for fault identification.
Article
Sulfur corrosion is one of the significant concerns that could cause potential hazards in the petrochemical industry, and traditional periodic inspection techniques are insufficient to support timely and reliable monitoring of iron sulfides oxidation. The emerging data-driven fault detection models and innovative sensing technologies provide new opportunities for process safety monitoring. This article proposes an integrated approach that employs a fiber-optic distributed temperature sensing (FO-DTS) system, electrochemical gas sensors, embedded systems, and neural networks to detect the exotherm of early-stage iron sulfides oxidation in complex scenarios. Specifically, the sulfides oxidation exotherm is simulated using programmed electrical heating devices, and a software simulation is conducted to optimize the heating rods' power selection; the exothermic chemical reaction is carried out with the oxidation of dimethyl sulfoxide (DMSO) by hydrogen peroxide (H2O2) in a simulated stainless-steel reactor. The continuous temperature data and their spatial distribution over the targeted reactor surface are generated by the DTS, and the SO2 concentration is collected as an additional criterion. The edge computing gateway handles the field data collection from different types of protocols and other auxiliaries. Furthermore, the performance of the field sensing system and the anomaly detection neural networks is tested. The result shows that the proposed method is able to distinguish the simulated iron sulfides oxidation exotherm from the chemical reaction exotherm with an acceptable accuracy rate. The details of the system components are also demonstrated as a reference for deploying similar sulfides oxidation monitoring tasks in practice.
Chapter
Full-text available
Alarm management is an effective solution for operation safety and efficiency in many industries. As the modern process industry becomes more complex and digitalized, alarm management becomes a necessity. However, the current status of alarm management applications is unsatisfactory, with too many alarms that convey little valuable information to operators or even disturb them. Conventional alarm management can significantly alleviate alarm overloading but has difficulty in recognizing and presenting true abnormal situations. To reach the ultimate goal of generating one and only one alarm for a given abnormal situation, we should meet and go beyond the standards and guidelines to achieve smart alarm management. For this purpose, advanced alarm management should be developed and applied. This chapter presents an overview of conventional and advanced alarm management techniques with applications, after a brief introduction of the philosophy and concepts of alarm management.
Article
The rapid advances occurring within the “Era of Big Data” and Industry 4.0 concepts are providing opportunities and challenges for improving business outcomes within the process industries especially in terms of process safety management (PSM). To ensure potential affordances are leveraged and potential limitations and risks are identified and addressed, a structured approach for identifying opportunities and challenges for people, plant and procedures could be beneficial. A literature review revealed that for process safety management there is a gap associated with exploring the utility of frameworks that facilitate a structured approach for thinking about data, information, knowledge and wisdom from a people, plant and procedures perspective. This article explores work done to answer the research question – Can the data-information-knowledge-wisdom-action (DIKWA) cycle and people-plant-procedures (P3) model help identify important considerations associated with ‘Big Data’ applications within the process industries especially in terms of process safety management? We apply these frameworks to an industry case study on high temperature hydrogen attack and find that it revealed insights into options for and affordances and limitations associated with adopting ‘Big Data’ technology and tools. The implications of these insights for process safety management are discussed then recommendations are made about future research which includes the need to use this work to inform the development of a useful and usable knowledge processing model for process safety applications.
Article
Modern chemical processes rely on distributed control systems to make the repetitive and routine adjustments needed to maintain steady operation. Operators are still required to “supervise the (system) supervisor” and intervene when variables exceed pre-programmed parameters to avert major incidents. Research in human-computer interaction and advanced process control has often focused on data-driven methods for fault detection as distinct from operator effectiveness. In this paper, we explore the application of a novel data-driven fault-detection technique to enhance operator decision support. During a simulated abnormal event, three users attempted to diagnose the root cause of a process upset using a traditional or standard interface, then with the addition of causal maps, in an A-B-A single-subject design. The causal maps were derived using a hierarchical method that could be applied to a wide range of chemical processes as an online, adaptive augmentation for abnormal situation management. Using a think-aloud technique, the three participants developed high-quality insights into the process without negatively impacting the overall task load. These preliminary findings challenge prevailing wisdom in process control interface design, which often focuses on de-cluttering displays at the cost of information resolution.
Article
There always exists potential safety risk in chemical processes. Abnormalities or faults of the processes can lead to severe accidents with unexpected loss of life and property. Early and accurate fault detection and diagnosis (FDD) is essential to prevent these accidents. Many data-driven FDD models have been developed to identify process faults. However, most of the models are black-box models with poor explainability. In this paper, a process topology convolutional network (PTCN) model is proposed for fault diagnosis of complex chemical processes. Experiments on the benchmark Tennessee Eastman process showed that PTCN improved the fault diagnosis accuracy with simpler network structure and less reliance on the amount of training data and computation resources. In the meantime, the model building process becomes much more rational and the model itself is much more understandable.
Article
The reciprocating compressor is, in general, a critical piece of equipment in a process plant. For certain ultra-high-pressure processes, if the reciprocating compressor fails, it will often seriously impact not just the compressor itself but also the process that surrounds it. To prevent compressor failures, an expert diagnosis system is needed. However, the traditional rule-based expert system is quite inefficient and difficult to create. For an expert prognosis system customized to the needs of a specific process, one needs to refer to the plant maintenance history, which is hard to come by because most maintenance was poorly documented. This research attempts to demonstrate the feasibility of developing an expert prognosis system through the implementation of association rules. Rather than mining maintenance history, records of failure cases were collected from technical journal articles by extracting information on failure symptoms and causes for failed components, mimicking a repair history. In total, 115 failure records were gathered from 41 journal articles. Applying this approach in a practical process plant is straightforward: the failure information table is simply replaced with one obtained by data mining the repair history. The failure information was first tabulated and then put through association analysis to obtain support, confidence, and lift between pairs of parameters. The demonstration program successfully performed 1-to-1, many-to-1, and many-to-many analyses among failed components, failure modes, and operating parameters.
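Support, confidence, and lift for a symptom-to-failure rule can be computed directly from a table of failure records. A minimal sketch follows; the records and item names are hypothetical and only illustrate the arithmetic, not the 115 cases collected in the paper.

```python
def rule_metrics(records, antecedent, consequent):
    """Support, confidence and lift of the rule antecedent -> consequent."""
    n = len(records)
    a = sum(1 for r in records if antecedent <= r)
    c = sum(1 for r in records if consequent <= r)
    both = sum(1 for r in records if (antecedent | consequent) <= r)
    support = both / n
    confidence = both / a if a else 0.0
    lift = confidence / (c / n) if c else 0.0
    return support, confidence, lift

# Hypothetical failure records: each row lists observed symptoms and the failed component
records = [
    {"high_vibration", "valve_failure"},
    {"high_vibration", "valve_failure", "temperature_rise"},
    {"temperature_rise", "piston_ring_failure"},
    {"high_vibration", "piston_ring_failure"},
]
print(rule_metrics(records, {"high_vibration"}, {"valve_failure"}))  # (0.5, 0.667, 1.333)
```

Rules with lift above 1 indicate symptom-component pairs that co-occur more often than chance, which is the kind of association the prognosis table is built from.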
Article
Fault diagnosis plays a vital role in ensuring safe and efficient operation of modern process plants. Despite the encouraging progress in its research, developing a reliable and interpretable diagnostic system remains a challenge. There is a consensus among many researchers that appropriate modelling, representation and use of fundamental process knowledge might be the key to addressing this problem. Over the past four decades, different techniques have been proposed for this purpose. They use process knowledge from different sources, in different forms and at different levels of detail, and are also named model-based methods in some literature. This paper first briefly introduces the problem of fault detection and diagnosis, its research status and challenges. It then gives a review of widely used model- and knowledge-based diagnostic methods, including their general ideas, properties, and important developments. Afterwards, it summarises studies that evaluate their performance in real processes in the process industry, including the process types, scales, considered faults, and performance. Finally, perspectives on challenges and potential opportunities are highlighted for future work.
Preprint
Full-text available
Big data analytics (BDA) in supply chain management (SCM) is receiving growing attention. This is due to the fact that BDA has a wide range of applications in SCM, including customer behavior analysis, trend analysis, and demand prediction. In this survey, we investigate predictive BDA applications in supply chain demand forecasting to propose a classification of these applications, identify the gaps, and provide insights for future research. We classify these algorithms and their applications in supply chain management into time-series forecasting, clustering, K-nearest-neighbors, neural networks, regression analysis, support vector machines, and support vector regression. This survey also points to the fact that the literature is particularly lacking on applications of BDA for demand forecasting in the case of closed-loop supply chains (CLSCs) and accordingly highlights avenues for future research.
Article
Investigations into process accidents have identified that flaws in alarm management systems are a major contributing factor to these accidents. Poor alarm system design can lead to alarm flooding, loss of situation awareness, and poor decision-making, causing unnecessary shutdowns or further escalation of abnormal situations. A review of the research literature suggests that there appear to be few methods available to help analysts evaluate alarm system design and to prioritize and rationalize alarms in a manner that promotes operators’ situation awareness and correct decision-making. This article documents the first part of the research, which aims to develop a means to rationalize defined alarms in operations through a causal modeling approach linked with a graph modeling technique, using graph analytics to provide metrics for evaluating alarm system performance. The article concludes by discussing the implications of the research findings and makes recommendations about further research required to fully develop an alarm system that uses prioritization and rationalization to improve the operator's situation awareness and responses during abnormal operating situations.
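One way to obtain graph metrics from a causal alarm model is with the networkx package; the alarm tags, edges, and choice of centrality measures below are assumptions for illustration, not the model or metrics used in the article.

```python
import networkx as nx

# Hypothetical causal graph: edges point from upstream alarm tags to the alarms they can trigger
G = nx.DiGraph()
G.add_edges_from([
    ("FEED_FLOW_LOW", "COLUMN_LEVEL_LOW"),
    ("COLUMN_LEVEL_LOW", "REBOILER_TEMP_HIGH"),
    ("COLUMN_LEVEL_LOW", "PUMP_CAVITATION"),
    ("REBOILER_TEMP_HIGH", "PRESSURE_HIGH"),
])

# Metrics that could support prioritisation and rationalisation: alarms that propagate widely
# (high out-degree or betweenness) are candidates for higher priority or consolidation.
print(nx.out_degree_centrality(G))
print(nx.betweenness_centrality(G))
```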
Article
Full-text available
The intelligent alarm management system (IAMS) was built for suppressing nuisance alarms as well as providing valuable advisory information to help the panel operator focus quickly on important alarm information and take correct and quick action. A project to develop an IAMS to be used in a refinery in Singapore is presented. At the start of the project, the refinery recorded, on average, one alarm every 50 sec during normal operation and one alarm every 10 sec during plant upsets. After installing the IAMS, the average number of process alarms was ∼ 1000/day in normal conditions (about one alarm every 3 min per station, which should be manageable according to an HSE survey). The alarm reduction was ∼ 50%. When the aromatic plant had an emergency shutdown due to a power loss, the IAMS was running in the background and contributed significantly to reducing alarm numbers for that day. With the IAMS, the total number of process alarms was 5629 (5333 alarms during the shutdown from 14:04 to 24:00). Without the IAMS, the total number of process alarms would have been roughly 9750 (an increase of about 73%).
Conference Paper
Full-text available
The vision of the 4th industrial revolution describes the realization of the Internet of Things within the context of the factory to achieve significantly higher flexibility and adaptability of production systems. Driven by politics and research, most automation technology providers in Germany have meanwhile recognized the potential of Industry 4.0 and provide first solutions. However, the solutions presented so far represent vendor-specific or isolated production systems. In order to make Industry 4.0 a success, these proprietary approaches must be replaced by open and standardized solutions. For this reason, the SmartFactoryKL has realized the very first multi-vendor and highly modular production system as a sample reference for Industry 4.0. This contribution gives an overview of the current status of the SmartFactoryKL initiative to build a highly modular, multi-vendor production line based on common concepts and standardization activities. The findings and experiences of this multi-vendor project are documented as an outline for further research on highly modular production lines.
Article
Full-text available
The hazard and operability, or HAZOP, study is a prime method for the identification of hazards on process plants. This is the third in a series of papers which describes progress in the emulation of hazard identification in the style of HAZOP. The work reported is embodied in a computer aid for hazard identification, or HAZOP emulator, HAZID. The HAZID code is one of a suite of codes developed as part of the STOPHAZ project. The present paper describes the fluid model system and the evaluation of consequences. Companion papers describe: an overview of HAZID, with an account of HAZOP and HAZOP emulation, and of the issues underlying it; the unit model system; the evaluation and improvement of HAZID using case studies and other methods; some development topics. Conclusions from the work are given in the final paper.
Article
Full-text available
Recent advances in the manufacturing industry have paved the way for a systematic deployment of Cyber-Physical Systems (CPS), within which information from all related perspectives is closely monitored and synchronized between the physical factory floor and the cyber computational space. Moreover, by utilizing advanced information analytics, networked machines will be able to perform more efficiently, collaboratively and resiliently. This trend is transforming the manufacturing industry towards its next generation, namely Industry 4.0. At this early development phase, there is an urgent need for a clear definition of CPS. In this paper, a unified 5-level architecture is proposed as a guideline for the implementation of CPS.
Article
Full-text available
Keywords: batch process; online monitoring; PCA; chromatography; statistical analysis
Article
Chemical process accidents have tremendous impacts on the environment, as well as the sustainability of the chemical industry. Fault detection and diagnosis (FDD) are important to ensure safety and stability of chemical processes. However, the scarcity of fault samples has limited the wide application of FDD methods in the industry. In this work, we present an artificial immune system (AIS)-based FDD approach for diagnosing faults in the chemical processes without historical fault samples. This approach mimics the vaccine transplant in the medicine discipline. Historical fault samples collected from other chemical processes of the same type are used to generate vaccines to help construct fault antibody libraries for the diagnosis objective process. Case studies on the Pensim process and laboratory-scale distillation columns illustrate the effectiveness of our approach.
Article
The progressive concentration of production into large single-train units and the increasing need to operate closer to risk situations require refined methods for eliminating problems at the design stage. One method, called an "operability study", is based upon the supposition that most problems are missed because the system is complex rather than because of a lack of knowledge on the part of the design team. It can be used to examine preliminary process design flowsheets at the start of a project, or detailed piping and instrument diagrams at the final design phase. The other method, "hazard analysis", provides a full quantitative examination after a serious hazard has been identified. Examples of each of these methods are presented and discussed in detail.
Article
In this part of the paper, we review qualitative model representations and search strategies used in fault diagnostic systems. Qualitative models are usually developed based on some fundamental understanding of the physics and chemistry of the process. Various forms of qualitative models such as causal models and abstraction hierarchies are discussed. The relative advantages and disadvantages of these representations are highlighted. In terms of search strategies, we broadly classify them as topographic and symptomatic search techniques. Topographic searches perform malfunction analysis using a template of normal operation, whereas, symptomatic searches look for symptoms to direct the search to the fault location. Various forms of topographic and symptomatic search strategies are discussed.
Article
Causality inference and root cause analysis are important for fault diagnosis in the chemical industry. Due to the increasing scale and complexity of chemical processes, data-driven methods become indispensable in causality inference. This paper proposes an approach based on the concept of transfer entropy which was presented by Schreiber in 2000 to generate a causal map. To get a better performance in estimating the time delay of causal relations, a modified form of the transfer entropy is presented in this paper. Case studies on two simulated chemical processes, including the benchmark Tennessee Eastman process are performed to illustrate the effectiveness of this approach.
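Transfer entropy between two measured variables can be estimated from binned historical data. The sketch below is a plain histogram estimator with a single-sample lag; the modified form for estimating time delays proposed in the paper is not reproduced here.

```python
import numpy as np

def transfer_entropy(x, y, bins=8, lag=1):
    """Histogram-based estimate of transfer entropy T(x -> y) in bits."""
    xd = np.digitize(x, np.histogram_bin_edges(x, bins))   # symbolize both series
    yd = np.digitize(y, np.histogram_bin_edges(y, bins))
    y_future, y_past, x_past = yd[lag:], yd[:-lag], xd[:-lag]

    def entropy(*series):
        # Joint Shannon entropy of the given symbol sequences
        symbols = np.stack(series, axis=1)
        _, counts = np.unique(symbols, axis=0, return_counts=True)
        p = counts / counts.sum()
        return -np.sum(p * np.log2(p))

    # T(x -> y) = H(Yf, Yp) + H(Yp, Xp) - H(Yf, Yp, Xp) - H(Yp)
    return (entropy(y_future, y_past) + entropy(y_past, x_past)
            - entropy(y_future, y_past, x_past) - entropy(y_past))

rng = np.random.default_rng(0)
x = rng.normal(size=5000)
y = np.roll(x, 1) + 0.5 * rng.normal(size=5000)   # y is driven by x with a one-sample delay
print(transfer_entropy(x, y), transfer_entropy(y, x))  # the first value should be clearly larger
```

Comparing the two directions gives the causal ordering; repeating this over all variable pairs yields the causal map described in the abstract.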
Article
Special pseudo-components (SPCs) are proposed to be the compositional entities for characterizing the complicated mixtures of stock and product oils involved in fluid catalytic cracking risers. SPCs have invariant and definite physicochemical properties and are defined in pairs of light and heavy oil cuts of narrow boiling range. A narrow cut of true boiling point distillation of a stock or product oil is expressed with a pair of SPCs, which constitutes the basis for the characterization procedure developed in this paper. A steady state model for a prototype riser with a side feed stream is formulated where material and heat balance is strictly observed, the hydraulic behavior is considered, and the kinetic scheme of Gupta et al. (2007) is adopted. Results of tests with production data from three commercial risers show that the kinetic scheme of Gupta et al. (2007) is predictive as coupled with the suggested characterization procedure in the sense that for a given riser and given catalyst, the kinetic parameters are independent of stock oils.
Conference Paper
Alarms are used in industrial plants to notify operators about any abnormality or fault in the process. In practice, however, a majority of alarms are false or nuisance alarms and only distract the operator from normal operation of the process. Filtering of process data, adding alarm delay and using alarm deadband are simple techniques that if utilized properly can reduce the false and nuisance alarm rate significantly. In this paper we investigate the effect of these three techniques on accuracy of the alarm system and detection delay. We also propose a framework for designing optimal filter, time delay and deadband to reduce false and missed alarm rates.
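The three techniques can be prototyped in a few lines: a moving-average filter to suppress noise, a deadband so an active alarm clears only below the trip point minus the deadband, and an on-delay so an alarm is raised only after several consecutive limit violations. The thresholds and sample counts below are arbitrary illustrations, not recommended settings.

```python
import numpy as np

def moving_average(x, window=5):
    """Simple FIR filter to suppress measurement noise before alarming."""
    return np.convolve(x, np.ones(window) / window, mode="same")

def alarm_series(x, trip, deadband, n_on):
    """Raise after n_on consecutive samples above trip; clear only below trip - deadband."""
    active, run, out = False, 0, []
    for v in x:
        if not active:
            run = run + 1 if v > trip else 0
            active = run >= n_on
        elif v < trip - deadband:
            active, run = False, 0
        out.append(active)
    return np.array(out)

rng = np.random.default_rng(0)
raw = rng.normal(79.5, 1.5, 500)                       # noisy measurement hovering near the trip point
for sig in (raw, moving_average(raw)):
    a = alarm_series(sig, trip=80.0, deadband=1.0, n_on=3).astype(int)
    print(int(np.maximum(np.diff(a, prepend=0), 0).sum()))  # alarm activations; filtering typically lowers this
```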
Conference Paper
Alarms are essential in every process, system and industrial complex. They are configured to notify the operators about any abnormal situation in the process. In practice, operators receive far more false and nuisance alarms than valid and useful alarms. In this paper, an overview on alarm analysis and design is given. Some of the reasons for false and nuisance alarms are discussed and a few solutions to reduce them are studied. False alarm rate, missed alarm rate and detection delay trade-offs in alarm design are also discussed.
Article
In this paper a new approach for modeling and monitoring of the multivariate processes in presence of faulty and missing observations is introduced. It is assumed that operating modes of the process can transit to each other following a Markov chain model. Transition probabilities of the Markov chain are time varying as a function of the scheduling variable. Therefore, the transition probabilities will be able to vary adaptively according to different operating modes. In order to handle the problem of missing observations and unknown operating regimes, the expectation maximization (EM) algorithm is used to estimate the parameters. The proposed method is tested on two simulations and one industrial case studies. The industrial case study is the abnormal operating condition diagnosis in the primary separation vessel of oil-sand processes. In comparison to the conventional methods, the proposed method shows superior performance in detection of different operating conditions of the process. © 2014 American Institute of Chemical Engineers AIChE J, 2014
Article
Data-based process monitoring has become a key technology in process industries for safety, quality, and operation efficiency enhancement. This paper provides a timely update review on this topic. First, the natures of different industrial processes are revealed with their data characteristics analyzed. Second, detailed terminologies of the data-based process monitoring method are illustrated. Third, based on each of the main data characteristics that exhibits in the process, a corresponding problem is defined and illustrated, with review conducted with detailed discussions on connection and comparison of different monitoring methods. Finally, the relevant research perspectives and several promising issues are highlighted for future work.
Article
Online fault diagnosis is one of the most important methods to ensure stability and safety in many chemical processes. In this work, a lab-scale distillation process is designed and built for fault diagnosis study, and the online fault diagnosis system (OFDS) is developed with a distributed control system (DCS) system and a real-time database. Artificial neural networks (ANNs) are used for startup state judgment and for fault detection in the steady state, while the dynamic artificial immune system (DAIS) is used for fault detection in the startup phase and for fault identification in both the startup phase and the steady state. The results of case studies clearly illustrate that the developed system is efficient in online fault diagnosis of distillation processes during the full operating cycle, especially when the number of historical fault samples is limited. The self-learning ability of the methods ensures that the system can remember and diagnose new faults, and the friendly interface of OFDS can show the current condition of the process to operators and get feedback from the operators for online learning.
Article
A novel networked process monitoring, fault propagation identification, and root cause diagnosis approach is developed in this study. First, process network structure is determined from prior process knowledge and analysis. The network model parameters including the conditional probability density functions of different nodes are then estimated from process operating data to characterize the causal relationships among the monitored variables. Subsequently, the Bayesian inference‐based abnormality likelihood index is proposed to detect abnormal events in chemical processes. After the process fault is detected, the novel dynamic Bayesian probability and contribution indices are further developed from the transitional probabilities of monitored variables to identify the major faulty effect variables with significant upsets. With the dynamic Bayesian contribution index, the statistical inference rules are, thus, designed to search for the fault propagation pathways from the downstream backwards to the upstream process. In this way, the ending nodes in the identified propagation pathways can be captured as the root cause variables of process faults. Meanwhile, the identified fault propagation sequence provides an in‐depth understanding as to the interactive effects of faults throughout the processes. The proposed approach is demonstrated using the illustrative continuous stirred tank reactor system and the Tennessee Eastman chemical process with the fault propagation identification results compared against those of the transfer entropy‐based monitoring method. The results show that the novel networked process monitoring and diagnosis approach can accurately detect abnormal events, identify the fault propagation pathways, and diagnose the root cause variables. © 2013 American Institute of Chemical Engineers AIChE J, 59: 2348–2365, 2013
Article
A self-organizing map (SOM) based methodology is proposed for fault detection and diagnosis of processes with nonlinear and non-Gaussian features. The SOM is trained to represent the characteristics of a normal operation as a cluster in a two-dimensional space. The dynamic behavior of the process system is then mapped as a two-dimensional trajectory on the trained SOM. A dissimilarity index based on the deviation of the trajectory from the center of the cluster is derived to classify the operating condition of the process system. Furthermore, the coordinate of each best matching neuron on the trajectory is used to compute the dynamic loading of each process variable. For fault diagnosis, the contribution plot of the process variables is generated by quantifying the divergences of the dynamic loadings. The proposed technique is first tested using a simple non-Gaussian model and is then applied to monitor the simulated Tennessee Eastman chemical process. The results from both cases have demonstrated the superiority of proposed technique to the conventional principal component analysis (PCA) technique.
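A dissimilarity index of this kind can be prototyped with the third-party minisom package (an assumption; any SOM implementation would do): train the map on scaled normal-operation data and score new samples by their distance to the best-matching unit.

```python
import numpy as np
from minisom import MiniSom  # assumed third-party package providing a basic SOM

# Hypothetical normal-operation data (samples x variables), already scaled
rng = np.random.default_rng(1)
X_normal = rng.normal(size=(1000, 8))

som = MiniSom(12, 12, X_normal.shape[1], sigma=1.5, learning_rate=0.5, random_seed=1)
som.train_random(X_normal, 5000)

def dissimilarity(sample):
    """Distance of a sample to its best-matching unit; large values flag abnormal operation."""
    w = som.get_weights()[som.winner(sample)]
    return float(np.linalg.norm(sample - w))

threshold = np.percentile([dissimilarity(x) for x in X_normal], 99)
print(dissimilarity(rng.normal(size=8) + 4.0) > threshold)  # a shifted sample should alarm
```

The contribution-plot step of the approach would additionally compare each variable's deviation from the best-matching unit, which is omitted here.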
Article
A combined data-driven and observer-design methodology for fault detection and isolation (FDI) in hybrid process systems with switching operating modes is proposed in this work. The main contribution is to construct a unified framework for FDI by integrating Gaussian mixture models (GMM), subspace model identification (SMI), and results from unknown input observer (UIO) theory. Initially, a GMM is built to identify and describe the multimodality of hybrid systems by using the recorded input/output process data. A state-space model is then obtained for each specific operating mode based on SMI if the system matrices are unknown. An UIO is designed to estimate the system states robustly, based on which the fault detection is laid out through a multivariate analysis of the residuals. Finally, by designing a set of unknown input matrices for specific fault scenarios, fault isolation is carried out through the disturbance-decoupling principle from the UIO theory. A significant benefit of the developed framework is to overcome some of the limitations associated with individual model-based and data-based approaches in dealing with the problem of FDI in hybrid systems. Finally, the validity and effectiveness of the proposed monitoring framework are demonstrated using a numerical example, a simulated continuous stirred tank heater process, and the Tennessee Eastman benchmark process. © 2014 American Institute of Chemical Engineers AIChE J, 2014
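The first stage of such a framework, describing multimodality with a Gaussian mixture, can be illustrated with scikit-learn; the two simulated modes below are placeholders, and the subspace identification and unknown-input-observer stages are not shown.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Hypothetical data from a process that switches between two operating modes
rng = np.random.default_rng(2)
mode_a = rng.normal(loc=0.0, scale=1.0, size=(500, 4))
mode_b = rng.normal(loc=5.0, scale=1.5, size=(500, 4))
X = np.vstack([mode_a, mode_b])

gmm = GaussianMixture(n_components=2, covariance_type="full", random_state=0).fit(X)
modes = gmm.predict(X)              # which operating mode each sample belongs to
log_density = gmm.score_samples(X)  # very low density can indicate a sample outside all known modes
print(np.bincount(modes), log_density[:3])
```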
Article
Alarm systems in chemical plants alert process operators to deviations in process variables beyond predetermined limits. Despite more than 30 years of research in developing various methods and tools for better alarm management, the human aspect has received relatively less attention. The real benefit of such systems can only be identified through human factors experiments that evaluate how the operators interact with these decision support systems. In this paper, we report on a study that quantifies the benefits of a decision support scheme called Early Warning, which predicts the time of occurrence of critical alarms before they are actually triggered. Results indicate that Early Warning is helpful in reaching a diagnosis more quickly; however it does not improve the accuracy of correctly diagnosing the root cause. Implications of these findings for human factors in process control and monitoring are discussed.
Article
Chattering and repeating alarms, which repeatedly make transitions between alarm and non-alarm states without operators’ response, are the most common form of nuisance alarms encountered in industrial plants. The paper formulates two novel rules to detect chattering alarms caused by random noise and repeating alarms by regular patterns such as oscillation, and proposes an online method to effectively remove chattering and repeating alarms via the m-sample delay timer. Industrial examples are provided to support the formulated rules and to illustrate the effectiveness of the proposed method.
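A rough sketch of the idea, assuming a binary alarm sequence: count alarm activations in a rolling window to flag chattering tags, then suppress them with an m-sample delay timer that confirms an alarm only after m consecutive active samples. The formal rules in the paper are more detailed than this.

```python
import numpy as np

def chattering_index(alarm, window):
    """Number of alarm activations (0 -> 1 transitions) in a rolling window."""
    a = np.asarray(alarm, dtype=int)
    activations = np.maximum(np.diff(a, prepend=0), 0)
    return np.convolve(activations, np.ones(window, dtype=int), mode="same")

def m_sample_delay(alarm, m):
    """Keep an alarm only once it has stayed active for m consecutive samples."""
    a = np.asarray(alarm, dtype=int)
    out, run = np.zeros_like(a), 0
    for i, v in enumerate(a):
        run = run + 1 if v else 0
        out[i] = 1 if run >= m else 0
    return out

alarm = [0, 1, 0, 1, 0, 1, 1, 1, 0, 0, 1, 0]   # a chattering-looking tag
print(chattering_index(alarm, window=6))
print(m_sample_delay(alarm, m=3))               # only the sustained activation survives
```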
Article
Despite efforts to improve alarm systems, alarm flooding remains a significant problem in the process industries. Alarm summary displays for managing alarm floods do not fully support operator needs when responding to plant upsets. This Abnormal Situation Management Consortium (asmconsortium.org) funded study tested two alarm summary display designs in a simulated process control environment using twenty-four certified operators. The first display represented the traditional list-based alarm summary display typically used in control rooms. The second display was a new alarm tracker summary display, which showed alarms in a time series represented by icons and a short alarm description. Results of the simulated evaluation showed that when operators used a formal alarm response strategy that focused the new alarm tracker summary display by equipment area, they responded to more process events overall and had fewer false responses compared to when operators used the traditional list-based alarm summary. Relevance to industry: New alarm summary displays can combine the benefits of list-based displays with time series presentation of alarm information. Process operators can be trained on formal alarm response strategies and should be given ample time to familiarize themselves with new displays as part of an effective deployment strategy.
Article
Fault detection and diagnosis is an important problem in process engineering. It is the central component of abnormal event management (AEM) which has attracted a lot of attention recently. AEM deals with the timely detection, diagnosis and correction of abnormal conditions of faults in a process. Early detection and diagnosis of process faults while the plant is still operating in a controllable region can help avoid abnormal event progression and reduce productivity loss. Since the petrochemical industries lose an estimated 20 billion dollars every year, they have rated AEM as their number one problem that needs to be solved. Hence, there is considerable interest in this field now from industrial practitioners as well as academic researchers, as opposed to a decade or so ago. There is an abundance of literature on process fault diagnosis ranging from analytical methods to artificial intelligence and statistical approaches. From a modelling perspective, there are methods that require accurate process models, semi-quantitative models, or qualitative models. At the other end of the spectrum, there are methods that do not assume any form of model information and rely only on historic process data. In addition, given the process knowledge, there are different search techniques that can be applied to perform diagnosis. Such a collection of bewildering array of methodologies and alternatives often poses a difficult challenge to any aspirant who is not a specialist in these techniques. Some of these ideas seem so far apart from one another that a non-expert researcher or practitioner is often left wondering about the suitability of a method for his or her diagnostic situation. While there have been some excellent reviews in this field in the past, they often focused on a particular branch, such as analytical models, of this broad discipline. The basic aim of this three part series of papers is to provide a systematic and comparative study of various diagnostic methods from different perspectives. We broadly classify fault diagnosis methods into three general categories and review them in three parts. They are quantitative model-based methods, qualitative model-based methods, and process history based methods. In the first part of the series, the problem of fault diagnosis is introduced and approaches based on quantitative models are reviewed. In the remaining two parts, methods based on qualitative models and process history data are reviewed. Furthermore, these disparate methods will be compared and evaluated based on a common set of criteria introduced in the first part of the series. We conclude the series with a discussion on the relationship of fault diagnosis to other process operations and on emerging trends such as hybrid blackboard-based frameworks for fault diagnosis.
Article
Batch process monitoring is a challenging task, because conventional methods are not well suited to handle the inherent multiphase operation. In this study, a novel multiway independent component analysis (MICA) mixture model and mutual information based fault detection and diagnosis approach is proposed. The multiple operating phases in batch processes are characterized by non-Gaussian independent component mixture models. Then, the posterior probability of the monitored sample is maximized to identify the operating phase that the sample belongs to, and, thus, the localized MICA model is developed for process fault detection. Moreover, the detected faulty samples are projected onto the residual subspace, and the mutual information based non-Gaussian contribution index is established to evaluate the statistical dependency between the projection and the measurement along each process variable. Such contribution index is used to diagnose the major faulty variables responsible for process abnormalities. The effectiveness of the proposed approach is demonstrated using the fed-batch penicillin fermentation process, and the results are compared to those of the multiway principal component analysis mixture model and regular MICA method. The case study demonstrates that the proposed approach is able to detect the abnormal events over different phases as well as diagnose the faulty variables with high accuracy. © 2013 American Institute of Chemical Engineers AIChE J, 59: 2761–2779, 2013
Article
Even after several years of trying, many plants still struggle with controlling alarm floods. Static rationalization can reduce your average number of alarms but without controlling the alarm floods, there is no help for the operator when he needs it the most. This session will cover the justification for alarm management from the safety and environmental as well as economic perspective. © 2012 American Institute of Chemical Engineers Process Saf Prog 32: 72–77, 2013
Article
The multiway kernel partial least squares (MKPLS) method has recently been developed for monitoring the operational performance of nonlinear batch or semi-batch processes. It has a strong capability to handle batch trajectories and nonlinear process dynamics, which cannot be effectively dealt with by the traditional multiway partial least squares (MPLS) technique. However, the MKPLS method may not be effective in capturing significant non-Gaussian features of batch processes because only second-order statistics, instead of higher-order statistics, are taken into account in the underlying model. On the other hand, multiway kernel independent component analysis (MKICA) has been proposed for nonlinear batch process monitoring and fault detection. Different from MKPLS, MKICA can extract not only nonlinear but also non-Gaussian features by maximizing the higher-order statistic of negentropy instead of the second-order statistic of covariance within the high-dimensional kernel space. Nevertheless, MKICA-based process monitoring approaches may not be well suited to many batch processes because only process measurement variables are utilized while quality variables are not considered in the multivariate models. In this paper, a novel multiway kernel based quality relevant non-Gaussian latent subspace projection (MKQNGLSP) approach is proposed in order to monitor the operational performance of batch processes with nonlinear and non-Gaussian dynamics by combining measurement and quality variables. First, both process measurement and quality variables are projected onto high-dimensional nonlinear kernel feature spaces, respectively. Then, the multidimensional latent directions within the kernel feature subspaces corresponding to measurement and quality variables are concurrently searched for so that the maximized mutual information between the measurement and quality spaces is obtained. The I2 and SPE monitoring indices within the extracted latent subspaces are further defined to capture batch process faults resulting in abnormal product quality. The proposed MKQNGLSP method is applied to a fed-batch penicillin fermentation process, and the operational performance monitoring results demonstrate the superiority of the developed method as opposed to the MKPLS based process monitoring approach.
Article
In 2005 an explosion rocked the BP Texas City refinery, killing 15 people and injuring 180. The company incurred direct and indirect financial losses on the order of billions of dollars for victims’ compensation as well as significant property damage and loss of production. The internal BP accident investigation and the Chemical Safety Board investigation identified a number of factors that contributed to the accident. In this work, we first examine the accident pathogens or lurking adverse conditions at the refinery prior to the accident. We then analyze the sequence of events that led to the explosion, and we highlight some of the provisions for the implementation of defense-in-depth and their failures. Next we identify a fundamental failure mechanism in this accident, namely the absence of observability or ability to diagnose hazardous states in the operation of the refinery, in particular within the raffinate splitter tower and the blowdown drum of the isomerization unit. We propose a general safety–diagnosability principle for supporting accident prevention, which requires that all safety-degrading events or states that defense-in-depth is meant to protect against be diagnosable, and that breaches of safety barriers be unambiguously monitored and reported. The safety–diagnosability principle supports the development of a “living” or online quantitative risk assessment, which in turn can help re-order risk priorities in real time based on emerging hazards, and re-allocate defensive resources. We argue that the safety–diagnosability principle is an essential ingredient for improving operators’ situation awareness. Violation of the safety–diagnosability principle translates into a shrinking of the time window available for operators to understand an unfolding hazardous situation and intervene to abate it. Compliance with this new safety principle provides one way to improve operators’ sensemaking and situation awareness and decrease the conditional probability that an accident will occur following an adverse initiating event. We suggest that defense-in-depth be augmented with this principle, without which it can degenerate into an ineffective defense-blind safety strategy.
Article
For batch processes, if sufficient fault batches are available, fault characteristics can be well understood and extracted, providing important information for fault diagnosis. However, sometimes, it is difficult and may be impractical to get sufficient batches for every fault case. Thus, how to derive reliable fault information based on limited batches has been an important question for fault diagnosis which, however, has not been well addressed yet. Starting from limited fault batches, this article proposes a fault diagnosis strategy based on reconstruction technique for multiphase batch processes. Two important modeling procedures are implemented by making full use of limited fault batches, concurrent phase partition and analysis of relative changes. First, for each fault case, a generalized time-slice is constructed by combining several consecutive time-slices within a short time region to explore local process correlations. The time-varying characteristics of normal and fault statuses are then jointly analyzed so that multiple sequential phases are identified simultaneously for all fault cases and normal case. Then, in each phase, monitoring models are developed from normal case with sufficient batches and each fault case is related with normal case for relative analysis to explore the relative changes (i.e., the fault effects). Comprehensive subspace decomposition is implemented where alarm-responsible fault deviations are extracted and used to develop fault reconstruction models which can more efficiently recover fault-free part and identify fault cause. Starting from limited batches, the proposed algorithm can efficiently distinguish different fault cases and offer reliable fault diagnosis performance. It is illustrated with a typical multiphase batch process, including one normal case and three fault cases with limited batches.
Article
In a typical large-scale chemical process, hundreds of variables are measured. Since statistical process monitoring techniques typically involve dimensionality reduction, all measured variables are often provided as input without weeding out variables. Here, we demonstrate that incorporating measured variables that do not provide any additional information about faults degrades monitoring performance. We propose a stochastic optimization-based method to identify an optimal subset of measured variables for process monitoring. The benefits of the reduced monitoring model in terms of improved false alarm rate, missed detection rate, and detection delay is demonstrated through PCA based monitoring of the benchmark Tennessee Eastman Challenge problem.
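Monitoring with a reduced variable set can be prototyped as below: a PCA model is fit on the selected columns of normal operating data, and Hotelling's T2 and SPE statistics are computed for test samples. The stochastic search over subsets described in the abstract is not shown; `selected` is simply whatever subset such an optimizer would return.

```python
import numpy as np
from sklearn.decomposition import PCA

def pca_monitor(X_train, X_test, selected, n_components):
    """T2 and SPE statistics for PCA monitoring on a chosen subset of measured variables."""
    Xtr, Xte = X_train[:, selected], X_test[:, selected]
    mu, sd = Xtr.mean(0), Xtr.std(0)
    Xtr, Xte = (Xtr - mu) / sd, (Xte - mu) / sd

    pca = PCA(n_components=n_components).fit(Xtr)
    T = pca.transform(Xte)
    t2 = np.sum(T**2 / pca.explained_variance_, axis=1)   # Hotelling's T2
    residual = Xte - pca.inverse_transform(T)
    spe = np.sum(residual**2, axis=1)                     # squared prediction error
    return t2, spe

rng = np.random.default_rng(0)
X_train, X_test = rng.normal(size=(500, 12)), rng.normal(size=(100, 12))
t2, spe = pca_monitor(X_train, X_test, selected=[0, 2, 3, 7, 9], n_components=3)
```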
Article
Alarms are important for safe and reliable operation in process industries. Periodic alarm assessment is a crucial step in alarm management lifecycle that provides valuable feedback for fine tuning the alarm system. In this perspective tutorial, alarm data is represented using binary sequences and subsequently, two novel alarm data visualization tools are presented: (1) The High Density Alarm Plot (HDAP) charts top alarms over a given time period and (2) Alarm Similarity Color Map (ASCM) highlights related and redundant alarms in a convenient manner. The proposed graphical tools are instrumental in performance assessment of industrial alarm systems in terms of effectively identifying nuisance alarms such as chattering and related alarms based on routinely collected alarm event data. The special features and advantages of the proposed graphical tools are illustrated by successful application to two large scale industrial case studies, each involving over half a million observations for top fifty alarm tags.
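An alarm similarity map of this kind can be approximated by a pairwise similarity between binary alarm sequences; Jaccard similarity is used here as an assumption, since the exact measure in the paper may differ.

```python
import numpy as np

def alarm_similarity_map(A):
    """Pairwise Jaccard similarity between binary alarm sequences (rows = tags, columns = samples)."""
    A = np.asarray(A, dtype=bool)
    n = A.shape[0]
    S = np.eye(n)
    for i in range(n):
        for j in range(i + 1, n):
            union = np.logical_or(A[i], A[j]).sum()
            inter = np.logical_and(A[i], A[j]).sum()
            S[i, j] = S[j, i] = inter / union if union else 0.0
    return S

A = np.random.default_rng(0).random((5, 1000)) > 0.95    # five hypothetical alarm tags
print(np.round(alarm_similarity_map(A), 2))              # near-duplicate tags would show values close to 1
```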
Article
A nonlinear kernel Gaussian mixture model (NKGMM) based inferential monitoring method is proposed in this article for chemical process fault detection and diagnosis. Aimed at the multimode non-Gaussian process with within-mode nonlinearity, the developed NKGMM approach projects the operating data from the raw measurement space into the high-dimensional kernel feature space. Thus the Gaussian mixture model can be estimated in the feature space with each component satisfying multivariate Gaussianity. As a comparison, the conventional independent component analysis (ICA) searches for the non-Gaussian subspace with maximized negentropy, which is not equivalent to the multi-Gaussianity in multimode process. The regular Gaussian mixture model (GMM) method, on the other hand, assumes the Gaussianity of each cluster in the original data space and thus cannot effectively handle the within-mode nonlinearity. With the extracted kernel Gaussian components, the geometric distance driven inferential index is further derived to monitor the process operation and detect the faulty events. Moreover, the kernel Gaussian mixture based inferential index is decomposed into variable contributions for fault diagnosis. For the simulated multimode wastewater treatment process, the proposed NKGMM approach outperforms the ICA and GMM methods in early detection of process faults, minimization of false alarms, and isolation of faulty variables of nonlinear and non-Gaussian multimode processes.
Article
MapReduce is a framework for processing and managing large scale data sets in a distributed cluster, which has been used for applications such as generating search indexes, document clustering, access log analysis, and various other forms of data analytics. MapReduce adopts a flexible computation model with a simple interface consisting of map and reduce functions whose implementations can be customized by application developers. Since its introduction, a substantial amount of research effort has been directed towards making it more usable and efficient for supporting database-centric operations. In this paper we aim to provide a comprehensive review of a wide range of proposals and systems that focus fundamentally on the support of distributed data management and processing using the MapReduce framework.
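The map/shuffle/reduce contract is easy to mimic in plain Python for intuition; the word-count example below is the canonical illustration and of course omits the distribution, fault tolerance, and storage layers that the framework itself provides.

```python
from collections import defaultdict
from itertools import chain

def map_phase(document):
    """Map: emit (word, 1) pairs for one document."""
    return [(word, 1) for word in document.split()]

def shuffle(pairs):
    """Group intermediate pairs by key, as the framework does between map and reduce."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(key, values):
    """Reduce: sum the counts for one word."""
    return key, sum(values)

documents = ["fault detected in reactor", "alarm flood in reactor section"]
mapped = chain.from_iterable(map_phase(d) for d in documents)
print(dict(reduce_phase(k, v) for k, v in shuffle(mapped).items()))
```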
Article
A novel process monitoring method is proposed that uses predictions from a dynamic model to predict whether process variables will violate an emergency limit in the future. The predictions are based on a Kalman filter and disturbance estimation. A critical feature of the proposed method is the evaluation of a T2 statistic as a “reality check” for deciding if the future predictions are reliable and thus can be used for making control decisions. Several simulation examples demonstrate the effectiveness of the proposed technique for both linear and nonlinear processes, and for a variety of disturbances.
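A minimal sketch of the prediction idea for a scalar process with assumed known model coefficients: a Kalman filter tracks the state, and the current estimate is extrapolated over a horizon to test for a future limit violation. The disturbance estimation and the T2 "reality check" from the paper are omitted.

```python
import numpy as np

def kalman_predict_limit(y, a, c, q, r, horizon, limit):
    """Flag samples at which the h-step-ahead prediction exceeds an emergency limit."""
    x, p = 0.0, 1.0                      # initial state estimate and variance
    alerts = []
    for yk in y:
        # measurement update
        k = p * c / (c * p * c + r)
        x = x + k * (yk - c * x)
        p = (1 - k * c) * p
        # multi-step prediction with the current estimate
        alerts.append(c * x * a**horizon > limit)
        # time update for the next sample
        x, p = a * x, a * p * a + q
    return np.array(alerts)

rng = np.random.default_rng(0)
y = 0.5 * np.arange(50) + rng.normal(0.0, 0.3, 50)   # slowly ramping measurement
alerts = kalman_predict_limit(y, a=1.02, c=1.0, q=0.01, r=0.09, horizon=10, limit=20.0)
print(int(alerts.argmax()))  # first sample at which a future limit violation is predicted
```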
Article
This paper describes a model of an industrial chemical process for the purpose of developing, studying and evaluating process control technology. This process is well suited for a wide variety of studies including both plant-wide control and multivariable control problems. It consists of a reactor/separator/recycle arrangement involving two simultaneous gas-liquid exothermic reactions of the following form: A(g) + C(g) + D(g) → G(liq), Product 1; A(g) + C(g) + E(g) → H(liq), Product 2. Two additional byproduct reactions also occur. The process has 12 valves available for manipulation and 41 measurements available for monitoring or control. The process equipment, operating objectives, process control objectives and process disturbances are described. A set of FORTRAN subroutines which simulate the process are available upon request. The chemical process model presented here is a challenging problem for a wide variety of process control technology studies. Even though this process has only a few unit operations, it is much more complex than it appears on first examination. We hope that this problem will be useful in the development of the process control field. We are also interested in hearing about applications of the problem.
Article
To aid human experts in conducting HAZOP analysis in a more thorough and systematic way, a software system called PHASuite has been developed. The work is divided into two parts. First, a knowledge engineering framework has been proposed and discussed in Part I of this two-part paper. Based on the proposed framework, the second part focuses on issues related to the design and implementation of PHASuite. Standard software engineering methodologies have been applied to guide the design and development of PHASuite in order to achieve the goals of efficiency, flexibility and high quality. A layered, repository-based software architecture has been designed to handle the complex information flow and the multipart knowledge base in the system. Object-oriented and component-oriented methodologies are adopted for design and implementation. The Unified Modelling Language is used for design and documentation of development details. The results management facilities of PHASuite, including results summary, details and reports, are presented. An industrial pharmaceutical process is presented as a case study to illustrate the applicability and procedure of using PHASuite for automated HAZOP analysis.
Article
Dealing with multidimensional problems has been the “bottle-neck” for implementing wavenets to process systems engineering. To tackle this problem, a novel multidimensional wavelet (MW) is presented with its rigorously proven approximation theorems. Taking the new wavelet function as the activation function in its hidden units, a new type of wavenet called multidimensional non-orthogonal non-product wavelet-sigmoid basis function neural network (WSBFN) model is proposed for dynamic fault diagnosis. Based on the heuristic learning rules presented by authors, a new set of heuristic learning rules is presented for determining the topology of WSBFNs. The application of the proposed WSBFN is illustrated in detail with a dynamic hydrocracking process.
Article
The Abnormal Situation Management® (ASM®) Consortium funded a study to investigate procedural execution failures during abnormal situations. (ASM and Abnormal Situation Management are registered trademarks of Honeywell International, Inc.) The study team analyzed 20 publicly available and 12 corporate confidential incident reports using the TapRoot® methodology to identify root causes associated with procedural execution failures. The main finding from this investigation was that the majority of the procedural execution failures (57%) across these 32 incident reports were associated with abnormal situations. Specific recommendations include potential information to capture from plant incidents to better understand the sources of procedural execution failures and improve the use of procedures in abnormal situations.
Article
Much of the earlier work presented in the area of on-line fault diagnosis focuses on knowledge-based and qualitative reasoning principles and attempts to present possible root causes and consequences in terms of various measured data. However, there are many unmeasurable operating variables in chemical processes that define the state of the system. Such variables essentially characterise the efficiency and really need to be known in order to diagnose possible malfunctions and provide a basis for deciding on appropriate action to be taken by operators. This paper is concerned with developing a soft sensor to assist in on-line fault diagnosis by providing information on the critical variable that is not directly accessible. The features of dynamic trends of the process are extracted using a wavelet transform and a qualitative interpretation, and are then used as inputs to the neural-network-based fault diagnosis model. The procedure is illustrated by reference to a refinery fluid catalytic cracking reactor.
Article
This paper outlines the history of Hazop, looks at future developments in its application to computer-controlled systems and suggests ways of using computers to increase the effectiveness of Hazops.
Article
Fault diagnosis in industrial processes is a challenging task that demands effective and timely decision-making procedures under the extreme conditions of noisy measurements, highly interrelated data, a large number of inputs and complex interactions between symptoms and faults. The purpose of this study is to develop an online fault diagnosis framework for a dynamical process incorporating multi-scale principal component analysis (MSPCA) for feature extraction and an adaptive neuro-fuzzy inference system (ANFIS) for learning the fault-symptom correlation from the process historical data. The features extracted from raw measured data sets using MSPCA are partitioned into the score space and the residual space, which are then fed into multiple ANFIS classifiers in order to diagnose different faults. This data-driven method extracts the fault-symptom correlation from the data, eliminating the use of a process model. The use of multiple ANFIS classifiers for fault diagnosis, with each dedicated to one specific fault, reduces the computational load and provides an expandable framework to incorporate new faults identified in the process. Also, the use of MSPCA enables the detection of small changes occurring in the measured variables, and the proficiency of the system is improved by monitoring the subspace which is most sensitive to the faults. The proposed MSPCA-ANFIS based framework is tested on the Tennessee Eastman (TE) process, and results for the selected fault cases, particularly those which exhibit highly non-linear characteristics, show improvement over the conventional multivariate PCA as well as the conventional PCA-ANFIS based methods.
Article
Online fault diagnosis is a task of critical importance for maintaining a high level of operational safety in many chemical plants. The Petri-net models are adopted in this work for describing the fault propagation behaviors in batch processes. A systematic method has been developed to synthesize a timed Petri-net hierarchically structured according to any given piping and instrumentation diagram (P&ID) and its operating procedure. On the basis of this model, a diagnoser can be constructed automatically with a computer program for online implementation. Computer algorithms have also been devised to place additional sensors and/or synthesize extra operation steps for the purpose of improving diagnostic performance. Several examples are presented in this paper to demonstrate the effectiveness and correctness of the proposed approach.
Article
Due to the sheer size and complexity of modern chemical processes, single centralized monolithic monitoring strategies are not always well suited for detecting and identifying faults. In this paper, we propose a framework for distributed fault detection and identification (FDI), wherein the process is decomposed hierarchically into sections and subsections based on a process flow diagram. Multiple hierarchical FDI methods at varying levels of granularity are deployed to monitor the various sections and subsections of the process. The results from the individual FDI methods contain mutually nonexclusive fault classes at different levels of granularity. We propose an adaptation of the Dempster–Shafer evidence theory to combine these diagnostic results at different levels of abstraction. The key benefits of this scheme as demonstrated through two case studies—a simulated CSTR-distillation column system and the Tennessee Eastman challenge process—are improved diagnostic performance compared to individual FDI methods, robust localization of even novel faults, and a coherent explanation of the entire plant’s state.
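Dempster's rule of combination, which underlies the evidence-fusion step, can be written compactly. The mass assignments below are hypothetical outputs of two FDI methods over fault classes F1 and F2, not values from the case studies.

```python
def dempster_combine(m1, m2):
    """Dempster's rule for two mass functions whose focal elements are frozensets."""
    combined, conflict = {}, 0.0
    for a, w1 in m1.items():
        for b, w2 in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + w1 * w2
            else:
                conflict += w1 * w2
    if conflict >= 1.0:
        raise ValueError("Total conflict: sources cannot be combined")
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

# Two FDI methods reporting (hypothetical) beliefs over fault classes F1 and F2
m_a = {frozenset({"F1"}): 0.6, frozenset({"F1", "F2"}): 0.4}
m_b = {frozenset({"F1"}): 0.5, frozenset({"F2"}): 0.3, frozenset({"F1", "F2"}): 0.2}
print(dempster_combine(m_a, m_b))
```

Combining coarse diagnoses (e.g. "somewhere in the distillation section") with finer ones in this way is what allows the results at different levels of granularity to be reconciled.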
Article
An adaptive agent-based hierarchical framework for fault type classification and diagnosis in continuous chemical processes is presented. Classification techniques such as Fisher’s discriminant analysis (FDA) and partial least-squares discriminant analysis (PLSDA) and diagnosis tools such as variable contribution plots are used by agents in this supervision system. After an abnormality is detected, the classification results reported by different diagnosis agents are summarized via a performance-based criterion, and a consensus diagnosis decision is formed. In the agent management layer of the proposed system, the performances of diagnosis agents are evaluated under different fault scenarios, and the collective performance of the supervision system is improved via performance-based consensus decision and adaptation. The effectiveness of the proposed adaptive agent-based framework for the classification of faults is illustrated using a simulated continuous stirred tank reactor (CSTR) network.
Article
Fault diagnosis is important for ensuring chemical processes stability and safety. The strong nonlinearity and complexity of batch chemical processes make such diagnosis more difficult than that for continuous processes. In this paper, a new fault diagnosis methodology is proposed for batch chemical processes, based on an artificial immune system (AIS) and dynamic time warping (DTW) algorithm. The system generates diverse antibodies using known normal and fault samples and calculates the difference between the test data and the antibodies by the DTW algorithm. If the difference for an antibody is lower than a threshold, then the test data are deemed to be of the same type of this antibody’s fault. Its application to a simulated penicillin fermentation process demonstrates that the proposed AIS can meet the requirements for online dynamic fault diagnosis of batch processes and can diagnose new faults through self-learning. Compared with dynamic locus analysis and artificial neural networks, the proposed method has better capability in fault diagnosis of batch processes, especially when the number of historical fault samples is limited.
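The distance computation at the core of the comparison step is the classic dynamic-time-warping recursion; the sketch below computes DTW between a test trajectory and an antibody trajectory, leaving the detection threshold and the antibody generation itself aside.

```python
import numpy as np

def dtw_distance(a, b):
    """Classic dynamic-time-warping distance between two 1-D sequences."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    return cost[n, m]

print(dtw_distance([0, 1, 2, 3, 2], [0, 0, 1, 2, 3, 3, 2]))  # 0.0: same shape, different pacing
```

The tolerance to differences in pacing is what makes DTW attractive for batch trajectories, whose phases rarely line up sample by sample.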
Article
Companies often have multiple ‘sister’ plants in different locations, sometimes scattered around the globe. Generally, these companies are striving for similarly low risk among all these sister plants, even though they may have been built at different times, using evolving technology, and often have different capacities. In the pursuit of achieving acceptably low risk at all the chemical plants within a company, Process Safety specialists tend to put each under the PHA ‘microscope’ separately. FMC has several businesses that operate multiple “sister” plants, and has found that there are substantial benefits to be gained by also comparing the hazards at similar plants. © 2009 American Institute of Chemical Engineers Process Saf Prog, 2010
Article
A complex chemical process is often corrupted with various types of faults, and fault-free training data may not be available to build the normal operation model. Therefore, supervised monitoring methods such as principal component analysis (PCA), partial least squares (PLS), and independent component analysis (ICA) are not applicable in such situations. On the other hand, traditional unsupervised algorithms like Fisher discriminant analysis (FDA) may not take into account the multimodality within the abnormal data, and thus their fault detection and classification capability can be significantly degraded. In this study, a novel localized Fisher discriminant analysis (LFDA) based process monitoring approach is proposed to monitor processes containing multiple types of steady-state or dynamic faults. Stationarity testing and a Gaussian mixture model are integrated with LFDA to remove any nonstationarity and to isolate the normal and multiple faulty clusters during the preprocessing steps. Then the localized between-class and within-class scatter matrices are computed for the generalized eigenvalue decomposition to extract the localized Fisher discriminant directions, which not only separate the normal and faulty data with maximized margin but also preserve the multimodality within the multiple faulty clusters. In this way, different types of process faults can be well classified using the discriminant function index. The proposed LFDA monitoring approach is applied to the Tennessee Eastman process and compared with the traditional FDA method. The monitoring results in three different test scenarios demonstrate the superiority of the LFDA approach in detecting and classifying multiple types of faults with high accuracy and sensitivity. © 2010 American Institute of Chemical Engineers AIChE J, 2011
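For orientation, the sketch below computes ordinary Fisher discriminant directions from the between-class and within-class scatter matrices via a generalized eigenvalue decomposition. The localized weighting that distinguishes LFDA (affinity weights in the scatter matrices so that multimodal fault clusters are preserved) is deliberately omitted, so this is a baseline sketch rather than the authors' method.

```python
import numpy as np
from scipy.linalg import eigh

def fisher_directions(X, labels, n_dirs=2, reg=1e-6):
    """Ordinary FDA: build between-class (Sb) and within-class (Sw)
    scatter matrices and solve the generalized eigenproblem Sb v = l Sw v.
    LFDA would additionally weight the pairwise terms by a local affinity."""
    X = np.asarray(X, dtype=float)
    labels = np.asarray(labels)
    mean_all = X.mean(axis=0)
    d = X.shape[1]
    Sw = np.zeros((d, d))
    Sb = np.zeros((d, d))
    for c in np.unique(labels):
        Xc = X[labels == c]
        mc = Xc.mean(axis=0)
        Sw += (Xc - mc).T @ (Xc - mc)
        diff = (mc - mean_all).reshape(-1, 1)
        Sb += len(Xc) * (diff @ diff.T)
    # Small regularization keeps Sw positive definite for eigh.
    vals, vecs = eigh(Sb, Sw + reg * np.eye(d))
    order = np.argsort(vals)[::-1]
    return vecs[:, order[:n_dirs]]   # columns are the discriminant directions
```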
Article
An industrial process may operate over a range of conditions to produce different grades of product. With a data-based model, as conditions change, a different process model must be developed. Adapting existing process models can reduce the number of experiments required to develop a new process model, saving time, cost, and effort. Process similarity is defined and classified based on process representation. A model migration strategy is proposed for one type of process similarity, family similarity, which involves developing a new process model by taking advantage of an existing base model and process attribute information. A model predicting melt-flow length in injection molding is developed and tested as an example and shown to give satisfactory results. © 2009 American Institute of Chemical Engineers AIChE J, 2009
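A common and simple instance of model migration is a slope-and-bias correction of the base model, fitted on a few experiments from the new process; the sketch below shows that idea. The specific correction form is an assumption for illustration and is not necessarily the strategy proposed in the paper.

```python
import numpy as np

def migrate_model(base_predict, X_new, y_new):
    """Fit y_new ~ a * base_predict(x) + b on a handful of new-process
    experiments and return the corrected predictor."""
    y_base = np.asarray([base_predict(x) for x in X_new], dtype=float)
    A = np.column_stack([y_base, np.ones_like(y_base)])
    (a, b), *_ = np.linalg.lstsq(A, np.asarray(y_new, dtype=float), rcond=None)
    return lambda x: a * base_predict(x) + b

# Hypothetical use: base_model was fitted on the original process; a few
# experiments on the new process are enough to re-anchor its predictions.
# new_model = migrate_model(base_model, X_new_runs, y_new_runs)
```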
Article
This paper addresses the scenario where the manufacturing of a product with assigned quality specifications is transferred from a plant A to a plant B, which uses the same manufacturing process as plant A but may differ in scale, configuration, actual operating conditions, measurement system arrangement, or simply location. The question arises as to whether the process data already available for plant A can be exploited to build a process monitoring model that enables the operation of plant B to be monitored until enough data have been collected in this plant to design a monitoring model based entirely on the incoming data. This paper presents a general framework to tackle this problem (which we refer to as a model transfer problem), and three possible latent variable approaches within this framework are proposed and evaluated. One approach makes use of measurements coming from plant A only, whereas the other two integrate plant A data and plant B data into a single adaptive monitoring model. The proposed approaches are tested on an industrial spray-drying process, where plant A is a pilot unit and plant B is a production unit. It is shown that all the proposed model transfer approaches guarantee very satisfactory monitoring performance in plant B, with quick fault detection, a limited number of false alarms or undetected faults, and limited (or no) need for plant B data to accomplish the model transfer. We believe that these strategies can provide a valuable contribution to the practical implementation of quality-by-design methodologies and continuous quality assurance programs in product manufacturing.
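As a minimal sketch of the "plant A data only" option, the class below builds a PCA monitoring model on plant A data and scores plant B samples with Hotelling's T² and SPE (Q) statistics; empirical percentile limits stand in for the usual theoretical control limits, and the adaptive variants that fold in plant B data as it arrives are not shown. The class and parameter names are assumptions, not the paper's implementation.

```python
import numpy as np

class PCAMonitor:
    """PCA monitoring model fitted on plant A data and applied to plant B
    samples via Hotelling's T^2 and SPE (Q) statistics."""
    def __init__(self, n_components=3):
        self.k = n_components

    def fit(self, X):
        self.mu, self.sd = X.mean(0), X.std(0) + 1e-12
        Z = (X - self.mu) / self.sd
        U, s, Vt = np.linalg.svd(Z, full_matrices=False)
        self.P = Vt[:self.k].T                        # loadings
        self.lam = (s[:self.k] ** 2) / (len(X) - 1)   # retained variances
        t2, spe = self._stats(Z)
        # Simple empirical 99th-percentile control limits from training data.
        self.t2_lim, self.spe_lim = np.percentile(t2, 99), np.percentile(spe, 99)
        return self

    def _stats(self, Z):
        T = Z @ self.P
        t2 = np.sum(T ** 2 / self.lam, axis=1)
        resid = Z - T @ self.P.T
        spe = np.sum(resid ** 2, axis=1)
        return t2, spe

    def monitor(self, X_new):
        Z = (X_new - self.mu) / self.sd
        t2, spe = self._stats(Z)
        return (t2 > self.t2_lim) | (spe > self.spe_lim)  # alarm flags
```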