Simplified process scheme of the ammonia production plant considered.

Source publication
Article
Full-text available
Alarm floods represent a widespread issue for modern chemical plants. During these conditions, the number of alarms may be unmanageable, and the operator may miss safety-critical alarms. Chattering alarms, which repeatedly transition between the active and non-active states, are responsible for most of the alarm records within a flood episode. Typicall...

Contexts in source publication

Context 1
... Desulfurization and Reforming; 2. Water-Gas Shift, CO2 Removal, and Methanation; 3. Ammonia synthesis and Cooling circuit; 4. Anhydrous ammonia storage, Pipeline, and Loading/unloading tankers. Fig. 1 shows a schematic representation of the plant layout for ammonia production, excluding storage, loading, and unloading (Section 4). Natural Gas, Air, and Steam are used as raw materials for ammonia synthesis, according to the following exothermic ...
Context 2
... than the number of False Negatives. The Accuracy, Precision, and Recall achieved by each algorithm are shown in Table 3. Values in Table 3 indicate that the linear model yields the highest metric values. Similarly, the Deep model achieves a better performance than the Wide&Deep model. Fig. 10 shows the Precision-Recall (P-R) curves of the three models calculated using different probability ...
Context 3
... evaluation database will be labeled as "Y". Conversely, if the threshold is equal to 1, every alarm event in the evaluation database will be labeled as "N". Lowering the threshold causes the Recall to either decrease or remain constant. Instead, Precision may either increase or decrease when the threshold is reduced. Each point of the blue curves in Fig. 10 represents the Precision and Recall values obtained using a specific threshold. For a specific model (panels a, b, and c in Fig. 10 ), thresholds larger than THOLD ensure a Precision larger than ...
Context 4
... will be labeled as "N". Lowering the threshold causes the Recall to either decrease or remain constant. Instead, Precision may either increase or decrease when the threshold is reduced. Each point of the blue curves in Fig. 10 represents the Precision and Recall values obtained using a specific threshold. For a specific model (panels a, b, and c in Fig. 10 ), thresholds larger than THOLD ensure a Precision larger than ...
Context 5
... is better at memorizing (e.g., Linear) than at generalizing (e.g., DNN, Wide&Deep). Future research should investigate whether different optimization strategies (e.g., different hyperparameters, learning-rate decay, activation functions) could improve the performance of advanced but sensitive models such as the Deep and Wide&Deep. The P-R curves in Fig. 10 suggest that Precision values larger than 0.9 can always be achieved while keeping the Recall close to 0.9 by varying the probability threshold. If the threshold is reduced further (i.e., below 0.05 for the linear model, 0.29 for the Deep, and 0.41 for the Wide&Deep), the Precision drops significantly. The selection of the best threshold ...
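The threshold behaviour described in these contexts can be sketched in a few lines of Python. This is an illustrative example on synthetic scores and labels, not the authors' code; the function name and data are made up for the sketch.

```python
# Illustrative sketch (not the study's code): how Precision and Recall
# vary as the probability threshold is swept, tracing out a P-R curve.
def precision_recall_at(threshold, scores, labels):
    """Label an event "Y" when its predicted probability reaches the threshold."""
    tp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y == "Y")
    fp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y == "N")
    fn = sum(1 for s, y in zip(scores, labels) if s < threshold and y == "Y")
    precision = tp / (tp + fp) if (tp + fp) else 1.0  # convention when nothing is flagged
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return precision, recall

# Synthetic scores and true labels, for illustration only.
scores = [0.95, 0.80, 0.60, 0.40, 0.20]
labels = ["Y", "Y", "N", "Y", "N"]

# Threshold 0 labels every event "Y" (Recall = 1); threshold 1 labels every event "N".
for t in (0.0, 0.5, 1.0):
    p, r = precision_recall_at(t, scores, labels)
    print(f"threshold={t:.1f}  precision={p:.2f}  recall={r:.2f}")
```

Sweeping the threshold from 0 to 1 and plotting each (Recall, Precision) pair reproduces the shape of the blue curves in Fig. 10: high thresholds favour Precision, low thresholds favour Recall.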
Context 6
... 1 occurs when the model fails to identify the first element of a Chattering sequence or, in other words, it fails to detect the first unique alarm event labeled as "Y" after one or more events labeled as "N". Fig. 11 clarifies this insight. As one might notice, the first event of the chattering sequence (the red dot in Fig. 11 ) has been incorrectly labeled (true label is Y, predicted label is N), and a False Negative has been produced as a consequence. Later in time, the model has correctly identified chattering (green dots). Also, the model has ...
Context 7
... 1 occurs when the model fails to identify the first element of a Chattering sequence or, in other words, it fails to detect the first unique alarm event labeled as "Y" after one or more events labeled as "N". Fig. 11 clarifies this insight. As one might notice, the first event of the chattering sequence (the red dot in Fig. 11 ) has been incorrectly labeled (true label is Y, predicted label is N), and a False Negative has been produced as a consequence. Later in time, the model has correctly identified chattering (green dots). Also, the model has correctly predicted the end of the chattering sequence, which occurred at 13:56:00 (not displayed in Fig. 11 ...
Context 8
... (the red dot in Fig. 11 ) has been incorrectly labeled (true label is Y, predicted label is N), and a False Negative has been produced as a consequence. Later in time, the model has correctly identified chattering (green dots). Also, the model has correctly predicted the end of the chattering sequence, which occurred at 13:56:00 (not displayed in Fig. 11 ...
Context 9
... 2 occurs when the model fails to identify the last element of a Chattering sequence. Fig. 12 provides an example of this. The last two unique alarm events of the series (red dots) have been incorrectly labeled (the true label is N, while the predicted label is Y), and two False Positives have been produced as a ...
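The two error types described above (missing the first event of a chattering sequence, and overrunning its end) can be illustrated with made-up label sequences. This sketch is not taken from the study; it only mimics how the mislabeled events translate into False Negatives and False Positives.

```python
# Hypothetical illustration of the two error types, using invented
# true/predicted label sequences (not data from the source publication).
def confusion_counts(true_labels, predicted_labels):
    """Count False Negatives (true Y, predicted N) and False Positives (true N, predicted Y)."""
    fn = sum(1 for t, p in zip(true_labels, predicted_labels) if t == "Y" and p == "N")
    fp = sum(1 for t, p in zip(true_labels, predicted_labels) if t == "N" and p == "Y")
    return fn, fp

# Error type 1: the first "Y" of the chattering sequence is missed -> one False Negative.
true_1 = ["N", "Y", "Y", "Y"]
pred_1 = ["N", "N", "Y", "Y"]
print(confusion_counts(true_1, pred_1))  # (1, 0)

# Error type 2: the model overruns the end of the sequence -> two False Positives.
true_2 = ["Y", "Y", "N", "N"]
pred_2 = ["Y", "Y", "Y", "Y"]
print(confusion_counts(true_2, pred_2))  # (0, 2)
```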

Similar publications

Article
Full-text available
In order to overcome the complexities encountered in sensing devices with data collection, transmission, storage and analysis toward condition monitoring, estimation and control system purposes, machine learning algorithms have gained popularity to analyze and interpret big sensory data in modern industry. This paper puts forward a comprehensive sur...

Citations

... However, the current approaches cannot completely avoid machine failures and the necessity of reactive maintenance. In this context, various areas are researching sophisticated approaches, e.g., fault diagnosis and isolation [10], system diagnosis [45,46], or alarm management [47,48] including also alarm flood reduction [49,50]. Recent studies show that unplanned downtime costs manufacturers worldwide approximately $50 billion per year [51]. ...
Preprint
Full-text available
Special machinery engineering is of great importance to the manufacturing industry and makes a comparatively large contribution to many economies. Digital transformation is already well advanced in many producing industries, and modern ICT technologies such as agents, service orientation, digital twins and artificial intelligence are being used with increasing success. In addition to improving specific product characteristics such as reliability or flexibility, the adoption of modern ICT technologies to ensure sustainability is being intensively discussed. So far, however, there has been little uptake of these technologies in the special machinery industry; sustainability receives little attention. This article examines in detail the reasons for and impediments to the adoption of modern ICT technologies based on a study among special machinery manufacturers. Observations of existing challenges were gathered during daily work, described in detail, and used to derive conclusions about causal barriers. From this, detailed requirements are derived to promote the adoption of modern ICT technologies in the special machinery engineering sector and, ultimately, to bring sustainability more into focus.
... Accuracy refers to the fraction of correct predictions. Precision indicates the fraction of positive predictions that are correct, while recall indicates the proportion of positive labels that have been correctly predicted (Tamascelli et al., 2020). True positive (TP) and true negative (TN) indicate the correct predictions of "Potentially suitable" and "Unsuitable" materials, respectively. ...
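The metric definitions quoted here follow directly from the confusion-matrix counts. A minimal sketch, with illustrative counts only:

```python
# Standard classification metrics from confusion-matrix counts
# (TP, TN, FP, FN); the example counts below are invented.
def accuracy(tp, tn, fp, fn):
    """Fraction of all predictions that are correct."""
    return (tp + tn) / (tp + tn + fp + fn)

def precision(tp, fp):
    """Fraction of positive predictions that are correct."""
    return tp / (tp + fp)

def recall(tp, fn):
    """Fraction of true positives that were actually detected."""
    return tp / (tp + fn)

tp, tn, fp, fn = 80, 90, 10, 20   # illustrative counts
print(accuracy(tp, tn, fp, fn))   # 0.85
print(recall(tp, fn))             # 0.8
```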
Article
Full-text available
The issue of global warming imposes a change of paradigm in the energy sector to mitigate the human impact on the environment. In this perspective, hydrogen can be produced through water electrolysis and used in fuel-cell systems with near-zero pollutant emissions. Nevertheless, the distribution system represents one of the main bottlenecks for a future transition to a hydrogen economy. The possibility of transporting hydrogen through the existing pipeline network is economically attractive. Nevertheless, most pipeline steels are prone to hydrogen-induced damage, and their mechanical properties are degraded by hydrogen gas to an extent that could result in sudden component failures. Hydrogen embrittlement can be responsible for undesired releases with potentially catastrophic consequences. This study evaluates the safety of existing European natural gas pipelines for hydrogen transport through machine learning tools. The material susceptibility to hydrogen embrittlement is predicted under different working conditions in order to prevent loss of material integrity and eventual releases. This study aims at bridging the gap between safety and material science, as it can optimize predictive maintenance of hydrogen pipelines, thus promoting the widespread utilization of hydrogen in the forthcoming years.
... A feature is a "measurable property of an object or event with respect to a set of characteristics", and provides a machine-readable way to describe the relevant objects (ISO and IEC, 2022). The classes of the object under investigation are defined as "labels" (or categories) (Tamascelli et al., 2020). ...
... This gave the opportunity to refine and consolidate the final sets of labels populating our database. Finally, we completed the database replacing the missing values in the cells with the symbol "-" since there is no way to guess the lacking values through statistical estimations (Tamascelli et al., 2020). We therefore obtained the final structured database. ...
... A feature, whether numerical, categorical or textual, is a "measurable property of an object or event with respect to a set of characteristics", and provides a machine-readable way to describe the relevant objects (ISO, 2022). For each feature, labels indicating the classes of the object under investigation were defined (Tamascelli et al., 2020). Incident records usually have textual descriptions, which contain a large amount of hidden information. ...
Conference Paper
Full-text available
Hydrogen has the potential to channel a large amount of renewable energy from the production sites to the end users. Nevertheless, safety aspects represent the major bottleneck for its widespread utilization. The knowledge of past hydrogen-related undesired events is fundamental to avoid the occurrence of similar accidents in the future. Databases such as HIAD 2.0 and H2Tools are dedicated to those accidents, but the scarcity of structured and quantitative information makes it difficult to apply advanced data-driven analyses based on Machine Learning (ML). In this paper, undesired events related to the hydrogen value chain were selected from the HIAD 2.0 and MHIDAS databases. These records were collected in a structured repository tool, namely Hydrogen-related Incident Reports and Analyses (HIRA). The definition of its features is based on a critical comparison of the primary reporting systems, and an analysis of the literature regarding H2 safety. Subsequently, text mining tools were used to analyze the event descriptions in natural language, extract relevant information and data, and sort them in the database. Finally, the new database was analyzed through Business Intelligence (BI) and ML classification tools. Data-driven analyses could help identify valuable information about H2-related undesired events, promoting a safety culture, and improving accident management in the emerging hydrogen industry.
... Alarm data from a section of an ammonia production process [39] are analysed by Tamascelli et al. [38]. Due to the large quantity of hazardous substances stored and handled during normal activity, the plant has been classified as an "upper tier" Seveso III establishment. ...
... Although effective, this technique produces static results (i.e., chattering is quantified based on historical alarm data, but no conclusion can be drawn about the alarm's future behaviour). This Chattering Index approach is modified by Tamascelli et al. [38] to predict chattering behaviour by means of standard ML models. 72 N. Paltrinieri ...
Chapter
Full-text available
With the advent of digitalisation and big data sources, new advanced tools are needed to precisely project safety-critical system outcomes. Existing systems safety analysis methods, such as fault tree analysis (FTA), lack systematic and structured approaches to specifically account for system event consequences. Consequently, we proposed an algorithmic extension of FTA for the purposes of: (a) analysis of the severity of consequences of both top and intermediate events as part of a fault tree (FT) and (b) risk assessment at both the event and cut set level. The ultimate objective of the algorithm is to provide a fine-grained analysis of FT event and cut set risks as a basis for precise and cost-effective safety control measure prescription by practitioners.
... Alarm data from a section of an ammonia production process [39] are analysed by Tamascelli et al. [38]. Due to the large quantity of hazardous substances stored and handled during normal activity, the plant has been classified as an "upper tier" Seveso III establishment. ...
... Although effective, this technique produces static results (i.e., chattering is quantified based on historical alarm data, but no conclusion can be drawn about the alarm's future behaviour). This Chattering Index approach is modified by Tamascelli et al. [38] to predict chattering behaviour by means of standard ML models. 72 N. Paltrinieri ...
Chapter
Full-text available
This chapter describes some of the recurring themes that emerged from the contributions in this book, as well as from the workshop in which the contributions were presented and discussed. The themes are in one way or another related to the term “sociotechnical” and thus point to problems (old and new) that are linked to the relationship between the social and technological dimensions of organisations. The chapter provides a brief explanation of the history and current use of the term “sociotechnical” before discussing three sociotechnical issues that we believe are important for dealing with safety in the digital age.
... Alarm data from a section of an ammonia production process [39] are analysed by Tamascelli et al. [38]. Due to the large quantity of hazardous substances stored and handled during normal activity, the plant has been classified as an "upper tier" Seveso III establishment. ...
... Although effective, this technique produces static results (i.e., chattering is quantified based on historical alarm data, but no conclusion can be drawn about the alarm's future behaviour). This Chattering Index approach is modified by Tamascelli et al. [38] to predict chattering behaviour by means of standard ML models. ...
Chapter
Full-text available
Industry is stepping into its 4.0 phase by implementing and increasingly relying on cyber-technological systems. Wider networks of sensors may allow for continuous monitoring of industrial process conditions. Enhanced computational power provides the capability of processing the collected “big data”. Early warnings can then be picked and lead to suggestion for proactive safety strategies or directly initiate the action of autonomous actuators ensuring the required level of system safety. But have we reached these safety 4.0 promises yet, or will we ever reach them? A traditional view on safety defines it as the absence of accidents and incidents. A forward-looking perspective on safety affirms that it involves ensuring that “as many things as possible go right”. However, in both the views there is an element of uncertainty associated to the prediction of future risks and, more subtly, to the capability of possessing all the necessary information for such prediction. This uncertainty does not simply disappear once we apply advanced artificial intelligence (AI) techniques to the infinite series of possible accident scenarios, but it can be found behind modelling choices and parameters setting. In a nutshell, any model claiming superior flexibility usually introduces extra assumptions (“there ain’t no such thing as a free lunch”). This contribution will illustrate a series of examples where AI techniques are used to continuously update the evaluation of the safety level in an industrial system. This will allow us to affirm that we are not even close to a “no-brainer” condition in which the responsibility for human and system safety is entirely moved to the machine. However, this shows that such advanced techniques are progressively providing a reliable support for critical decision making and guiding industry towards more risk-informed and safety-responsible planning.
... It is also stated that with the help of the window size, the robustness of the model can still predict an alarm even if a sensor does not have a predefined threshold. In [11], the aim is to predict chattering alarms that occur more than 3 times per minute [3]. For this target, a dynamic chattering index from [19] is used to decide whether an alarm is chattering or not before training the model. ...
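The chattering criterion cited in this context (alarms annunciating more than three times per minute) can be sketched as a sliding-window count. This is a simplified illustration under assumed conventions, not the dynamic Chattering Index of [19]; the timestamps and window logic are invented for the example.

```python
from collections import deque

# Simplified sketch of the ">3 alarms per minute" chattering criterion.
# Assumption: timestamps are event times in seconds for a single alarm tag.
def is_chattering(timestamps_s, max_per_minute=3):
    """Return True if more than `max_per_minute` events fall in any 60 s window."""
    window = deque()
    for t in sorted(timestamps_s):
        window.append(t)
        # Drop events that fell out of the trailing 60-second window.
        while window and t - window[0] > 60:
            window.popleft()
        if len(window) > max_per_minute:
            return True
    return False

print(is_chattering([0, 10, 20, 30]))     # True: four events within 60 s
print(is_chattering([0, 100, 200, 300]))  # False: events spaced well apart
```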
Chapter
Alarm systems are important assets for plant safety and efficiency in a variety of industries, including power and utility, process and manufacturing, oil and gas, and communications. Especially in the process-based industry, alarm systems collect a huge amount of data in the field that requires operators to take action carefully. However, existing industrial alarm systems suffer from poor performance, mostly with alarm overloading and alarm flooding. Therefore, this problem creates an opportunity to implement machine learning models in order to predict upcoming alarms in the industry. In this way, the operators can take the necessary actions automatically while they are using their capacity for other unpredicted alarms. This study provides an overview of alarm prediction methods used in industrial alarm systems with the context of their classification types. In addition, a comparative analysis was conducted between two state-of-the-art deep learning models, namely Long Short-Term Memory (LSTM) and Transformer, through a benchmarking process. The experimental results of both models were evaluated and contrasted to identify their respective strengths and weaknesses. Moreover, this study identifies research gaps in alarm prediction, which can guide future research for better alarm management systems. Keywords: Alarm Prediction, Alarm Management, Alarm Floods, Neural Networks, Deep Learning, Transformer, LSTM
... • Predicting chattering alarms [48] • Plant health diagnosis [50] their impact on sustainability and safety and to understand which AI-based solutions are likely to be allowed by a future harmonization of IEC 61508. ...
Preprint
The AI Act has been recently proposed by the European Commission to regulate the use of AI in the EU, especially on high-risk applications, i.e. systems intended to be used as safety components in the management and operation of road traffic and the supply of water, gas, heating and electricity. On the other hand, IEC 61508, one of the most adopted international standards for safety-critical electronic components, seem to mostly forbid the use of AI in such systems. Given this conflict between IEC 61508 and the proposed AI Act, also stressed by the fact that IEC 61508 is not an harmonised European standard, with the present paper we study and analyse what is going to happen to industry after the entry into force of the AI Act. In particular, we focus on how the proposed AI Act might positively impact on the sustainability of critical infrastructures by allowing the use of AI on an industry where it was previously forbidden. To do so, we provide several examples of AI-based solutions falling under the umbrella of IEC 61508 that might have a positive impact on sustainability in alignment with the current long-term goals of the EU and the Sustainable Development Goals of the United Nations, i.e., affordable and clean energy, sustainable cities and communities.
... It is very convenient for users to access and use. Moreover, if resource usage needs to be expanded later, the server configuration can also be quickly upgraded [3]. Therefore, the design of many enterprise data centers is based on the cloud computing service mode. This technical mode requires all hardware resources to be concentrated in a common basic resource pool that everyone can share [4]. ...
Article
Full-text available
At present, big data cloud computing has been widely used in many enterprises, and it serves tens of millions of users. One of the core technologies of big data cloud services is computer virtualization technology. The reasonable allocation of virtual machines on available hosts is of great significance to the performance optimization of cloud computing. With the continuous development of information technology and the increasing number of computer users, different virtualization technologies and the growing number of virtual machines in the network make the effective allocation of virtualization resources increasingly difficult. In order to solve and optimize this problem, we propose a virtual machine allocation algorithm based on statistical machine learning. According to the resource requirements of each virtual machine in the cloud service, a corresponding comprehensive performance analysis model is established, and a reasonable virtual machine allocation algorithm for the hosts in the resource pool is realized according to the virtualization technology type or mode provided by the model. Experiments show that this method has advantages in overall performance, load balancing, and support for different types of virtualization.