Book

Simulation Modeling and Analysis


Abstract

Mainly deals with queueing models, but gives the properties of many useful statistical distributions and algorithms for generating them.
... One way to approach decision-making in Software Engineering project management is through simulation [11]. Simulation stands as one of the preeminent techniques within the field of operations research [12]. This invaluable tool serves the purpose of mitigating uncertainties inherent to decision-making processes or evaluating transformation strategies [13]. ...
... System Dynamics is a simulation paradigm proposed by Forrester in the 1950s [42] to make strategic decisions through the design and improvement of policies [12]. It has been applied principally in engineering, business and computer science [43] and even used to model the software development process [11]. ...
... It ensures that everyone involved in the project understands the variables and avoids including unnecessary or irrelevant variables in the model's objectives [143]. In addition, it contributes to the validation and verification process by starting from a base conceptualization, which in turn facilitates the analysis [12]. ...
Article
Full-text available
Software development projects demand high levels of interaction between work team members. This way, management and decision-making must be supported by analyzing the complex dynamics generated through individual interactions to complete the projects. This complexity can be addressed using system dynamics. This modeling approach studies how the structures and relationships between variables in a system interact to generate behaviors over time. It is used to understand and analyze complex systems and make informed decisions. The first step in modeling is articulating the problem. This step defines the key variables that will be included in the model. Still, the lack of a standardized procedure to select, measure, and propose causal relationships is evident. Subjectivity is often appealed to, but this could lead to inaccurate models and biased results. The challenge intensifies when it comes to qualitative variables. This study introduces a formal methodology to characterize such variables, addressing a gap in the existing literature. The use of systematic mapping and a survey-based study is proposed. The methodology is applied to characterize three social and human factors that influence the productivity of software development teams: communication, leadership, and teamwork. The results captured primary experimental research’s proven definitions, measurement mechanisms, and causal relationships. This formalized approach not only fills a significant gap in system dynamics but also lays a foundation for expanding its scope to encompass additional variables. As such, it represents a substantial methodological contribution to the field.
... Queuing system: servers offering services to customers. If servers are occupied or not available, customers wait in one or more queues [78] (p. 118). ...
... λ is the arrival rate per unit of time. The arrival times of customers are assumed to be independent of each other and stochastic [78] (p. 118). ...
... µ is the processing rate at a server. The processing times of customers are assumed to be independent of each other and stochastic [78] (p. 118). ...
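To make the queueing notation in these excerpts concrete (arrival rate λ, processing rate µ, customers waiting when servers are busy), here is a minimal single-server (M/M/1-style) discrete-event sketch. It is an illustration only, not code from the cited book; the function name `sim_mm1` and the chosen λ and µ values are assumptions for the example.

```python
import random

def sim_mm1(lam, mu, n_customers, seed=1):
    """Minimal M/M/1 sketch: exponential interarrival times (rate lam) and
    exponential service times (rate mu), one FIFO server.
    Returns the average time customers spend waiting in the queue."""
    rng = random.Random(seed)
    arrival = 0.0          # arrival time of the current customer
    server_free_at = 0.0   # time the server finishes the previous customer
    total_wait = 0.0
    for _ in range(n_customers):
        arrival += rng.expovariate(lam)                # next arrival
        start = max(arrival, server_free_at)           # wait if the server is busy
        total_wait += start - arrival
        server_free_at = start + rng.expovariate(mu)   # service completion
    return total_wait / n_customers

# For lam = 0.8 and mu = 1.0, the long-run mean wait should approach
# Wq = lam / (mu * (mu - lam)) = 4.0 time units.
print(sim_mm1(lam=0.8, mu=1.0, n_customers=200_000))
```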
Article
Full-text available
Healthcare systems are facing a shortage of nurses. This article identifies some of the major causes of this and the issues that need to be solved. We take a perspective derived from queuing theory: the patient–nurse relationship is characterized by a scarcity of time and resources, requiring comprehensive coordination at all levels. For coordination, we take an information-theoretic perspective. Using both perspectives, we analyze the nature of healthcare services and show that ensuring slack, meaning a less than exhaustive use of human resources, is a sine qua non to having a good, functioning healthcare system. We analyze what coordination efforts are needed to manage relatively simple office hours, wards, and home care. Next, we address the level of care where providers cannot themselves prevent the complexity of organization that possibly damages care tasks and job quality. A lack of job quality may result in nurses leaving the profession. Job quality, in this context, depends on the ability of nurses to coordinate their activities. This requires slack resources. The availability of slack that is efficient depends on a stable inflow and retention rate of nurses. The healthcare system as a whole should ensure that the required nurse workforce will be able to coordinate and execute their tasks. Above that, workforce policies need more stability.
... The different concepts of the workflow are combined in the work of Balci, where a life-cycle model for M&S is defined [41]. Figure 4 shows a simplified view of such a workflow of a simulation study, based on [41, 42]. Activities are shown using ellipses. ...
... However, the available data are not as information-rich as those gained from a specifically crafted model-validation experiment. For physics-based models, the traditional literature on this topic therefore ought to be reviewed [42, 45, 86]. ...
Article
Full-text available
As digitalization is permeating all sectors of society toward the concept of “smart everything,” and virtual technologies and data are gaining a dominant place in the engineering and control of intelligent systems, the Digital Twin (DT) concept has surfaced as one of the top technologies to adopt. This paper discusses the DT concept from the viewpoint of Modeling and Simulation (M&S) experts. It both provides literature review elements and adopts a commentary-driven approach. We first examine the DT from a historical perspective, tracing the historical development of M&S from its roots in computational experiments to its applications in various fields and the birth of DT-related and allied concepts. We then approach DTs as an evolution of M&S, acknowledging the overlap in these different concepts. We also look at the M&S workflow and its evolution toward a DT workflow from a software engineering perspective, highlighting significant changes. Finally, we look at new challenges and requirements DTs entail, potentially leading to a revolutionary shift in M&S practices. In this way, we hope to foster the discussion on DTs and provide the M&S expert with innovative perspectives.
... To check the model's outputs, we used black-box validation (Law, 2007), which is a popular technique for this purpose. The following parameters were chosen for validation: the number of required beds, waiting time, the amount of activity in each department, and clinic utilization broken down by IMDs. ...
... To make sure our validation was robust, white-box validation (Law, 2007) was utilized to further test each component of the model for consistency as captured in the pathway. We did this by looking at all the different parts of the model during both the development and post-development phases. ...
Article
Full-text available
Health inequalities are a perennial concern for policymakers and in service delivery to ensure fair and equitable access and outcomes. As health inequalities are socially influenced by employment, income, and education, this impacts healthcare services among socio-economically disadvantaged groups, making it a pertinent area for investigation in seeking to promote equitable access. Researchers widely acknowledge that health equity is a multi-faceted problem requiring approaches to understand the complexity and interconnections in hospital planning as a precursor to healthcare delivery. Operations research offers the potential to develop analytical models and frameworks to aid in complex decision-making that has both a strategic and operational function in problem-solving. This paper develops a simulation-based modelling framework (SimulEQUITY) to model the complexities in addressing health inequalities at a hospital level. The model encompasses an entire hospital operation (including inpatient, outpatient, and emergency department services) using the discrete-event simulation method to simulate the behaviour and performance of real-world systems, processes, or organisations. The paper makes a sustained contribution to knowledge by challenging the existing population-level planning approaches in healthcare that often overlook individual patient needs, especially within disadvantaged groups. By holistically modelling an entire hospital, socio-economic variations in patients' pathways are developed by incorporating individual patient attributes and variables. This innovative framework facilitates the exploration of diverse scenarios, from processes to resources and environmental factors, enabling key decision-makers to evaluate what intervention strategies to adopt as well as the likely scenarios for future patterns of healthcare inequality. The paper outlines the decision-support toolkit developed and the practical application of the SimulEQUITY model through to implementation within a hospital in the UK. This moves hospital management and strategic planning to a more dynamic position where a software-based approach, incorporating complexity, is implicit in the modelling rather than simplification and generalisation arising from the use of population-based models.
... One of the most common approaches for validating simulation models against actual historical data is to apply statistical tests such as the t-test, chi-square, and Kolmogorov-Smirnov tests. However, these tests assume IID data (independent and identically distributed random variables), whereas in most cases real-life applications are non-stationary (i.e., have a time-series nature) (Law and Kelton 2000). Applying time-independent statistical tests would invalidate the validation process. ...
... In this paper, we attempt to combine a few methodologies to obtain a robust two-phase mechanism to validate time-series models. Law and Kelton (2000) proposed a technique to satisfy the IID (independent and identically distributed) assumption when using a t-test for time-series data. According to them, the approach is to run n independent replications (preferably n > 30) and compare them with n independent observations from the real-world system. ...
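A hedged sketch of the replication idea described above: treat the summary output of each independent simulation replication as one observation and compare the replication results with matching real-world observations using a two-sample t-test. This illustrates the general approach rather than the cited authors' exact procedure; the Welch correction and the placeholder data arrays are assumptions.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# One summary statistic (e.g., average waiting time) per independent replication.
n_reps = 30
sim_rep_means = np.array([rng.exponential(5.0, 500).mean() for _ in range(n_reps)])

# Matching summary statistics observed on the real system (placeholder data).
real_obs = rng.normal(5.1, 0.3, n_reps)

# Replication means are approximately IID and, by the CLT, roughly normal,
# so a two-sample (Welch) t-test is a reasonable comparison.
t_stat, p_value = stats.ttest_ind(sim_rep_means, real_obs, equal_var=False)
print(f"t = {t_stat:.3f}, p = {p_value:.3f}")
```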
Article
Full-text available
Fulfillment centers in the e-commerce industry are highly complex systems that house inventory and fulfill customer orders. One of the key processes at these centers involves translating customer demands into truck and yard operations. Truck yards with operational issues can create delays in customer orders. In this paper, we show how a scalable cloud-based hybrid simulation model is used to improve yard operations, optimize flow and design, and forecast yard congestion. Cloud experimentation, along with automated database connectivity, allows any user to run simulation analyses to derive data-driven operational decisions. We tested the model on two real-world case studies, which resulted in cost savings for the organization. This paper also proposes a robust automated framework for setting simulation validation benchmarks and measuring model accuracy.
... This study presents a decision model based on discrete-event simulation (Law, 2015) and Multi-Attribute Value Theory (MAVT) (Belton and Stewart, 2002; Almeida et al., 2015). The proposed model evaluates the performance of simulated alternatives using a multicriteria value function that translates the DM's preferences and the trade-off judgments among: lead time (CT), which represents the average time a part takes to traverse the entire production system, whether in value-adding activities or not; quantity of items produced (PP), which refers to the total number of finished items; work in process (wip), items or components that are partially finished at some stage of the production process; and bottleneck resource utilization (u), which gives the rate that determines the flow of the process. ...
... In step 3, a discrete-event simulation experiment is developed to evaluate the performance of each scenario. It is worth noting that the FITradeoff model uses Law's (2015) method to systematize the different stages of the simulation experiment. Also in step 3, it is essential to check for statistically significant differences between the mean performance of the scenarios on each of the criteria. ...
... The techniques employed to achieve the findings identified in the literature search can be classified into three main groups: empirical or analytical methods such as case studies, field surveys, and experiments [48]; mathematical modeling techniques that include linear programming, dynamic programming, Markov chains, regression analysis, and similar methods [49]; and finally, simulation methods, which include simulation experiments, scenario modeling, and sensitivity analysis [50]. ...
Article
Full-text available
With the increasing environmental concerns and legislative pressures, the focus on incorporating ecologically sustainable practices into inventory management systems has grown, leading to the emergence of green inventory management. However, this field is not without its challenges, with numerous conflicting real-world constraints and goals. A comprehensive literature review targeting green inventory management operating under a periodic review inventory system was conducted to identify research gaps and potential directions for future research. Despite the growing interest in the field, this review highlighted the scarcity of relevant studies. Out of the 1272 papers reviewed, only 16 studies, or 1.3%, met the criteria for exploring periodic review inventory systems while simultaneously considering environmental and economic aspects. These studies were further analyzed in detail and categorized according to key classification criteria. The future research directions highlighted the need for additional studies on periodic review inventory systems operating under stochastic market demand in the context of green supply chain management. The standardization of emission calculation methodologies was also emphasized as a crucial step towards aligning inventory management practices with the aim of increasing inventory management efficiency and the related improvement in the environmental performance of supply chains.
... Normally distributed observations are a common assumption used in many R&S procedures because Assumption 1 can be justified by the central limit theorem when observations are either within-replication averages or batch means (Law and Kelton 2000). In other words, when Assumption 1 is violated, the decision maker can apply the proposed procedures by treating batch means of non-normally distributed observations as basic observations. ...
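For readers unfamiliar with batch means, the sketch below shows the basic mechanics referred to above: a single long, autocorrelated output series is cut into contiguous batches whose averages are treated as approximately IID and, by the central limit theorem, roughly normal observations. The batch count and the synthetic AR(1)-like series are illustrative assumptions.

```python
import numpy as np

def batch_means(series, n_batches):
    """Split one long within-replication output series into contiguous
    batches and return the batch averages (approximately IID for large
    batches, roughly normal by the central limit theorem)."""
    series = np.asarray(series, dtype=float)
    usable = (len(series) // n_batches) * n_batches   # drop the remainder
    batches = series[:usable].reshape(n_batches, -1)
    return batches.mean(axis=1)

# Example: an autocorrelated output stream (placeholder data, AR(1)-like).
rng = np.random.default_rng(0)
x = np.zeros(100_000)
for t in range(1, len(x)):
    x[t] = 0.9 * x[t - 1] + rng.normal()

means = batch_means(x, n_batches=20)
print(means.mean(), means.std(ddof=1))
```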
... The purpose of model verification is to verify the correctness and reliability of the functional model and to ensure that the established model is consistent with the expected function of the actual system, so as to avoid potential problems during the system design and development phase. Model validation can be carried out as follows (Law and Kelton 2020; Chick and Sanchez 2008): 1. Experimental verification: verify the correctness of the functional model through experiments. ...
... The DES model analyzes the project execution process from an event-oriented perspective. It models the system as a network of queues and activities, where activities compete for resources [42], and the system state changes based on the set of activities that occur in the time slice [43]. In this research, the projects, which require additional allocation of resources, are decomposed into activities that require support from various resources. ...
Article
Full-text available
To reasonably allocate shared resources (SRs) in project portfolios (PP) and realize a PP's maximal benefits, this research proposes a hybrid SR allocation model based on system dynamics (SD) and discrete event simulation (DES). Unlike previous resource allocation methods that solely consider project-level benefits, the proposed model simultaneously considers the benefits at both project and portfolio levels. Specifically, starting from the project activity level, we first construct a project schedule evaluation sub-model using DES; second, a synergy benefit evaluation sub-model at the portfolio level is constructed by considering both the positive and negative benefits brought by SR allocation using SD; finally, by clarifying the connection between the two sub-models, a hybrid simulation model for SR allocation is obtained. A case study is used to demonstrate the practicality of the hybrid model. Sensitivity analysis is then conducted to examine how the model output changes due to variations in parameters. The proposed model is also compared with the DES model to demonstrate the superiority of the hybridization. Simulation results reveal that the proposed model can systematically integrate the effects of SR allocation at both project and portfolio levels, which provides PP managers with a tool to enhance SR allocation performance.
... For each combination, 30 simulation replications were conducted. The sequential procedure described in Law and Kelton [32] was used to determine the number of replications. Three performance values collected here include the system's throughput (THP), the mean flow time of parts (MFTP), and the mean tardiness of parts (MTP) that reflect the need to assess both the efficiency and effectiveness of the in-line stocker system, as well as its ability to meet customer demands and deadlines. ...
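The sequential-replication idea mentioned above can be sketched as follows: keep adding independent replications until the confidence-interval half-width, relative to the estimated mean, falls below a target precision. This is a generic sketch of that style of procedure under assumed parameters (relative precision 0.05, 95% confidence, initial 10 replications), not a verbatim reproduction of the cited procedure.

```python
import numpy as np
from scipy import stats

def replications_needed(run_replication, gamma=0.05, conf=0.95, n0=10, n_max=1000):
    """Add replications until the CI half-width divided by |mean|
    drops below the target relative precision gamma."""
    data = [run_replication() for _ in range(n0)]
    while len(data) < n_max:
        n = len(data)
        mean = np.mean(data)
        half = stats.t.ppf(0.5 + conf / 2, n - 1) * np.std(data, ddof=1) / np.sqrt(n)
        if half / abs(mean) <= gamma:
            return n, mean, half
        data.append(run_replication())   # not precise enough yet: one more replication
    return len(data), np.mean(data), half

# Placeholder replication: in practice this would run the simulation model once
# and return one performance measure (e.g., throughput).
rng = np.random.default_rng(7)
print(replications_needed(lambda: rng.normal(100.0, 15.0)))
```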
Article
Full-text available
In-line stockers are the choice of automated material handling systems in many modern TFT-LCD (Thin Film Transistor-Liquid Crystal Display) bays. The performance of an in-line stocker depends much on the performance of its stacker cranes. In this paper, the aim is to study three stacker-crane control problems, i.e., task-determination, port-selection, and cassette-selection, which affect the performance of stacker cranes. This study represents a significant contribution to the field, providing new insights into these critical control problems. The purpose of the task-determination problem is to determine whether the next task of a stacker crane is a port-clearing task or a cassette-pickup-and-delivery task. The port-selection problem appears if the next task of the stacker crane is port-clearing; its purpose is to determine which port the stacker crane should visit to perform the port-clearing task. On the other hand, if the stacker crane's next task is cassette-pickup-and-delivery, the cassette-selection problem appears, in which the stacker crane needs to determine which cassette it should pick up and deliver first. Rules with different control logics are proposed for each problem. To understand the performance of the proposed rules, computer simulations were conducted. Simulation results were collected for three performance measures (throughput, the mean flow time of cassettes, and the mean tardiness of cassettes) that reflect the need to assess both the efficiency and effectiveness of the in-line stocker system, as well as its ability to meet customer demands and deadlines. The analysis of the results identified the best rule for each problem (the longest starvation time rule and the rule selecting the cassette with the shortest distance from its current position to its destination port) and the best combination of rules for achieving the best overall performance, resulting in a more effective and efficient solution to the stacker-crane control problems. The research findings have practical implications for improving the efficiency and productivity of stacker cranes in various industrial settings. It is believed this knowledge can benefit TFT-LCD manufacturers in improving the performance of their in-line stockers and increasing their competitiveness as a result.
... A system can be studied either through experiments with the real system or through simulations of the system [40]. Consequently, the simulation model mirrors the characteristics of the system, replicating the behavior of the physical system under operational conditions. ...
Article
This study proposes a simulation model for allocating counterbalanced forklifts in a logistics distribution center (LDC) with aisle constraints. Modeling the case study for a consumer goods firm, the performance measures of the logistics operation were calculated, and experimental scenarios were proposed for decision-making regarding the number of forklifts and their productivity. The relevance of this research is supported by the gap in the existing literature on enhancing forklift assignments in massive storage systems with restrictions. The simulation scenarios contribute toward standardizing logistics operations with similar characteristics, starting from the layout stage of an LDC. The designed simulation model demonstrates that the simulated allocation incorporates technical and human resources in warehouse operations. Utilizing discrete-event simulation (DES) as a framework, this study assesses various scenarios in an LDC with restrictions on the forklifts. The hypothesis of the problem was analyzed, and the simulation model was used to characterize the system behavior under different scenarios and to guide the decision-making processes impacting operational costs and client service levels. This research employs DES to address performance indicators and operational costs, serving as a methodological guide for resource allocation in logistics operations at distribution centers.
... compassionate) under uncertain value-destroying events in line with prior simulation research in management (see Davis et al., 2007; Fauchart and Keilbach, 2009; Harrison et al., 2007). We first conduct a base-case analysis to derive preliminary insights (following a standard practice in simulation studies; see Sterman, 2000), and then conduct extensive computational analyses using carefully designed simulation experiments (as prescribed by Kelton and Law (2000) and Montgomery (2004)). ...
Article
We develop a community-based model of entrepreneurial action under value-destroying uncertainty (e.g., disasters) to formalize two well-established altruistic motivations— reciprocal opportunity belief (a “calculative” mindset of doing good with expectations of future payback) and compassionate opportunity belief (a “non-calculative” mindset of doing good without expectations of future payback)—and identify which belief and contingencies produce greater community welfare (i.e., value). Three moderating factors are considered: community size, actor’s action desirability, and welfare value increment of the community members. Our analysis shows that when the three moderating factors are large, the reciprocal opportunity belief generally produces greater community welfare than the compassionate opportunity belief; otherwise, the reverse occurs. We conclude that calculative mindset and community size go hand in hand to produce greater network effects through altruistic-venturing actions, which ultimately lead to greater community welfare. Our findings contribute to the emerging literature on the post-disaster venturing by advancing the contingency effects of altruistic motives on entrepreneurial actions to alleviate others’ sufferings and the counter-intuitive benefits of “calculative” mindset. We also stimulate a new conversation to redirect research in entrepreneurship toward the “community” as a viable unit of analysis.
... It is used in various disciplines, such as economics, sociology, ecology, and public policy, especially for analyzing complex events that are difficult to observe or involve multiple levels of analysis. DEMS, as stated by Law (2014), is a simulation modeling technique that focuses on modeling and evaluating systems that undergo different status changes. The process requires building a simulation model that represents a system as a sequence of discrete events, such as arrivals, departures, and state changes. ...
... An effective Monte Carlo analysis should incorporate not only the ranges of realistic possible outcomes, but also the distributional nature of how the identified risks actually "behave" between the identified extremes (CEAA, 2011; Kozlova and Yeomans, 2019). Although Monte Carlo simulation has been applied to a wide spectrum of problems, the approach to its output analysis has remained comparatively static (Law and Kelton, 2000; Kozlova and Yeomans, 2022a, b). While simulation models enable a merger of the stochastic behaviours directly into the analysis process, they do not supply any prescriptive mechanism for determining actual system solutions (Kozlova and Yeomans, 2022b). ...
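As a minimal illustration of the Monte Carlo output analysis being discussed (not the SimDec method itself), the sketch below propagates two uncertain inputs through a simple model and summarizes the resulting output distribution rather than a single point estimate. The model, the input distributions, and the reported percentiles are assumptions chosen for the example.

```python
import numpy as np

rng = np.random.default_rng(123)
n = 100_000

# Uncertain inputs, e.g., unit cost and demand (illustrative distributions).
unit_cost = rng.triangular(left=8, mode=10, right=15, size=n)
demand = rng.lognormal(mean=6.0, sigma=0.4, size=n)

# Simple output model: total cost for each sampled scenario.
total_cost = unit_cost * demand

# Output analysis: empirical distribution summaries of the simulated outcomes.
p5, p50, p95 = np.percentile(total_cost, [5, 50, 95])
print(f"median = {p50:,.0f}, 90% interval = [{p5:,.0f}, {p95:,.0f}]")
```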
Article
Full-text available
“Real world” risk analysis in environmental contexts frequently requires contrasting numerous uncertain factors simultaneously and communicating difficult-to-capture interactions. Monte Carlo simulation modelling of complex environmental systems is frequently employed to integrate uncertain inputs and to construct probability distributions of the resulting outputs. Visual analytics and data visualization can then be employed for processing, analyzing, and communicating the influence of any multi-variable uncertainties on the system. The simulation decomposition (SimDec) analytical technique has recently been employed in complex assessments of environmental systems. SimDec has proved to be beneficial in revealing interdependencies in complex models, lowering computational burdens, facilitating decision-maker perceptions, and, especially, making analytical components visualizable. It has been demonstrated that many analytical findings would not have been revealed without the coloured visualizations provided by SimDec. However, an ad hoc colouring scheme for the distribution output is neither sufficient nor capable of producing much of the key visualizable information requisite for an effective SimDec analysis. Instead, an approach recently referred to as intelligent colouring has been proposed. This paper outlines, highlights, and demonstrates the importance of, and best practices in, an intelligent colouring scheme needed for an effective SimDec analysis of complex environmental systems.
... This type of simulation involves modeling a system as it evolves over time. Discrete events occur at specific points in time and affect the state of the system and, consequently, its performance measures (Law, 2007). Discrete-event simulations are used to model systems whose state changes dynamically and stochastically, in which events trigger the state changes, and they are used for systems that exhibit a strong queueing structure (Günal, 2012). ...
Article
Full-text available
Modeling patient flow in intensive care units can contribute to a better understanding of the processes, and the use of such models can help increase the functionality of intensive care systems. Poor management of patient flow in intensive care units (ICUs) can lead to patient waiting and to patients being rejected. ICU management also faces significant challenges in capacity management and planning. This research focuses on modeling the flow of intensive care patients in a tertiary public university hospital using discrete-event simulation and on determining the capacity requirement. Delays can occur in transfers between wards both for patients who need intensive care and for those whose intensive care has ended. The aim of this study is to evaluate the performance of hospital management policies by simulating the constraints in the admission, intensive-care-bed waiting, and discharge processes of ICU patients, and to calculate the number of beds required in a scenario in which patient waiting times are minimized at the current number of beds. In addition, the discharge process, which is delayed because beds in other wards are occupied, is addressed with an alternative policy proposal. With the simulation model developed, it was found that the current state of ICU services can be improved in terms of patient waiting times. It was observed that just-in-time patient discharges can reduce the average waiting times of patients to be transferred to ICU beds. Keywords: Intensive care unit, patient flow, capacity planning, simulation.
... One of the commonly used methods (also used in this paper) is the MLE method, which Law and Kelton (1991) considered an appropriate method of parameter estimation in the distribution-fitting process. According to Kottegoda and Rosso (2008, p. 107), "Maximum likelihood, or ML, is an alternative to the method of moments. ...
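To make the distribution-fitting step concrete, here is a small sketch that fits a GEV distribution to annual maxima by maximum likelihood, checks the fit, and evaluates a return level. The synthetic data, SciPy's default ML fitting, and the KS check (which is only approximate when parameters are estimated from the same data) are assumptions for the example, not the paper's actual computation.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2024)

# Placeholder annual-maximum series (50 years of synthetic maxima, in deg C).
annual_max = rng.gumbel(loc=38.0, scale=2.0, size=50)

# Maximum likelihood fit of the GEV distribution (SciPy fits by ML by default).
shape, loc, scale = stats.genextreme.fit(annual_max)

# Goodness of fit: Kolmogorov-Smirnov against the fitted distribution (approximate).
ks_stat, ks_p = stats.kstest(annual_max, "genextreme", args=(shape, loc, scale))

# Example return level: value exceeded with probability 1% per year (100-year event).
t100 = stats.genextreme.ppf(0.99, shape, loc=loc, scale=scale)
print(f"KS p-value = {ks_p:.2f}, 100-year return level = {t100:.1f} deg C")
```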
Article
Full-text available
This paper describes the frequency analysis of absolute maximum air temperatures, using annual maximum series (AMS) for the period 1961-2010 from 40 climatological stations in Serbia, with maximum likelihood estimation of the distribution parameters. For the goodness-of-fit testing of the General Extreme Value (GEV), Normal, Log-Normal, Pearson 3 (three-parameter), and Log-Pearson 3 distributions, three different tests were used (Kolmogorov-Smirnov, Anderson-Darling, chi-square). Based on the results of these tests (best average rank of a given distribution), the appropriate distribution is selected. The GEV distribution proved to be the most appropriate one in most cases. The probabilities of exceedance of absolute maximum air temperatures at the 1%, 0.5%, 0.2%, and 0.1% levels are calculated. A spatial analysis of the observed and modeled values of absolute maximum air temperatures in Serbia is given. The absolute maximum air temperature of 44.9 °C was recorded at Smederevska Palanka station, and the lowest value of maximum air temperature, 35.8 °C, was recorded at Zlatibor station, one of the stations with the highest altitude. The modeled absolute maximum air temperatures are highest at Zajecar station, with 44.5 °C, 45.6 °C, 47.0 °C, and 48.0 °C, and the lowest values are calculated for Sjenica station, with 35.5 °C, 35.8 °C, 36.1 °C, and 36.2 °C, for the return periods of 100, 200, 500, and 1000 years, respectively. Our findings indicate the possible occurrence of much higher absolute maximum air temperatures in the future than the ones recorded at almost all of the analyzed stations.
... The model development process follows a recursive cycle of verification, validation, and calibration steps (Law and Kelton 1991). Model verification should ensure that the programming implementation of the conceptual model is correct. ...
Article
Background: Trees are a critical part of urban infrastructure. Cities worldwide are pledging afforestation objectives due to net-zero targets; however, their realisation requires a comprehensive framework that combines science, policy, and practice. Methods: The paper presents the Green Urban Scenarios (GUS) framework for designing and monitoring green infrastructures. GUS considers weather, maintenance, tree species, diseases, and spatial distributions of trees to forecast their impacts. The framework uses agent-based modelling (ABM) and the simulation paradigm to integrate green infrastructure into a city's ecological, spatial, economic, and social context. ABM enables the creation of digital twins for urban ecosystems at any level of granularity, including individual trees, to accurately predict their future trajectories. Digital representations of trees are created using a combination of datasets such as earth observations from space, street view images, field surveys, and qualitative descriptions of typologies within existing and future projects. Machine learning and statistical models calibrate biomass growth patterns and carbon release schemes. Results: The paper examines various green area typologies, simulating several hypothetical scenarios based on Glasgow's urban forests. It exhibits the emergence of heterogeneity features of the forests due to interactions among trees. The growth trajectory of trees has a non-linear transition phase toward stable growth in maturity. Reduced maintenance deteriorates the health of trees, leading to a lower survival rate and increased CO2 emissions, while the stormwater alleviation capacity may differ among species. Conclusions: The paper demonstrates how GUS can facilitate policies and maintenance of urban forests with environmental, social, and economic benefits.
... It allows attendees to select seats, purchase tickets, and receive confirmation. 6. Cruise Reservation System: Cruise lines use reservation systems to manage cabin bookings, dining options, and onboard activities. ...
Article
The reservation system is a socio-political mechanism aimed at redressing historical inequalities by allocating opportunities, resources, or benefits to specific marginalized groups. Originating in various forms across the globe, it has been applied in education, employment, and public services. This abstract provides an overview of the reservation system's evolution, its underlying principles, and its impact on societal dynamics. It also examines the challenges of implementation, including potential controversies surrounding meritocracy and equitable distribution. As a critical tool for social justice, the reservation system continues to provoke discussion about its effectiveness and long-term sustainability.
... According to Pidd (1998), a model is an explicit external representation of reality as seen by the people who wish to use it to understand, change, manage, and control that part of reality. Law & Kelton (1991) state that the advantages of using simulation are: modeling complex systems with stochastic elements that must be solved and analyzed by simulation, since they could not be described perfectly by mathematical models solved analytically; providing better control over the experimental conditions than would be possible in the real system, since several replications can be made in the model, supplying the values for all parameters; allowing precise replication of experiments, making it possible to test different alternatives for the system; allowing long periods to be simulated in a reduced time, or vice versa; and avoiding unnecessary expense, since it is generally more economical than testing the real system. Pinto (1999) describes that one of the advantages of simulation is the possibility of controlling the speed at which the changes in the model state take place. ...
... After developing a conceptual model that represents the causality analysis framework for retail consumer behavior, validation testing of this model is required. Validation is concerned with determining whether the created model is an accurate representation of the real system for the study objectives (Law, 2013). In the validation stage, referring to Barlas (1996), tests of model structure, such as the structure-verification test and the boundary-adequacy test, were carried out. ...
... It has been empirically shown that the gamma distribution is the most appropriate distribution for representing the cycle times of tasks performed by human workers in manufacturing systems (Law and Kelton, 2015). Dallery and Gershwin (1992) used an exponential distribution, which is a special case of the gamma distribution, to represent the distribution of manual task-processing times. ...
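As a small illustration of the point about gamma-distributed manual task times, the sketch below generates processing times from a gamma distribution specified by a chosen mean and coefficient of variation. The numbers (mean 60 s, 30% variability) are illustrative assumptions, not values from the cited works.

```python
import numpy as np

def gamma_times(mean, cv, size, seed=0):
    """Sample task processing times from a gamma distribution specified
    by its mean and coefficient of variation (cv = std / mean)."""
    shape = 1.0 / cv**2          # gamma shape k, since cv = 1 / sqrt(k)
    scale = mean / shape         # gamma scale theta, so k * theta = mean
    return np.random.default_rng(seed).gamma(shape, scale, size)

# Example: a manual task with mean 60 s and 30% variability.
times = gamma_times(mean=60.0, cv=0.3, size=10_000)
print(times.mean(), times.std() / times.mean())   # should be close to 60 and 0.3
```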
Article
Production systems in industries are undergoing transformative changes, with the rise of Industry 4.0 technologies amplifying the complexity of manual and semi-automated workstations, necessitating advanced training and adaptability from human workers. Human workers, thanks to their unique blend of cognitive and motor skills and the flexibility this provides, are indispensable and will continue to play a pivotal role. Because of their unique experiences and attributes, they inherently exhibit variability in their processing times and learning rates, which complicates frequent production ramp-ups. Recognizing the lack of comprehensive models that simultaneously account for stochastic processing times and heterogeneous learning during production ramp-ups, this study aims to bridge this gap. We developed an analytical model of a two-worker production system with an intermediate buffer by focusing on worker learning curves, stochastic processing times, and learning heterogeneity. Through an illustrative case, we derived insights into the performance of such systems, specifically in terms of measures including the mean throughput time of a batch, the mean waiting time of a part in the buffer, the mean idle time of workers, the work-in-progress distribution, and buffer usage during the production run. We found that deterministic learning models can significantly underestimate throughput times, and that even consistent average learning rates can lead to variable throughput times depending on the learning patterns. Our findings emphasize the need for production managers to consider these factors for realistic and effective production planning, underscoring the novelty of our approach in addressing these intricate dynamics to improve not only system performance, but also worker well-being.
... In Type I censoring, parametric analysis requires a goodness-of-fit test for the data distribution, such as the Anderson-Darling test. The Anderson-Darling test is a statistical test that can be applied to specific distributions such as the normal, lognormal, log-logistic, exponential, Weibull, and logistic distributions [6], [7]. ...
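A brief sketch of running the Anderson-Darling test mentioned above with SciPy. SciPy's `anderson` supports only a limited set of reference distributions, so the lognormal case is handled here by testing the logarithm of the data for normality; the sample data are placeholder assumptions.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
lifetimes = rng.lognormal(mean=3.0, sigma=0.5, size=200)   # placeholder failure data

# Anderson-Darling test for normality of log(lifetimes), i.e., a lognormal check.
result = stats.anderson(np.log(lifetimes), dist="norm")
print("A^2 =", round(result.statistic, 3))
for crit, sig in zip(result.critical_values, result.significance_level):
    decision = "reject" if result.statistic > crit else "do not reject"
    print(f"  at {sig}% significance: critical value {crit:.3f} -> {decision}")
```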
Article
Full-text available
Survival analysis is a research method that examines the survival time of individuals or experimental units in relation to events such as death, disease, recovery, or other experiences. This study utilizes a parametric survival analysis model with a two-parameter log-logistic distribution and the Maximum Likelihood Estimation (MLE) method to analyze the survival of students during their study period. The log-logistic distribution is chosen due to its ability to capture early or late failure patterns. The objective of this research is to analyze Type I censored survival data using the log-logistic distribution applied to secondary data on student study duration. The dataset consists of 98 observations. The calculated values for the β and γ parameters of the two-parameter log-logistic distribution are 2.12831 and 0.0918891, respectively. The probability of students completing their studies by semester 8 (hazard function h(8)) is 0.370102, while the probability of students continuing their studies in semester 9 (survival function s(9)) is 0.320817.
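For reference, the standard two-parameter log-logistic survival and hazard functions can be written and evaluated as in the sketch below. This uses a generic shape and scale parameterization, which may not match the (β, γ) convention used in the abstract above, so the printed numbers are purely illustrative.

```python
import numpy as np

def loglogistic_survival(t, shape, scale):
    """S(t) = 1 / (1 + (t/scale)**shape) for the two-parameter log-logistic."""
    return 1.0 / (1.0 + (t / scale) ** shape)

def loglogistic_hazard(t, shape, scale):
    """h(t) = f(t) / S(t) = (shape/scale) * (t/scale)**(shape - 1) / (1 + (t/scale)**shape)."""
    z = (t / scale) ** shape
    return (shape / scale) * (t / scale) ** (shape - 1) / (1.0 + z)

# Illustrative parameter values only (not the fitted parameters from the study).
print(loglogistic_survival(9, shape=2.1, scale=8.0))
print(loglogistic_hazard(8, shape=2.1, scale=8.0))
```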
Thesis
Full-text available
In the textile sector, which is labor-intensive and has an important place in the country's economy, it is very important to identify the problems in the system and to produce quick solutions to them. For this, it is necessary to use techniques that effectively improve the system in order to develop companies' existing systems in line with their goals and objectives. In this study, the aim is to determine the current situation of a textile firm's production line using the simulation method. Within the scope of the study, the goals are to create the best improvement plan by developing various production-line improvement scenarios, to improve the system outputs with existing resources, and to reduce bottlenecks by showing the effect of the use of quality tools in the scenarios developed. In this context, the current situation of the production line, assuming that the company is operating at full capacity, has been revealed by the simulation method, and Pareto analysis and the cause-and-effect diagram, which are quality improvement tools, are used to improve the current situation. Thus, a more systematic approach has been followed while creating the alternative scenarios. In the literature, quality tools are used to verify the current situation in studies conducted in the textile sector. This study differs in that they are used to determine priority improvement areas in the creation of alternative scenarios, and it contributes to the literature in this respect. In the first scenario, created according to the results of the Pareto analysis, the amount of solid output in the system is improved by 11%, while in the second scenario, which is created using the cause-and-effect diagram, the amount of solid output is increased by 17% while employing 11 fewer workers. In the third scenario, where both scenarios are applied together, the amount of solid output is increased by 25%. At the same time, the number of pending products is reduced by 69% with the second scenario and by 75% with the third scenario. According to the results of the study, it is seen that the third scenario uses the resources more effectively for the company and the product flow rate in the system increases, thus making the production system in the firm more sustainable.
Article
Full-text available
Sustainability as a concept is present in most aspects of our everyday life, and industry is no exception. Likewise, there is no doubt that the necessity to produce goods in a sustainable way and to ensure that products are sustainable is gaining more and more attention from producers, customers, governments, and various organizations. Understandably, there are several ways to increase the sustainable development of industrial production. One effective tool is simulation, which can have a significant impact on improving environmental, economic, and social sustainability. This paper explores the role of simulation as a powerful scientific and engineering solution in advancing sustainability within industrial ecosystems. Its main scope is to map the existing literature on the usage of simulation as a supportive tool for achieving this goal. For this purpose, a bibliometric analysis was conducted, allowing for tailored insights into the use of simulation in sustainable production.
Article
Full-text available
The most basic location problem is the Weber problem, which is the basis of many advanced location models. It consists of finding the location of a facility that minimizes the sum of weighted distances to a set of demand points. Solution approaches have convergence issues when the optimal solution is at a demand point, because the derivatives of the objective function do not exist at a demand point and are discontinuous near it. In this paper we investigate the probability that the optimal location is on a demand point, create example problems that may take millions of iterations to converge to the optimal location, and suggest a simple improvement to the Weiszfeld solution algorithm. One would expect that if the number of demand points increases to infinity, the probability that the optimal location is on a demand point converges to 1, because there is no "space" left to locate the facility not on a demand point. Consequently, we may experience convergence issues for relatively large problems. However, it was shown that for randomly generated points in a circle the probability converges to zero, which is counterintuitive. In this paper we further investigate this probability. Another interesting result of our experiments is that FORTRAN is much faster than Python for such simulations. Researchers are advised to apply old-fashioned programming languages rather than newer software for simulations of this type.
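For context, a basic Weiszfeld iteration for the Weber problem looks like the sketch below. The small epsilon added to each distance is one common safeguard against the division-by-zero issue when an iterate lands on a demand point; it is not necessarily the improvement proposed in the paper, and the demand points and weights are placeholder assumptions.

```python
import numpy as np

def weiszfeld(points, weights, iters=1000, eps=1e-10):
    """Weber problem: minimize sum_i w_i * ||x - p_i|| via the Weiszfeld iteration.
    eps guards the update when the iterate coincides with a demand point."""
    points = np.asarray(points, dtype=float)
    weights = np.asarray(weights, dtype=float)
    x = np.average(points, axis=0, weights=weights)   # weighted centroid as a start
    for _ in range(iters):
        d = np.linalg.norm(points - x, axis=1) + eps   # distances to demand points
        w = weights / d
        x_new = (w[:, None] * points).sum(axis=0) / w.sum()
        if np.linalg.norm(x_new - x) < 1e-12:          # stop when the iterate settles
            break
        x = x_new
    return x

pts = [(0, 0), (4, 0), (0, 3), (5, 5)]
print(weiszfeld(pts, weights=[1, 1, 1, 2]))
```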
Article
Full-text available
Existing location models considering multi-purpose shopping behavior limit the number of stops a customer makes to two. We introduce the multi-purpose (MP) competitive facility location model with more than two stops. We locate one or more facilities in a competitive environment, assuming a shopper may stop multiple times during one trip to purchase different complementary goods or services. We show that when some or all trips are multi-purpose, our model captures at least as much market share as the MP models with fewer purposes. Our extensive simulation experiments show that the MP models work best when multiple new facilities are added. As the number of facilities increases, however, the returns diminish due to cannibalization. Also, with significant increases in complexity for each additional stop added, expanding the model beyond three purposes may not be practical.
Article
Full-text available
The article presents the state of the art and the methodology of the doctoral research project "Reference framework for the modeling and simulation of maritime cyber defense - MARCIM". The state of the art defined the background of the research problem, the state of scientific activity, and the trends and challenges of the MARCIM topics: cyber defense; modeling and simulation in cybersecurity and cyber defense; and maritime cybersecurity and cyber defense. The methodology was formulated with a focus on the modeling of complex systems, by phases and application actors. The article mainly concludes that maritime cyber defense at the strategic level behaves as a complex system, with dynamics, processes, and elements that cannot be clearly identified, requiring modeling and simulation, with a metaheuristic approach, to study the set of actions and interactions among its entities.
Conference Paper
Full-text available
Fulfillment centers in the e-commerce industry are highly complex systems that house inventory and fulfill customer orders. One of the key processes at these centers involves translating customer demands into truck and yard operations. Truck yards with operational issues can create delays in customer orders. In this paper, we show how a scalable cloud-based hybrid simulation model is used to improve yard operations, optimize flow and design, and forecast yard congestion. Cloud experimentation, along with automated database connectivity, allows any user to run simulation analyses to derive data-driven operational decisions. We tested the model on two real-world case studies, which resulted in cost savings for the organization. This paper also proposes a robust automated framework for setting simulation validation benchmarks and measuring model accuracy.
Chapter
Computer simulation, the process of mathematical modelling performed on a computer, is designed to predict the behavior of a real-world system. As a system becomes more complex, the simulation engine must run numerous times in response to the increasing complexity of the input and the simulation process. Additionally, an expensive physical experiment needs to be performed to validate the results. This paper demonstrates an innovative, general-purpose simulation approach strengthened by refinement learning (RL), formalized in the SIM_RL algorithm, and using epidemic spread (COVID-19) test data. The main advantages of this approach are computational resource savings, reduced need for physical experiments, and the ability to predict system behavior based on actual results. Moreover, this approach can be used in various disciplines to solve complex simulation problems.
Article
Full-text available
The normal distribution is an important assumption for many statistical methods. Normality is usually tested with methods such as the Kolmogorov-Smirnov, Anderson-Darling, and Shapiro-Wilk tests. Simulation testing was carried out by generating data from the normal distribution, the t-distribution, and the exponential distribution. For the generated data, the Shapiro-Wilk method performed better than the other methods, whereas for large samples the Anderson-Darling method performed better than the others. One set of generated data showed a cumulative rejection rate approaching 100%, meaning the data are not normally distributed. For the t-distributed data, as the degrees of freedom increase the distribution of the data approaches the normal distribution, and the most consistent cumulative rejection rate was obtained with the Anderson-Darling method. A further set of generated data produced a cumulative rejection rate approaching 100%, with the Kolmogorov-Smirnov method being the most consistent. Thus, the choice of normality test depends strongly on the sample size and the distribution of the data.
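To make the comparison in the abstract above concrete, the sketch below applies the three named normality tests to one generated sample from each distribution. The sample sizes and parameters are arbitrary, and the Kolmogorov-Smirnov test is run against a normal distribution whose parameters are estimated from the same data, which makes its p-value only approximate.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(11)
samples = {
    "normal": rng.normal(0, 1, 100),
    "t (df=5)": rng.standard_t(5, 100),
    "exponential": rng.exponential(1.0, 100),
}

for name, x in samples.items():
    sw_stat, sw_p = stats.shapiro(x)                      # Shapiro-Wilk
    ad_stat = stats.anderson(x, dist="norm").statistic    # Anderson-Darling A^2
    # KS with parameters estimated from the data is only approximate.
    ks_stat, ks_p = stats.kstest(x, "norm", args=(x.mean(), x.std(ddof=1)))
    print(f"{name:12s}  Shapiro-Wilk p={sw_p:.3f}  AD A^2={ad_stat:.2f}  KS p={ks_p:.3f}")
```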
Article
Full-text available
Objective: To evaluate the weight mix for the market obtained with variation in the pre-fixed fingerling entry time, for the cultivation of tambaqui (Colossoma macropomum) in a semi-dug tank. Theoretical framework: The parameters evaluated regarding resources are the tanks, each of which has its own capacity and its own state variable, which includes fish biomass, growth function, and mortality rate. Identifying potential alternatives and making better decisions that optimize biomass production are important, provided they also take into account the reduction of the environmental impact. Method: The model incorporates two types of input variables. The discrete-event variables comprise the number of fish in each batch, the number of tanks available, the time between arrivals of fingerlings in the system, and the frequency of classification by weight for the market. The second type refers to continuous-time variables, involving the weight of the fish, the dissolved oxygen (DO) available to the fish, and feed consumption. Results and conclusions: The analysis showed that the decision variables are the quantities of fish, with target final weights of 0.5 kg, 1 kg, and 2 kg, which are related to the entry time pre-fixed as "30, 40, 50, 60, 70, 80, 90, 100 days" in phase I. This results in the optimization of production and of the target weight for the market as a function of time, in layout scenarios of 5 and 10 tanks, with harvesting in both, using weight mixes of 0.5 kg, 1 kg and 0.5 kg, 1 kg, 2 kg to maximize net profit. Considering that the transition between growth phases is a stochastic process that satisfies the Markov property, it was possible to define the balance between the input and output of the system. Research implications: The study is of great relevance, as it describes a sequential queue through the growth phases in relation to time, capable of determining the optimization of production with a weight mix that maximizes net profit. Originality/value: The research reveals that it is possible to use queueing theory analyses in stochastic processes, evaluating the transition between time phases, which satisfies the Markov property.
Chapter
Microservices deployed and managed by container orchestration frameworks like Kubernetes are the bases of modern cloud applications. In microservice performance modeling and prediction, simulations provide a lightweight alternative to experimental analysis, which requires dedicated infrastructure and a laborious setup. However, existing simulators cannot run realistic scenarios, as performance-critical orchestration mechanisms (like scheduling or autoscaling) are manually modeled and can consequently not be represented in their full complexity and configuration space. This work combines a state-of-the-art simulation for microservice performance with Kubernetes container orchestration. Hereby, we include the original implementation of Kubernetes artifacts enabling realistic scenarios and testing of orchestration policies with low overhead. In two experiments with Kubernetes’ kube-scheduler and cluster-autoscaler, we demonstrate that our framework can correctly handle different configurations of these orchestration mechanisms boosting both the simulation’s use cases and authenticity.
Chapter
Perishable goods such as fruits and vegetables require timely and accurate handling routines to ensure a high degree of product quality across all stages of the supply chain. Consequently, they constitute a fundamental business factor for organizations that needs to be managed in a delicate and prudent fashion. The perishability of products characterizes a challenging environment that requires dynamic planning and evaluation approaches to avoid or countervail the negative energetic impacts of inefficient operations. By providing a sophisticated conceptualization of the given system and its dynamic evolution over time, computer simulation serves as viable tool for analyzing and optimizing energy-related aspects of production and logistics systems for perishable items. This chapter reviews the current state of research for simulating energy-related aspects of perishable products and highlights common energy performance indicators such as food waste, emissions, and temperature. To outline contextual interdependencies and provide practical insights into the use of simulation to assess energy aspects of perishables, three use cases are presented. These cases elaborate on the energetic implication of a juice production plant in Sweden, the estimation of food quality losses in regional strawberry supply chains in Austria, and the energy and media consumption of a beverage bottling plant in Germany.
Article
Full-text available
Throughput is an important parameter to evaluate production system performance. It is typically constrained by one or more resources referred to as ‘throughput bottlenecks’. To start improvement actions, the first step is to identify throughput bottlenecks. Consequently, several bottleneck detection methods were developed in the literature. But this literature remains largely unstructured, which makes it difficult for practitioners to select an appropriate method. To generate clarity and to consolidate the field, a systematic literature review was conducted. The review identified 14 different bottleneck detection methods that are classified according to the information used: queue states, process states, or combined queue and process states. It further identified three different modes used to operationalize the different bottleneck detection methods: gemba walk, discrete event simulation, and data science. This study further presents important research issues, identifies contingency factors for method application, and discusses important guidelines for the choice of operationalization mode in practice.
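As a tiny illustration of a queue-state-based detection rule from the family of methods classified above (not any single method from the review), the sketch below flags the station with the largest time-averaged queue length as the likely throughput bottleneck. The station names and logged averages are placeholder assumptions, e.g., output from a discrete-event simulation run.

```python
# Minimal queue-state bottleneck check: the station with the largest
# time-averaged queue length is flagged as the likely bottleneck.
avg_queue_length = {
    "cutting": 1.2,
    "welding": 7.8,
    "painting": 3.4,
    "assembly": 0.9,
}

bottleneck = max(avg_queue_length, key=avg_queue_length.get)
print(f"Likely bottleneck: {bottleneck} "
      f"(avg queue = {avg_queue_length[bottleneck]} jobs)")
```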
Article
Computer simulations have revolutionized the analysis of military scenarios. As computing power has advanced, simulations can now incorporate intricate tactical-level engagements. However, accurately representing actors’ decisions at this level poses new challenges for developing and validating these simulations. In this context, this paper presents the methodologies and lessons learned from a study conducted to assess the application of agent-based modeling and simulation (ABMS) in analyzing beyond visual range (BVR) air combat scenarios, focusing on the influence of agent behavior on the outcomes. The proposed approach integrates real pilots into a face validation phase to examine symmetric and asymmetric engagements. The results underscore the significance of agent behaviors for the outcomes, for example, showing how specific behaviors are capable of mitigating the advantages of superior weaponry. Furthermore, the research explores the dynamics of aircraft acting in pairs, demonstrating the potential to evaluate tactics and the impact of numerical advantage. Ultimately, the results enhance the simulations’ credibility and confirm their plausibility, in line with the face validation methodology. This powerful phase bolsters subsequent steps in the overall validation process. In addition, the findings show how specific configurations of the agents, including tactical coordination, can significantly affect the simulation outcomes and validity.