Fig 10. Availability study of Scenario 2.

Source publication
Conference Paper
Full-text available
Cloud computing is a new paradigm that provides services through the Internet. It has been influenced by previously available technologies (e.g., cluster, peer-to-peer and grid computing) and has been adopted to reduce costs, provide flexibility and make management easier. Companies like Google, Amazon, Microsoft, IBM, HP, Yahoo, O...

Citations

... The benefits of cloud computing in academic libraries include the following [25,26]: saving implementation and maintenance costs; flexibility and innovation; scalable and elastic infrastructure; availability anytime, anywhere; transparency; connect and converse; user centric; representation; openness; interoperability; create and collaborate. ...
Article
Full-text available
With the rapid increase in global intellectual production, academic libraries have been challenged to make this output available to beneficiaries at the right time and place. On the other hand, rapid development in the field of information technology, especially cloud computing and open-source software, offers many capabilities that can be harnessed in the work of academic libraries to meet the needs of beneficiaries. This paper aims to develop information services in the academic libraries of the University of Diyala (UOD) by using cloud computing applications and open-source software (Koha) to facilitate the search and retrieval of information by librarians and beneficiaries. A unified search platform has been created for all academic libraries at UOD using the Koha integrated system and a database of bibliographic records, both installed on the Google Cloud platform. The unified search platform helps librarians and users retrieve information and search resources more efficiently from anywhere, at any time, through multiple retrieval points (author, source, ISBN, abstract, keywords), as well as the ability to search for sources of information in all UOD libraries simultaneously, which reduces the time beneficiaries spend obtaining sources of information.
... The values of uptime (UT) are obtained by calculating the product of the number of successfully deployed VMs and the observation interval. The MTTR value associated with a VM is 0.21 minutes, as reported in [34], [35]. Accordingly, the values of MTTR are enumerated for different numbers of VM migrations, which change with the number of unpredicted VM failures. ...
Preprint
Full-text available
A massive upsurge in cloud resource usage undermines service availability, resulting in outages, resource contention, and excessive power consumption. Existing approaches have addressed this challenge by providing multi-cloud deployments, VM migration, and multiple replicas of each VM, which account for high expenses for the cloud data centre (CDC). In this context, a novel VM Significance Ranking and Resource Estimation based High Availability Management (SRE-HM) Model is proposed to enhance service availability for users with optimized cost for the CDC. The model estimates resource-contention-based server failures and organises the needed resources beforehand to maintain the desired level of service availability. A significance ranking parameter is introduced and computed for each VM executing critical or non-critical tasks, followed by the selection of an admissible High Availability (HA) strategy according to its significance and user-specified constraints. This enables cost optimization for the CDC by applying failure-tolerance strategies only to significant VMs instead of all VMs. The proposed model is evaluated and compared against state-of-the-art approaches through experiments using the Google Cluster dataset. SRE-HM improves service availability by up to 19.56% and scales down the number of active servers and power consumption by up to 26.67% and 19.1%, respectively, over HA without SRE-HM.
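The excerpt above sketches how uptime and repair time feed the availability figures. The following minimal Python sketch illustrates that arithmetic under stated assumptions: the function names and the example observation window are hypothetical, and only the 0.21-minute per-VM MTTR and the uptime-as-product-of-deployed-VMs definition come from the excerpts on this page.

```python
# Illustrative sketch of the uptime/downtime arithmetic described in the excerpt.
# Only the 0.21-minute per-VM MTTR and the "uptime = deployed VMs x interval"
# definition come from the excerpts; names and example values are assumptions.

MTTR_PER_VM_MIN = 0.21  # mean time to repair one VM, in minutes (from the excerpt)

def uptime_minutes(deployed_vms: int, interval_min: float) -> float:
    """Uptime as the product of successfully deployed VMs and the observation interval."""
    return deployed_vms * interval_min

def downtime_minutes(vm_migrations: int) -> float:
    """Repair time accumulated over the VM migrations triggered by unpredicted failures."""
    return vm_migrations * MTTR_PER_VM_MIN

def availability(deployed_vms: int, interval_min: float, vm_migrations: int) -> float:
    up = uptime_minutes(deployed_vms, interval_min)
    down = downtime_minutes(vm_migrations)
    return up / (up + down)

# Example: 100 VMs observed over 60 minutes with 5 unpredicted failures.
print(f"A = {availability(100, 60.0, 5):.6f}")
```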
... The values of UT are obtained by computing the product of the number of successfully deployed VMs and the time interval over the period {t1, t2}. The MTTR value associated with a VM is 0.21 minutes, taken from [44,45]. Accordingly, the values of ...
Article
Full-text available
The indispensable role of cloud computing in every digital service has raised its resource usage exponentially. The ever-growing demand for cloud resources undermines service availability, leading to critical challenges such as cloud outages, SLA violations, and excessive power consumption. Previous approaches have addressed this problem by utilizing multiple cloud platforms or running multiple replicas of a Virtual Machine (VM), resulting in high operational cost. This paper addresses this alarming problem from a different perspective by proposing a novel Online virtual machine Failure Prediction and Tolerance Model (OFP-TM) with high-availability awareness embedded in physical machines as well as virtual machines. The failure-prone VMs are estimated in real time based on their future resource usage by developing an ensemble-approach-based resource predictor. These VMs are assigned to a failure tolerance unit comprising a resource provision matrix and a Selection Box (S-Box) mechanism, which triggers the migration of failure-prone VMs and handles any outage beforehand while maintaining the desired level of availability for cloud users. The proposed model is evaluated and compared against existing related approaches by simulating a cloud environment and executing several experiments using the real-world Google Cluster workload dataset. Consequently, it is concluded that OFP-TM improves availability and scales down the number of live VM migrations by up to 33.5% and 83.3%, respectively, compared with operation without OFP-TM.
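The abstract above describes flagging failure-prone VMs from their predicted future resource usage. The rough sketch below shows only that thresholding idea, not the paper's method: the ensemble predictor is not reproduced (predicted usage is simply taken as input), and the 90% contention threshold is an assumption of this sketch.

```python
# Rough sketch of the "failure-prone VM" flagging idea described in the abstract.
# The actual OFP-TM ensemble predictor is not reproduced here: predicted usage is
# taken as input, and the contention threshold is an illustrative assumption.

from typing import Dict, List

def flag_failure_prone(predicted_usage: Dict[str, Dict[str, float]],
                       threshold: float = 0.9) -> List[str]:
    """predicted_usage maps VM id -> predicted {'cpu', 'mem'} utilisation (fractions of capacity).
    A VM is flagged when any predicted resource crosses the threshold."""
    return [vm for vm, usage in predicted_usage.items()
            if max(usage.values()) >= threshold]

# Example: the flagged VM would be a candidate for proactive migration
# (handled by the S-Box mechanism in the paper).
forecast = {"vm-1": {"cpu": 0.95, "mem": 0.40},
            "vm-2": {"cpu": 0.55, "mem": 0.62}}
print(flag_failure_prone(forecast))  # ['vm-1']
```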
... We assume base amounts of CPU, memory, and storage of 32 cores [37], 64 GB [28], and 300 GB [37], respectively. Physical nodes have MTTF and MTTR values of 8760 hours and 1.667 hours, respectively [38]. These values are used as input for the SPN models to calculate the overall SFC availability (footnote: https://networkx.org/documentation/stable/index.html). ...
... We also assume that different VNFs have different MTTF values. We assume that a VNF of type 1 has an MTTF of 2880 hours [38]. For the other types of VNFs, we assume that as the price increases, the MTTF increases by 5%, based on the assumption that the more expensive a VNF is, the more reliable it is. ...
... The values of uptime (UT) are obtained by calculating the product of the number of successfully deployed VMs and the observation interval. The MTTR value associated with a VM is 0.21 minutes, as reported in [34], [35]. Accordingly, the values of MTTR are enumerated for different numbers of VM migrations, which change with the number of unpredicted VM failures. ...
Article
Full-text available
A massive upsurge in cloud resource usage undermines service availability, resulting in outages, resource contention, and excessive power consumption. Existing approaches have addressed this challenge by providing multi-cloud deployments, VM migration, and multiple replicas of each VM, which account for high expenses for the cloud data centre (CDC). In this context, a novel VM Significance Ranking and Resource Estimation based High Availability Management (SRE-HM) Model is proposed to enhance service availability for users with optimized cost for the CDC. The model estimates resource-contention-based server failures and organises the needed resources beforehand to maintain the desired level of service availability. A significance ranking parameter is introduced and computed for each VM executing critical or non-critical tasks, followed by the selection of an admissible High Availability (HA) strategy according to its significance and user-specified constraints. This enables cost optimization for the CDC by applying failure-tolerance strategies only to significant VMs instead of all VMs. The proposed model is evaluated and compared against state-of-the-art approaches through experiments using the Google Cluster dataset. SRE-HM improves service availability by up to 19.56% and scales down the number of active servers and power consumption by up to 26.67% and 19.1%, respectively, over HA without SRE-HM.
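The excerpts before this entry quote concrete dependability parameters (server MTTF of 8760 hours and MTTR of 1.667 hours; VNF type 1 MTTF of 2880 hours, with a 5% MTTF increase per more expensive type). The sketch below combines them into a steady-state availability estimate, assuming the standard A = MTTF/(MTTF + MTTR) formula and a series (all-components-must-be-up) view of an SFC; the VNF MTTR is left as a parameter, since other excerpts on this page quote either 0.17 h or 0.5 h.

```python
# Steady-state availability sketch built from the MTTF/MTTR figures quoted in the
# excerpts. The series-system assumption and the default VNF MTTR are assumptions
# of this sketch, not of the cited papers.

def availability(mttf_h: float, mttr_h: float) -> float:
    return mttf_h / (mttf_h + mttr_h)

def vnf_mttf(vnf_type: int, base_mttf_h: float = 2880.0, step: float = 0.05) -> float:
    """MTTF of a VNF of the given (1-based) type, growing 5% per more expensive type."""
    return base_mttf_h * (1.0 + step) ** (vnf_type - 1)

def sfc_availability(vnf_types, vnf_mttr_h: float = 0.17,
                     server_mttf_h: float = 8760.0, server_mttr_h: float = 1.667) -> float:
    """Availability of one server hosting a chain of VNFs, all treated in series."""
    a = availability(server_mttf_h, server_mttr_h)
    for t in vnf_types:
        a *= availability(vnf_mttf(t), vnf_mttr_h)
    return a

print(f"A(server)         = {availability(8760.0, 1.667):.6f}")
print(f"A(SFC, types 1-3) = {sfc_availability([1, 2, 3]):.6f}")
```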
... We assume server MTTF and MTTR of 8760 h and 1.667 h, respectively. For the VNFs, we assume parameters of generic virtual machines with an MTTF of 2880 h and an MTTR of 0.17 h as per [4]. In line with [44]. ...
... For these experiments, we use a variation of the simulation setup described in Table 1, which is shown in Table 3; the SFC lifetime is 1000 h [44]. We assume two groups of servers: one group where 14 servers have an MTTF of 8760 hours [4], and another group with 14 servers and an MTTF of 7884 hours (a reduction of 10%). We use this variation in order to increase the server heterogeneity of the network. ...
Article
Full-text available
Software-defined networking and network functions virtualisation are making networks programmable and consequently much more flexible and agile. To meet service-level agreements, achieve greater utilisation of legacy networks, deploy services faster, and reduce expenditure, telecommunications operators are deploying increasingly complex service function chains (SFCs). Notwithstanding the benefits of SFCs, increasing heterogeneity and dynamism from the cloud to the edge introduce significant SFC placement challenges, not least adding or removing network functions while maintaining availability and quality of service and minimising cost. In this paper, an availability- and energy-aware solution based on reinforcement learning (RL) is proposed for dynamic SFC placement. Two policy-aware RL algorithms, Advantage Actor-Critic (A2C) and Proximal Policy Optimisation (PPO), are compared using simulations of a ground-truth network topology based on the Rede Nacional de Ensino e Pesquisa Network, Brazil's National Teaching and Research Network backbone. The simulation results show that PPO generally outperformed A2C and a greedy approach in terms of both acceptance rate and energy consumption. The biggest difference between PPO and the other algorithms relates to the SFC availability requirement of 99.965%, where the PPO algorithm's median acceptance rate is 67.34% better than that of the A2C algorithm. A2C outperforms PPO only in the scenario where network servers had a greater number of computing resources; in this case, A2C is 1% better than PPO.
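The excerpts before this article quote two server groups (14 servers with an MTTF of 8760 hours and 14 with 7884 hours, a 10% reduction) used to increase server heterogeneity. A small sketch of how per-server availability differs between the two groups, assuming the MTTR of 1.667 hours quoted in the same excerpts and the standard MTTF/(MTTF + MTTR) formula; everything else is illustrative.

```python
# Per-server availability for the two heterogeneous server groups quoted in the
# excerpts (14 servers at MTTF 8760 h, 14 at 7884 h). The shared MTTR of 1.667 h
# is taken from the same excerpts; group names are illustrative.

SERVER_MTTR_H = 1.667

server_groups = {
    "group_1": {"count": 14, "mttf_h": 8760.0},
    "group_2": {"count": 14, "mttf_h": 7884.0},  # 10% lower MTTF
}

for name, group in server_groups.items():
    a = group["mttf_h"] / (group["mttf_h"] + SERVER_MTTR_H)
    print(f"{name}: {group['count']} servers, per-server availability = {a:.6f}")
```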
... The MTTR is the same for all VNF types (0.5 hours) since it is related to the maintenance strategies used by a firm, such as the time for finding and subsequently repairing a fault [16]. We also assume that the MTTF and MTTR values for a server are 8760 hours and 1.67 hours, respectively [17]. These values can be adapted for different scenarios according to the network manager requirements. ...
... This section presents a proposed methodology and a fog computing environment, which we used to evaluate the modeling proposal for capacity planning and performance evaluation. Our methodology is based on [5,6,28,29], and it proved to be effective. Following this methodology, we are able to determine a way to obtain the information that we need to propose an analytical model. ...
Article
Full-text available
Cloud computing is attractive mostly because it allows companies to increase and decrease available resources, which makes them seem limitless. Although cloud computing has many advantages, several issues remain, such as unpredictable latency and a lack of mobility support. To overcome these problems, fog computing extends communication, storage, and computation toward the edge of the network. Fog computing can therefore support delay-sensitive applications, improving the application latency experienced by end users while also decreasing energy consumption and traffic congestion. The demand for performance, availability, and reliability in computational systems grows every day. To optimize these features, it is necessary to improve the utilization of resources such as CPU, network bandwidth, memory, and storage. Although fog computing extends cloud computing resources and improves quality of service, capacity planning is an effective approach to arranging a deterministic process for web-related activities and is one way of optimizing web performance. The goal of capacity planning in fog computing is to prepare the system for an incoming workload so that the system's utilization is optimized while the total task execution time is minimized, before deciding whether or not to send the load toward the cloud environment. In this paper, we evaluate the performance of a web server running in a fog environment. We also use QoS metrics to plan its capacity. We propose closed-form performance equations extracted from a continuous-time Markov chain model of the system.
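The abstract above mentions closed-form equations extracted from a continuous-time Markov chain (CTMC) model. As a toy analogue only, not the paper's model, the simplest availability CTMC has two states, up and down, with failure rate lambda = 1/MTTF and repair rate mu = 1/MTTR; its steady-state availability is mu/(lambda + mu), which reduces to MTTF/(MTTF + MTTR). The values below follow the server figures quoted in the preceding excerpt.

```python
# Toy two-state (up/down) CTMC, not the paper's model: failure rate lam = 1/MTTF,
# repair rate mu = 1/MTTR, steady-state availability A = mu / (lam + mu).
# MTTF/MTTR values follow the server figures in the preceding excerpt.

mttf_h, mttr_h = 8760.0, 1.67
lam, mu = 1.0 / mttf_h, 1.0 / mttr_h

A = mu / (lam + mu)  # steady-state probability of the 'up' state
print(f"A (CTMC)       = {A:.6f}")
print(f"A (MTTF ratio) = {mttf_h / (mttf_h + mttr_h):.6f}")  # same value, closed form
```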
... We assume server MTTF and MTTR of 8760 hours and 1.667 hours, respectively. For the VNFs, we assume parameters of generic virtual machines with an MTTF of 2880 hours and an MTTR of 0.17 hours as per [44]. In line with [45], we assume CPU energy consumption of 40W and memory energy consumption of 30.17W. ...
... For these experiments, we use a variation of the simulation setup described in Table 1, which is shown in Table 3. We assume two groups of servers: one group where 14 servers have an MTTF of 8760 hours [44], and another group with 14 servers and an MTTF of 7884 hours (a reduction of 10%). We use this variation in order to increase the server heterogeneity of the network. ...
Preprint
Full-text available
Software-defined networking (SDN) and network functions virtualisation (NFV) are making networks programmable and consequently much more flexible and agile. To meet service-level agreements, achieve greater utilisation of legacy networks, deploy services faster, and reduce expenditure, telecommunications operators are deploying increasingly complex service function chains (SFCs). Notwithstanding the benefits of SFCs, increasing heterogeneity and dynamism from the cloud to the edge introduce significant SFC placement challenges, not least adding or removing network functions while maintaining availability and quality of service and minimising cost. In this paper, an availability- and energy-aware solution based on reinforcement learning (RL) is proposed for dynamic SFC placement. Two policy-aware RL algorithms, Advantage Actor-Critic (A2C) and Proximal Policy Optimisation (PPO2), are compared using simulations of a ground-truth network topology based on the Rede Nacional de Ensino e Pesquisa (RNP) Network, Brazil's National Teaching and Research Network backbone. The simulation results showed that PPO2 generally outperformed A2C and a greedy approach in terms of both acceptance rate and energy consumption. A2C outperformed PPO2 only in the scenario where network servers had a greater number of computing resources.
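The excerpts preceding this preprint quote per-server energy figures of 40 W for CPU and 30.17 W for memory. A back-of-the-envelope energy sketch follows, assuming these figures apply per active server and that power is constant while a server is active; both of those are assumptions of this sketch, as is the example 1000 h horizon (the SFC lifetime quoted in an excerpt above).

```python
# Back-of-the-envelope energy estimate from the per-server power figures quoted in
# the excerpts (40 W CPU + 30.17 W memory). Treating these as constant per active
# server is an assumption of this sketch.

CPU_POWER_W = 40.0
MEM_POWER_W = 30.17

def energy_kwh(active_servers: int, hours: float) -> float:
    """Energy consumed by the active servers over the given period, in kWh."""
    total_power_w = active_servers * (CPU_POWER_W + MEM_POWER_W)
    return total_power_w * hours / 1000.0

# Example: 14 active servers over a 1000 h horizon.
print(f"{energy_kwh(14, 1000.0):.1f} kWh")
```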