Figure - available from: Cluster Computing
Statistics of three processes (Task-VM, VM-PM, Task-VM-PM)

Source publication
Article
Full-text available
Recently, there has been growing interest in distributed models for addressing issues related to Cloud computing environments, particularly resource allocation. This involves two main approaches: task scheduling, where the Cloud provider assigns tasks to Virtual Machines (VMs), and VM-to-Physical Machine mapping. These aspects are closely linked to...

Citations

... The algorithm also initiates migration of the entire batch of tasks that make up a job, to minimize overhead. Saidi and Bardou [44] presented a state-of-the-art literature review on resource allocation in cloud computing, focusing on the challenges of resource allocation, especially task scheduling and virtual machine placement. ...
Article
Full-text available
In recent years, task failures have become increasingly prevalent in cloud computing due to factors such as the growing complexity of cloud environments, heterogeneity of resources, resource limitations, and inadequate allocation. Task failure due to insufficient allocation poses a significant challenge: when tasks are not allocated effectively, they may not complete within their deadlines, which ultimately leads to failure. Hence, effective allocation strategies combined with appropriate fault-tolerance measures are vital for addressing these challenges and mitigating the risk of task failures. This paper proposes a fault-tolerant task allocation algorithm (FTTA) for independent tasks with deadlines, using preemptive migration in heterogeneous cloud environments to reduce task failure. The proposed algorithm involves three phases: the first decides the priority of tasks in the ready list to minimize execution time and meet task deadlines; the second selects a suitable virtual machine with minimum execution time; and the last assigns each task to an available or currently unavailable (but possibly available in the future) virtual machine to find the best execution time within the deadline limit. During allocation, the algorithm adopts a fault-tolerant strategy that includes preemptive migration when necessary, allowing tasks to migrate to the most suitable virtual machine. An analysis of the proposed algorithm shows that its overall time complexity is O(n log n + nm²), where n is the number of tasks and m is the number of virtual machines.
Further, the performance of the algorithm is evaluated for different sets of tasks (small to large) while varying the number of virtual machines. The experimental results demonstrate that FTTA outperforms First Come First Served (FCFS), Priority based algorithm, Shortest Job First (SJF), Dynamic Maximum Minimum (Dy max min) and RADL algorithms in terms of number of rejected tasks, makespan, speedup and efficiency.
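The three phases described in the abstract can be sketched as a deadline-first list scheduler. This is an illustrative reconstruction, not the authors' actual FTTA implementation: the function name, the `(length, deadline)` task representation, and the per-VM speed model are all assumptions, and the preemptive-migration step is omitted for brevity.

```python
def ftta_sketch(tasks, vm_speeds):
    """Illustrative three-phase allocation. tasks: (length, deadline)
    tuples; vm_speeds: instructions/sec per VM. Returns (schedule, rejected)."""
    # Phase 1: prioritize the ready list by earliest deadline -- O(n log n).
    ready = sorted(tasks, key=lambda t: t[1])
    finish = [0.0] * len(vm_speeds)          # time each VM becomes free
    schedule, rejected = [], []
    for length, deadline in ready:
        # Phase 2: pick the VM giving the earliest completion time.
        best = min(range(len(vm_speeds)),
                   key=lambda v: finish[v] + length / vm_speeds[v])
        done = finish[best] + length / vm_speeds[best]
        # Phase 3: accept only if the deadline can still be met.
        if done <= deadline:
            finish[best] = done
            schedule.append((length, best))
        else:
            rejected.append((length, deadline))
    return schedule, rejected
```

The sort contributes the O(n log n) term; scanning the VMs per task contributes a term linear in m per task, so a richer inner step (such as migration checks over VM pairs) would account for the nm² term in the stated bound.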
... In this paper, we perform a thorough performance comparison between PSO, CS, and the well-known SA meta-heuristic. All three meta-heuristics have been used to solve the VMP problem [6][7][8], but no prior work has compared their performance on the same problem. In this work, we compare these three meta-heuristics on virtual machine placement (VMP), a recent hard combinatorial optimization problem. ...
... The complexity of this step depends on the number of VMs and PMs, denoted as N and M, respectively, and the constraints defined in Eqs. (4)-(6). Thus, the initialization complexity is O(N×M). ...
... The complexity of this step depends on the number of VMs and PMs, denoted as N and M, respectively, and the constraints defined in Eqs. (4)-(6). Thus, the initialization complexity is O(N×M×P_size). ...
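The O(N×M) initialization cost quoted in these snippets comes from checking every VM against every PM. A minimal sketch, assuming a single scalar capacity constraint (the function name and data shapes are hypothetical, not from the cited papers):

```python
def feasible_matrix(vm_demands, pm_capacities):
    """Check every (VM, PM) pair against one capacity constraint.
    N VMs x M PMs pairwise checks give the O(N*M) initialization cost."""
    return [[demand <= cap for cap in pm_capacities]
            for demand in vm_demands]
```

Repeating this check once per candidate solution in a population of size P_size yields the O(N×M×P_size) variant quoted in the second snippet.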
Article
Full-text available
Virtual machine placement (VMP) has a significant importance with respect to resource utilization in cloud data centers. Indeed, the optimized management of machine placement usually results in a significant reduction in energy consumption. VMP is a generalization of bin packing, a well-known hard combinatorial optimization problem. Besides being NP-hard, VMP is characterized by conflicting objectives and a noisy search space. Meta-heuristics, such as genetic algorithms, particle swarm optimization (PSO), cuckoo search (CS), tabu search and simulated annealing (SA), have been shown to be effective for this category of problems. This paper reports a performance comparison between SA, CS and PSO meta-heuristics for the VMP problem. In contrast to prior work in this area, we study the performance behavior of these three meta-heuristics with respect not only to the quality of solutions, but also to the quality of the explored solution sub-space, the speed of convergence towards reported solutions, and the speed with which each meta-heuristic evolves towards the best reported optimized solution. Extensive simulations on randomly generated tests with sizes varying between 200 and 1000 virtual machine demands show that PSO achieves the best performance behavior with respect to all criteria. Moreover, for all tests, PSO produces a reduction of as much as 17% in the number of physical machines, 15% in the energy cost and 21% in the resource utilization of physical machines.
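To make the bin-packing view of VMP concrete, here is a toy simulated-annealing placement in the spirit the abstract describes. It is a sketch under heavy assumptions (one scalar resource, a penalty of 10 per unit of capacity overflow, a linear cooling schedule); the real SA variants compared in the paper are certainly more elaborate.

```python
import math
import random

def sa_vmp(vm_sizes, pm_cap, steps=5000, t0=1.0, seed=0):
    """Toy simulated annealing: assign each VM a PM index, minimizing
    the number of PMs used subject to a single capacity constraint."""
    rng = random.Random(seed)
    n = len(vm_sizes)

    def cost(assign):
        load = {}
        for vm, pm in enumerate(assign):
            load[pm] = load.get(pm, 0) + vm_sizes[vm]
        overflow = sum(max(0, l - pm_cap) for l in load.values())
        return len(load) + 10 * overflow    # penalize capacity violations

    assign = list(range(n))                 # start: one VM per PM
    cur_cost = cost(assign)
    best, best_cost = assign[:], cur_cost
    for step in range(steps):
        t = t0 * (1 - step / steps) + 1e-9  # linear cooling
        cand = assign[:]
        cand[rng.randrange(n)] = rng.randrange(n)  # move one VM
        c = cost(cand)
        # Accept improvements always; worsenings with Boltzmann probability.
        if c < cur_cost or rng.random() < math.exp((cur_cost - c) / t):
            assign, cur_cost = cand, c
            if c < best_cost:
                best, best_cost = cand[:], c
    return best, best_cost
```

For four VMs of size 2 and capacity 4, the optimum packs them onto two PMs; the sketch typically finds a feasible two- or three-PM placement within a few thousand moves.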
... Cloud job scheduling allocates tasks/cloudlets to computing resources, a well-known NP-complete problem [1]. Many algorithms can effectively schedule tasks among multiple Virtual Machines (VMs) in a data center [1,4,5], but unscheduled downtime due to hardware/software or power failures compromises the efficiency of these schedulers. Some users may also have specific execution deadlines, which require a scheduler capable of mapping tasks to computing resources so that the specified deadlines are met. ...
Article
Full-text available
Cloud computing has become popular for small businesses due to its cost-effectiveness and the ability to acquire necessary on-demand services, including software, hardware, network, etc., anytime around the globe. Efficient job scheduling in the Cloud is essential to optimize operational costs in data centers. Therefore, scheduling should assign tasks to Virtual Machines (VMs) in a Cloud environment in a manner that speeds up execution, maximizes resource utilization, and meets users' SLAs and other constraints such as deadlines. For this purpose, tasks can be prioritized based on their deadlines and task lengths, and resources can be provisioned and released as needed. Moreover, to cope with unexpected execution situations or hardware failures, a fault-tolerance mechanism can be employed based on hybrid replication and the re-submission method. Most existing techniques improve performance, but they fall short in certain respects: they prioritize tasks based on a single value (usually the deadline), rely on a single fault-tolerance mechanism, or release resources immediately, which causes extra overhead. This research work proposes a new scheduler called the Deadline and fault-aware task Adjusting and Resource Managing (DFARM) scheduler, which dynamically acquires resources and schedules deadline-constrained tasks by considering both their lengths and deadlines, while providing fault tolerance through the hybrid replication-resubmission method. Besides acquiring resources, it also releases resources based on their boot time to lessen costs due to reboots. The performance of the DFARM scheduler is compared to other scheduling algorithms, such as Random Selection, Round Robin, Minimum Completion Time, RALBA, and OG-RADL.
With a comparable execution performance, the proposed DFARM scheduler reduces task-rejection rates by 2.34–9.53 times compared to the state-of-the-art schedulers using two benchmark datasets.
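The abstract's core idea, prioritizing by both deadline and length rather than a single value, can be illustrated with a two-key sort. This is a hypothetical sketch (function name and tuple layout are assumptions), not DFARM's actual ranking rule:

```python
def rank_tasks(tasks):
    """Two-key priority: tasks are (task_id, length, deadline) tuples.
    Tighter deadlines come first; ties are broken by shorter length,
    so urgent-and-quick work is dispatched before urgent-and-long work."""
    return sorted(tasks, key=lambda t: (t[2], t[1]))
```

A scheduler using only the deadline would order the two deadline-10 tasks below arbitrarily; the second key makes the choice deterministic and favors the shorter task.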
... The crucial task is to efficiently map these tasks onto cloud resources, aiming for optimal scheduling with minimal resource consumption. Numerous approaches exist for discovering optimal schedules; to tackle optimization problems such as TET, energy consumption and response time, researchers employ metaheuristics, versatile algorithms that provide nearly optimal solutions to scheduling and optimization challenges [36]. In the current era, researchers in cloud environments have utilized both single-objective and multi-objective task scheduling and mapping algorithms. ...
Article
Full-text available
Providing scalable and affordable computing resources has become possible thanks to the development of the cloud computing concept. In cloud environments, efficient task scheduling is essential for maximizing resource usage and enhancing the overall performance of cloud services. This research offers a more effective method of using optimization techniques to improve the efficiency of cloud computing task scheduling. Cloud infrastructures comprise data centers, hosts, and virtual machines (VMs), and task scheduling is crucial to achieving peak performance. Scheduling must be done effectively to save time, money, and energy and to reduce response times. The primary objective of this research is to develop and evaluate optimization techniques for task scheduling in cloud environments, prioritizing the following goals: (i) reducing the Total Execution Cost (TEC) of the scheduling process; (ii) reducing the Total Execution Time (TET) during mapping; (iii) achieving appropriate task-to-VM mapping to reduce Energy Consumption (EC); and (iv) reducing the overall Response Time (RT) of the cloud scheduling system. To accomplish these objectives, we offer a method based on three optimization techniques: Tabu Search (T), Bayesian Classification (B), and Whale Optimization (W). Our experimental findings show that, in terms of accomplishing the targeted objectives, the suggested TBW optimization methodology outperforms well-known approaches such as GA-PSO and Whale Optimization. By offering insights into efficient resource usage techniques and improving overall system effectiveness by 95% for the range of 8 to 14 VMs, this work contributes to ongoing efforts to improve the performance of cloud computing.
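Of the three techniques in TBW, tabu search is the most mechanical to sketch. The following is a minimal, hypothetical tabu search over task-to-VM mappings minimizing makespan; the move structure, tenure, and objective are illustrative assumptions, not the paper's actual TBW design.

```python
from collections import deque

def tabu_map(lengths, speeds, iters=200, tenure=2):
    """Minimal tabu search over task-to-VM mappings, minimizing makespan.
    Each move reassigns one task; recently moved tasks are tabu."""
    n, m = len(lengths), len(speeds)
    assign = [i % m for i in range(n)]       # round-robin starting point

    def makespan(a):
        load = [0.0] * m
        for task, vm in enumerate(a):
            load[vm] += lengths[task] / speeds[vm]
        return max(load)

    best, best_val = assign[:], makespan(assign)
    tabu = deque(maxlen=tenure)              # task ids barred from moving
    for _ in range(iters):
        move, move_val = None, float("inf")
        for t in range(n):                   # best admissible neighbor
            if t in tabu:
                continue
            for v in range(m):
                if v == assign[t]:
                    continue
                cand = assign[:]
                cand[t] = v
                val = makespan(cand)
                if val < move_val:
                    move, move_val = (t, v), val
        if move is None:
            break                            # all moves tabu: stop
        assign[move[0]] = move[1]            # accept even if worsening
        tabu.append(move[0])
        if move_val < best_val:
            best, best_val = assign[:], move_val
    return best, best_val
```

Accepting the best admissible neighbor even when it worsens the makespan, while the tabu list blocks immediate reversal, is what lets the search escape local optima.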
... The use of IoMT devices in the healthcare industry has been increasing rapidly in recent years, with the potential to improve patient outcomes and reduce costs [1]. However, time-sensitive healthcare IoMT-based applications require real-time data processing, which can be challenging to achieve due to the limitations of cloud computing [2,3]. The cloud's centralized architecture, coupled with network latency, can lead to delays in data processing, making it difficult to provide timely and accurate responses. ...
Article
Full-text available
In recent years, healthcare monitoring systems (HMS) have increasingly integrated the Internet of Medical Things (IoMT) with cloud computing, leading to challenges related to data latency and efficient processing. This paper addresses these issues by introducing a Machine Learning-based Medical Data Segmentation (ML-MDS) approach that employs a k-fold random forest technique for efficient health data classification and latency reduction in a fog-cloud environment. Our method significantly improves latency issues, enhancing the Quality of Service (QoS) in healthcare systems and demonstrating its adaptability in heterogeneous network scenarios. We specifically employ the Random Forest algorithm to mitigate the common problem of overfitting in machine learning models, ensuring broader applicability across various healthcare contexts. Additionally, by optimizing data processing in fog computing layers, we achieve a substantial reduction in overall latency between healthcare sensors and cloud servers. This improvement is evidenced through a comparative performance analysis with existing models. The proposed framework not only ensures secure and scalable management of IoMT health data but also incorporates a stochastic approach to mathematically formulate performance indicators for the HMS queuing model. This model effectively predicts system response times and assesses the computing resources required under varying workload conditions. Our simulation results show a classification accuracy of 92%, a 56% reduction in latency compared to existing models, and an overall enhancement in e-healthcare service quality.
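The abstract mentions a stochastic queuing model that predicts system response times under varying workloads. A common textbook simplification of such a model is the M/M/1 queue; the sketch below computes its standard steady-state metrics. This is background illustration only and assumes nothing about the paper's actual HMS formulation.

```python
def mm1_metrics(arrival_rate, service_rate):
    """Steady-state M/M/1 metrics: utilization rho = lambda/mu,
    mean number in system L = rho/(1-rho), and mean response time
    W = 1/(mu - lambda), consistent with Little's law L = lambda * W.
    Requires arrival_rate < service_rate for stability."""
    if arrival_rate >= service_rate:
        raise ValueError("unstable queue: arrival rate must be < service rate")
    rho = arrival_rate / service_rate
    mean_in_system = rho / (1 - rho)
    mean_response = 1 / (service_rate - arrival_rate)
    return rho, mean_in_system, mean_response
```

For example, at 2 requests/s against a 4 requests/s server, utilization is 0.5 and mean response time is 0.5 s; response time grows without bound as the arrival rate approaches the service rate, which is why offloading work to fog nodes reduces latency.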
... Efficient resource utilization remains a key issue in parallel and distributed computing environments. Resource allocation and task scheduling problems are well known in this field, and a lot of effort has been invested in improving the management of Cloud resources [7,8,9,10]. In particular, cost optimization of scientific workflows has been a focus of attention [11]. ...
Preprint
Full-text available
In the field of genomics, bioinformatics pipelines play a crucial role in processing and analyzing vast biological datasets. These pipelines, consisting of interconnected tasks, can be optimized for efficiency and scalability by leveraging cloud platforms such as Microsoft Azure. The choice of compute resources introduces a trade-off between cost and time. This paper introduces an approach that uses Linear Programming (LP) to optimize pipeline execution. We consider optimizing two competing cases: minimizing cost with a run duration restriction and minimizing duration with a cost restriction. Our results showcase the utility of using LP in guiding researchers to make informed compute decisions based on specific data sets, cost and time requirements, and resource constraints.
... Our proposed method, based on entropy theory, is on the one hand a heuristic method and on the other hand uses the NSGA-III metaheuristic to solve the problem at large scale. Before reading this section, interested readers are encouraged to consult survey articles such as [22] for a more detailed and comprehensive study. ...
Article
Full-text available
One of the practical preferences of cloud service providers is to use specialized physical hosts. In other words, the goal is to place homogeneous virtual machines (VMs) on physical hosts according to performance criteria such as energy consumption, resource wastage, and utilization. Virtual machine placement (VMP) falls into the class of NP-hard knapsack problems. To overcome the time complexity, the use of heuristic and metaheuristic methods has attracted the attention of researchers. In this paper, we use an entropy-based method for VMP for the first time. The proposed method tries to place the VMs on physical machines by considering the type of VMs so as to minimize entropy. Entropy is a measurable property associated with disorder, randomness, or uncertainty. We use one of the most common entropy criteria, the Gini coefficient: among the different placement combinations of VMs, those that minimize the Gini coefficient are preferred. We then solve the multi-objective problem with the non-dominated sorting genetic algorithm (NSGA-III). We also combine this method with differential evolution methods to improve the quality of solutions; recent research in other engineering fields has shown that combining metaheuristic methods with differential evolution increases the rate of convergence toward the optimal solution. The simulation results on the CloudSim simulator, along with statistical analysis, show that the entropy-based method yields a significant improvement over state-of-the-art methods in terms of key performance criteria such as utilization, resource wastage, and energy consumption.
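The Gini coefficient the abstract relies on has a simple closed form: half the mean absolute difference between all pairs, normalized by the mean. A minimal sketch (how the paper maps placements to load vectors is not specified here, so the input is just a list of per-host loads):

```python
def gini(loads):
    """Gini coefficient of a load distribution:
    G = sum_i sum_j |x_i - x_j| / (2 * n * sum_i x_i).
    0 means a perfectly even spread; values near 1 mean the load is
    concentrated on few hosts."""
    n = len(loads)
    total = sum(loads)
    if n == 0 or total == 0:
        return 0.0
    diff_sum = sum(abs(a - b) for a in loads for b in loads)
    return diff_sum / (2 * n * total)
```

A perfectly balanced placement like [1, 1, 1, 1] scores 0, while piling everything on one host, [0, 0, 0, 4], scores 0.75 (the maximum (n-1)/n for n = 4), so preferring low-Gini placements pushes the search toward even utilization.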
... Traditional scheduling refers to the conventional methods and algorithms used for task scheduling in computing systems, including cloud computing. These approaches are typically rule-based and rely on predefined policies and heuristics to allocate tasks to available resources [12]. Traditional scheduling methods often prioritize task completion time, resource utilization, and load balancing. ...
Article
Full-text available
The advent of the cloud computing paradigm has enabled innumerable organizations to seamlessly migrate, compute, and host their applications within the cloud environment, affording them facile access to a broad spectrum of services with minimal exertion. A proficient and adaptable task scheduler is essential to manage simultaneous user requests for diverse cloud services using heterogeneous and varied resources. Inadequate scheduling may result in either under-utilization or over-utilization of resources, potentially causing a waste of cloud resources or a decline in service performance. Swarm intelligence meta-heuristic optimization techniques have evinced conspicuous efficacy in tackling the intricacies of scheduling problems. Thus, the present manuscript undertakes an exhaustive review of swarm intelligence optimization techniques deployed in the task-scheduling domain within cloud computing. This paper examines various swarm-based algorithms, investigates their application to task scheduling in cloud environments, and provides a comparative analysis of the discussed algorithms based on various performance metrics. This study also compares different simulation tools for these algorithms, highlighting challenges and proposing potential future research directions in this field. This review aims to shed light on the state-of-the-art swarm-based algorithms for task scheduling in cloud computing, showing their potential to improve resource allocation, enhance system performance, and efficiently utilize cloud resources.
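Particle swarm optimization, the archetypal swarm-intelligence method this review covers, fits in a few lines. The sketch below is the standard continuous PSO with conventional parameters (inertia 0.7, cognitive and social weights 1.5); scheduling applications additionally need a discretization step mapping positions to task-to-VM assignments, which is omitted here.

```python
import random

def pso_min(f, dim, bounds, swarm=20, iters=100, seed=1):
    """Bare-bones PSO minimizing f over [lo, hi]^dim.
    Velocity update blends inertia, attraction to each particle's
    personal best, and attraction to the swarm's global best."""
    rng = random.Random(seed)
    lo, hi = bounds
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(swarm)]
    vel = [[0.0] * dim for _ in range(swarm)]
    pbest = [p[:] for p in pos]
    pbest_val = [f(p) for p in pos]
    g = min(range(swarm), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(swarm):
            for d in range(dim):
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * rng.random() * (pbest[i][d] - pos[i][d])
                             + 1.5 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            val = f(pos[i])
            if val < pbest_val[i]:           # update personal best
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:          # and global best
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val
```

On a smooth test function such as the sphere, a swarm of 20 particles typically converges to near the optimum within 100 iterations, which illustrates why PSO-style methods are popular starting points for the scheduling variants surveyed above.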
Article
In the context of the wide application of big data technology, it is particularly important to optimize the allocation of teaching methods and learning resources. This study first expounds the key role of big data in the optimization of teaching methods and the allocation of learning resources, and emphasizes how big data technology promotes the transformation and development of education and teaching models. Based on the analysis of traditional models of teaching method optimization and learning resource allocation, this study proposes a new model driven by big data. By accurately identifying students’ learning needs and behavior patterns, the model optimizes teaching methods and allocation of learning resources. This study introduces the whole process of data collection, cleaning, analysis and modeling. In the process, it shows how big data can be integrated, analyzed, and applied to further support the construction and validation of models. Through empirical research and effect evaluation, this study proves the validity of the model of teaching method optimization and learning resource allocation driven by big data, and demonstrates how big data can promote educational equity and improve educational quality.