Scheduling types in cloud computing


Source publication
Article
Full-text available
In recent years, companies have used the cloud computing paradigm to run various computing and storage workloads. The cloud offers faster and more cost-effective services. However, resource allocation remains a significant challenge for cloud providers, and excessive resource consumption has raised the need for better management of these resources. In...

Citations

... The proposed strategy is compared to current methods, showing improved performance in resource usage, energy usage, and reaction time (Belgacem, 2022). Cloud computing involves competing clients and suppliers with evolving requirements. ...
Article
Cloud technology provides computing services over the internet, enabling entrepreneurs to access tools and services previously available only to large organizations, enhancing efficiency, business scaling, and competitiveness. Through step-by-step practical experiments, the study builds real-time clouds using several lab scenarios and, on this basis, offers recommendations for the performance, security, and awareness of cloud computing networks. The study investigates and improves cloud computing networks in IoT and other network architectures using cheminformatics, a combination of chemistry, computer science, and mathematics. It computes topological invariants, such as the K-Banhatti Sombor (KBSO) invariants, Dharwad invariants, K-Banhatti Redefined Zagreb (KBRZ) invariants and their different forms, and Quadratic-Contraharmonic invariants (QCI), to explore and enhance characteristics like scalability, efficiency, higher throughput, reduced latency, and best-fit topology. The main objective is to develop formulas that assess the topology and performance of certain cloud networks without experiments, producing mathematical modeling results alongside graphical results. The approach also gives the optimized ranges of the network with one optimized value. After these evaluations, the network graph is also checked for irregularities, where they exist, with the help of the Irregularity Sombor (ISO) index. The study also produced real-time scenario-based clouds and performance-based use cases. The results will help researchers construct and improve these networks with different physical characteristics.
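As an illustration of how such degree-based invariants are computed, the sketch below evaluates the classic Sombor index, SO(G) = Σ over edges uv of sqrt(deg(u)² + deg(v)²), on a small made-up cloud topology; the KBSO, Dharwad, and KBRZ invariants named above follow the same edge-sum pattern with different per-edge terms. The graph itself is a hypothetical example, not one of the paper's networks.

```python
import math

# A hypothetical tree-like cloud topology as an adjacency list;
# any undirected graph works here.
graph = {
    "core":  ["agg1", "agg2"],
    "agg1":  ["core", "edge1", "edge2"],
    "agg2":  ["core", "edge3", "edge4"],
    "edge1": ["agg1"], "edge2": ["agg1"],
    "edge3": ["agg2"], "edge4": ["agg2"],
}

def sombor_index(adj):
    """SO(G) = sum over edges uv of sqrt(deg(u)^2 + deg(v)^2)."""
    deg = {v: len(nbrs) for v, nbrs in adj.items()}
    seen = set()
    total = 0.0
    for u, nbrs in adj.items():
        for v in nbrs:
            if (v, u) not in seen:      # count each undirected edge once
                seen.add((u, v))
                total += math.sqrt(deg[u] ** 2 + deg[v] ** 2)
    return total

print(f"Sombor index: {sombor_index(graph):.3f}")
```

Swapping in a different per-edge (or per-incidence) term inside the loop yields the other indices of this family.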
... The abstract of their paper provides a comparative analysis of different architectures in Agriculture 4.0, evaluating them against eight specific criteria and discussing the advantages and disadvantages of each [65]. The authors of [66] conducted an in-depth study on dynamic resource allocation (DRA) in cloud computing, reviewing a range of approaches, scheduling techniques, and optimization metrics. The paper effectively reviews and clarifies various DRA methods, providing a detailed categorization of the scheduling and optimization techniques pivotal in the evolution of cloud computing. ...
Article
Full-text available
The development of parallel computing, distributed computing, and grid computing has introduced a new computing model that combines elements of grid computing, public computing, and SaaS. Cloud computing, a key component of this model, assigns computation to distributed computers rather than local computers or remote servers. Research papers from 2017 to 2023 provide an overview of the advancements and challenges in cloud computing and distributed systems, focusing on resource management and the integration of advanced technologies such as machine learning, AI-centric strategies, and fuzzy meta-heuristics. These studies aim to improve operational efficiency, scalability, and adaptability in cloud environments, with particular attention to energy efficiency and cost reduction. However, these advancements also present challenges, such as implementation complexity, adaptability in diverse environments, and the rapid pace of technological change, which necessitate practical, efficient, and forward-thinking solutions in real-world settings. The research conducted between 2017 and 2023 highlights the dynamic and rapidly evolving field of cloud computing and distributed systems, providing valuable guidance for ongoing and future research and serving as a crucial reference point for advancing the field.
... Various techniques are evaluated based on several parameters, including monetary aspects (like service cost), application performance metrics (such as response time, execution time, delay, SLA violations, task type, required processor number, throughput, resource availability, and utilization), security, and energy efficiency (overall power and energy consumption) [33]. Optimization methods like load balancing, Round Robin, Bin Packing algorithm, and Gradient Search algorithm are recognized for improving performance, reducing costs, and lowering energy consumption in IaaS resources [34]. ...
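As a small illustration of the Bin Packing approach mentioned in the excerpt above, the sketch below applies the classic first-fit-decreasing heuristic to pack VM CPU demands onto as few hosts as possible; the capacities and demands are made-up numbers, not values from the cited work.

```python
def first_fit_decreasing(demands, capacity):
    """Pack each demand into the first host with room, opening new hosts as needed."""
    hosts = []                      # remaining capacity per host
    placement = []                  # (demand, host_index) pairs
    for d in sorted(demands, reverse=True):
        for i, free in enumerate(hosts):
            if d <= free:
                hosts[i] -= d
                placement.append((d, i))
                break
        else:                       # no existing host fits: open a new one
            hosts.append(capacity - d)
            placement.append((d, len(hosts) - 1))
    return placement, len(hosts)

placement, n_hosts = first_fit_decreasing([4, 8, 1, 4, 2, 1], capacity=10)
print(f"hosts used: {n_hosts}, placement: {placement}")
```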
... In cloud computing, resource scheduling is a crucial area of focus, with numerous techniques being developed based on various scheduling attributes. The Batch Mode Dynamic Scheduling Algorithm, for instance, alternates between online and batch modes, catering to different request rates, with batch mode scheduling requests only after a thorough analysis of collected sets [34]. Similarly, the Load-Based Scheduling Algorithm is designed to maximize profit and efficiency under strict deadline constraints, specifically addressing heavy-tailed requests through effective workload control [54]. ...
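The sketch below illustrates the batch/online mode switch described above; the request-rate threshold and the earliest-deadline ordering are assumptions for illustration, not the cited algorithm's exact rules.

```python
def schedule_batch(requests, now):
    """Batch mode: analyse the collected set as a whole, then order the
    feasible requests by earliest deadline."""
    feasible = [r for r in requests if now + r["runtime"] <= r["deadline"]]
    return sorted(feasible, key=lambda r: r["deadline"])

def dispatch(requests, now, batch_threshold=5):
    """Alternate between modes based on the request rate: a small set is
    served online (as it arrives), a large set is batch-analysed first."""
    if len(requests) < batch_threshold:
        return list(requests)              # online mode
    return schedule_batch(requests, now)   # batch mode

requests = [{"id": i, "runtime": 2 * i, "deadline": 50 - 3 * i} for i in range(1, 8)]
for r in dispatch(requests, now=0):
    print(r)
```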
... In high-demand scenarios, the Auction-Based Resource Allocation Algorithm is used to minimize resource wastage, using auctions for dynamic resource allocation, thus optimizing revenue [34]. The Autoscaling Prediction Model for Resource Provisioning employs a predictive framework to anticipate workload and provision VMs accordingly, using various predictive techniques such as ARIMA, Neural Networks (NN), and Support Vector Machine (SVM) [57]. ...
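A minimal sketch of the predictive autoscaling idea follows, assuming statsmodels for the ARIMA fit; the per-VM capacity, headroom factor, and workload history are invented for illustration, and an NN or SVM forecaster would slot in the same way.

```python
import math
from statsmodels.tsa.arima.model import ARIMA  # assumes statsmodels is installed

# Hypothetical request-rate history (requests/s, sampled each minute).
history = [110, 125, 130, 150, 160, 175, 170, 190, 205, 220, 230, 245]

REQS_PER_VM = 60          # assumed capacity of one VM
HEADROOM = 1.2            # assumed 20% safety margin

fit = ARIMA(history, order=(1, 1, 1)).fit()
forecast = fit.forecast(steps=3)          # predicted workload, next 3 intervals

peak = max(forecast)
vms_needed = math.ceil(peak * HEADROOM / REQS_PER_VM)
print(f"forecast peak {peak:.0f} req/s -> provision {vms_needed} VMs")
```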
Article
Full-text available
This review paper provides an in-depth examination of distributed resource management in cloud computing, focusing on the critical elements of allocation, scheduling, and provisioning. Cloud computing, characterized by its dynamic and scalable nature, necessitates efficient resource management techniques to optimize performance, cost, and service. The study encompasses a comprehensive analysis of various strategies in resource allocation, scheduling methodologies, and provisioning techniques within the cloud computing paradigm. Through comparative analysis, this paper aims to highlight the synergies and trade-offs inherent in these methods, offering a holistic view of distributed resource management. It contributes to the field by bridging the gap in existing literature, presenting a critical, comparative analysis of current strategies and their interplay in distributed cloud environments.
... The exponential growth of digital data from diverse sources necessitates dynamic resource allocation to meet real-time processing demands [3,4]. Traditional static approaches fall short in handling evolving data needs, prompting the use of machine learning-driven dynamic resource allocation strategies [5]. Dynamic resource allocation is critical for supporting complex analytical tasks in Big Data Analytics, such as real-time data streaming and predictive modeling [6]. ...
Article
Full-text available
Edge computing in big data refers to processing and analysing data closer to its source, reducing latency and bandwidth usage. It leverages devices at the network edge to perform computations, making real-time analytics feasible. This distributed approach improves efficiency and enables faster decision-making, critical for applications like IoT, autonomous vehicles, and healthcare. The research proposes an innovative approach that harnesses three machine learning algorithms, Gradient Boosting Decision Trees (GBDT), Deep Q-Network (DQN), and Genetic Algorithm (GA), to enable dynamic adaptive resource allocation within edge computing environments tailored for big data analytics. GBDT enhances classification accuracy by sequentially refining predictions through decision trees, accommodating heterogeneous data types and yielding the high prediction accuracy crucial for dynamic edge environments. The GA evaluates resource allocation strategies represented as chromosomes within a population, selecting promising solutions as parents for the next generation and generating diverse offspring through crossover and mutation operations to discover optimal solutions. DQN facilitates intelligent resource allocation by iteratively refining Q-values based on experiences gathered during interactions with the environment, utilizing a neural network to determine the optimal action for a given state, thereby enhancing performance and efficiency in edge computing environments. This integrated approach ensures flexible resource allocation and fortified capabilities for big data analytics within edge computing environments. The research underscores GBDT as the most promising algorithm for resource allocation in edge computing environments, owing to its exceptional performance in resource utilization, scalability, and accuracy. This nuanced understanding of algorithmic behaviour in dynamic settings offers invaluable insights for optimizing resource allocation strategies, thereby enhancing the efficiency and effectiveness of edge computing systems in handling big data analytics tasks.
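Of the three algorithms, the GA component is the easiest to sketch compactly. Below is a minimal, self-contained genetic algorithm in which a chromosome assigns each task to an edge node and fitness is the resulting makespan; the encoding and fitness are illustrative assumptions, since the paper's exact formulation is not given in the excerpt.

```python
import random

TASKS = [3, 7, 2, 8, 5, 4, 6, 1]     # hypothetical task loads
N_NODES = 3                          # hypothetical number of edge nodes

def fitness(chrom):
    """Lower is better: makespan of the node loads implied by the chromosome."""
    loads = [0] * N_NODES
    for task, node in zip(TASKS, chrom):
        loads[node] += task
    return max(loads)

def crossover(a, b):
    """Single-point crossover between two parent chromosomes."""
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def mutate(chrom, rate=0.1):
    """Randomly reassign each gene (task-to-node mapping) with a small probability."""
    return [random.randrange(N_NODES) if random.random() < rate else g for g in chrom]

def evolve(pop_size=30, generations=50):
    pop = [[random.randrange(N_NODES) for _ in TASKS] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)                 # most promising solutions first
        parents = pop[: pop_size // 2]        # selection
        children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        pop = parents + children
    return min(pop, key=fitness)

best = evolve()
print(f"best allocation {best}, makespan {fitness(best)}")
```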
... Resource allocation involves the scheduling and assignment of resources [5]. Combining resource allocation with task scheduling helps minimize system latency. ...
... The current cost function is computed and represented in Eqn. (5). ...
Article
Full-text available
Cloud enterprises face challenges in managing large amounts of data and resources due to the fast expansion of the cloud computing environment, which serves a wide range of customers, from individuals to large corporations. Poor resource management reduces the efficiency of cloud computing. This research proposes integrated resource allocation security with effective task planning in cloud computing, utilizing a Machine Learning (ML) approach to address these issues. The suggested ML-based Multi-Objective Optimization Technique (ML-MOOT) is outlined as follows: an enhanced, optimization-based task planner aims to reduce make-span time and increase throughput; an ML-based optimization is developed for optimal resource allocation under various design limitations such as capacity and resource demand; and a lightweight authentication system is suggested for encrypting data to enhance data storage safety. The proposed ML-MOOT approach is tested in a separate simulation setting and compared with state-of-the-art techniques to demonstrate its usefulness. The findings indicate that the ML-MOOT approach outperforms existing methods regarding resource use, energy utilization, reaction time, and other factors.
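The abstract's make-span objective can be illustrated with the classic longest-processing-time-first heuristic: sort tasks by length and always give the next task to the least-loaded VM. This is a baseline sketch, not the paper's ML-MOOT; the task times and VM count are invented.

```python
import heapq

def lpt_schedule(task_times, n_vms):
    """Longest-Processing-Time-first: assign each task to the least-loaded VM."""
    loads = [(0.0, vm) for vm in range(n_vms)]    # (current load, vm id) min-heap
    heapq.heapify(loads)
    assignment = {}
    for t in sorted(task_times, reverse=True):
        load, vm = heapq.heappop(loads)           # least-loaded VM so far
        assignment.setdefault(vm, []).append(t)
        heapq.heappush(loads, (load + t, vm))
    makespan = max(load for load, _ in loads)     # finish time of the busiest VM
    return assignment, makespan

assignment, makespan = lpt_schedule([5, 3, 8, 2, 7, 4, 6], n_vms=3)
print(f"makespan: {makespan}, assignment: {assignment}")
```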
... However, allocating resources in a way that balances workload fluctuations is a challenging task. For businesses looking to cut expenses without sacrificing speed or high availability, cloud resource management optimization approaches are an invaluable resource [12]. For companies that depend on cloud infrastructure to run mission-critical applications, this is especially important. ...
... Random vectors uniformly distributed between 0 and 1 are used for r1 and r2, while the coefficient a is progressively decreased from 2 to 0. The α, β, and δ wolves are assumed to know the location of the prey best, since its position is never evident in advance; see Eqns. (12), (13), and (14). ...
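For context, the sketch below implements the standard Grey Wolf Optimizer update these equations describe (A = 2·a·r1 − a, C = 2·r2, with the pack guided jointly by the α, β, and δ leaders and a decaying linearly from 2 to 0); the sphere objective and population sizes are placeholders, not the cited paper's setup.

```python
import random

def fitness(x):                      # stand-in objective: sphere function
    return sum(v * v for v in x)

def gwo_step(wolves, leaders, a):
    """One Grey Wolf Optimizer position update in its standard form."""
    alpha, beta, delta = leaders
    new = []
    for x in wolves:
        pos = []
        for d in range(len(x)):
            pulls = []
            for leader in (alpha, beta, delta):
                r1, r2 = random.random(), random.random()  # r1, r2 ~ U(0, 1)
                A = 2 * a * r1 - a                         # shrinks as a -> 0
                C = 2 * r2
                D = abs(C * leader[d] - x[d])              # distance to the leader
                pulls.append(leader[d] - A * D)
            pos.append(sum(pulls) / 3)   # alpha, beta, delta guide jointly
        new.append(pos)
    return new

DIM, N, MAX_ITER = 4, 12, 100
wolves = [[random.uniform(-5, 5) for _ in range(DIM)] for _ in range(N)]
for it in range(MAX_ITER):
    a = 2 - 2 * it / MAX_ITER                    # a decreases linearly from 2 to 0
    leaders = sorted(wolves, key=fitness)[:3]    # alpha, beta, delta = best three
    wolves = gwo_step(wolves, leaders, a)
print("best fitness:", min(map(fitness, wolves)))
```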
Article
Full-text available
5G networks provide unmatched speed, connectivity, and the ability to support a wide range of diversified services, ushering in a transformational era of telecommunications. By applying innovative methods in data analysis, time-series forecasting, and optimization, this work provides a thorough strategy for addressing these challenges. Detailed collection of configuration and performance-management data from a 5G experimental prototype forms the basis of our methodology; slicing ratios, priority, QCI (Quality of Service Class Identifier), and power measurements are among the critical parameters included in this collection. This work uses min-max normalization to guarantee consistency and standardized scaling, preparing the data for in-depth examination. For time-series forecasting, this novel method presents the Recursive LSTM (Long Short-Term Memory) model; LSTM networks, well known for their ability to capture long-term dependencies, are essential for identifying temporal patterns in the data. To carefully adjust parameters and improve dynamic slicing configurations, this work employs the Grey Wolf Optimization (GWO) method, which, taking inspiration from the grey wolf pack's hierarchical, structured decision-making process, ensures that network resource allocation constantly adapts to fulfill diverse objectives. The combination of these advanced techniques greatly improves the accuracy and flexibility of time-series forecasting and resource distribution in 5G networks. Through the harmonious integration of data-driven insights, LSTM predictions, and the effective optimization capabilities of GWO, our methodology enables 5G networks to allocate resources with agility and flexibility, ultimately providing real-time, high-quality services. The method outperforms other approaches such as CNN, CNN-LSTM, and RNN-LSTM, all implemented in MATLAB, by a substantial margin of 5.55%, achieving an accuracy of 99.12%.
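The two preprocessing and forecasting steps named above are easy to sketch. Below, min_max_normalize scales a series into [0, 1] and recursive_forecast feeds each prediction back as input for the next step, which is how a recursive LSTM is typically applied; the predict_one_step stand-in is a placeholder for the trained network, and the numbers are invented.

```python
def min_max_normalize(series):
    """Scale values into [0, 1]; also return the bounds needed to
    invert the transform on the forecasts."""
    lo, hi = min(series), max(series)
    return [(v - lo) / (hi - lo) for v in series], lo, hi

def recursive_forecast(history, predict_one_step, horizon):
    """Recursive multi-step forecasting: each prediction is appended to the
    input window and fed back for the next step."""
    window = list(history)
    out = []
    for _ in range(horizon):
        y = predict_one_step(window)
        out.append(y)
        window.append(y)          # feed the prediction back as input
    return out

# Toy stand-in predictor; a real model would be the trained LSTM.
mean_of_last3 = lambda w: sum(w[-3:]) / 3

scaled, lo, hi = min_max_normalize([10, 12, 15, 14, 18, 21])
preds = recursive_forecast(scaled, mean_of_last3, horizon=3)
print([p * (hi - lo) + lo for p in preds])   # invert the scaling
```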
... These algorithms allocate workloads to servers based on various factors, including energy efficiency and server utilization. Different resource scheduling types exist, including task, VM, and storage scheduling (Belgacem, 2022). Several algorithms for dynamic resource scheduling have been proposed, including multi-objective nested Particle Swarm Optimization (TSPSO) (Jena, 2015), a heuristic-based PSO algorithm (Alsaidy et al., 2022), a hybridized Whale Optimization algorithm (Strumberger et al., 2019), and deep-learning-based algorithms (Jiang et al., 2020), among others. By ensuring that the most energy-efficient resources are utilized first, resource scheduling helps reduce the overall energy footprint of data centers and cloud services. ...
Chapter
Amidst an era marked by a relentless surge in digital data and computational demands, the imperative for eco-conscious and sustainable computing solutions has reached unprecedented significance. This study delves into the emerging realm of green cloud computing (GCC), a pivotal catalyst in cultivating a greener digital tomorrow. To nurture a sustainable digital frontier, this research investigates various GCC strategies encompassing efficient data center designs, resource optimization techniques, and innovative virtualization practices. Additionally, the authors scrutinize real-world instances of industry leaders embracing sustainable energy sources. Furthermore, they shed light on the obstacles within eco-friendly cloud computing while illuminating forthcoming trends for the successful integration of sustainable and eco-friendly technologies. This study offers profound insights for researchers, students, and stakeholders alike.
... In the case of online workflows, this complexity may even increase. When planning tasks and allocating resources in cloud environments, a number of goals [20,21] can be taken into account, such as lowering energy consumption, load balancing, and increasing resource utilization. In the literature, several resource management strategies have been put forth, each of which aims to accomplish one or more goals in the most effective way possible. ...
Article
Full-text available
Cloud organizations now face a challenge in managing the enormous volume of data and various resources in the cloud due to the rapid growth of the virtualized environment with many service users, ranging from small business owners to large corporations. The performance of cloud computing may suffer from ineffective resource management. As a result, resources must be distributed fairly among various stakeholders without sacrificing the organization’s profitability or the satisfaction of its customers. A customer’s request cannot be put on hold indefinitely just because the necessary resources are not immediately available. Therefore, a novel cloud resource allocation model incorporating security management is developed in this paper. Here, the Deep Linear Transition Network (DLTN) mechanism is developed for effectively allocating resources to cloud systems. Then, an Adaptive Mongoose Optimization Algorithm (AMOA) is deployed to compute the beamforming solution for reward prediction, which supports the process of resource allocation. Moreover, the Logic Overhead Security Protocol (LOSP) is implemented to ensure secured resource management in the cloud system, where Burrows–Abadi–Needham (BAN) logic is used to predict the agreement logic. During the results analysis, the performance of the proposed DLTN-LOSP model is validated and compared using different metrics such as makespan, processing time, and utilization rate. For system validation and testing, 100 to 500 resources are used in this study, and the results achieved a 2.3% improvement in makespan and a utilization rate of 13%. Moreover, the obtained results confirm the superiority of the proposed framework, with better performance outcomes.
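For reference, the two headline metrics reported above can be computed as sketched below: makespan is the finish time of the last task, and the utilization rate relates total VM busy time to the capacity available over that span. The completion and busy times here are invented placeholders, not the paper's measurements.

```python
def makespan(completion_times):
    """Makespan: completion time of the last task to finish."""
    return max(completion_times)

def utilization_rate(busy_time_per_vm, span):
    """Fraction of the makespan during which the VMs were actually busy."""
    n = len(busy_time_per_vm)
    return sum(busy_time_per_vm) / (n * span)

completions = [12.0, 18.5, 17.2, 20.1]   # hypothetical per-task finish times
busy = [15.0, 12.5, 18.0, 10.5]          # hypothetical per-VM busy times
span = makespan(completions)
print(f"makespan={span}, utilization={utilization_rate(busy, span):.2%}")
```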
... They consider different parameters, e.g., Quality of Service (QoS), resource failure, resource mapping, resource prediction, resource pricing, resource provisioning, resource scheduling, Virtual Machine (VM) migration placement, and workload balancing for classifying existing works in the survey. In [18], the authors survey the dynamic aspect of resource allocation in the cloud, underscoring its importance. This paper studies the aspects of Dynamic Resource Allocation (DRA) in cloud computing environments. ...
Article
Full-text available
In recent years, there has been a trend to integrate networking and computing systems, whose management is becoming increasingly complex. Resource allocation is one of the crucial aspects of managing such systems and is affected by this increased complexity. Resource allocation strategies aim to effectively maximize performance, system utilization, and profit by considering virtualization technologies, heterogeneous resources, context awareness, and other features. In such a complex scenario, security and dependability are vital concerns that need to be considered in future computing and networking systems in order to provide future advanced services, such as mission-critical applications. This paper provides a comprehensive survey of existing literature that considers security and dependability for resource allocation in computing and networking systems. The current research works are categorized by the allocated type of resources for different technologies, scenarios, issues, attributes, and solutions. The paper presents the research works on resource allocation that consider security and dependability, both singularly and jointly, and future research directions on resource allocation are also discussed. The paper shows that only a few works, even singularly, consider security and dependability in resource allocation in future computing and networking systems, and it highlights the importance of jointly considering security and dependability and the need for intelligent, adaptive, and robust solutions. This paper aims to help researchers effectively consider security and dependability in future networking and computing systems.
... Enterprises or users who build private clouds have different purposes for using the cloud and have different requirements for different operating environments. Therefore, it is important to provide a customized virtual-machine environment suitable for various companies or users who wish to operate a private cloud [11][12][13][14][15]. ...
Article
Full-text available
A cloud-computing company or user must create a virtual machine to build and operate a cloud environment. With the growth of cloud computing, it is necessary to build virtual machines that reflect the needs of both companies and users. In this study, we propose a bespoke virtual machine orchestrator (BVMO) as a method for constructing a virtual machine. The BVMO builds resource volumes as core assets to meet user requirements and builds virtual machines by reusing and combining these resource volumes. This can increase the reusability and flexibility of virtual-machine construction. A case study was conducted to build a virtual machine by applying the proposed BVMO to an actual OpenStack cloud platform, and it was confirmed that the construction time of the virtual machine was reduced compared with that of the existing method.
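To make the volume-reuse idea concrete, here is a minimal sketch in which reusable volume templates (the paper's "core assets") are combined into a virtual-machine specification; the catalog entries, field names, and flavor id are hypothetical, and a real deployment would hand such a spec to the OpenStack APIs rather than print it.

```python
# Hypothetical reusable resource-volume templates (the "core assets").
VOLUME_CATALOG = {
    "web-runtime":  {"packages": ["nginx"], "size_gb": 10},
    "python-stack": {"packages": ["python3", "pip"], "size_gb": 5},
    "monitoring":   {"packages": ["prometheus-node-exporter"], "size_gb": 2},
}

def compose_vm(name, flavor, volume_names):
    """Build a VM spec by reusing and combining volumes from the catalog."""
    volumes = [VOLUME_CATALOG[v] for v in volume_names]
    return {
        "name": name,
        "flavor": flavor,                  # e.g. an OpenStack flavor id
        "block_devices": volumes,
        "total_volume_gb": sum(v["size_gb"] for v in volumes),
    }

spec = compose_vm("app-server-01", "m1.medium", ["web-runtime", "python-stack"])
print(spec)
```

Because each volume is defined once and reused across VM specs, adding a new kind of VM is a matter of combining existing entries, which is the reusability gain the abstract describes.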