Article · PDF available

An Improved Round Robin Scheduling Algorithm for CPU Scheduling

Authors:

Abstract

An operating system provides many functions, such as process management, memory management, file management, input/output management, networking, protection, and command interpretation. Among these, process management is the most important, because the operating system is the system program through which running processes interact with the hardware. Improving CPU efficiency therefore requires managing all processes, and for this purpose various scheduling algorithms are used. Many CPU scheduling algorithms are available, but each has its own deficiencies and limitations. In this paper, I propose a new approach to the round robin scheduling algorithm that helps to improve CPU efficiency.
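As background for the proposed improvement, the behaviour of a plain fixed-quantum round robin scheduler can be sketched in a few lines of Python (an illustrative simulation, not the paper's implementation; all processes are assumed to arrive at time 0):

```python
from collections import deque

def round_robin(bursts, quantum):
    """Simulate fixed-quantum round robin for processes arriving at t=0.
    bursts: list of CPU burst times. Returns (avg_waiting, avg_turnaround)."""
    n = len(bursts)
    remaining = list(bursts)
    finish = [0] * n
    queue = deque(range(n))
    t = 0
    while queue:
        i = queue.popleft()
        run = min(quantum, remaining[i])   # run for one time slice at most
        t += run
        remaining[i] -= run
        if remaining[i] > 0:
            queue.append(i)                # not done: back of the queue
        else:
            finish[i] = t                  # done: record completion time
    # arrival time is 0, so turnaround = finish time
    turnaround = finish
    waiting = [turnaround[i] - bursts[i] for i in range(n)]
    return sum(waiting) / n, sum(turnaround) / n
```

With the textbook burst set [24, 3, 3] and quantum 4, this yields an average waiting time of about 5.67 and an average turnaround time of about 15.67; improved variants of round robin aim to lower these figures by choosing the quantum more carefully.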
... From the results it is shown that the first-level policy plays a very important role in achieving approximately good fairness. Results have also shown that the proposed system is useful for resource providers, who can build and manage different computing environments containing multiple challenging applications [9]. The algorithm used by the CPU to schedule processes affects the performance of the operating system. ...
... New Time Quantum = 6.25 (Eq. 9). Section V: Testing and Results ...
... "An Improved Round Robin Scheduling Algorithm for CPU Scheduling" [11]. In this paper, in order to increase CPU utilization, the authors proposed integrating Round Robin with a genetic algorithm. ...
Conference Paper
Processes/tasks are scheduled in order to finish on time. CPU scheduling is a technique that permits one process to use the CPU while another is delayed (on standby) due to a lack of resources such as I/O, allowing the CPU to be fully utilized. The goal of CPU scheduling is to improve the system's efficiency, speed, and fairness. When the CPU is idle, the operating system chooses one of the processes in the ready queue to run next; this selection is performed by the short-term CPU scheduler, which picks one of the memory-resident processes that are ready to execute and allocates the CPU to it. Every system software must perform scheduling, and practically all virtual machines are scheduled before use. Enhancing CPU efficiency, CPU utilization, delay, and CPU cycles is the primary objective of all presently available CPU scheduling techniques. There are various ways to tackle this, for example algorithms such as FCFS, SJN, priority scheduling, and many more, but in this paper we chose to work with the Round Robin (RR) scheduling algorithm. The behaviour of the RR algorithm is significantly influenced by the length of the time slice: time slices should be large compared with the context-switch time, as this enhances performance by lowering the load on the CPU. In this study, we review an existing technique that reduces context switching and breaks away from a fixed time quantum; with optimization, a range of time quanta is used by the RR scheduling algorithm. This paper focuses mainly on context switching and RR scheduling: when the time slice of one process is completed, the processor is allocated to the next process, and the state of the preempted process must be saved so that it can later resume from the point where it was halted. The review in this paper compares context switching under different scheduling algorithms and discusses the significance of past work.
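The idea of breaking away from a fixed quantum can be sketched as follows. The specific rule used here, recomputing the quantum each cycle as the mean of the remaining bursts, is an illustrative assumption, not necessarily the exact rule of the reviewed technique:

```python
import statistics

def dynamic_rr(bursts):
    """Round robin where the quantum is recomputed at the start of each
    cycle as the mean of the remaining burst times (one common dynamic-
    quantum heuristic). Returns (finish_times, dispatch_count)."""
    n = len(bursts)
    remaining = list(bursts)
    finish = [0] * n
    ready = list(range(n))
    t = 0
    dispatches = 0
    while ready:
        # adapt the quantum to the processes still in the ready queue
        quantum = statistics.mean(remaining[i] for i in ready)
        next_ready = []
        for i in ready:
            run = min(quantum, remaining[i])
            t += run
            remaining[i] -= run
            dispatches += 1
            if remaining[i] > 1e-9:
                next_ready.append(i)
            else:
                finish[i] = t
        ready = next_ready
    return finish, dispatches
```

For bursts [24, 3, 3] the first-cycle quantum is 10, so the two short processes finish in their first slice and only the long process needs a second dispatch (4 dispatches total, versus 8 for fixed quantum 4), which is the kind of context-switch reduction these schemes target.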
... Generally, task scheduling algorithms consist of traditional task scheduling and metaheuristic algorithms [16]. Traditional task scheduling algorithms include First Come First Serve (FCFS) [17], Shortest Job First (SJF) [17], Max-Min [18], Min-Min [19], and polling scheduling [20,21]. Maheswaran et al. [22] modified standard heuristics for task assignment in predictable environments. ...
Article
Full-text available
With the rapid development of cloud computing and network technologies, large-scale remote sensing data collection tasks are receiving more interest from individuals and small and medium-sized enterprises. Large-scale remote sensing data collection has its challenges, including limited available node resources, short collection times, and lower collection efficiency. Moreover, public remote data sources impose restrictions on users, such as limits on IP access, frequency, and bandwidth. In order to satisfy users' demand for accessing public remote sensing data collection nodes and effectively increase the data collection speed, this paper proposes a TSCD-TSA dynamic task scheduling algorithm that combines a BP neural network prediction algorithm with PSO-based task scheduling algorithms. Comparative experiments were carried out using the proposed task scheduling algorithms on an acquisition task using data from Sentinel-2. The experimental results show that the MAX-MAX-PSO dynamic task scheduling algorithm has a smaller fitness value and a faster convergence speed.
Chapter
Cloud computing enables multiple users to access a network of computing resources. To satisfy the expectations of cloud providers and consumers, it has become increasingly focused on delivering Quality of Service (QoS). QoS is geared towards reducing task completion time (also referred to as makespan) and response time, while enhancing the efficiency of resource utilization. To achieve QoS, novel task scheduling strategies are employed, as traditional schedulers have often failed to meet the required standards; several schedulers have prioritized reducing waiting or response times without considering the impact on specific processes. This article proposes the FSRmSTS task scheduling algorithm, which combines First Come First Serve (FCFS), Shortest Job First (SJF), and Round Robin (RR) with a Median Standard Time Slice scheduler. In order to balance waiting times for both short and long tasks, it uses a dynamic task quantum. It also splits the ready queue into two sub-queues according to task duration, and tasks are assigned to resources from each sub-queue in a mutually exclusive manner. The performance of the proposed algorithm was evaluated using the CloudSim toolkit 3.0.3 and compared against six other scheduling algorithms, namely FCFS, SJF, RR, Improved RR, HFSR (Hybrid First Come First Serve, Shortest Job First, Round Robin), and HFSR with Median. The results revealed that FSRmSTS outperformed the other algorithms by reducing waiting and turnaround times and addressing the issue of long-task starvation.
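The median-based sub-queue idea can be illustrated with a small sketch. This is a simplified illustration of splitting the ready queue around the median burst (which also serves as the shared time quantum), not the full FSRmSTS algorithm:

```python
import statistics
from collections import deque

def split_by_median(tasks):
    """Split (name, burst) tasks into short/long sub-queues around the
    median burst time; the median doubles as the time quantum.
    A simplified sketch of the two-sub-queue idea, not full FSRmSTS."""
    median = statistics.median(burst for _, burst in tasks)
    short = deque(t for t in tasks if t[1] <= median)   # short tasks
    long_ = deque(t for t in tasks if t[1] > median)    # long tasks
    return short, long_, median
```

Serving the two sub-queues in a mutually exclusive, alternating manner keeps short tasks from waiting behind long ones while still giving long tasks regular CPU time, which is how such schemes address long-task starvation.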
Article
The CPU scheduling technique influences the performance and efficiency of operating systems. The round robin scheduling algorithm is ideal for time-shared systems, but it is not optimal for real-time operating systems, since it yields more context switching, longer waiting times, and higher turnaround times. The performance of the algorithm is predominantly influenced by the designated time quantum; however, determining a suitable time quantum is extremely challenging. This paper presents a CPU scheduling algorithm that provides a better tradeoff between waiting time, turnaround time, response time, and the number of context switches by using a hypothesis-based quanta-generation approach. It combines the CPU burst requirements of actual processes with some noisy data and plots them against the presumed CPU quanta to obtain quanta densities, so that a polynomial regression model can be fitted to the data points with the highest adjusted R-squared; the required quantum is then obtained by applying inferential statistics. The scheduling is dynamic in nature because it generates the next CPU quantum with reference to the quanta used in the previous cycle and the remaining CPU burst requirements of the processes, and it is adaptive in nature because, at each cycle, it uses 'd' (5, 5, 4, 3, 2) degrees of freedom to calculate the Jarque-Bera statistic to accept or reject the hypothesis. The algorithm is implemented in 'R', and its performance has been evaluated on a sample of five processes with some noisy data; it outperforms conventional RR and significantly reduces the performance parameters mentioned above. Implementing this algorithm in a time-sharing or distributed environment will improve system performance and will help to avoid issues like thrashing and starvation while incorporating aging and CPU affinity. Since the proposed algorithm is work-conserving, it can be implemented in network packet switching, statistical multiplexing, and real-time systems.
Conference Paper
Full-text available
Process scheduling is at the heart of any computer system, since it involves deciding how to allocate resources among competing processes; sharing computer resources between multiple processes is also called scheduling. A process is the smallest unit of work of a program and requires a set of resources, allocated to it by the CPU, for its execution. Because processes are many in number and keep arriving, different scheduling techniques are employed to enable faster and more efficient process execution, thereby reducing the waiting time faced by each process and increasing CPU utilization. But all algorithms have their own deficiencies and limitations. In this paper, we propose a new approach to the Round Robin scheduling algorithm using the dispatch-latency factor of the process, which helps to improve CPU efficiency to a certain extent; experimental results suggest that it improves processor performance.
Article
Scheduling algorithms play a significant role in optimizing CPU use in an operating system. Each scheduling algorithm [8] schedules the processes in the ready queue according to its own design and properties. In this paper, the performance of First Come First Serve (FCFS) scheduling, non-preemptive and preemptive scheduling, Shortest Job First (SJF), and the Round Robin algorithm is analysed with an example, and the results are compared using performance parameters such as minimum waiting time, minimum turnaround time, and response time. This will help young researchers analyse these algorithms and develop a new optimized algorithm for CPU optimization.
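The waiting-time and turnaround-time parameters used in such comparisons are easy to compute for the non-preemptive algorithms; a minimal sketch (assuming all processes arrive at time 0, with example burst times chosen here for illustration):

```python
def fcfs_metrics(bursts):
    """Average waiting and turnaround time under FCFS, arrivals at t=0.
    Waiting time = start time; turnaround = completion time."""
    t, waits, tats = 0, [], []
    for b in bursts:
        waits.append(t)     # process waits until all earlier ones finish
        t += b
        tats.append(t)      # turnaround = finish time (arrival is 0)
    n = len(bursts)
    return sum(waits) / n, sum(tats) / n

def sjf_metrics(bursts):
    """Non-preemptive SJF with simultaneous arrivals is simply
    FCFS applied to the bursts in ascending order."""
    return fcfs_metrics(sorted(bursts))
```

For bursts [6, 8, 7, 3], FCFS gives an average waiting time of 10.25 while SJF gives 7.0, illustrating why SJF is provably optimal for average waiting time when all burst lengths are known in advance.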
Article
Full-text available
The programmable network paradigm allows the execution of active applications in routers or switches, providing more flexibility than traditional networks and richer services for users. In this paper, we discuss issues in designing resource schedulers for processing engines at programmable routers. One of the key problems is the inability to determine the execution times of packets from information in their headers for scheduling. Therefore, we present a suitable packet scheduling algorithm called Start-time Weighted Fair Queueing (SWFQ) that does not require packet processing times in advance. Through analysis and simulations, we show that the proposed scheme can achieve good fairness and predictable delay guarantees.
Article
Full-text available
Since the invention of the movable-head disk, people have improved I/O performance by intelligent scheduling of disk accesses. We have applied these techniques to systems with large memories and potentially long disk queues. By viewing the entire buffer cache as a write buffer, we can improve disk bandwidth utilization by applying some traditional disk scheduling techniques. We have analyzed these techniques, which attempt to optimize head movement and guarantee fairness in response time, in the presence of long disk queues. We then propose two algorithms which take rotational latency into account, achieving disk bandwidth utilization of nearly four times that of a simple first-come-first-serve algorithm. One of these two algorithms, weighted shortest total time first, is particularly applicable to a file server environment because it guarantees that all requests get to disk within a specified time window.
Article
The programmable network paradigm allows the execution of active applications in routers or switches, providing more flexibility than traditional networks and richer services for users. In this paper, we discuss issues in designing resource schedulers for processing engines in programmable networks. One of the key problems in programmable networks is the inability to determine the execution times of packets from information in their headers for scheduling, in contrast to using packet length in transport resource scheduling. Therefore, this paper focuses on developing CPU scheduling algorithms that can schedule the CPU resource adaptively, fairly, and efficiently among all competing flows. We present two scheduling algorithms that resolve the problem of prior determination of the CPU requirement of a data packet: one, called Start-time Weighted Fair Queueing, does not require packet processing times in advance; the other, called Prediction-based Fair Queueing, uses a prediction algorithm to estimate the CPU requirements of packets and schedules them accordingly. The effectiveness of these algorithms in achieving fairness and providing delay guarantees is shown through analysis and simulation.
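The core trick of start-time based fair queueing, scheduling by start tags so that a packet's processing cost is only needed after it has been served, can be sketched as follows. This is a simplified illustration of the SWFQ idea; the class name, weight handling, and virtual-time bookkeeping here are illustrative assumptions, not the paper's exact algorithm:

```python
import heapq

class StartTimeFairQueue:
    """Serve packets in order of start tags. A packet's start tag is
    max(current virtual time, finish tag of its flow's previous packet),
    so the processing cost is charged to the flow's finish tag only
    after service, when it has actually been measured."""

    def __init__(self):
        self.finish = {}   # flow -> finish tag of its last served packet
        self.vtime = 0.0   # virtual time = start tag of packet in service
        self.heap = []     # (start_tag, seq, flow, payload)
        self.seq = 0       # insertion counter to break ties fairly

    def enqueue(self, flow, payload):
        start = max(self.vtime, self.finish.get(flow, 0.0))
        heapq.heappush(self.heap, (start, self.seq, flow, payload))
        self.seq += 1

    def dequeue(self, measured_cost, weight=1.0):
        """Serve the packet with the smallest start tag; charge its
        measured processing cost to the flow afterwards."""
        start, _, flow, payload = heapq.heappop(self.heap)
        self.vtime = start
        self.finish[flow] = start + measured_cost / weight
        return flow, payload
```

A flow that just consumed a large (only retrospectively known) amount of CPU gets a large finish tag, which pushes the start tags of its subsequent packets later and lets competing flows catch up, giving fairness without advance knowledge of per-packet cost.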
  • Shamim, H. M. (1998). Operating System, DCSA-2302. School of Science and Technology, Bangladesh Open University, Gazipur-1705.