Efficient Resource Utilization Algorithm
(ERUA) for Service Request Scheduling in
Cloud
Ramkumar N, Nivethitha S
rkjeeth@gmail.com, nivethithasomu@gmail.com
School of Computing, SASTRA University, Thanjavur, Tamil Nadu.
Abstract - Cloud computing provides a massive pool of resources under a pay-as-you-use policy. The cloud delivers these resources on demand over the network under varying load conditions. Since users are charged according to their usage, effective utilization of resources poses a major challenge. To accomplish this, a service request scheduling algorithm is needed that reduces the waiting time of tasks in the scheduler and maximizes Quality of Service (QoS). Our proposed Efficient Resource Utilization Algorithm (ERUA) is based on the 3-tier cloud architecture (consumer, service provider and resource provider) and benefits both the user (QoS) and the service provider (cost) through effective schedule reallocation based on the utilization ratio, leading to better resource utilization. Performance analysis against existing scheduling techniques shows that our algorithm produces a more optimized schedule and a higher efficiency rate.
Keywords: Cloud; Service Request; Service Provider; Consumer; Scheduler Units
I INTRODUCTION
Cloud computing is an emerging, enabling technology that lets us think beyond what was previously possible. Recognizing the services and amenities the cloud provides, many organizations have decided to move to the cloud to reduce infrastructure cost and energy consumption. The cloud lets them run their business with a wide range and variety of services, and it has changed the traditional way of using resource infrastructure. Service request scheduling is the most crucial area with respect to both the service provider's profit and the user's QoS.
Cloud computing services are offered through a 3-tier architecture. With respect to service request scheduling, the architecture comprises the resource provider, the service providers and the consumers. To service a consumer's request, the service provider must either procure new hardware resources or rent them from a resource provider. Renting resources generally costs less than buying new ones.
The service provider hires resources from the resource provider and dynamically creates Virtual Machine (VM) instances to serve the consumers. The resource provider takes on the responsibility of dispatching the VMs to physical servers. Charges for a running instance are based on a flat rate per time unit. Users submit requests to process an application consisting of one or more services; these services, along with time and cost parameters, are sent to the service provider. In general, the actual processing time of a request is much longer than its estimated time because of delays at the service provider site. Since the cloud is a form of "pay-as-you-use" utility, the service provider needs to reduce response time and delay. Here, service request scheduling becomes essential to maximize the service provider's profit and to improve the QoS offered to the user.
Earlier research contributions towards service request scheduling algorithms include SERver CONsolidation [1], an optimized service scheduling algorithm [2], a scheduling policy based on priority and admission control [3], integration of VMs for sorting tasks based on profit [4], a multiple pheromone algorithm [5], gang scheduling on VMs [6], a utility model to balance profit between the user and the service provider [7], dynamic service request resource allocation through gi-First In First Out (FIFO) [8], Service Level Agreement (SLA) creation, management and usage in utility computing [9], scheduling dynamic user requests to maximize the service provider's profit [10], Ant Colony Optimization (ACO) [11], Particle Swarm Optimization (PSO) [12], dynamic distribution of user requests between application services in a decentralized way [13], a genetic-algorithm-based scheduling algorithm to reduce waiting time [14], task consolidation heuristics with respect to idle and active energy consumption [15], a processor-sharing pricing model using max_profit and max_utility [16], optimized service request to resource mapping using a genetic algorithm [17], and a dynamic priority scheduling algorithm [18].
Our algorithm, ERUA, schedules task units based on the utilization ratio of the queue. It ensures that the utilization ratio always stays within 1, leading to better resource utilization and enhanced efficiency by enabling task units to finish execution within their deadlines. On our sample data set, ERUA proves to be more optimal than the existing service request scheduling algorithms.
Ramkumar N et al. / International Journal of Engineering and Technology (IJET), ISSN: 0975-4024, Vol 5 No 2, Apr-May 2013
The remainder of the paper is organized as follows: Section 2 presents the concept of service request scheduling and our proposed algorithm, Section 3 discusses the results and their interpretation, and Section 4 concludes the paper.
II PROPOSED METHODOLOGY
2.1 Scheduling Process
The process of scheduling can be viewed as service request scheduling (between the service provider and the consumer) and resource scheduling (between the service provider and the resource provider). Service request scheduling proceeds as follows:
a) Users submit their requests to the service provider.
b) The service provider accepts the requests for execution.
c) The requests are processed within the service request scheduling architecture.
d) VMs are dynamically generated and dispatched at the resource provider site.
2.2 System Architecture
The major components in service request scheduling are (Figure 1):
i. Classifier: Receives the user request, processes it and splits it into smaller task units. These task units could be scheduled directly, but they must first be assigned initial priorities, based either on the system state or on task characteristics. Once each task has its priority, the task units are sent to the scheduler component.
Figure 1. System Architecture of ERUA.
ii. Scheduler: Each scheduler contains several schedule units, each with its own priority based on the system design and the real situation. The scheduler pushes task units into appropriate schedule units based on the idleness and saturation of each schedule unit. Schedule units execute the task units according to the algorithm; the task unit with the earliest deadline is scheduled first to optimize the result.
iii. Compactor: Summarizes the completed task units during each cycle and sends them to the resource provider.
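As a minimal sketch of the classifier-to-scheduler hand-off described above (the class names, the busy-until bookkeeping and the task numbers are our own illustration, not from the paper):

```python
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class TaskUnit:
    deadline: int                      # ms; earliest deadline runs first
    name: str = field(compare=False)
    exec_time: int = field(compare=False)

class ScheduleUnit:
    """One priority queue of task units, ordered by earliest deadline."""
    def __init__(self, priority):
        self.priority = priority
        self.tasks = []
        self.busy_until = 0            # time (ms) at which this queue frees up

    def push(self, task):
        heapq.heappush(self.tasks, task)
        self.busy_until += task.exec_time

def classify(request, units):
    """Classifier: push each task unit onto the currently least-saturated
    schedule unit, per the idleness/saturation rule above."""
    for task in request:
        min(units, key=lambda u: u.busy_until).push(task)

units = [ScheduleUnit(p) for p in ("high", "medium", "low")]
classify([TaskUnit(6, "T2", 3), TaskUnit(14, "T3", 4), TaskUnit(8, "T6", 2)], units)
```

With three idle units, each task unit lands on a different queue, mirroring the example in Section 2.3 where T2, T3 and T6 head the high, medium and low priority queues.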
2.3 The Process of Service Request Scheduling
Users submit requests to the service provider (SP) to execute applications consisting of one or more services. The SP must then perform service request scheduling on these requests while operating on a massive set of data, so it requires a scheduler that schedules the requests efficiently, maximizing QoS for the user and profit on the SP side. The process of service scheduling starts here. Each request is spliced into task units, which are assigned initial priorities in the classifier. The classifier pushes these task units into appropriate schedule units based on the schedule units' state, and the schedule units execute the task units according to the algorithm. Our algorithm uses the utilization ratio as the deciding factor for priority reassignment. Let us consider an example of priority reassignment (Figure 2).
Figure 2. Example of scheduling task in prioritized queue.
Task units T2 (6), T3 (14) and T6 (8), having the lowest deadlines, are assigned for execution first in the high, medium and low priority schedule units respectively. After one cycle the remaining task units are: high queue, T5 (11) and T1 (13); medium queue, T7 (11); low queue, T4 (9) and T6 (8). If there is a task T8 (19) with an execution time of 7 ms in the high priority queue that would run after T1 (13), it can instead be scheduled in the medium priority schedule unit after T7 (11), which frees up after 5 ms. This is done by analysing the remaining jobs and the completion times of the jobs currently scheduled in each queue, minimizing the delay of 1 ms while improving processor utilization. T8 (19) then completes its execution by 12 ms, within its deadline. Whenever a queue frees up, irrespective of its priority class, tasks can be scheduled onto it based on the state of the schedule units. Priority reassignment based on deadline gives a better way of maximizing throughput and system performance through effective resource utilization.
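The cross-queue reassignment step can be sketched as follows. The busy-until map is our own bookkeeping; T8's deadline (19 ms), its execution time (7 ms) and the medium queue freeing at 5 ms come from the example above, while the high and low queue times are illustrative assumptions:

```python
def reassign(exec_time, deadline, busy_until):
    """Move a waiting task to whichever queue completes it earliest.
    busy_until maps each queue to the time (ms) at which it frees up.
    The move is accepted only if the task still meets its deadline."""
    queue = min(busy_until, key=busy_until.get)
    finish = busy_until[queue] + exec_time
    if finish > deadline:
        return None                    # no queue can meet the deadline
    busy_until[queue] = finish
    return queue, finish

# T8: deadline 19 ms, execution time 7 ms; the medium queue frees at 5 ms.
queues = {"high": 12, "medium": 5, "low": 9}
print(reassign(7, 19, queues))         # ('medium', 12): done within deadline
```

This reproduces the example's outcome: T8 moves to the medium priority queue and finishes at 12 ms, inside its 19 ms deadline.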
III RESULTS AND DISCUSSIONS
Users submit their requests to the service provider, and each request enters the scheduling architecture through the classifier. The classifier component splits the user request into several independent task units. Consider the following table, which lists the task units of a single request (Table 1). The classifier assigns each task unit an initial priority (least deadline first) and schedules them onto the schedule units. Here we have three scheduler queues with high, medium and low priority respectively; the number of queues depends on the design and the current load of the system.
The tasks initially scheduled for execution are T2 (6) and T6 (11) in the high priority queue, T3 (14) and T7 (17) in the medium priority queue, and T5 (8) and T10 (9) in the low priority queue, as shown in Figure 3. T8 (13), with high priority, is scheduled next to T6 (11) in the high priority queue (Figure 4). The scheduler must always ensure that

Utilization Ratio_i (Queue) = Σ (Execution Time_i / Deadline_i) ≤ 1    (i)
Table 1. Task Units Schedule.
The task units T2, T6, T8, T3, T7, T5 and T10 are scheduled for execution within their deadlines, with a utilization ratio of 0.89 (T2, T6 and T8) on the high priority queue, 0.33 (T3 and T7) on the medium priority queue and 0.72 (T5 and T10) on the low priority queue. To keep a queue busy, always ensure that its utilization ratio stays within 1. The remaining tasks still to be scheduled are listed in Table 2.
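The utilization check amounts to summing execution-time/deadline ratios per queue. A small sketch, using illustrative (exec_time, deadline) pairs since Table 1's contents are not reproduced here:

```python
def utilization(tasks):
    """Utilization ratio of one scheduler queue: sum of exec_time / deadline
    over its task units. ERUA keeps this at or below 1 so that every
    queued task unit can still finish within its deadline."""
    return sum(exec_time / deadline for exec_time, deadline in tasks)

# Illustrative (exec_time, deadline) pairs for one queue, in ms:
queue = [(2, 6), (4, 11), (3, 13)]
u = utilization(queue)
print(round(u, 2))                     # 0.93: within 1, so the queue is feasible
```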
Figure 3. Initial Schedule Based on the Least Deadline.
Figure 4. T8 with high priority scheduled on to high priority queue.
Table 2. Consolidated task details with priority reassignment based on deadline.
The task T4 (12), with low priority, is scheduled on the medium priority queue after T7 (17), since T7 (17) completes 1 ms before T10 (9) in the low priority queue (Figure 5). The task T11 (13), with low priority, is scheduled on the low priority queue after T10 (9) (Figure 6). The task T9 (45), with high priority, is scheduled on the high priority queue after T8 (13), since T9 (45) has an earlier deadline than T1 (50) (Figure 7). The task T1 (50), with high priority, is scheduled on the medium priority queue after T4 (12), since T4 (12) completes 4 ms before T9 (45) (Figure 8).
Figure 5. T4 with the lower priority scheduled on to the idle medium priority queue.
Figure 6. T11 with the lower priority scheduled on to low priority queue.
Figure 7. T9 with the higher priority scheduled on to the high priority queue.
Figure 8. T1 with the higher priority scheduled on to the medium priority queue (idle).
When the Dynamic Priority Scheduling Algorithm (DPSA) is used to schedule the same set of tasks, the utilization ratio (Ui = Σ ei / di) is 1.12 (T1, T2, T6, T8 and T9) on the high priority queue, 0.33 (T3 and T7) on the medium priority queue and 1.28 (T5 and T10) on the low priority queue. DPSA violates the condition for effective utilization by exceeding 1, which hurts QoS by preventing many tasks from meeting their deadlines. ERUA schedules tasks such that the utilization ratio (Ui) is 0.95 for the high priority queue, 0.82 for the medium priority queue and 0.95 for the low priority queue.
IV PERFORMANCE ANALYSIS
Performance analysis is carried out by comparing ERUA with First Come First Serve (FCFS) (no priority), the Static Priority Scheduling Algorithm (SPSA) (fixed priority), Earliest Deadline First (EDF) and DPSA (Figure 9). The efficiency of our algorithm is measured as

Efficiency = (Number of Tasks Scheduled / Total Number of Tasks) x 100    (ii)
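The efficiency figures that follow come directly from this formula; for example, with the 11 task units of this schedule (a sketch, using the paper's counts):

```python
def efficiency(scheduled, total):
    """Percentage of task units that complete within their deadlines."""
    return 100.0 * scheduled / total

# Under FCFS, 8 of the 11 task units miss their deadlines,
# so only 3 are scheduled successfully:
print(round(efficiency(3, 11)))        # 27, matching FCFS's 27 % below
```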
Figure 9. The Efficiency Comparison of Five Algorithms.
If the same set of tasks is scheduled using FCFS, tasks T2, T4, T5, T6, T7, T8, T10 and T11 miss their deadlines. For SPSA, tasks T6 and T8 (high priority queue) and T10 (low priority queue) miss their deadlines. For EDF, tasks T4, T8, T11, T3 and T7 miss their deadlines. The efficiencies of FCFS (0.27), SPSA (0.72), EDF (0.54), DPSA (0.81) and ERUA (0.96) are plotted in the graph to illustrate the optimality of ERUA. ERUA proves to be an optimal service request scheduling algorithm through effective resource utilization.
As per this schedule:

    Algorithm   Efficiency (%)
    FCFS        27
    SPSA        72
    EDF         54
    DPSA        82
    ERUA        98

Table 3. Efficiency (%).
V CONCLUSION
Users focus on QoS, whereas service providers aim to maximize their profit. To satisfy both, an efficient service request scheduling algorithm is needed in a cloud computing platform. Our algorithm satisfies the requirements of both users and service providers through efficient scheduling and priority reassignment. It serves the user's SLA model and the service provider's cost model through dynamic resource reuse management. Our future work will evaluate the user's SLA model and the service provider's profit model under different load conditions.
VI REFERENCES
[1] Ana Juan Ferrer, Francisco Hernández, Johan Tordsson , Erik Elmroth, Ahmed Ali-Eldin, Csilla Zsigri, Raül Sirvent, Jordi Guitart,
Rosa M. Badia, Karim Djemamee, Wolfgang Ziegler, Theo Dimitrakos, Srijith K. Nair, George Kousiouris, Kleopatra Konstanteli,
Theodora Varvarigou, Benoit Hudzia, Alexander Kipp, Stefan Wesnerj, Marcelo Corrales, Nikolaus Forgó, Tabassum Sharif and Craig
Sheridan, “OPTIMIS: A holistic approach to cloud service provisioning”, Future Generation Computer Systems, 2012, pp 66–77.
[2] Aziz Murtazaev, Sangyoon Oh, “Sercon: Server Consolidation Algorithm using Live Migration of Virtual Machines for Green
Computing,” IETE Technical Review, 2011, pp 212-231.
[3] Dr. M. Dakshayini and Dr. H. S. Guruprasad, “An Optimal Model for Priority based Service Scheduling Policy for Cloud Computing
Environment”, International Journal of Computer Applications, 2011, pp 0975–8887.
[4] Geetha J, Rajeshwari S B, Dr. N Uday Bhaska and Dr. P Chenna Reddy, “An Efficient Profit-based Job Scheduling Strategy for
Service Providers in Cloud Computing Systems”, International Journal of Application or Innovation in Engineering & Management
(IJAIEM), 2013, pp 336-338.
[5] R. Gogulan, A. Kavitha and U. Karthick Kumar, “A Multiple Pheromone Algorithm for Cloud Scheduling With Various QoS Requirements”, International Journal of Computer Science Issues (IJCSI), May 2012, pp 232-238.
[6] Ioannis A. Moschakis and Helen D. Karatza, “Evaluation of gang scheduling performance and cost in a cloud computing system”, The
Journal of Supercomputing, 2012, pp 975-992.
[7] J. Chen, C. Wang, B. Zhou, L. Sun, Y. Lee and A. Zomaya, “Tradeoffs between profit and customer satisfaction for service
provisioning in the cloud”, International Symposium on High Performance Distributed Computing, 2011, pp 229-238.
[8] Keerthana Boloor, Rada Chirkova and Yannis Viniotis, “Dynamic request allocation and scheduling for context aware applications subject to a percentile response time SLA in a distributed cloud”, IEEE International Conference on Cloud Computing Technology and Science, 2011, pp 464-472.
[9] Linlin Wu and Rajkumar Buyya, “Service Level Agreement (SLA) in Utility Computing Systems”, Technical Report, 2010.
[10] Linlin Wu, Saurabh Kumar Garg and Rajkumar Buyya, “SLA-based Resource Allocation for Software as a Service Provider (SaaS) in
Cloud Computing Environments”, IEEE/ACM International Symposium on Cluster, Cloud and Grid Computing (CCGrid), 2011, pp
195-204.
[11] Linan Zhu, Qingshui Li and Lingna He, “Study on Cloud Computing Resource Scheduling Strategy Based on the Ant Colony
Optimization Algorithm”, International Journal of Computer Science Issues (IJCSI), 2012, pp 54-58.
[12] Noha El.Attar, Wael Awad and Fatma Omara, “Resource Provision for Services Workloads based on (RPOA)”, International Journal
of Computer Science Issues (IJCSI), 2012, pp 553-560.
[13] Rajiv Ranjan, Liang Zhao, Xiaomin Wu, Anna Liu, Andres Quiroz and Manish Parashar, “Peer-to-Peer Cloud Provisioning: Service
Discovery and Load-Balancing”, Cloud Computing Computer Communications and Networks, 2010, pp 195-217.
[14] Sourav Banerjee, Mainak Adhikary, Utpal Biswas, “Advanced Task Scheduling for Cloud Service Provider Using Genetic
Algorithm”, IOSR Journal of Engineering (IOSRJEN), July 2012, PP 141-147.
[15] Young Choon Lee and Albert Y. Zomaya, “Energy Efficient Utilization of Resources in Cloud Computing Systems”, The Journal of
Supercomputing, 2012, pp 268-280.
[16] Young Choon Lee, Chen Wang, Albert Y. Zomaya and Bing Bing Zhou, “Profit-Driven Service Request Scheduling In Clouds“,
IEEE/ACM International Conference on Cluster, Cloud and Grid Computing, pp 15-24.
[17] Zhipiao Liu, Shangguang Wang, Qibo Sun, Hua Zou and Fangchun Yang, “Cost-Aware Cloud Service Request Scheduling for SaaS
Providers”, The Computer Journal, 2013, pp 1-1.
[18] Zhongyuan Lee, Ying Wang and Wen Zhou, “A dynamic priority scheduling algorithm on service request scheduling in cloud
Computing”, International Conference on Electronic & Mechanical Engineering and Information Technology, 2011, pp 4665-4669.
Authors Profile
Ramkumar N, B.E., M.Tech. He received his degree in Electronics and Communication Engineering from Periyar Maniammai University, Thanjavur, Tamil Nadu, in 2012. He is currently pursuing his Master of Technology in Computer Science and Engineering at SASTRA University, Thanjavur, Tamil Nadu. His interests include cloud scheduling, virtualization, image processing and wireless sensor networks.
Nivethitha S, M.Sc., M.Tech. She received her degree in Software Engineering from Anna University, Chennai, Tamil Nadu, in 2011. She is currently pursuing her Master of Technology in Advanced Computing at SASTRA University, Thanjavur, Tamil Nadu. Her interests include cloud scheduling, virtualization and wireless sensor networks.
... Các nghiên cứu [13,14] lại tập trung vào lập lịch trên các yêu cầu để tiết kiệm điện năng trên các trung tâm dữ liệu. Các nghiên cứu gần đây của Ramkumar N [15] đã lập lịch trên các yêu cầu thời gian thực, sử dụng hàng đợi ưu tiên để ánh xạ yêu cầu vào tài nguyên nhưng chỉ tập trung lập lịch để giải quyết công việc một cách nhanh nhất thỏa mãn deadline của yêu cầu mà không quan tâm đến chi phí và ngân sách. Swarupa Irugurala [16] đưa ra thuật toán lập lịch với mục tiêu đem lại lợi nhuận cao nhất cho nhà cung cấp SaaS nhưng chỉ xem xét giữa hai loại chi phí: chi phí khởi tạo máy ảo và chi phí sử dụng máy ảo đã có để chọn tài nguyên. ...
... Tính ` của các ánh xạ trong T i như công thức (15), ` được tính trên máy ảo j của nhà cung cấp x được ánh xạ bởi t i . ...
... Điều này sẽ làm cho chi phí của thuật toán tuần tự tăng lên và mất một khoảng thời gian rất lớn để đưa ra lịch trình. Còn thuật toán EDF chỉ xem xét đến tỉ số sử dụng: n = ∑ (trong đó C i là thời gian thực hiện và T i tương ứng với deadline) [15,16] để ánh xạ các yêu cầu vào các tài nguyên. Do đó thuật toán EDF chỉ đảm bảo các yêu cầu hoàn thành trước deadline của nó chứ không quan tâm đến ngân sách cho các yêu cầu. ...
Article
Full-text available
The problem of admission control to schedule for user requirements is NP-complete [1] in cloud computing environment. To solve this problem it is usually to put building heuristic algorithms to form a simple algorithm with complex polynomial. In this paper, we propose an algorithm of admission control and a scheduling algorithm for user requirements based on the use of ACO algorithm (Ant Colony Optimization) and take advantage of validity period between the requirements so that the total cost of the system is minimal but still satisfying QoS (Quality of Service) constraints for the requirements. Two algorithms are set up and run a complete test on CloudSim. The experimental results show the effectiveness and superiority of the proposed algorithm in comparing with sequential and EDF (Earliest Deadline First) algorithms.
... It becomes easier to get the standard deviation with respect to the actual load VMs as formula (6) Any VM now has a processing time indicated by the equation (7) The mean processing time for all the VMs is also indicated as formula (8) From the above conclusive equations, it can be stated that whenever S.D. of the full VM becomes less or equal to mean, a balanced state of the system is experienced. In case S.D. is higher than the mean, the imbalance state prevails [7]. ...
... The study [8,9] focuses on the scheduling requirements for power savings on data center. The recent study by N. Ramkumar [10] of schedule in real-time requirements used for priority queues mapped into resource requirements but focused to solve scheduling tasks quickly satisfy most c 2015 Vietnam Academy of Science & Technology of the requirements deadline regardless of cost and its budget. S. Irugurala and K. S. Chatrapati [11] make scheduling algorithm with the objective to bring the highest return for SaaS providers but considering between the two types of costs: the cost of initializing virtual machine (VM) and the fee of virtual machine which are used to select resources. ...
Article
Full-text available
The goal of the SaaS provider is the most profitable; the user's goal is to meet requirements as quickly as possible but still within budget and deadline. In this paper, a heuristic ACO (Ant Colony Optimization) is used to propose an algorithm to admission control, then building a scheduling algorithm based on the overlapping time between requests. The goal of both algorithms is to minimize the total execution time of the system, satisfying QoS constraints for all requirements and provide the greatest returned profit for SaaS providers. These two algorithms are set up and run a complete test on CloudSim, the experimental results are compared with sequential and EDF (Earliest Deadline First) algorithms.
... This basic operation is repeated until either a solution is found or a stopping criterion is reached. So it has two main components a candidate generator which maps one solution candidate to a set of possible successors, and a evaluation criteria which ranks each valid solution, such that improving the evaluation leads to better solutions Ram Kumar et al. [16] proposed Effective Resource Utilization Algorithm (ERUA). This algorithm based upon the 3 tier architecture of cloud. ...
Article
Full-text available
With the rise of cloud figuring, processing assets (i.e., systems, servers, stockpiling, applications, and so forth.) are provisioned as metered on-request benefits over systems, and can be quickly dispensed and discharged with negligible management exertion. In the cloud registering worldview, the virtual machine (VM) is a standout amongst the most usually utilized asset units in which business administrations are epitomized. VM scheduling advancement, i.e., finding ideal position plans for VMs and reconfigurations as indicated by the evolving conditions, winds up testing issues for cloud framework suppliers and their clients.
Technical Report
Full-text available
It gives us immense pleasure to start the Editorial Note for the International Journal of Machine Learning and Networked Collaborative Engineering (IJMLNCE) ISSN No. 2581-3242, a quarterly published, Open Access, peer-reviewed, International Journal. In the New Year 2018, we would like to convey our warm greetings to each one of you. Our wish with the New Year brings happy research outcomes & brings happiness-prosperity in your lives. In the Volume No 02, Issue No 01, we are happy to write that our journal manuscript information available with CrossRef, CiteFactor, DRJI, Google Scholar, Index Copernicus, J-Gate, ROAD, and Scilit. In the Volume No. 02 Issue No. 01, we have five research papers, within the scope of the journal, which covers various aspects of machine learning and collaborative engineering. The first paper in this list is “Eight Legs Rimless Wheel Robot Model Driven on Level Ground Using one Actuator”, authored by Mohammad Farhan Ferdous. This paper explores how a rimless wheel can walk on the level ground with the help of actuators. A control framework has been set up to establish the fact they have proposed. The researchers have created a 4 DOF numerical model of an underactuated rimless while [1]. In the next paper, authored by Surbhi Sharma et al., “An Innovative Approach for Quick Shopping using QR Code”, has been widely demonstrated. This paper focuses on the advancement in virtual shopping via QR Code using smartphones, which can be simple and easily approachable as well as customer friendly. In this paper, the authors have represented an app where using the QR Code the URL can be found as well as the purchase of the product, order inclusion and bill generation after purchase is also possible using the technique. This work is an innovative concept in the world of digital marketing and management area doubtlessly [2]. The next paper in the list, authored by Nguyen Hoang Ha and Nguyen Hoang Nguyen, covers the area of cloud computing. 
The paper is based on a heuristic algorithm International Journal of Machine Learning and Networked Collaborative Engineering, ISSN: 2581-3242, Vol.2 No. 1 iii on Particle Swarm Optimization (PSO) on cloud computing. The authors have chosen SaaS providers as their target object and they have compared their results with the existing solutions available using CloudSim simulation. The work focuses on the issue of admission control and the schedule for the requirements of users toward of multiobjective optimization. The novelty of the work is due to the specific calculation of fitness function, the local best position of each particle and the global best position of the entire swarm [3]. Our fourth paper, authored by Sunil Kumar Joshi et al., has been selected from Mobile Ad Hoc networking area (MANET), again a highly demanding research area. This paper focuses on Multidimensional performance analysis for packet delivery and calculating the routing overhead. The authors have considered two routing protocols, namely AODV and AOMDV for their proposed work and correlation has been made between Ad-hoc On-Demand Distance Vector convention and Ad-hoc On Multipath Demand Distance Vector convention utilizing system test system. NS2 has been preferred as the simulation tool for the work. [4]. The last paper selected for this version, authored by Vishal Dutt, Akhansha Jain, and Abhilash Parashar, focuses on research centric approach and utilization of Big Data management in case of virtual shopping or window shopping. The main focus of their proposed work is on the consume factorization of contraptions, systems and the most steady information as for The Big data’s works out. They have shown how effective use of Big Data can give an association a centered favored edge and be of respect, rather than being satisfied to simply assemble and have the sensible edifying collection [5]. REFERENCES [1] Mohammad Farhan Ferdous (2018). 
Eight Legs Rimless Wheel Robot Model Driven on Level Ground Using one actuator. International Journal of Machine Learning and Networked Collaborative Engineering, 2(1), 1-7. https://doi.org/10.30991/IJMLNCE.2018v02i01.001 International Journal of Machine Learning and Networked Collaborative Engineering, ISSN: 2581-3242, Vol.2 No. 1 iv [2]. Surbhi Sharma(2018). An Innovative Approach for Quick Shopping Using QR-Code for Indian Precinct. International Journal of Machine Learning and Networked Collaborative Engineering, 2(1), 8-14. https://doi.org/10.30991/IJMLNCE.2018v02i01.00 [3]. Nguyen Hoang Ha (2018). A Scheduling Algorithm based on PSO Heuristic in Cloud Computing. International Journal of Machine Learning and Networked Collaborative Engineering, 2(1), 15-26. https://doi.org/10.30991/IJMLNCE.2018v02i01.00 [4]. Sunil Kumar Joshi (2018). Multidimensional Performance analysis for Packet delivery and routing overhead in AODV and AOMDV. International Journal of Machine Learning and Networked Collaborative Engineering, 2(1), 27-33. https://doi.org/10.30991/IJMLNCE.2018v02i01.00 [5] Vishal Dutt (2018). Usage of the Big Data Idea in Associations Potential Outcomes, Obstructions, and Difficulties. International Journal of Machine Learning and Networked Collaborative Engineering, 2(1),34-47. https://doi.org/10.30991/IJMLNCE.2018v02i01.00 Editor-in-Chief International Journal of Machine Learning and Networked Collaborative Vicente García-Díaz , Ph.D., University of Oviedo, Spain Vijender Kumar Solanki, Ph.D., CMR Institute of Technology, Hyderabad, TS, India DOI : https://doi.org/10.30991/IJMLNCE.2018v02i01
Chapter
This chapter contains sections titled: Cloud Computing Firewalls in Cloud and SDN Distributed Messaging System Migration Security in Cloud Conclusion
Chapter
Scheduling problem for user requests in cloud computing environment is NP-complete. This problem is usually solved by using heuristic methods in order to reduce to polynomial complexity. In this paper, heuristic ACO (Ant Colony Optimization) and PSO (Particle Swarm Optimization) are used to propose algorithms admission control, then building a scheduling based on the overlapping time between requests. The goal of this paper is (1) to minimize the total cost of the system, (2) satisfy QoS (Quality of Service) constraints for users, and (3) provide the greatest returned profit for SaaS providers. These algorithms are set up and run a complete test on CloudSim, the experimental results are compared with a sequential and EDF algorithms.
Article
Full-text available
Cloud computing refers to a model built on a pool of resources. The cloud delivers these computational resources (data, software and infrastructure) on demand among multiple services via a computer network, under the different load conditions of the cloud network. Users are charged for the resources they use, based on time. Hence, efficient utilization of cloud resources has become a major challenge in satisfying the user's requirements (QoS) and in producing a benefit for both the user and the service provider. In this paper, we propose a priority- and admission-control-based service scheduling policy that aims to serve user requests while satisfying the QoS, optimizing the time a service request spends in the queue, and achieving high throughput of the cloud by provisioning cloud resources efficiently.
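A priority-plus-admission-control policy of this kind can be sketched in a few lines: requests are admitted only while the queue has room, and the dispatcher always serves the highest-priority waiting request. The function and field names below are illustrative, not the paper's API.

```python
import heapq

def admit_and_schedule(requests, queue_limit):
    """Admit (priority, name) requests up to queue_limit, then serve them
    in priority order (smaller number = higher priority)."""
    queue, served, rejected = [], [], []
    for priority, name in requests:
        if len(queue) >= queue_limit:
            rejected.append(name)            # admission control: queue is full
        else:
            heapq.heappush(queue, (priority, name))
    while queue:
        served.append(heapq.heappop(queue)[1])
    return served, rejected
```

With a queue limit of 2 and requests `[(2, "a"), (1, "b"), (3, "c")]`, request "c" is refused and "b" is served before "a".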
Article
Full-text available
Job scheduling is one of the major activities in the cloud computing environment. Building on the concepts of the existing profit-based scheduling mechanism, we have designed an enhanced model that gives better profits to cloud service providers. Cloud users deploy job requests on the cloud. A set of jobs that are ready to execute is picked and arranged in non-decreasing order of their profits before execution. By carrying out these operations in parallel, the processing time is reduced. Thereby, we can guarantee better profits to the service providers along with better Quality of Service (QoS).
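The batch-selection step described above (pick the ready jobs, arrange them by profit in non-decreasing order) can be sketched as follows; the `ready` and `profit` fields are illustrative stand-ins for the paper's job attributes.

```python
def schedule_batch(jobs):
    """Pick the jobs that are ready to execute and arrange them in
    non-decreasing order of profit, as the abstract describes."""
    ready = [j for j in jobs if j["ready"]]
    return sorted(ready, key=lambda j: j["profit"])
```

For example, with one non-ready job in the mix, only the ready jobs come back, lowest profit first.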
Article
Full-text available
Fulfilling the requirements of different applications and services in a real Cloud environment is a very big challenge. The provisioning policies have to achieve availability by allocating the appropriate resource to the customer's services without any conflict in resource demands, while determining the right amount of resources required for the execution of the services. In this paper, a Resource Provision Optimal Algorithm (RPOA) based on Particle Swarm Optimization (PSO) is introduced and implemented to find a near-optimal resource allocation that minimizes both time and cost.
Article
Full-text available
As cloud computing becomes widely deployed, more and more cloud services are offered to end users in a pay-as-you-go manner. Today's increasing number of end user-oriented cloud services is generally operated by Software as a Service (SaaS) providers using virtual resources rented from third-party infrastructure vendors. As far as SaaS providers are concerned, how to process dynamic user service requests more cost-effectively without any SLA violation is an intractable problem. To deal with this challenge, we first establish a cloud service request model with SLA constraints, and then present a cost-aware service request scheduling approach based on a genetic algorithm. According to the personalized features of user requests and the current system load, our approach can not only lease and reuse virtual resources on demand to achieve optimal scheduling of dynamic cloud service requests in reasonable time, but can also minimize the rental cost of the overall infrastructure, maximizing SaaS providers' profits while meeting SLA constraints. The comparison of simulation experiments indicates that our proposed approach outperforms other revenue-aware algorithms in terms of virtual resource utilization, rate of return on investment and operating profit, and provides a cost-effective solution for service request scheduling in cloud computing environments.
Chapter
Full-text available
Clouds have evolved as the next-generation platform that facilitates creation of wide-area on-demand renting of computing or storage services for hosting application services that experience highly variable workloads and require high availability and performance. Interconnecting Cloud computing system components (servers, virtual machines (VMs), application services) through a peer-to-peer routing and information dissemination structure is essential to avoid the problems of provisioning efficiency bottleneck and single point of failure that are predominantly associated with traditional centralized or hierarchical approaches. These limitations can be overcome by connecting Cloud system components using a structured peer-to-peer network model (such as distributed hash tables (DHTs)). DHTs offer deterministic information/query routing and discovery with close to logarithmic bounds as regards network message complexity. By maintaining a small routing state of O(log n) per VM, a DHT structure can guarantee deterministic look-ups in a completely decentralized and distributed manner. This chapter presents: (i) a layered peer-to-peer Cloud provisioning architecture; (ii) a summary of the current state-of-the-art in Cloud provisioning with particular emphasis on service discovery and load-balancing; (iii) a classification of the existing peer-to-peer network management models with focus on extending the DHTs for indexing and managing complex provisioning information; and (iv) the design and implementation of a novel, extensible software fabric (Cloud peer) that combines public/private clouds, overlay networking, and structured peer-to-peer indexing techniques for supporting scalable and self-managing service discovery and load-balancing in Cloud computing environments.
Finally, an experimental evaluation is presented that demonstrates the feasibility of building next-generation Cloud provisioning systems based on peer-to-peer network management and information dissemination models. The experimental test-bed has been deployed on a public cloud computing platform, Amazon EC2, which demonstrates the effectiveness of the proposed peer-to-peer Cloud provisioning software fabric.
Article
Full-text available
The energy consumption of under-utilized resources, particularly in a cloud environment, accounts for a substantial amount of the actual energy use. Inherently, a resource allocation strategy that takes into account resource utilization would lead to a better energy efficiency; this, in clouds, extends further with virtualization technologies in that tasks can be easily consolidated. Task consolidation is an effective method to increase resource utilization and in turn reduces energy consumption. Recent studies identified that server energy consumption scales linearly with (processor) resource utilization. This encouraging fact further highlights the significant contribution of task consolidation to the reduction in energy consumption. However, task consolidation can also lead to the freeing up of resources that can sit idling yet still drawing power. There have been some notable efforts to reduce idle power draw, typically by putting computer resources into some form of sleep/power-saving mode. In this paper, we present two energy-conscious task consolidation heuristics, which aim to maximize resource utilization and explicitly take into account both active and idle energy consumption. Our heuristics assign each task to the resource on which the energy consumption for executing the task is explicitly or implicitly minimized without the performance degradation of that task. Based on our experimental results, our heuristics demonstrate their promising energy-saving capability. Keywords: Cloud computing; Energy-aware computing; Load balancing; Scheduling
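Because power scales roughly linearly with utilization but idle machines still draw power, consolidation heuristics pack tasks onto fewer, busier resources. The greedy sketch below captures that spirit, not the paper's exact ECTC/MaxUtil rules; the capacity and demand values are made-up utilization fractions.

```python
def consolidate(tasks, n_resources, cap=1.0):
    """Greedy consolidation sketch: place each task (a CPU-utilization
    demand in [0, 1]) on the busiest resource that can still hold it,
    so fewer machines stay active and idle power draw is reduced."""
    util = [0.0] * n_resources
    placement = []
    for demand in tasks:
        fits = [r for r in range(n_resources) if util[r] + demand <= cap]
        if not fits:
            raise ValueError("no resource can host the task")
        target = max(fits, key=lambda r: util[r])   # prefer the busiest fit
        util[target] += demand
        placement.append(target)
    return placement, util
```

With demands `[0.5, 0.3, 0.4]` and two resources, the first two tasks are packed onto resource 0 and only the third spills onto resource 1.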
Article
Cloud computing is one of the latest technologies and is developing rapidly. Today, many business organizations and educational institutions use Cloud environments. One of the most important goals is to increase the Quality of Service (QoS) of the system; to improve the QoS, one must reduce the waiting time of the system. The Genetic Algorithm (GA) is a heuristic search technique that produces optimal solutions for tasks. This work presents a GA-based scheduling algorithm to optimize the waiting time of the overall system. The cloud environment is divided into two main parts: the Cloud User (CU) and the Cloud Service Provider (CSP). The CU sends service requests to the CSP, and all the requests are stored in a Request Queue (RQ) inside the CSP, which communicates directly with the GA Module Queue Sequencer (GAQS). The GAQS operates in the background, like a daemon, and selects the best sequence of jobs to execute, minimizing the waiting time (WT) of the tasks using the Round Robin (RR) scheduling algorithm, and stores them in a Buffer Queue (BQ). The jobs are then scheduled by the Job Scheduler (JS), which selects from the resource pool (RP) the particular resource needed for execution.
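The GAQS idea of evolving a job sequence to minimize waiting time can be illustrated with a small permutation GA. This sketch leaves out the Round Robin dispatch stage and uses assumed parameters (population size, elitism fraction, mutation rate); it is not the paper's actual module.

```python
import random

def ga_order(burst, pop=30, gens=80, seed=7):
    """GA sketch in the spirit of the GAQS: evolve a job execution order
    (a permutation) that minimizes the total waiting time."""
    rng = random.Random(seed)
    n = len(burst)

    def waiting(order):
        t = total = 0
        for j in order:
            total += t          # job j waits for everything before it
            t += burst[j]
        return total

    popn = [rng.sample(range(n), n) for _ in range(pop)]
    for _ in range(gens):
        popn.sort(key=waiting)
        elite = popn[: pop // 2]            # keep the best half
        children = []
        while len(elite) + len(children) < pop:
            a, b = rng.sample(elite, 2)
            cut = rng.randrange(1, n)
            head = a[:cut]                  # order crossover
            child = head + [g for g in b if g not in head]
            if rng.random() < 0.2:          # swap mutation
                i, j = rng.randrange(n), rng.randrange(n)
                child[i], child[j] = child[j], child[i]
            children.append(child)
        popn = elite + children
    best = min(popn, key=waiting)
    return best, waiting(best)
```

For bursts `[5, 1, 3]` the GA recovers shortest-job-first order `[1, 2, 0]` with total waiting time 5, which is the known optimum for minimizing waiting time.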
Article
Virtualization technologies changed the way data centers of enterprises utilize their server resources. Instead of using dedicated servers for each type of application, virtualization allows viewing resources as a pool of unified resources, thereby reducing complexity and easing manageability. The server consolidation technique, which deals with reducing the number of servers used by consolidating applications, is one of the main applications of virtualization in data centers. The latter technique helps to use computing resources more effectively and has many benefits, such as reducing the costs of power and cooling, and hence contributes to the Green IT initiative. In a dynamic data center environment, where applications encapsulated as virtual machines are mapped to and released from the nodes frequently, reducing the number of server nodes used can be achieved by migrating applications without stopping their services, the technology known as live migration. However, live migration is a costly operation; hence, how to perform the periodic server consolidation operation in a migration-aware way is a challenging task. We propose a server consolidation algorithm, Sercon, which not only minimizes the overall number of used servers, but also minimizes the number of migrations. We verify the feasibility of our algorithm and show its scalability by conducting experiments with eight different test cases.
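One migration-aware consolidation pass can be sketched as: try to empty the least-loaded node by moving its VMs onto busier nodes, and perform the migrations only if the whole node can be released, which keeps the migration count low. This is a simplified illustration in the spirit of Sercon, with made-up load units, not the paper's exact algorithm.

```python
def sercon_step(nodes, cap=100):
    """One consolidation pass: nodes maps node name -> list of VM loads.
    Returns a migration plan (vm_load, src, dst) that fully empties the
    least-loaded node, or [] if the node cannot be emptied."""
    src = min(nodes, key=lambda n: sum(nodes[n]))
    free = {n: cap - sum(vms) for n, vms in nodes.items() if n != src}
    plan = []
    for vm in sorted(nodes[src], reverse=True):     # largest VMs first
        fits = [n for n in free if free[n] >= vm]
        if not fits:
            return []        # abort: migrating only some VMs gains nothing
        dst = min(fits, key=lambda n: free[n])      # best-fit target
        free[dst] -= vm
        plan.append((vm, src, dst))
    return plan
```

For instance, with loads `{"a": [50, 30], "b": [20], "c": [60]}` and capacity 100, node "b" is the consolidation candidate and its single VM best-fits onto "a", releasing "b" with one migration.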
Article
Cloud Computing refers to the notion of outsourcing on-site available services, computational facilities, or data storage to an off-site, location-transparent centralized facility or “Cloud.” Gang Scheduling is an efficient job scheduling algorithm for time sharing, already applied in parallel and distributed systems. This paper studies the performance of a distributed Cloud Computing model, based on the Amazon Elastic Compute Cloud (EC2) architecture, that implements a Gang Scheduling scheme. Our model utilizes the concept of Virtual Machines (or VMs) which act as the computational units of the system. Initially, the system includes no VMs, but depending on the computational needs of the jobs being serviced new VMs can be leased and later released dynamically. A simulation of the aforementioned model is used to study, analyze, and evaluate both the performance and the overall cost of two major gang scheduling algorithms. Results reveal that Gang Scheduling can be effectively applied in a Cloud Computing environment both performance-wise and cost-wise. Keywords: Cloud computing; Gang scheduling; HPC; Virtual machines
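The core gang-scheduling constraint (a job runs only when all of its VMs are available at once, leasing new VMs on demand up to a budget) can be illustrated with a toy simulation loop. Jobs here are submitted back-to-back and arrival times are ignored; the names and the `(size, length)` job format are illustrative, not the paper's model.

```python
def gang_schedule(jobs, max_vms):
    """Toy gang-scheduling loop: each (size, length) job needs all of its
    `size` VMs simultaneously; VMs are leased on demand up to max_vms and
    an arriving gang waits until enough are free."""
    leased = 0       # total VMs ever leased
    free = 0         # currently idle leased VMs
    t = 0            # simulated clock
    running = []     # (finish_time, gang_size)
    log = []         # (start_time, gang_size, newly_leased)
    for size, length in jobs:
        if size > max_vms:
            raise ValueError("gang larger than the VM budget")
        while free + (max_vms - leased) < size:
            # Not enough capacity: wait for the earliest gang to finish.
            running.sort()
            finish, done = running.pop(0)
            t = max(t, finish)
            free += done
        lease = max(0, size - free)      # lease only what is missing
        leased += lease
        free += lease - size
        running.append((t + length, size))
        log.append((t, size, lease))
    return leased, log
```

With a budget of 4 VMs and gangs `[(2, 5), (3, 4), (2, 3)]`, only 3 VMs are ever leased: the second gang reuses the first gang's VMs and leases one more, and the third gang reuses freed VMs entirely.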
Conference Paper
Cloud computing has been considered as a solution for solving enterprise application distribution and configuration challenges in the traditional software sales model. Migrating from traditional software to the Cloud enables ongoing revenue for software providers. However, in order to deliver hosted services to customers, SaaS companies have to either maintain their own hardware or rent it from infrastructure providers. This requirement means that SaaS providers will incur extra costs. While minimizing the cost of resources, it is also important to satisfy a minimum service level for customers. Therefore, this paper proposes resource allocation algorithms for SaaS providers who want to minimize infrastructure cost and SLA violations. Our proposed algorithms are designed to ensure that SaaS providers are able to manage the dynamic change of customers, map customer requests to infrastructure-level parameters and handle the heterogeneity of Virtual Machines. We take into account the customers' Quality of Service parameters, such as response time, and infrastructure-level parameters, such as service initiation time. This paper also presents an extensive evaluation study to analyze and demonstrate that our proposed algorithms minimize the SaaS provider's cost and the number of SLA violations in a dynamic resource sharing Cloud environment.