The P2PFaaS high-level software architecture.

Source publication
Article
In Edge and Fog Computing environments, it is common to design and test distributed algorithms that implement scheduling and load balancing solutions. The operating paradigm that usually fits this context requires users to make calls to the closest node for executing a task, and since the service must be distributed among a set of nodes, the serve...

Context in source publication

Context 1
... overall architecture of the framework is shown in Fig. 1, which depicts the main ...

Citations

... In such a case, processing can be done both on Edge and Cloud devices. The load balancing of such a network requires optimization [4]. Large research resources [12]. ...
... In technical terms, the scheduling algorithm schedules a job to a virtual machine (VM) according to the parameters listed in Equation 5. The ideal Q-value at any given time, t, shows that the best scheduling strategy has been selected to reduce task execution costs. ...
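A minimal sketch of the kind of Q-value-driven VM selection described in the excerpt above: the scheduler picks the VM whose Q-value predicts the lowest execution cost, then updates that estimate from the observed cost. All names, the cost model, and the hyperparameters are illustrative assumptions, not taken from the cited paper.

import random
from collections import defaultdict

ALPHA = 0.1      # learning rate
EPSILON = 0.05   # exploration probability

q = defaultdict(float)   # Q[(state, vm)] -> estimated (negative) execution cost

def schedule(state, vms):
    """Return the VM with the best Q-value (epsilon-greedy)."""
    if random.random() < EPSILON:
        return random.choice(vms)
    return max(vms, key=lambda vm: q[(state, vm)])

def update(state, vm, observed_cost):
    """Move the Q-value toward the (negative) observed execution cost."""
    target = -observed_cost
    q[(state, vm)] += ALPHA * (target - q[(state, vm)])

# toy usage
vms = ["vm-1", "vm-2", "vm-3"]
chosen = schedule("low-load", vms)
update("low-load", chosen, observed_cost=0.8)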
Preprint
Containerization became indispensable in distributed environments for packaging software and dependencies in a lightweight executable container. In the era of big data and the availability of cloud infrastructure, this is even more the case, as distributed applications are more resource- and data-intensive. Such High Performance Computing (HPC) applications are deployed as containerized services. However, the data-intensive nature of those applications leads to poor performance unless the scheduler accounts for it. In this paper, not only the load balancing of containers but also the performance of the underlying applications is considered. Towards this end, a scheduling algorithm with a unified optimization considering both load balancing and application performance is proposed. This algorithm, named Contention-aware Greedy Heuristic Scheduling and Load Balancing for Containers (CGHSLBC), helps improve the performance of containerization in distributed environments. The problems associated with containerization in terms of balancing load while preserving application performance are NP-hard. CGHSLBC uses heuristics to deal with such issues. An empirical study has revealed that CGHSLBC achieves better application performance besides balancing the load of containerized services in cloud infrastructure. We also propose a learning-based methodology to schedule and balance load for containers. It is based on Deep Reinforcement Learning (DRL), where state changes are continuously monitored while making well-informed scheduling decisions. An algorithm named Reinforcement Learning based Dynamic Scheduling (RLbDS) is proposed, and an empirical study has revealed that it performs better than state-of-the-art methods.
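A short sketch of a greedy, contention-aware container placement in the spirit of the heuristic described above: each container goes to the host that currently minimizes a combined load-plus-contention score. The scoring function and all data are assumptions for illustration, not the CGHSLBC formulation from the cited preprint.

def greedy_place(containers, hosts):
    """containers: list of (name, cpu, io); hosts: dict name -> {'cpu': 0, 'io': 0}."""
    placement = {}
    # place the most demanding containers first
    for name, cpu, io in sorted(containers, key=lambda c: c[1] + c[2], reverse=True):
        def score(h):
            load = hosts[h]["cpu"] + cpu          # CPU load after placement
            contention = hosts[h]["io"] * io      # crude I/O contention proxy
            return load + contention
        best = min(hosts, key=score)
        hosts[best]["cpu"] += cpu
        hosts[best]["io"] += io
        placement[name] = best
    return placement

print(greedy_place([("db", 2, 3), ("web", 1, 1), ("batch", 4, 2)],
                   {"h1": {"cpu": 0, "io": 0}, "h2": {"cpu": 0, "io": 0}}))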
... The algorithm given in [15] is focused on transferring the computing load to reduce delays when performing tasks in distributed systems. However, this algorithm does not take into account the characteristics of the computational task, in particular the computation time in the edge environment. ...
... Unlike the methods reported in [9,13,20,21], the proposed method takes into account the heterogeneity of the environment. Also, in contrast to the methods given in [15,18,19], the resource costs of the nodes of the edge environment are taken into account. Compared to the methods described in [16,17], the method of building a virtual cluster of the IoT edge environment increases the performance of the network. ...
Article
The object of research is the process of load distribution in the edge environment of the Internet of Things. The work addresses the task of improving the efficiency of a network of computing devices in the Internet of Things edge environment. Free resources of heterogeneous single-board computers were used to this end. In the course of the research, an approach to building an architecture for a virtual cluster of computers with limited resources was devised. The design took into account the specific features of the edge environment in the Internet of Things. This made it possible to propose a four-layer architecture instead of the standard seven-layer architecture of IoT sensor information processing device networks. Stages of virtual cluster construction in the edge environment of the Internet of Things were also defined. A three-stage procedure for forming a virtual cluster was justified. This procedure made it possible to devise a method for virtual clustering in the Internet of Things edge environment based on the proposed virtual cluster architecture. The proposed method for building a virtual cluster in the Internet of Things edge environment was investigated. With a small network load, a virtual cluster has no advantage over a classic cluster, but as the network load grows, the virtual cluster prevails over the classic cluster in total performance; the advantage in total performance can exceed 10 %. It was also proven that, for a heterogeneous environment, performance changes at full network load depend significantly on the number of virtual node groups. The research results on the method for building a virtual cluster in the Internet of Things edge environment can be explained by the improved balance of the network load under virtual clustering.
... They computed their offloading scheme based on the minimization of multiple factors that helped to create a balance between network quality and user experience. In [39], the authors presented the P2PFaaS framework, a software suite which enables the testing and benchmarking of scheduling and load-balancing algorithms among sets of real nodes. In [40], the authors explore recent articles to determine the possible research gaps and opportunities to implement an efficient solution for load balancing in fog environments, after analyzing and assessing the existing solutions. ...
Article
Recently, fog computing, an emerging and developing technology, has come to act as a layer between the cloud and the IoT worlds to deliver services directly to the network edge. Cloud or fog computing nodes can be selected by IoT applications to meet their resource needs. Due to the scarce resources available on fog devices, as well as the need to meet user demands for low latency and quick reaction times, resource allocation in the fog-cloud environment becomes a difficult problem. In this problem, load balancing between several fog devices is the most important element in achieving resource efficiency and preventing overload on fog devices. In this paper, a new adaptive resource allocation technique for load balancing in a fog-cloud environment is proposed. The proposed technique ranks each fog device using the hybrid multi-criteria decision-making approaches Fuzzy Analytic Hierarchy Process (FAHP) and Fuzzy Technique for Order Performance by Similarity to Ideal Solution (FTOPSIS), then selects the most effective fog device based on the resulting ranking set. The simulation results show that the proposed technique outperforms existing techniques in terms of load balancing, response time, resource utilization, and energy consumption. The proposed technique decreases the number of fog nodes by 11% and the load balancing variance by 69%, and increases resource utilization to 90%, which is comparatively higher than the comparable methods.
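For orientation, here is a crisp (non-fuzzy) TOPSIS-style ranking of fog devices by closeness to an ideal alternative, which is the basic idea behind the FTOPSIS step described above. The criteria, weights, and device data are made-up assumptions; the cited paper uses fuzzy AHP/TOPSIS variants rather than this plain version.

import math

def topsis(matrix, weights, benefit):
    """matrix[i][j]: device i, criterion j; benefit[j]: True if higher is better."""
    ncols = len(weights)
    norms = [math.sqrt(sum(row[j] ** 2 for row in matrix)) for j in range(ncols)]
    weighted = [[weights[j] * row[j] / norms[j] for j in range(ncols)] for row in matrix]
    ideal = [max(col) if benefit[j] else min(col) for j, col in enumerate(zip(*weighted))]
    anti  = [min(col) if benefit[j] else max(col) for j, col in enumerate(zip(*weighted))]
    scores = []
    for row in weighted:
        d_pos = math.dist(row, ideal)   # distance to the ideal device
        d_neg = math.dist(row, anti)    # distance to the anti-ideal device
        scores.append(d_neg / (d_pos + d_neg))
    return scores  # higher score = better-ranked fog device

# criteria: free CPU (benefit), free RAM (benefit), latency in ms (cost)
devices = [[0.6, 512, 20], [0.3, 1024, 35], [0.8, 256, 50]]
print(topsis(devices, weights=[0.5, 0.3, 0.2], benefit=[True, True, False]))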
... The proposed algorithm has been implemented in the P2PFaaS framework [17]. In particular, the modules that have been improved are the learner service, which implements the RL model and is also responsible for training the model and for the decision-making process, and the scheduler service, which implements the actual scheduling algorithm. ...
... The assigned deadline is set to the processing time of the image multiplied by a factor of 1.1, and requests for Image A and Image B are sent with a ratio of 1:1. The benchmark time is set to 1800 seconds, and the nodes are loaded heterogeneously, so every node experiences fixed traffic but is loaded differently from the other nodes; this traffic is distributed with the following values (in requests per second): 4, 6, 8, 12, 13, 14, 15, 16, 17, 18, 19. The "Dynamic Study" traffic is extracted from the New York City open data set, which estimates the average taxi traffic across several locations in the city. ...
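A small sketch of the experimental setup described in the excerpt above: per-request deadlines equal to the image processing time times 1.1, and a fixed but heterogeneous request rate per node. The per-node rates are taken from the excerpt; the image processing times and helper names are assumptions for illustration.

DEADLINE_FACTOR = 1.1
PROCESSING_TIME_S = {"image_a": 0.20, "image_b": 0.35}   # assumed values, not from the paper

# per-node arrival rates (requests/s) as listed in the excerpt
NODE_RATES = [4, 6, 8, 12, 13, 14, 15, 16, 17, 18, 19]

def deadline_for(image):
    """Deadline assigned to a request for the given image type."""
    return PROCESSING_TIME_S[image] * DEADLINE_FACTOR

for node_id, rate in enumerate(NODE_RATES):
    print(f"node {node_id}: {rate} req/s, "
          f"deadline A = {deadline_for('image_a'):.3f}s, "
          f"deadline B = {deadline_for('image_b'):.3f}s")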
Conference Paper
Fog and Edge Computing are two paradigms specifically suitable for real-time and time-critical applications, which are usually distributed among the set of nodes that constitutes the core idea of both Fog and Edge Computing. Since nodes are heterogeneous and subject to different traffic patterns, distributed scheduling algorithms are in charge of making each request meet the specified deadline. In this paper, we exploit the approach of Reinforcement Learning based decision-making for designing a cooperative and decentralized online task scheduling approach which is composed of two RL-based decisions: one for selecting the node to which to offload the traffic, and one for accepting or rejecting the incoming offloading request. The experiments that we conducted on a cluster of Raspberry Pi 4 show that introducing a second RL decision increases the rate of tasks executed within the deadline by 4%, as it introduces more flexibility during the decision-making process, consequently enabling better scheduling decisions.
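A sketch of the two-decision structure described in the abstract above: one learned policy picks an offloading target, and a second learned policy at the receiving node accepts or rejects the incoming task. Both policies are stubbed as trivial epsilon-greedy tables; the state encoding, actions, and rewards are illustrative assumptions, not the cited paper's design.

import random

class Policy:
    def __init__(self, actions, epsilon=0.1):
        self.q = {}
        self.actions = actions
        self.epsilon = epsilon

    def act(self, state):
        if random.random() < self.epsilon:
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.q.get((state, a), 0.0))

    def learn(self, state, action, reward, alpha=0.1):
        key = (state, action)
        self.q[key] = self.q.get(key, 0.0) + alpha * (reward - self.q.get(key, 0.0))

offload_policy = Policy(actions=["node-1", "node-2", "node-3"])   # where to offload
admission_policy = Policy(actions=["accept", "reject"])           # accept incoming task?

state = "queue=2,load=0.6"
target = offload_policy.act(state)
decision = admission_policy.act("remote_queue=1")
# after execution, reward could be +1 if the deadline was met, else 0
offload_policy.learn(state, target, reward=1.0)
admission_policy.learn("remote_queue=1", decision, reward=1.0)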
... In this section, we introduce and describe the implementation of the proposed RL scheduling policy in a pseudo-real setting by using our open-source framework "P2PFaaS" [43], which has been installed on 12 Raspberry Pi 4 boards (six with 4GB of RAM and six with 8GB of RAM) by using the OpenBalena framework, a partially open-source framework which allows deploying sets of Docker containers on a fleet of SBCs (Single-Board Computers). Indeed, the framework is composed of a set of Docker containers that implement scheduling, discovery, and scheduling policy learning capabilities. ...
Article
Fog Computing is a widely adopted paradigm that allows distributing the computation in a geographic area. This makes it possible to implement time-critical applications and opens the study to a series of solutions that permit smartly organizing the traffic among a set of fog nodes, which constitute the core of the Fog Computing paradigm. As a typical smart city setting is subject to a continuous change in traffic conditions, it is necessary to design algorithms that can manage all the computing resources by properly distributing the traffic among the nodes in an adaptive way. In this paper, we propose a cooperative and decentralized algorithm based on Reinforcement Learning that is able to perform online scheduling decisions among fog nodes. This can be seen as an improvement over the power-of-two random choices paradigm used as a baseline. By showing results from our delay-based simulator and then from our framework “P2PFaaS” installed on 12 Raspberry Pis, we show how our approach maximizes the rate of the tasks executed within the deadline, outperforming the power-of-two random choices both in a fixed load condition and with traffic extracted from a real smart city scenario.
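The power-of-two random choices paradigm used as the baseline above can be summarized in a few lines: sample two nodes at random and forward the task to the less loaded one. The load representation (queue length) and tie-breaking below are illustrative assumptions.

import random

def power_of_two_choice(loads):
    """loads: dict node -> current queue length; returns the chosen node."""
    a, b = random.sample(list(loads), 2)
    return a if loads[a] <= loads[b] else b

loads = {"n1": 3, "n2": 0, "n3": 5, "n4": 2}
target = power_of_two_choice(loads)
loads[target] += 1   # enqueue the task on the chosen node
print(target, loads)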
... 1) design of three decentralized algorithms which balance the energy consumption of the nodes by relying on cooperative task offloading; 2) benchmark of the algorithms in a cluster of 11 Raspberry Pi 4 SBCs by using FaaS as the task model and the P2PFaaS [6] framework for the implementation; 3) definition of a set of performance metrics targeting the behavior of the algorithms with respect to service availability, lifespan and lifespan variance; 4) comparison of the algorithms both in a standalone scenario and in a solar-panel-assisted scenario [7] which uses solar energy traces from real panels. The rest of the paper is organized as follows. In Section II we compare our work to other similar works in the literature which address energy and load balancing in Edge Computing, in Section III we provide the model of the system and the performance evaluation metrics, then in Section IV we illustrate the proposed algorithms and in Section V we provide the performance comparison of the algorithms in the experimental setting. ...
... The algorithms presented in Section IV have been implemented in the P2PFaaS framework [6], which we envisioned and implemented for testing distributed scheduling and load balancing algorithms. The paradigm used as a task model is Function-as-a-Service (FaaS); therefore, we envision that every node i ∈ N generates a rate λ_i of function execution requests per second, and the scheduling decision is made for each request upon its generation. ...
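A minimal sketch of the FaaS task model just described: each node i generates function-execution requests at rate λ_i (Poisson arrivals are assumed here), and a scheduling decision is taken per request at generation time. The scheduling hook is a placeholder, not the P2PFaaS implementation, and the λ values are made up.

import random

def generate_requests(lam, duration_s):
    """Yield request timestamps for a Poisson process of rate lam (req/s)."""
    t = 0.0
    while True:
        t += random.expovariate(lam)
        if t > duration_s:
            return
        yield t

def schedule(node_id, timestamp):
    # placeholder decision: execute locally or offload to a neighbour
    return node_id if random.random() < 0.5 else f"neighbour-of-{node_id}"

lambdas = {"node-0": 4.0, "node-1": 8.0}    # illustrative lambda_i values
for node, lam in lambdas.items():
    for ts in generate_requests(lam, duration_s=1.0):
        print(node, f"{ts:.3f}s ->", schedule(node, ts))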
... The purpose of this second experiment is to measure how the proposed algorithms behave when, as can happen in a real environment, the batteries are recharged during the day according to the solar activity. The amount of energy harvested by the solar panels used in the experiment has been taken from real home-designed solar panels over 3 days of activity; the data were re-scaled to match 3 hours under the conditions of the experiment, in which we suppose that each node is attached to its own polycrystalline solar panel with a rated maximum power of 14W. In this process, we suppose that the differences in the traces are due to the different geographical positions of the nodes. ...
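One simple reading of the re-scaling step above, sketched under stated assumptions: the 3-day harvested-power trace is compressed onto the 3-hour experiment timeline, with per-node power capped at the 14 W panel rating. The trace values and the pure time-axis compression are illustrative assumptions, not the cited paper's exact procedure.

PANEL_MAX_W = 14.0
ORIGINAL_SPAN_S = 3 * 24 * 3600      # 3 days of recorded activity
EXPERIMENT_SPAN_S = 3 * 3600         # 3-hour experiment window

def rescale_trace(trace):
    """trace: list of (t_seconds, power_watts) over 3 days -> 3-hour timeline."""
    factor = EXPERIMENT_SPAN_S / ORIGINAL_SPAN_S
    return [(t * factor, min(p, PANEL_MAX_W)) for t, p in trace]

raw = [(0, 0.0), (6 * 3600, 3.5), (12 * 3600, 12.8), (18 * 3600, 2.1)]
print(rescale_trace(raw))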
... The mechanism of transferring the computational load to reduce delays during the execution of tasks in accordance with the characteristics of containers is discussed in [12]. However, this algorithm does not take into account the characteristics of the computational problem, in particular the computation time in the fog environment. ...
... The value of the random variable X can be calculated based on expressions (12) and (16): ...
... This becomes possible through the use of an iterative virtual cluster search algorithm. In contrast to [12], where the mechanism of computational load transfer is considered, the proposed method takes into account the characteristics of the computational problem, in particular the computation time in the fog environment. At the same time, the computational complexity of the method is significantly lower than in the heuristic approach [13]. ...
Article
This study solves the task of redistributing the load in a geographically distributed fog environment in order to achieve load balance across virtual clusters. The necessity and possibility of developing a universal and, at the same time, scientifically grounded approach to load balancing has been determined. Object of study: the process of redistributing load in a fog environment between virtual, geographically distributed clusters. The load balancing method makes it possible to reduce delays and decrease the time for completing tasks on fog nodes, which brings task processing closer to real time. To solve the task, a mathematical model of the functioning of a separate cluster in a fog environment has been built. As a result of the modeling, the problem of finding the optimal distribution of tasks across the nodes of the virtual cluster was obtained. The constraints of the problem take into account the characteristics of the physical nodes supporting the virtual cluster. The process of distributing the additional load was also simulated through the graph representation of tasks entering virtual clusters. The task of devising a method for load transfer between virtual clusters within a fog environment is solved using the proposed iterative algorithm for finding a suitable cluster and placing the load. The simulation results showed that the balance of the fog environment when using the proposed method increases significantly provided the network load is small. The scope of application of the results includes geographically distributed fog systems, in particular the fog layer of the industrial Internet of Things. A necessary practical condition for using the proposed results is that the total load on the fog environment does not exceed the specified limit, usually 70 %.
Article
In the present era of seamless connectivity, which demands an enormous number of smart devices to be connected and to send data to the cloud, it is imperative to organize and process cloud-based smart Internet of Things (IoT) applications in real time. Hence, to support the continuous demand for scheduling real-time latency-sensitive tasks, the adaptability of fog computing is necessary, as it provides close proximity to the task-generating sources. Fog computing ensures optimal scheduling of latency-sensitive tasks by appropriate resource allocation considering dynamic user requirements. But the process of scheduling is an open challenge due to the limited availability and processing capacity of fog resources. Further, provisioning of an appropriate fog resource is also necessary for timely execution of tasks. Hence, the paper presents a novel task-scheduling heuristic algorithm, Matrix-based Task-Fog Pairing (MTFP), that aims to provide a feasible solution for fog resource provisioning to latency-sensitive tasks. The algorithm works on two different matrices, a compatibility matrix and an execution-time matrix, for scheduling priority tasks in order to achieve the desired Quality of Experience (QoE) for the end user. Finally, the proposed algorithm MTFP is compared with the present state of the art and shows improvements in terms of reducing task execution time by 18%, delay by 16% and energy consumption by 14.5%.
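An illustrative matrix-based task-to-fog pairing in the spirit of the compatibility and execution-time matrices mentioned above: among compatible fog nodes, each task (highest priority first) is paired with the node giving the smallest execution time. The data, the priority ordering, and the one-task-per-node restriction are assumptions for illustration, not the MTFP algorithm from the cited article.

def pair_tasks(compat, exec_time, priority):
    """compat[t][f]: 1 if task t can run on fog node f; exec_time[t][f]: seconds."""
    pairing = {}
    busy = set()
    for t in sorted(range(len(compat)), key=lambda t: priority[t], reverse=True):
        candidates = [f for f in range(len(compat[t]))
                      if compat[t][f] and f not in busy]
        if not candidates:
            continue                       # no feasible fog node; task is deferred
        best = min(candidates, key=lambda f: exec_time[t][f])
        pairing[t] = best
        busy.add(best)
    return pairing

compat    = [[1, 1, 0], [0, 1, 1], [1, 0, 1]]
exec_time = [[2.0, 1.5, 9.9], [9.9, 3.0, 2.5], [1.0, 9.9, 4.0]]
priority  = [2, 3, 1]
print(pair_tasks(compat, exec_time, priority))   # e.g. {1: 2, 0: 1, 2: 0}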