Figure 1. General framework of Internet of Things (IoT) + Fog + Cloud.


Source publication
Article
Full-text available
In the Internet of Things (IoT) + Fog + Cloud architecture, with the unprecedented growth of IoT devices, one of the challenging issues that needs to be tackled is to allocate Fog service providers (FSPs) to IoT devices, especially in a game-theoretic environment. Here, the issue of allocation of FSPs to the IoT devices is sifted with game-theoreti...

Contexts in source publication

Context 1
... differently from the Cloud, in Fog computing, data processing and storage are handled locally by the available Fog service providers (FSPs). The general framework consisting of Cloud computing, Fog computing, and IoT devices is depicted in Figure 1. In Fog computing, the number of FSPs is currently not ample. ...
Context 2
... the other hand, in the second case, the results obtained by comparing MTM-FSA and TM-FSA are given in Figure 10a-c. ...
Context 3
... Case 2: t_i = ∞. In Figure 10, MTM-FSA is compared with TM-FSA based on the second parameter, i.e., best allocation. It can be seen in Figure 10 that the number of best allocations made by MTM-FSA is higher than that of TM-FSA in all three scenarios. ...
Context 4
... Figure 10, MTM-FSA is compared with TM-FSA based on the second parameter, i.e., best allocation. It can be seen in Figure 10 that the number of best allocations made by MTM-FSA is higher than that of TM-FSA in all three scenarios. This is because, in MTM-FSA, no FSP is exclusively assigned to a particular user, and a single FSP can be allocated to multiple users. ...
Context 5
... is because, in MTM-FSA, no FSP is exclusively assigned to a particular user, and a single FSP can be allocated to multiple users. In that case, the number of users receiving their best preference among the available FSPs increases, as depicted in Figure 10. These simulation results also support the claim made in Lemma 3 that the expected number of first-preference allocations across all agents increases as the number of available slots increases. ...
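The effect described in this context can be illustrated with a small sketch: when each FSP may serve several users instead of just one, more users end up with their first-preference FSP. The greedy allocation below, the randomly generated preferences, and all parameter values are illustrative assumptions; this is not the authors' MTM-FSA.

```python
# Illustrative sketch (not the authors' MTM-FSA): a greedy allocation over
# randomly generated preferences, used only to show that letting each FSP
# serve several users ("more slots") raises the number of users who obtain
# their first-preference FSP, in line with the claim of Lemma 3.
import random

def count_first_preference(num_users, num_fsps, slots_per_fsp, seed=0):
    """Greedily allocate users to FSPs and count first-preference matches."""
    rng = random.Random(seed)
    capacity = {f: slots_per_fsp for f in range(num_fsps)}
    first_pref_hits = 0
    for _ in range(num_users):
        prefs = rng.sample(range(num_fsps), num_fsps)  # user's preference order
        for rank, fsp in enumerate(prefs):
            if capacity[fsp] > 0:      # FSP still has a free slot
                capacity[fsp] -= 1
                if rank == 0:          # user received its top choice
                    first_pref_hits += 1
                break
    return first_pref_hits

if __name__ == "__main__":
    for slots in (1, 2, 4):
        hits = count_first_preference(num_users=50, num_fsps=20, slots_per_fsp=slots)
        print(f"slots per FSP = {slots}: {hits} of 50 users got their first preference")
```

As the per-FSP slot count grows, the count of first-preference matches tends to grow with it, which is the qualitative trend reported in Figure 10.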
Context 6
... Figures 11a and 12b, MTM-FSA and FMTM-FSA are compared on the basis of the lateness incurred in scheduling the users' tasks. The x-axis represents the number of users, and the y-axis represents the total maximum lateness (in hours). ...
Context 7
... x-axis represents the number of users, and the y-axis represents the total maximum lateness (in hours). In Figure 11a,b, the simulations are performed on a small data set, whereas in Figure 12a,b, the simulations are performed on a large data set. It can be seen that, in MTM-FSA, the users' tasks are scheduled based on the random number assigned to each user. ...
Context 8
... x-axis represents the number of users, and the y-axis represents the total maximum lateness (in hours). In Figure 11a,b, the simulations are performed on a small data set, whereas in Figure 12a,b, the simulations are performed on a large data set. It can be seen that, in MTM-FSA, the users' tasks are scheduled based on the random number assigned to each user. ...
Context 9
... simulations are done for two different distributions, namely random distribution (RD) and normal distribution (ND). In Figure 11a, for the ND case, the mean (µ) and standard deviation (σ) for the processing time are 5 and 2, respectively. The mean and standard deviation for the deadline are taken as 8 and 3, respectively. ...
Context 10
... mean and standard deviation for the deadline are taken as 8 and 3, respectively. In Figure 11b, for the RD case, the processing time and deadline are generated randomly within [3,10] and [5,12], respectively. In Figure 11a,b, it can be seen that the total maximum lateness of FMTM-FSA is lower than that of MTM-FSA. ...
Context 11
... Figure 11b, for the RD case, the processing time and deadline are generated randomly within [3,10] and [5,12], respectively. In Figure 11a,b, it can be seen that the total maximum lateness of FMTM-FSA is lower than that of MTM-FSA. This is because FMTM-FSA schedules the users' tasks in earliest-deadline-first order, whereas MTM-FSA schedules them according to the random number assigned to each user. ...
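The mechanism just described, earliest-deadline-first (EDF) ordering versus a random ordering, is easy to reproduce in a few lines. The sketch below uses the small-data-set parameters quoted above (processing ~ N(5, 2) and deadline ~ N(8, 3) for ND; uniform [3, 10] and [5, 12] for RD), but the task count, the single-queue model, and the seed are our own assumptions; it is not the authors' simulation code and only shows why EDF keeps the maximum lateness lower.

```python
# Hedged sketch, not the authors' simulation code: it contrasts an
# earliest-deadline-first (EDF) ordering, as in FMTM-FSA, with a random
# ordering, as in MTM-FSA, on a single task queue. Distribution parameters
# follow the small-data-set case; the task count and seed are assumptions.
import random

def max_lateness(tasks):
    """tasks: list of (processing_time, deadline), processed in list order."""
    t, worst = 0.0, float("-inf")
    for p, d in tasks:
        t += p
        worst = max(worst, t - d)   # lateness = completion time - deadline
    return worst

rng = random.Random(42)

# ND case: processing ~ N(5, 2), deadline ~ N(8, 3); clip processing at 0.1 h.
nd_tasks = [(max(0.1, rng.gauss(5, 2)), rng.gauss(8, 3)) for _ in range(20)]
# RD case: processing drawn from [3, 10], deadline from [5, 12].
rd_tasks = [(rng.uniform(3, 10), rng.uniform(5, 12)) for _ in range(20)]

for name, tasks in (("ND", nd_tasks), ("RD", rd_tasks)):
    random_order = tasks[:]
    rng.shuffle(random_order)                      # MTM-FSA-like random order
    edf_order = sorted(tasks, key=lambda x: x[1])  # FMTM-FSA-like EDF order
    print(f"{name}: max lateness, random order = {max_lateness(random_order):.1f} h, "
          f"EDF order = {max_lateness(edf_order):.1f} h")
```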
Context 12
... it can be inferred that FMTM-FSA performs better than MTM-FSA on the basis of maximum lateness. In Figure 12a, for the ND case, the mean and standard deviation for the processing time are 75 and 10, respectively. The mean and standard deviation for the deadline are taken as 85 and 10, respectively. ...
Context 13
... mean and standard deviation for the deadline are taken as 85 and 10, respectively. In Figure 12b, for the RD case, the processing time and deadline are generated randomly within [50,100] and [75,110], respectively. In Figure 12a,b, it can be seen that the total maximum lateness of FMTM-FSA is lower than that of MTM-FSA, for the same reason as above. ...
Context 14
... Figure 12b, for the RD case, the processing time and deadline are generated randomly within [50,100] and [75,110], respectively. In Figure 12a,b, it can be seen that the total maximum lateness of FMTM-FSA is lower than that of MTM-FSA, for the same reason as above. ...

Similar publications

Article
The emergence of the Internet of Things (IoT) has paved the way for numerous activities leading to smart life, such as health care, surveillance, and smart cities. Since many IoT applications are real-time, they need prompt processing and actuation. To enable this, a network of Fog devices has been developed to provide services close to the data ge...

Citations

... Moreover, the dynamic nature of fog computing environments, characterized by varying resource availability and user demand, necessitates adaptive resource allocation approaches. [1] [2] Additionally, pricing mechanisms play a crucial role in incentivizing efficient resource utilization and optimizing the allocation of resources. By appropriately pricing resources, fog nodes can encourage users to make optimal decisions regarding resource consumption, considering the quality of service requirements, available resources, and demand fluctuations. ...
... The calculation of minimums consists of finding the first and second minimum price from the list of requests in L* and returning the per-unit price of the minima found. This would not happen when the sample data increases and becomes denser. ...
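The minimum-finding step quoted above can be written as a short routine; the function name, signature, and sample prices below are hypothetical, since the cited text only specifies returning the per-unit prices of the first and second minima found in the request list L*.

```python
# Hypothetical helper (names and sample prices are ours): return the first
# and second minimum per-unit prices from a list of requests, as the quoted
# step describes for the request list L*.
def first_and_second_minimum(prices):
    if len(prices) < 2:
        raise ValueError("need at least two price requests")
    first = second = float("inf")
    for p in prices:
        if p < first:
            first, second = p, first   # new overall minimum
        elif p < second:
            second = p                 # new second minimum
    return first, second

print(first_and_second_minimum([4.5, 2.0, 3.1, 2.7]))  # -> (2.0, 2.7)
```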
Article
Full-text available
Fog computing is a promising and challenging paradigm that enhances cloud computing by enabling efficient data processing and storage closer to data sources and users. This paper introduces a game-theoretic approach called GTRADPMFC (Game-Theoretic Resource Allocation and Dynamic Pricing Mechanism in Fog Computing) to address resource allocation and dynamic pricing challenges in fog computing environments with limited resources. The proposed model features non-cooperative competition among fog nodes for resources and dynamic pricing mechanisms to encourage efficient resource utilization. Theoretical analysis and simulations demonstrate that GTRADPMFC improves resource efficiency and overall fog computing system performance. Additionally, the paper discusses how to handle situations with insufficient samples and provide flexibility for users unable to meet completion time requirements. GTRADPMFC effectively manages resource allocation by establishing pricing in fog computing, considering potential delays in completion time. This is achieved through research, simulations, convergence analysis, complexity evaluation, and optimization guarantees.
... Given that applications for intelligent transportation systems and other delay-critical services sometimes need answers in milliseconds, this poses a severe difficulty. Fog computing [16], [17] is an emerging approach that addresses latency problems by enabling data analysis at the edge [18], [19], in close proximity to the IoT device sources [20], [21]. It is important to note that IoT devices have limited computational, network, and storage capabilities [22], [23]. ...
Article
Full-text available
For an extended period, a technological architecture known as cloud IoT has linked IoT devices to servers located in cloud data centers. Real-time data analytics are made possible by this, enabling better, data-driven decision making, optimization, and risk reduction. Since cloud systems are often located at a considerable distance from IoT devices, the rise of time-sensitive IoT applications has driven the requirement to extend cloud architecture for timely delivery of critical services. Balancing the allocation of IoT services to appropriate edge nodes while guaranteeing low latency and efficient resource utilization remains a challenging task, since edge nodes have lower resource capabilities than the cloud. The primary drawback of current methods in this situation is that they only tackle the scheduling issue from one side. Task scheduling plays a pivotal role in various domains, including cloud computing, operating systems, and parallel processing, enabling effective management of computational resources. In this research, we provide a multiple-factor autonomous IoT-Edge scheduling method based on game theory to solve this issue. Our strategy involves two distinct scenarios. In the first scenario, we introduce an algorithm containing choices for the IoT and edge nodes, allowing them to evaluate each other using factors such as delay and resource usage. The second scenario involves both a centralized and a distributed scheduling approach, leveraging the matching concept and considering each other. In addition, we also introduce a preference-based stable mechanism (PBSM) algorithm for resource allocation. In terms of the execution time for IoT services and the effectiveness of resource consolidation for edge nodes, the proposed technique achieves better results compared with the two commonly used Min-Min and Max-Min scheduling algorithms.
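For readers unfamiliar with the Min-Min baseline mentioned at the end of this abstract, the following is the textbook Min-Min heuristic in compact form; it is not the paper's PBSM or game-theoretic scheduler, and the example execution-time matrix is made up.

```python
# Textbook Min-Min heuristic, shown only as a reference baseline; it is not
# the paper's PBSM or game-theoretic scheduler, and the execution-time matrix
# below is made up.
def min_min_schedule(exec_time):
    """exec_time[task][node] = run time of task on node; returns task -> node."""
    ready = {n: 0.0 for n in next(iter(exec_time.values()))}  # node ready times
    unscheduled = set(exec_time)
    assignment = {}
    while unscheduled:
        # For each task, find its minimum completion time over all nodes,
        # then schedule the task whose minimum is smallest (the Min-Min rule).
        best_task, best_node, best_ct = None, None, float("inf")
        for t in unscheduled:
            for n, e in exec_time[t].items():
                ct = ready[n] + e
                if ct < best_ct:
                    best_task, best_node, best_ct = t, n, ct
        assignment[best_task] = best_node
        ready[best_node] = best_ct
        unscheduled.remove(best_task)
    return assignment

times = {"t1": {"f1": 4, "f2": 6}, "t2": {"f1": 3, "f2": 5}, "t3": {"f1": 7, "f2": 2}}
print(min_min_schedule(times))  # {'t3': 'f2', 't2': 'f1', 't1': 'f1'}
```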
... A problem of allocating FSPs to IoT devices is studied in [67]. Taking into account the heterogeneity of the system, the IoT devices are assumed to request different services periodically. ...
Article
Full-text available
Fog computing has been widely integrated into IoT-based systems, creating IoT-Fog-Cloud (IFC) systems that improve system performance and satisfy the quality of service (QoS) and quality of experience (QoE) requirements of the end users (EUs). This improvement is enabled by computational offloading schemes, which perform task computation near the task generation sources (i.e., IoT devices, EUs) on behalf of remote cloud servers. To realize the benefits of offloading techniques, however, there is a need to incorporate efficient resource allocation frameworks that can deal effectively with intrinsic properties of the computing environment in IFC systems, such as the resource heterogeneity of computing devices, varying requirements of computation tasks, high task request rates, and so on. While centralized optimization and non-cooperative game-theory-based solutions are applicable in a certain number of application scenarios, they fail to be efficient in many cases where global information and control might be unavailable or cost-intensive to obtain in large-scale systems. The need for distributed computational offloading algorithms with low computational complexity has motivated a surge of solutions using matching theory. In the present review, we first describe the fundamental concepts of this emerging tool that enable distributed implementation in the computing environment. Then the key solution concepts and algorithmic implementations proposed in the literature are highlighted and discussed. Although matching theory is a powerful tool, its full capability is still unexplored and unexploited in the literature. We thereby identify and discuss existing challenges and the corresponding solutions that matching theory can be applied to resolve. Furthermore, new problems and open issues for application scenarios of modern IFC systems are also investigated thoroughly.
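As a concrete anchor for the matching-theory tools this review surveys, here is a generic deferred-acceptance (Gale-Shapley-style) sketch that assigns tasks to capacity-limited fog nodes. It is not the algorithm of any particular paper cited here; the data structures, preference encoding, and the toy instance are illustrative assumptions.

```python
# Generic deferred-acceptance sketch (Gale-Shapley style) for assigning tasks
# to capacity-limited fog nodes. It is not the algorithm of any specific paper
# reviewed here; data structures and the toy instance are illustrative.
def deferred_acceptance(task_prefs, node_rank, capacity):
    """task_prefs: task -> ordered node list; node_rank: node -> {task: rank}."""
    free = list(task_prefs)                   # tasks that still need a match
    next_choice = {t: 0 for t in task_prefs}  # index of next node to propose to
    matched = {n: [] for n in node_rank}      # node -> tentatively accepted tasks
    while free:
        t = free.pop(0)
        if next_choice[t] >= len(task_prefs[t]):
            continue                          # task exhausted its preference list
        n = task_prefs[t][next_choice[t]]
        next_choice[t] += 1
        matched[n].append(t)
        if len(matched[n]) > capacity[n]:
            # Node keeps its best-ranked tasks and rejects the worst one.
            matched[n].sort(key=lambda x: node_rank[n][x])
            free.append(matched[n].pop())
    return matched

tasks = {"t1": ["f1", "f2"], "t2": ["f1", "f2"], "t3": ["f1", "f2"]}
ranks = {"f1": {"t1": 0, "t2": 1, "t3": 2}, "f2": {"t1": 0, "t2": 1, "t3": 2}}
print(deferred_acceptance(tasks, ranks, {"f1": 1, "f2": 2}))
# -> {'f1': ['t1'], 'f2': ['t2', 't3']}
```

The tentative accept-and-reject loop is what gives such schemes their distributed, low-complexity character, which is the property the review emphasizes.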
Article
Fog computing alleviates the cloud-centric limitations of the Internet of Things (IoT). However, in the dynamic landscape of fog computing, the uneven distribution of workload among fog nodes emerges as a substantial obstacle to both data latency and network profit. To mitigate workload imbalances, data packet offloading offers a twofold benefit: the offloading fog node achieves latency satisfaction, while the recipient fog node gains a financial advantage by leasing out its available processing resources. Motivated by these advantages, in this work we propose a novel load-balancing method to maximize monetary gains without affecting the Quality-of-Service (QoS) constraints of the subscribed IoT users in a biased fog network. The proposed method introduces an Optimized Matching Theory (OMAT)-guided data offloading framework, employing many-to-many matching without externalities. The method returns a novel matching among disparate fog nodes, thereby achieving uniform workload distribution. The obtained results demonstrate that the proposed method attains improved performance in terms of inverse latency, throughput, and non-matchings when compared to existing methods in the literature.
Chapter
This chapter provides a state-of-the-art review of matching-theory-based distributed computation offloading frameworks for IoT-fog-cloud (IFC) systems. In this review, the key solution concepts and algorithmic implementations proposed in the literature are highlighted and discussed thoroughly. Although matching theory is a powerful tool, its full capability is still unexplored and unexploited in the literature. We thereby identify and discuss existing challenges and the corresponding solutions that matching theory can be applied to resolve. Furthermore, new problems and open issues for application scenarios of modern IFC systems are also investigated.
Article
Internet of Things (IoT) devices have become part of our daily life. IoT applications are used in vast domains such as smart healthcare, smart cities, smart transportation, Industry 4.0, and so forth. However, many IoT applications come under the ultra-reliable and low-latency communications category; minimal execution time is crucial for such applications. Limitations such as network reliability and the cloud's multi-hop distance to the IoT devices can hinder the provision of efficient solutions for IoT applications. Fog computing has emerged as an important paradigm that extends cloud computing by delivering cloud-like services nearer to the end users. Placement of IoT applications onto the appropriate fog nodes has an important influence on the overall execution time of applications and the energy consumption of fog nodes. Efficiently deploying IoT applications to fog nodes is difficult due to two factors: fog nodes have varying processing capacities, and they are geographically located in different places from the IoT devices. Hence, this article proposes a decentralized bi-objective optimization application placement policy, DMAP, to minimize IoT applications' overall execution time and the energy consumption of fog nodes. A matching game methodology is used for mapping applications to fog nodes. The performance of DMAP is verified using large-scale simulation experiments. Experimental results show significant improvement in overall execution time, energy consumption, and scalability compared to existing solutions.