This work is licensed under a Creative Commons Attribution 4.0 International License,
which permits unrestricted use, distribution, and reproduction in any medium, provided the
original work is properly cited.
Tech Science Press
DOI: 10.32604/cmc.2024.050862
ARTICLE
Exploring Multi-Task Learning for Forecasting Energy-Cost Resource
Allocation in IoT-Cloud Systems
Mohammad Aldossary1,*, Hatem A. Alharbi2 and Nasir Ayub3
1Department of Computer Engineering and Information, College of Engineering, Prince Sattam Bin Abdulaziz University,
Wadi Al-Dawasir, 11991, Saudi Arabia
2Department of Computer Engineering, College of Computer Science and Engineering, Taibah University,
Madinah, 42353, Saudi Arabia
3Department of Creative Technologies, Air University Islamabad, Islamabad, 44000, Pakistan
*Corresponding Author: Mohammad Aldossary. Email: mm.aldossary@psau.edu.sa
Received: 20 February 2024 Accepted: 22 April 2024
ABSTRACT
Cloud computing has become increasingly popular due to its capacity to perform computations without relying
on physical infrastructure, thereby revolutionizing computer processes. However, the rising energy consumption
in cloud centers poses a significant challenge, especially with the escalating energy costs. This paper tackles this
issue by introducing efficient solutions for data placement and node management, with a clear emphasis on the
crucial role of the Internet of Things (IoT) throughout the research process. The IoT assumes a pivotal role in
this study by actively collecting real-time data from various sensors strategically positioned in and around data
centers. These sensors continuously monitor vital parameters such as energy usage and temperature, thereby
providing a comprehensive dataset for analysis. The data generated by the IoT is seamlessly integrated into the
Hybrid TCN-GRU-NBeat (NGT) model, enabling a dynamic and accurate representation of the current state of
the data center environment. Through the incorporation of the Seagull Optimization Algorithm (SOA), the NGT
model optimizes storage migration strategies based on the latest information provided by IoT sensors. The model is
trained using 80% of the available dataset and subsequently tested on the remaining 20%. The results demonstrate
the effectiveness of the proposed approach, with a Mean Squared Error (MSE) of 5.33% and a Mean Absolute Error
(MAE) of 2.83%, accurately estimating power prices and leading to an average reduction of 23.88% in power costs.
Furthermore, the integration of IoT data significantly enhances the accuracy of the NGT model, outperforming
benchmark algorithms such as DenseNet, Support Vector Machine (SVM), Decision Trees, and AlexNet. The NGT
model achieves an impressive accuracy rate of 97.9%, surpassing the rates of 87%, 83%, 80%, and 79%, respectively,
for the benchmark algorithms. These findings underscore the effectiveness of the proposed method in optimizing
energy efficiency and enhancing the predictive capabilities of cloud computing systems. The IoT plays a critical
role in driving these advancements by providing real-time data insights into the operational aspects of data centers.
KEYWORDS
Cloud computing; energy efficiency; data center optimization; internet of things (IoT); hybrid models
Published Online: 16 May 2024
CMC, 2024
1Introduction
Cloud computing has gained popularity as a storage option in the age of rapidly developing
technology, providing businesses with the opportunity to reduce hardware and acquisition costs [1].
The need for data centers has grown significantly as a result of this change, which is being caused by
the exponential expansion in data consumption. However, rising demand for data centers comes at a
price: These establishments currently account for a sizeable 3% of global energy usage. The logistics
industry is carefully investigating how distributed computing, virtualization, and the IoT could boost
efficiency as companies navigate this new environment.
Virtualized servers utilize up to 30% of their available resources, whereas non-virtualized servers
function at a meager 6%–15% of their capability [2]. Data center operators intentionally distribute
their facilities over many sites to guarantee dependability, employing replication techniques to ensure
smooth operations. The integration of IoT sensors in data centers becomes imperative to actively
monitor critical parameters like energy usage and temperature, providing a comprehensive dataset for
informed decision-making. Researchers are addressing the need for sustainable practices in the face of
swift advancements in technology [3]. They investigate various methods, such as determining the cost
of installing servers in various places and arranging nodes and data transmission routes optimally.
Energy forecasting and model planning are critical applications of machine learning, which uses
techniques like random forest, naive bayes, and decision trees. The utilization of IoT-generated real-
time data seamlessly integrated into machine learning models enhances the accuracy of predictions
and aids in optimizing energy efficiency.
Large-scale cloud data center construction is becoming increasingly prevalent in the logistics
sector, where big data, cloud computing, and the Internet of Things (IoT) play pivotal roles. However,
the high energy consumption associated with these institutions poses a significant environmental chal-
lenge. Scholars are currently exploring various approaches to mitigate energy usage while maintaining
peak efficiency and reliability. One area of research focuses on Virtual Machine (VM) consolidation,
which seeks to reduce energy consumption by consolidating idle virtual machines onto fewer servers.
Despite its potential benefits, the effectiveness of this approach varies depending on the nature of the
workload and may encounter challenges in unforeseen circumstances [4].
Another innovation aimed at reducing energy consumption in data centers is Dynamic Voltage
and Frequency Scaling (DVFS), which adjusts processor voltage and frequency in real-time. However,
accurately evaluating the characteristics of each workload remains a challenging task. Energy-efficient
job arranging seeks a balance between resource needs and utilization reduction. Suggested algorithms
include particle swarm optimization and genetic algorithms [5]. Researchers investigate innovative
uses such as distributed fault-tolerant storage, VM consolidation, energy-conscious task organization,
and learning algorithms for data centers in clouds that are adaptable. By using the linkages between
handling resources and energy price estimation activities, multi-task learning in conjunction with IoT
is a viable method for improving performance [6]. The development of novel energy-saving strategies
becomes critical given the increasing need for computing services. In the face of fluctuating deregulated
energy costs, cloud providers, entrusted with achieving government requirements and profit objectives
via service level agreements, oversee the intricacies of energy utilization [7]. To lower data center
operating costs and take advantage of energy price variations, researchers are investigating Machine
Learning (ML) techniques.
In this work, we have developed a complete framework intended to improve data center energy
efficiency and save power costs by integrating incoming IoT data strategically. Our study is a leading
contribution to the area as it highlights how important IoT-generated data is to enhancing data center
operating efficiency and obtaining a significant reduction in power costs.
1. IoT-driven data integration: Our work stands out for actively integrating IoT data through
strategically placed sensors around data centers. This unique approach allows us to monitor
energy usage and temperature, providing a foundational basis for optimizing energy efficiency
and elevating predictive capabilities.
2. Innovative Hybrid TCN-GRU-NBeat (NGT) model: We present the NGT model, a novel
framework adept at handling diverse data and optimizing storage in data center environments.
By seamlessly integrating TCN, GRU, and N-Beat capabilities, NGT emerges as a powerful
solution for efficient data processing.
3. Optimization with Seagull Optimization Algorithm (SOA): Notably, we fine-tune NGT param-
eters using SOA. This optimization process maximizes the potential of incoming IoT data,
significantly enhancing NGT’s performance in data centers, forecasting, and electricity con-
sumption reduction.
4. Exceptional performance results: Through active IoT monitoring and NGT data processing,
our approach achieves a remarkable MAE of 2.83% and MSE of 5.33%. This translates to an
impressive 24.87% average reduction in electricity expenses.
Importantly, these outcomes surpass existing literature models by 10% to 15%, validating
the efficacy of our proposed methodology in optimizing energy efficiency and predictive
capabilities.
2Related Work
Energy consumption and its environmental impact have become a growing concern, driven by
the increasing focus on sustainability across various industries. This study provides a concise review
of previous approaches to electricity demand prediction, acknowledging their limitations and aiming
to overcome them through the application of a Neural Network with Multi-Layer (MLNN) model
[8]. The study utilizes an ensemble method, combining several ML models to improve the precision
of power load and total consumption of electricity projections in logistics processes by utilizing the
capabilities of the IoT. The IoT-based method has issues with lengthier processing cycles and notable
loss rates in actual testing, despite its competitive accuracy.
In a hybrid approach named EPNet, combining Convolutional Neural Network (CNN) and Long
Short Term Memory (LSTM) models, presented in [9], energy price prediction is explored with a
reliance on the IoT infrastructure. Although the models required substantial processing power for real-
time predictions and exhibited significant error rates, they produced favorable outcomes. However, the
model’s applicability to real-time data was limited due to extensive dataset standardization. Another
model in [9], incorporating support vector regression with various optimization techniques, reported
a 6.82 MAE but faced significant computational costs and limited reliability for forecasts beyond
one day. In [10], an assessment of methods that use deep learning for calculating electricity and green
energy consumption was carried out. Although the results were competitive, real-time application was
hindered by high computational costs and testing losses. An inventive hybrid strategy for electricity
price forecasting, merging SVM and Kernel Principal Component Analysis (KPCA), showed promise
with low error rates [11]. However, its application to large datasets posed challenges and incurred
substantial processing costs.
In the logistics area, the suggested approach considers seasonal and regional fluctuations in
energy costs, utilizing NN-based and autoencoder models, as well as location-specific data gathering
[12]. Despite promising results from harnessing the capabilities of the IoT, the sophisticated deep learning
methods discussed in [13] face challenges with relatively high MAE and MSE values, leaving room
for improvement. A model in [14] employed dimension reduction to overcome over-fitting issues but
encountered difficulties in accurately estimating electricity costs. Strategies including DVFS and the
relocation of inactive VMs onto a smaller number of servers are used in the pursuit of energy-efficient cloud
data centers [15]. Nevertheless, these methods have issues, such as the curvilinear relationship between
the workload’s rate and voltage parameters.
With multi-task learning showing potential to enhance accuracy and efficiency, researchers
explore ML-based techniques for cutting energy use [16]. One feature-selection-focused model
achieved an MAE of 3.18 but was limited to offline prediction using a large dataset.
Studies combining power cost estimation and energy demand prediction with different algorithms,
such as Artificial Bee Colony and SVM [17], weighted kernel hybrid methodology [18], and Artificial
NN based strategy [19], have embraced the integration of the IoT in their methodologies. These studies,
while advancing the field, encountered challenges like computational costs, imprecise forecasts, and
inefficiency for real-time applications. Despite the abundance of research on energy price prediction,
most existing approaches are not computationally cost-effective for real-time application and struggle
to yield good results for the entire market with a low error rate. Researchers emphasize the need for
ongoing research into improving the precision, effectiveness, and real-time applicability of energy
consumption forecasting models [20].
3Theoretical Framework and Methodological Approach
This section encapsulates the core concepts that underpin our research projects, elucidating
the systematic approaches employed in conducting our investigations. The theoretical foundations
serve as guiding principles, facilitating our comprehension as we delve into the realms of cloud
computing, virtualization, content distribution delivery networks, intelligent systems, and the IoT.
Simultaneously, our methodology delineates the techniques applied to unravel the intricacies inherent
in these technological fields, as illustrated in Fig. 1.
Figure 1: Theoretical framework: a visual representation
3.1 Cloud Infrastructure and Virtualized Environments
With cloud computing, users can access pooled computing resources, revolutionizing the IT indus-
try. This paradigm enables rapid allocation and deallocation of resources with minimal management
overhead. Departing from the traditional reliance on local servers, cloud computing offers on-demand
access to configurable resources, resulting in cost savings, scalability improvements, and enhanced
flexibility in resource allocation [21]. This shift in focus from capital expenditures to pay-as-you-go
models based on actual usage contributes to reduced server expenses. Furthermore, the integration of
the Internet of Things (IoT) further amplifies the transformative potential of cloud computing in the
modern technological landscape. To meet the demands of today’s data-driven businesses, data center
organizations must continuously enhance their processing capabilities, software programs, and data
processing capabilities while incorporating IoT functionalities to keep pace with evolving technological
demands. The integration of IoT and cloud computing, particularly in Infrastructure as a Service
(IaaS), revolutionizes infrastructure management. IaaS billing is tied to computing power usage, and
various cloud platforms, such as open, closed, and mixed types (e.g., VMware and Nutanix), offer
diverse infrastructure approaches [22]. Mixed-data centers, blending private and public cloud services,
demonstrate efficiency by seamlessly integrating resources. Hybrid data centers, which integrate IoT
with shared and private clouds, enhance data processing and communication. In shared data centers,
where multiple organizations utilize shared resources, IoT optimizes resource allocation with real-time
usage data, promoting efficient sharing. In private data centers dedicated to a single organization,
IoT enhances security, monitors equipment health, and optimizes energy usage, thereby improving
overall performance. Hybrid data centers, combining shared and private elements, benefit from IoT’s
comprehensive approach, enabling seamless communication and data processing across the hybrid
environment [23,24]. This results in an adaptive, efficient model that combines the advantages of both
setups.
Virtualization streamlines server management by allowing multiple operating systems to run con-
currently, thereby improving efficiency. Collaborative IoT integration into the NGT model optimizes
energy efficiency and resource allocation in cloud computing, leading to significant cost reductions.
The IoT’s role involves collecting real-time data from strategically placed sensors to inform decisions,
enhancing predictive capabilities, and optimizing energy usage.
3.2 Content Distributed Delivery Network (CDN), Intelligent Systems, and Multi-Task Approach
Implementation
Content Distributed Delivery Network (CDN) and Intelligent Systems: CDNs are integral for
optimizing data delivery, strategically placing edge servers to reduce latency and enhance reliability.
Major platforms like Netflix and Amazon leverage CDNs, efficiently managing bandwidth costs
through clever algorithms. In cloud-based industrial IoT systems, integrating power price predictions
and resource management is crucial. Multi-tasking proves beneficial, enhancing performance by
collaboratively addressing dependencies, reducing training time and computing complexity, and improving
resilience [25]. The research focuses on leveraging multi-task learning to navigate energy price forecast
complexities in cloud-based industrial IoT systems, demonstrating its effectiveness in handling real-
world occurrences. The study explores integrating ML and deep learning skills in cloud computing to
anticipate future electricity prices and improve energy management. Neural network classifiers and
primary ML classifiers are evaluated, highlighting the potential of Artificial Intelligence (AI) and ML
techniques in addressing challenges in diverse scenarios.
4Proposed Model
This section discusses the application of forecasting methods for hourly and day-ahead power
price forecasting, employing single and hybrid models, as illustrated in Fig. 2. The Nord Pool spot
market, a crucial data source, provides hourly time series data, collected through IoT devices [26].
An in-depth data examination focuses on understanding information properties, with a particular
emphasis on insights derived from IoT-generated data. The testing and training process involves
meticulous data selection, incorporating the enriched dataset from IoT sources. Various ML models,
both single and hybrid, are deployed for evaluation, leveraging IoT-enhanced data to enhance model
robustness. Section 4 briefly covers the study of each model, highlighting the pivotal role of IoT-
generated data in advancing power price forecasting accuracy.
Figure 2: Data components of the time series data used
4.1 Dataset Description (Time Series Price Data)
The dataset under consideration, crucial for energy price forecasting (EPF) in Denmark and
surrounding nations, is acquired through IoT devices, enhancing its relevance and richness. It adheres
to a time series structure, with consecutive observations grouped chronologically at regular intervals,
such as hourly or daily readings. Fig. 2 portrays the decomposition of a time series dataset representing
spot prices in Euros (EUR) into its essential components: Trend and residuals. At the top of
the plot, the original time series data is depicted, showcasing the observed spot prices over time
without any decomposition. This visual representation offers insights into the f luctuations in spot
prices across different periods. Moving to the middle plot, the trend component is illustrated. This
component reflects the underlying long-term behavior or tendency in the data, smoothing out short-
term fluctuations to reveal the overall direction of spot prices over time. In the bottom plot, the
residuals are displayed. These residuals capture the variability or randomness in the original data
after removing the trend and any seasonal patterns, highlighting the unexplained fluctuations or noise
present in the dataset. By examining these residuals, analysts can gain deeper insights into the random
elements influencing spot price fluctuations.
The historical time series dataset encompasses hourly averages of day-ahead spot prices in both
DKK and EUR, with timestamps presented in Danish and UTC time zones. Geographical boundaries,
categorized by pricing regions (DK1 and DK2) based on the Great Belt, make this dataset notable
[26,27]. Its significance lies in its pivotal role in EPF, influencing decisions within the energy industry
over the last fifteen years. The dataset elements encompass Hour DK and Hour UTC, representing
time intervals in Danish and UTC time zones, respectively, and the integration of IoT data further
enriches its potential applications in energy analytics and forecasting.
These prices represent the fragile equilibrium between demand and supply in the market. This
dataset is a significant source for in-depth analysis of time series and multivariate single- and multi-step
power price forecasting because it contains 50,000 instances. This abundance of cases enables a thorough
investigation of patterns, developments, and variances, providing a deep
understanding of the dynamic retail energy market environment.
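The 80/20 chronological split and the MAE/MSE metrics used throughout the paper can be sketched as follows. This is an illustrative sketch only: the synthetic sine-plus-noise series stands in for the Nord Pool spot prices, and the split is chronological (no shuffling), as is appropriate for time series data.

```python
import numpy as np

def chronological_split(series, train_frac=0.8):
    """Split a time series chronologically (no shuffling) into train/test."""
    cut = int(len(series) * train_frac)
    return series[:cut], series[cut:]

def mae(y_true, y_pred):
    """Mean Absolute Error."""
    return float(np.mean(np.abs(y_true - y_pred)))

def mse(y_true, y_pred):
    """Mean Squared Error."""
    return float(np.mean((y_true - y_pred) ** 2))

# Synthetic hourly "spot prices" with a daily cycle, standing in for the real dataset.
rng = np.random.default_rng(0)
prices = 30 + 5 * np.sin(np.arange(1000) * 2 * np.pi / 24) + rng.normal(0, 1, 1000)

train, test = chronological_split(prices)
print(len(train), len(test))  # 800 200
```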
4.2 Proposed Hybrid Model
N-Beats Component: The N-Beats component is constructed as a collection of fully connected
blocks, each adept at capturing diverse temporal patterns across different time scales. The core
equation governing the output of a single block is articulated as follows [28]:
$$\hat{Y}_t^{(k)} = G^{(k)}(X_t) + \sum_{j=1}^{k} F_j^{(k)}\, \hat{Y}_{t-h_j}^{(j)} \tag{1}$$
where $\hat{Y}_t^{(k)}$ denotes the predicted output at time $t$ for the $k$-th block. This output amalgamates
information from both the generic neural network $G^{(k)}$ and the forecast functions $F_j^{(k)}$. $G^{(k)}(X_t)$
signifies the contribution from the generic neural network linked with the $k$-th block; this component
captures overarching patterns and trends in the time series data. The summation term $\sum_{j=1}^{k} F_j^{(k)} \hat{Y}_{t-h_j}^{(j)}$
encapsulates forecasts from individual forecast functions $F_j^{(k)}$, each applied to a lagged version of the
output $\hat{Y}_{t-h_j}^{(j)}$ from a specific block $j$. The inclusion of multiple forecast functions allows the model to
comprehend various aspects of temporal patterns at diverse time horizons [28]. $h_j$ denotes the backcast
horizon linked with the $j$-th forecast. This parameter specifies how far back in time the model looks
to make predictions, accommodating various time scales [28].
In essence, the N-Beats component exhibits a modular and adaptive structure, with each block
contributing to the overall prediction by capturing patterns specific to different temporal contexts.
This adaptability makes the N-Beats component highly effective for forecasting tasks in time series
analysis.
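The block-wise, residual structure described above can be illustrated with a minimal NumPy sketch. Random linear maps stand in for the trained fully connected layers, so this is an illustrative approximation of the N-Beats idea (each block explains part of the input via a backcast and adds its own forecast contribution), not the authors' implementation.

```python
import numpy as np

class NBeatsBlock:
    """Minimal N-Beats-style block: a linear map producing a backcast
    (the part of the input this block explains) and a forecast contribution.
    Weights are random here; in a real model they are learned."""
    def __init__(self, lookback, horizon, rng):
        self.Wb = rng.normal(0, 0.1, (lookback, lookback))  # backcast weights
        self.Wf = rng.normal(0, 0.1, (lookback, horizon))   # forecast weights

    def __call__(self, x):
        return x @ self.Wb, x @ self.Wf

def nbeats_forecast(x, blocks):
    """Doubly residual stacking: each block consumes the residual left by
    the previous backcast, and forecasts are summed across blocks."""
    residual, forecast = x, 0.0
    for block in blocks:
        backcast, f = block(residual)
        residual = residual - backcast   # remove what this block explained
        forecast = forecast + f          # accumulate forecast contributions
    return forecast

rng = np.random.default_rng(0)
blocks = [NBeatsBlock(lookback=24, horizon=6, rng=rng) for _ in range(3)]
yhat = nbeats_forecast(rng.normal(size=24), blocks)
print(yhat.shape)  # (6,)
```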
Gated Recurrent Unit (GRU) Component: A particular kind of Recurrent Neural Network (RNN)
called the GRU is made to successfully capture temporal relationships while overcoming issues like
vanishing gradients and computing efficiency. The GRU achieves this through the incorporation of
an update gate, a reset gate, and a memory cell, facilitating a dynamic balance between retaining past
information and adapting to new input [29].
Update Gate $U_t$: The update gate $U_t$ functions as a controller for determining the extent to which
the previous hidden state $H_{t-1}$ should be retained based on the current input $X_t$. Mathematically,
$U_t$ is computed as:
$$U_t = \sigma\left(W_U \cdot [H_{t-1}, X_t]\right) \tag{2}$$
Reset Gate $R_t$: The reset gate $R_t$ determines, in accordance with the current input $X_t$, how
much of the previous hidden state $H_{t-1}$ should be reset or forgotten. $R_t$ is computed as:
$$R_t = \sigma\left(W_R \cdot [H_{t-1}, X_t]\right) \tag{3}$$
New Memory Content $M_t$: By combining information from the reset gate, the previous hidden state, and
the current input, the candidate memory content $M_t$ serves as a potential hidden state. This can
be stated as:
$$M_t = \tanh\left(W_M \cdot [R_t \cdot H_{t-1}, X_t]\right) \tag{4}$$
Updated Hidden State $H_t$: The final hidden state $H_t$ is the sum of the newly acquired
memory content $M_t$ scaled by $U_t$ and the old hidden state $H_{t-1}$ scaled by $(1 - U_t)$. $H_t$ is
determined by:
$$H_t = (1 - U_t) \cdot H_{t-1} + U_t \cdot M_t \tag{5}$$
This equation illustrates how the GRU selectively updates and retains information over sequential
data, with the weight matrices WU,WR and WM being trainable parameters optimized during the
training process to enhance the model’s performance on specific tasks.
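Eqs. (2)–(5) can be sketched as a single GRU step in NumPy. This is a minimal sketch: biases are omitted, the weight shapes are hypothetical, and the gates follow the standard GRU formulation with sigmoid activations acting on the concatenation of the previous hidden state and the current input.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gru_step(h_prev, x_t, W_U, W_R, W_M):
    """One GRU step following Eqs. (2)-(5); biases omitted for brevity."""
    concat = np.concatenate([h_prev, x_t])
    u_t = sigmoid(W_U @ concat)                               # update gate, Eq. (2)
    r_t = sigmoid(W_R @ concat)                               # reset gate, Eq. (3)
    m_t = np.tanh(W_M @ np.concatenate([r_t * h_prev, x_t]))  # candidate memory, Eq. (4)
    return (1.0 - u_t) * h_prev + u_t * m_t                   # new hidden state, Eq. (5)

rng = np.random.default_rng(1)
hidden, inputs = 4, 3
W_U = rng.normal(size=(hidden, hidden + inputs))
W_R = rng.normal(size=(hidden, hidden + inputs))
W_M = rng.normal(size=(hidden, hidden + inputs))

h = np.zeros(hidden)
for _ in range(5):                                            # run a short sequence
    h = gru_step(h, rng.normal(size=inputs), W_U, W_R, W_M)
print(h.shape)  # (4,)
```

Because each step is a convex combination of the previous state and a tanh-bounded candidate, the hidden state stays within [-1, 1], which is part of what makes the GRU numerically stable over long sequences.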
TCN Component: At the core of the NGT model lies the Temporal Convolutional Network
(TCN), a crucial component that utilizes causal dilated convolutions for robust temporal dependency
modeling. The TCN generates the output $Z_t$ by applying a sequence of convolutions with dilated filters.
The expression for the TCN output is defined as follows [29]:
$$Z_t = \mathrm{ReLU}\left(\sum_{k=1}^{L} F_k * H_{t-d_k}\right) \tag{6}$$
where $\mathrm{ReLU}$ introduces non-linearity through the Rectified Linear Unit (ReLU) activation function,
$\sum_{k=1}^{L}$ signifies the summation across $L$ convolutional layers, $F_k * H_{t-d_k}$ denotes the convolution
operation, $F_k$ denotes the learnable convolutional filters, and $H_{t-d_k}$ denotes the input sequence shifted
by a dilation factor $d_k$.
Key Considerations:
Convolution Operation ($*$): This operation involves sliding a filter ($F_k$) across the input sequence
($H_{t-d_k}$), performing element-wise multiplication, and summing the results. This operation is
executed for each convolutional layer.
Number of Layers (L): L signifies the depth or the count of convolutional layers within the
TCN. Each layer captures distinct levels of temporal information.
Learnable Convolutional Filters (Fk): These filters represent trainable parameters that the
model adapts during the training phase. They play a pivotal role in extracting temporal features
from the input sequence.
Dilation Factors (dk): These factors dictate the spacing between values in a convolutional filter.
Dilated convolutions extend the receptive field without a surge in parameters.
The TCN component amplifies the NGT model’s proficiency in modeling prolonged dependen-
cies, capturing intricate temporal patterns in input data. Thoughtful adjustments to dilation factors
and the layer count contribute to the model’s prowess in understanding temporal dynamics, offering
adaptability across diverse time series forecasting scenarios.
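The causal dilated convolution at the heart of Eq. (6) can be sketched in plain NumPy. The loops are hand-rolled for clarity, and the fixed averaging filters are illustrative stand-ins for the learnable filters $F_k$; the key property shown is causality (the output at time $t$ only sees inputs at $t, t-d, t-2d, \ldots$).

```python
import numpy as np

def causal_dilated_conv(x, filt, dilation):
    """Causal 1-D convolution with dilation: the output at t depends only
    on inputs at t, t-d, t-2d, ...; the sequence is left-padded with zeros."""
    k = len(filt)
    pad = (k - 1) * dilation
    xp = np.concatenate([np.zeros(pad), x])
    return np.array([
        sum(filt[i] * xp[t + pad - i * dilation] for i in range(k))
        for t in range(len(x))
    ])

def tcn_stack(x, filters, dilations):
    """Stack of dilated causal convolutions with ReLU, in the spirit of Eq. (6).
    Doubling dilations (1, 2, 4, ...) grow the receptive field exponentially."""
    out = x
    for filt, d in zip(filters, dilations):
        out = np.maximum(causal_dilated_conv(out, filt, d), 0.0)  # ReLU
    return out

x = np.arange(8, dtype=float)
z = tcn_stack(x, filters=[np.array([0.5, 0.5])] * 3, dilations=[1, 2, 4])
print(z.shape)  # (8,)
```

With three layers and dilations 1, 2, and 4, the last output already sees seven past time steps, illustrating how dilation extends the receptive field without adding parameters.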
Integrated NGT Model (A Fusion of Strengths): The Integrated NGT model brings together
three powerful components—N-Beats, GRU and TCN—to create a robust forecasting framework for
electricity prices. This integration is a strategic decision aimed at harnessing the unique strengths of
each component.
N-Beats Contribution: The N-Beats component performs exceptionally well at identifying
relationships over time and global trends in the energy price time series. Its predictions
$\hat{y}_t^{(k)}$ contribute a high-level understanding of overarching trends.
GRU’s Temporal Insights: GRU, focusing on temporal dynamics, provides contextual informa-
tion through its hidden state (Ht). This encapsulates short to medium-term patterns, enriching
the model’s temporal understanding.
TCN’s Local Feature Extraction: TCN, with its strong ability to extract local features, con-
tributes the output (Zt) that captures intricate details within the electricity price time series.
The integration function, denoted as IntegratedNGT, orchestrates the combination of N-Beats
predictions, the GRU hidden state, and the TCN output. The integration equation, $Y_t = \mathrm{IntegratedNGT}(y_t, H_t, Z_t)$,
unifies these diverse components strategically. The integration is not a simple summation;
it’s a strategic blending of global, temporal, and local perspectives. This holistic approach ensures a
comprehensive understanding of intricate patterns within the time series.
Crucial Benefits of Integration: The success of integrated NGT lies in its ability to capture a broad
spectrum of patterns. This integration allows the model to address both overarching trends and subtle
local features, leading to superior forecasting performance. The integrated NGT model stands as
a testament to the power of integration, combining diverse components into a unified and potent
forecasting tool. This model is designed to provide accurate and robust predictions for electricity price
forecasting.
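One simple way to realize such a blending, under the assumption of a weighted sum as in the final-integration step of the paper's Algorithm 2 ($Y = w_{NBEATS} f_{NBEATS} + w_{GRU} f_{GRU} + w_{TCN} f_{TCN}$), is sketched below. The toy component forecasts and the equal weights are hypothetical; in the actual model the weights are tuned by the optimizer.

```python
import numpy as np

def integrated_ngt(f_nbeats, f_gru, f_tcn, weights):
    """Hypothetical weighted fusion of the three component forecasts:
    Y = w_NBEATS*f_NBEATS + w_GRU*f_GRU + w_TCN*f_TCN."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                      # normalize ensemble weights
    return w[0] * f_nbeats + w[1] * f_gru + w[2] * f_tcn

# Toy component forecasts for a 6-step horizon (EUR/MWh-style magnitudes).
f_nb  = np.full(6, 30.0)
f_gru = np.full(6, 32.0)
f_tcn = np.full(6, 28.0)

y = integrated_ngt(f_nb, f_gru, f_tcn, weights=[1.0, 1.0, 1.0])
print(y)  # [30. 30. 30. 30. 30. 30.]
```

With equal weights this reduces to a plain average; unequal weights let the ensemble lean on whichever component is most reliable for the current horizon.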
4.4 Optimizing Parameters with Seagull Optimization Algorithm (SOA)
In our integrated NGT model, fine-tuning hyperparameters and optimizing parameters are crucial
for peak forecasting performance. The SOA [15], inspired by seagulls’ foraging behavior, excels in
navigating intricate solution spaces, proving effective in various optimization tasks. SOA mimics
seagulls’ strategic foraging, adapting search patterns based on resource availability. Integrated with
the NGT model, SOA becomes vital in exploring the hyperparameter space, employing a population
of seagull agents to represent potential solutions, as in Algorithm 1.
Algorithm 1: SOA parameter optimization algorithm
1. Initialization:
Initialize a population of seagull agents with different sets of hyperparameters for the NGT model.
2. Initialize Iteration Counter:
Set iteration to 0.
3. While Not Converged:
a. Objective Function Evaluation:
i. Assess the performance of each seagull agent using the NGT model’s forecasting accuracy as the
objective function.
ii. Adaptation and Exploration: Dynamically adjust the positions of seagull agents based on
evaluation results, striking a balance between exploiting promising areas and exploring uncharted
regions.
b. Communication Mechanism:
i. Agents communicate and share information about promising regions, fostering collaborative
exploration of the parameter space.
ii. Update Iteration Counter: iteration + 1.
4. Optimal Solution Identification:
Converged to optimal or near-optimal solutions, leading to finely tuned hyper parameters for the NGT
model.
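The loop of Algorithm 1 can be sketched as a simplified population search. This stand-in omits the real SOA's migration and spiral-attack equations and uses a toy quadratic objective as a surrogate for the NGT model's validation error; it only illustrates the evaluate/adapt/share structure of the algorithm.

```python
import numpy as np

def seagull_style_search(objective, bounds, n_agents=20, n_iter=50, seed=0):
    """Simplified population search in the spirit of Algorithm 1: agents are
    evaluated, drift toward the best-known position (shared information), and
    keep a shrinking random exploration term. Illustrative only."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    agents = rng.uniform(lo, hi, size=(n_agents, len(lo)))
    best = agents[int(np.argmin([objective(a) for a in agents]))].copy()
    for it in range(n_iter):
        step = 1.0 - it / n_iter                  # exploration shrinks over time
        for i in range(n_agents):
            move = (best - agents[i]) * rng.uniform(0, 1)   # exploit best region
            noise = rng.normal(0, step, size=agents[i].shape)  # explore
            agents[i] = np.clip(agents[i] + move + noise, lo, hi)
        cand = agents[int(np.argmin([objective(a) for a in agents]))]
        if objective(cand) < objective(best):
            best = cand.copy()
    return best

# Toy objective standing in for NGT validation error; minimum at (1, 2).
obj = lambda p: (p[0] - 1.0) ** 2 + (p[1] - 2.0) ** 2
best = seagull_style_search(obj, (np.array([-5.0, -5.0]), np.array([5.0, 5.0])))
print(np.round(best, 1))
```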
4.5 Benefits of SOA in Parameter Optimization
Global Search Capability: SOA’s exploration strategy enables a global search in the hyper
parameter space, avoiding local minima.
Adaptability: The algorithm’s search approach is continually adjusted, enabling it to
explore complicated and dynamic solution spaces efficiently.
Convergence Speed: SOA often demonstrates faster convergence to optimal solutions compared
to traditional optimization algorithms.
The incorporation of the Seagull Optimization Algorithm (SOA) in parameter optimization
enhances the overall performance of the NGT model, contributing to its accuracy and robustness
in electricity price forecasting. The proposed framework process is shown in Algorithm 2.
Algorithm 2: Integrated NGT model
>Input: Time series dataset for electricity price forecasting.
>Processed Input: Preprocessed time series dataset.
>Output: Forecasted electricity prices.
1. N-Beats Component: Function $f_{NBEATS}(\hat{Y}_t^{(k)}, X_t)$.
2. GRU Component: Function $f_{GRU}(H_{t-1}, X_t)$.
3. TCN Component: Function $f_{TCN}(Z_t, X_t)$.
4. Integrated NGT Model: Function $\mathrm{IntegratedNGT}(y_t, H_t, Z_t)$.
5. SOA Optimization: SeagullOptimizationAlgorithm(parameters).
6. Procedure Data Preprocessing:
(a) Address missing values.
(b) Apply necessary imputation strategies.
(c) Conduct data scaling or normalization as required.
(d) Handle outliers or anomalies in the dataset.
7. Procedure N-Beats Component:
(a) Define the N-Beats forecasting function: $f_{NBEATS}(\hat{Y}_t^{(k)}, X_t)$.
8. Procedure GRU Component:
(a) Define the GRU forecasting function: $f_{GRU}(H_{t-1}, X_t)$.
9. Procedure TCN Component:
(a) Define the TCN forecasting function: $f_{TCN}(Z_t, X_t)$.
10. Procedure IntegratedNGT Model:
(a) Define the integration function: $\mathrm{IntegratedNGT}(y_t, H_t, Z_t)$.
(b) Combine the outputs of the NBEATS, GRU, and TCN components.
11. Procedure SOA Optimization:
(a) Initialize parameters for the SOA.
12. Procedure Optimization Objective Function:
(a) Independent function definition: $OJ_{w_i}^{NGT} = V[n_J, \mathrm{IntegratedNGT}(y_t, H_t, z_t)]$.
(b) Evaluate the performance of the Integrated NGT model.
13. Procedure Optimization Algorithm:
(a) Implement the SOA optimization: SOA (parameters).
(b) Fine-tune weights and parameters of the Integrated NGT model.
14. Procedure Ensemble Prediction:
(a) Combine predictions from the optimized Integrated NGT models.
(b) Ensemble prediction: zhyb =1
M×IntegratedNGT (yt,Ht,Zt).
15. Procedure Ensemble Weights:
(a) Define weightval ensemble: whyb =[we
GRU ,we
NBEATS,we
TCN ].
(b) Optimize ensemble weights using the Seagull Optimization Algorithm.
16. Procedure Optimization Objective Function for Ensemble:
(a) Define the ensemble objective function:
OJwe
hyb =V(bi,k
t=0we
L×[nJ,IntegratedNGT (yt,Ht,zt)].
17. Procedure Final Integration:
(a) Combine outputs of N-Beats, GRU and TCN with optimized ensemble weights.
(b) Final integration: IntegratedNGT (yt,Ht,Zt)=wNBEATS ×fNBEATS +wGRU ×fGRU +wTCN ×fTCN .
Output: Obtain the final forecasted electricity prices using the integrated and optimized NGT model.
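The integration step (Steps 14 and 17 of Algorithm 2) reduces to a weighted sum of the three component forecasts. The sketch below illustrates only that combination step: the three component functions are toy stand-ins (trend extrapolation, moving average, persistence), not the paper's trained N-Beats, GRU, and TCN networks, and the weights are illustrative values rather than the SOA-optimized ones.

```python
# Toy stand-ins for the three trained components. In the paper each one is a
# full neural network (N-Beats, GRU, TCN); here each is a one-line heuristic
# so the integration step itself can be shown.
def f_nbeats(y):
    return y[-1] + (y[-1] - y[-2])        # naive trend extrapolation

def f_gru(y):
    return sum(y[-3:]) / 3.0              # smoothed recent state

def f_tcn(y):
    return y[-1]                          # persistence baseline

def integrated_ngt(y, weights):
    """Step 17 of Algorithm 2: weighted sum of the component forecasts."""
    w_nbeats, w_gru, w_tcn = weights
    return w_nbeats * f_nbeats(y) + w_gru * f_gru(y) + w_tcn * f_tcn(y)

history = [60.0, 62.0, 64.0, 66.0]                    # recent hourly prices
forecast = integrated_ngt(history, (0.4, 0.3, 0.3))   # weights sum to 1
```

In the full model, the weights w_NBEATS, w_GRU, and w_TCN would come from the SOA-optimized ensemble step (Steps 15–16) rather than being fixed by hand.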
5 Cost-Effective Data Center Management
This section examines the economic value of offloading storage capacity to nodes within a single data center system under varying node measurements. Hourly energy-cost analysis shows that downloading data to nodes is consistently more economical than the alternatives. The model can accommodate updates by adding further data sources, such as message traffic from widely used social networks. Using a cell phone as an input device, the technique mimics real-world settings; Fig. 3 illustrates the framework's cost-effective setup and adaptability.
Eq. (7) describes the constraints on server data storage capacity and on the volume of data offloaded to nodes, which take integer and decimal values, respectively. Eq. (8) represents the objective function for minimizing overall cost while ensuring that the server data storage capacity matches predefined values. The constraints involve offloading data to nodes with specified integer (α_i) and decimal (β_j) values. The optimization model employs the P0 formulation, where α_i represents server i's storage capacity and β_j signifies the volume offloaded to node j. The cost factors include Γ for electricity and γ_k for node data storage; the symbols K and k designate the limits for servers and nodes, respectively. The price-spike threshold (γ_k) determines dataset storage decisions, and the objective function over (γ, δ) minimizes the cost estimate for the optimal γ and δ. The optimization process integrates these algorithmic structures: the techniques anticipate electricity prices by considering the P0 outputs together with real-time electricity prices, so the efficacy of the optimization depends on the precision of the price forecasts. A strength of the pricing framework is that it records hourly expenses based on anticipated prices [17].
minimize Cost = Σ_(i=1)^N α_i · β_j · γ · 10^(−6),
subject to  α_i ≤ α_k, i = 1, ..., N;   β_j ≤ γ_k, j = 1, ..., N    (7)

Σ_(i=1)^N α_i + Σ_(j=1)^M β_j = Σ_(i=1)^N α_k,   α_i, β_j ≥ 0, α_i, β_j ∈ Z^+, i = 1, ..., N, j = 1, ..., M    (8)

cost = Σ_(i=1)^N α_i · β_j · γ · 10^(−6)    (9)
Figure 3: Interconnection of data centers on cloud
Eq. (9) expresses a server's hourly energy usage per unit of storage (P2) space, with electricity denoting the adjusted cost in EUR. To ease comparison with the hourly price of electricity in EUR/MWh, this quantity is converted to EUR/Wh.
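A minimal sketch of the cost model in Eqs. (7)–(9), under assumed notation: `alpha[i]` is the storage kept on server i (in GB), `beta[j]` the volume offloaded to node j, and `gamma` the electricity price. The per-GB draw of 1.5 Wh per hour is an illustrative figure, not a value from the paper; the division by 1e6 is the EUR/MWh to EUR/Wh conversion described above.

```python
def storage_cost(alpha, gamma_eur_per_mwh, wh_per_gb_hour=1.5):
    """Hourly cost (EUR) of keeping `alpha` GB on servers, cf. Eq. (9).
    Dividing the EUR/MWh price by 1e6 converts it to EUR/Wh."""
    gamma_eur_per_wh = gamma_eur_per_mwh / 1e6
    return sum(a * wh_per_gb_hour * gamma_eur_per_wh for a in alpha)

def balanced(alpha, beta, total_demand):
    """Eq. (8): retained plus offloaded storage equals total demand,
    with all quantities non-negative."""
    return (sum(alpha) + sum(beta) == total_demand
            and all(x >= 0 for x in alpha + beta))

alpha = [100, 200, 150]   # GB kept on three servers
beta = [50, 50]           # GB offloaded to two nodes
ok = balanced(alpha, beta, 550)
hourly = storage_cost(alpha, gamma_eur_per_mwh=69.84)  # ~0.047 EUR/h
```

The offloading decision then amounts to comparing this retained-storage cost against the node storage cost γ_k for each candidate hour.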
5.1 Performance Evaluation
A set of metrics and visuals is employed to assess the regression analysis conducted on the time series data and to gauge the forecasting models' accuracy and efficacy. The goal is to evaluate how well the models represent the temporal trends and variances in the power price time series. The performance evaluation metrics are shown in Fig. 4.
Figure 4: Performance metric formulas used for evaluating the proposed model
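The exact formula layout of Fig. 4 is not recoverable from the text, but the metrics the evaluation reports (MAE, MSE, R²; cf. Table 3 and Fig. 8) have standard definitions, sketched here:

```python
def regression_metrics(y_true, y_pred):
    """Standard forecast-accuracy metrics: MAE, MSE, and R^2."""
    n = len(y_true)
    errors = [t - p for t, p in zip(y_true, y_pred)]
    mae = sum(abs(e) for e in errors) / n          # mean absolute error
    mse = sum(e * e for e in errors) / n           # mean squared error
    mean_true = sum(y_true) / n
    ss_tot = sum((t - mean_true) ** 2 for t in y_true)
    r2 = 1.0 - (mse * n) / ss_tot                  # 1 - SS_res / SS_tot
    return mae, mse, r2

# Toy hourly prices (actual vs. forecast):
mae, mse, r2 = regression_metrics([70, 72, 68, 75], [69, 73, 70, 74])
```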
6 Simulation Results
This section details the analysis conducted to achieve the stated objective and describes the comprehensive experimental setup. The experiments, executed in Python on the Google Colab platform, assessed the efficiency of parallel task execution for resource management and power price prediction in a fog-based industrial mechanism with real-time IoT data. A time series dataset from the Nord Pool Spot market was employed, comprising 50,000 instances of hourly power price observations in DKK and EUR. The data were split chronologically into 80% training and 20% testing samples. The integrated NGT architecture, incorporating N-Beats, GRU, and TCN components, was implemented according to the proposed hybrid model, with hyperparameters and configurations outlined in Table 1.
Table 1: Obtained optimized parameter values

Component | Parameter | Configuration
N-Beats | Block structure, learning rate, training epochs | 4 blocks with 2 fully connected layers; 0.001; 100
GRU | Hidden units, learning rate, training epochs | 64 hidden units in the GRU layer; 0.01; 50
TCN | Number of layers, dilation factors, learning rate, training epochs | 3 layers in the TCN with per-layer dilation factors; 0.005; 80
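The 80/20 split mentioned above is chronological rather than random, so the model is always tested on hours that come after its training window; a minimal sketch:

```python
def chronological_split(series, train_frac=0.8):
    """Split a time series in order: first 80% for training, last 20% for testing."""
    cut = int(len(series) * train_frac)
    return series[:cut], series[cut:]

prices = list(range(50000))              # stand-in for 50,000 hourly observations
train, test = chronological_split(prices)
```

Shuffling before splitting would leak future information into training, which is why the chronological order is preserved for price forecasting.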
The Seagull Optimization Algorithm (SOA) fine-tuned the model's hyperparameters with a population size of 20, terminating at a convergence threshold of 0.001. To ensure robustness, experiments were repeated around ten times, with new parameter sets and dataset shuffles introduced in each iteration to capture inherent variability and support statistically significant conclusions. A single CSV file covering historical data from 2017 to 2022 was generated for analysis; Fig. 5 shows the combined data as a time sequence, forming the basis for a three-stage study involving data investigation, price prediction, and process optimization.
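The SOA loop below is a heavily simplified, illustrative approximation of the published algorithm (migration toward the best seagull followed by a spiral "attack" around it). The population size of 20 and the 0.001 convergence threshold follow the paper; the spiral constants and the toy objective are assumptions, and the real objective would be the NGT model's validation error.

```python
import math
import random

def soa_minimize(objective, dim, bounds, pop=20, iters=100, tol=1e-3, seed=0):
    """Illustrative SOA sketch: migrate each seagull toward the current best,
    then spiral around it (attack phase). Not the exact published update rules."""
    rng = random.Random(seed)
    lo, hi = bounds
    flock = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(pop)]
    best = min(flock, key=objective)[:]
    best_val = objective(best)
    for t in range(iters):
        fc = 2.0 * (1 - t / iters)                    # control factor, decays 2 -> 0
        for s in flock:
            for d in range(dim):
                c_s = fc * rng.random() * s[d]        # collision avoidance
                m_s = 2 * fc * fc * rng.random() * (best[d] - s[d])
                d_s = abs(c_s + m_s)                  # migration distance to the best
                k = rng.uniform(0, 2 * math.pi)       # spiral attack angle
                spiral = math.exp(0.05 * k) * math.cos(k)
                s[d] = min(hi, max(lo, best[d] + d_s * spiral))
        cand = min(flock, key=objective)
        cand_val = objective(cand)
        if cand_val < best_val:
            improvement = best_val - cand_val
            best, best_val = cand[:], cand_val
            if improvement < tol:                     # converged: stop early
                break
    return best, best_val

# Toy objective standing in for the NGT model's validation error:
sol, val = soa_minimize(lambda v: sum(x * x for x in v), dim=3, bounds=(-5.0, 5.0))
```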
Figure 5: Data distribution 2017 to 2022
Key dataset statistics: mean 69.84, standard deviation 89.55, minimum 50.00, maximum 871.00. Fig. 5 depicts the price trends, emphasizing forecasting utility and pricing volatility. Occasional spikes approaching 871 EUR suggest opportunities to outsource data storage during periods of lower power cost. Examining power price behavior underscores the value of optimizing storage offloading for cost savings. The heatmap in Fig. 6 identifies closely connected factors, emphasizing the model's effectiveness in forecasting a single hour from date-related variables.
Figure 6: Autocorrelation of the dataset lags
Price Forecasting Results: Initially, the most important feature was identified and then used repeatedly to build the model. While the results in the next section show the outcome of the feature-selection method, Table 2 summarizes the various feature sets employed in the study [26,28]. Table 2 uses the notation R for the matching attribute and Sp_n for the lag. A negative lag is the time interval by which a following task may start before the previous one is finished; tasks that would normally be sequential can overlap or be combined in this way when they are not otherwise incompatible.
Table 2: Set of features

Notation | Set of features
α | A = {−1}
β | B = {−1, 1}
γ | C = {−1, 1, −2}
δ | D = {−1, 1, −2, 2}
ε | E = {−1, 1, −2, 2, −3}
ζ | F = {1, −1, −2, 2, −3, 24}
η | G = {−1, 1, −2, 2, −3, 24, 3}
θ | H = {−1, 1, −2, 2, −3, 24, 3, hour}
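Under the assumption that the entries of Table 2 are hour offsets relative to the forecast hour (so −1 is the previous hour and −24 would be the same hour a day earlier), building a feature set such as β = {−1, +1} amounts to indexing the series at those offsets:

```python
def make_lag_features(series, lags):
    """One (features, target) row per usable time step: the features are the
    series values at the given offsets from t, the target is the value at t."""
    back = max((-l for l in lags if l < 0), default=0)   # furthest backward lag
    fwd = max((l for l in lags if l > 0), default=0)     # furthest forward lag
    rows = []
    for t in range(back, len(series) - fwd):
        rows.append(([series[t + l] for l in lags], series[t]))
    return rows

prices = [50, 52, 51, 55, 60, 58]
rows = make_lag_features(prices, lags=[-1, 1])   # feature set beta from Table 2
```

Each larger set in Table 2 (γ through θ) simply widens the `lags` list, with θ additionally appending the hour-of-day as a feature.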
More features are not always a guarantee of better model performance. Adding the features of set A to set B in NGT raises the Mean Squared Error (MSE_R) but decreases the Mean Absolute Error (MAE_R), as shown in Table 3. Hours and lags are important characteristics of the best options for set H. Unlike SVR and DenseNet, NGT-SOA shows that adding more features often does not improve accuracy: when NGT-SOA is optimized with the SOA, the fewest features often yield the best results. Our model shows better accuracy when compared to existing approaches.
Table 3: Computed forecasting errors according to different feature sets

Techniques | Metric | α | β | γ | δ | ε | ζ | η | θ
Decision Trees | MSE_R | 40.638 | 30.318 | 28.998 | 27.678 | 26.358 | 26.356 | 26.35 | 26.352
Decision Trees | MAE_R | 32.873 | 22.553 | 21.233 | 19.913 | 18.593 | 18.591 | 18.59 | 18.587
DenseNet [16] | MSE_R | 24.918 | 23.598 | 22.278 | 20.958 | 19.638 | 19.636 | 19.63 | 19.632
DenseNet [16] | MAE_R | 17.152 | 15.832 | 14.512 | 13.192 | 11.872 | 11.87 | 11.87 | 11.866
AlexNet [27] | MSE_R | 26.533 | 25.213 | 23.893 | 22.573 | 21.253 | 21.251 | 21.25 | 21.247
AlexNet [27] | MAE_R | 18.768 | 17.448 | 16.128 | 14.808 | 13.488 | 13.486 | 13.48 | 13.482
SVR | MSE_R | 38.372 | 28.052 | 26.732 | 25.412 | 24.092 | 24.09 | 24.09 | 24.086
SVR | MAE_R | 30.606 | 20.286 | 18.966 | 17.646 | 16.326 | 16.324 | 16.32 | 16.32
NGT-SOA | MSE_R | 4.331 | 3.799 | 3.119 | 2.688 | 2.108 | 1.997 | 1.886 | 1.775
NGT-SOA | MAE_R | 2.46 | 1.885 | 1.244 | 0.997 | 0.664 | 0.553 | 0.442 | 0.331
This study employed Decision Trees, DenseNet, AlexNet, Support Vector Regression (SVR), and other power price forecasting methods. The NGT-SOA model, with optimized parameters, produced accurate forecasts for December 2023, as seen in Fig. 7. Fig. 8 compares the various regression techniques, highlighting NGT-SOA's consistent superiority with higher R², lower MAE and MSE values, and improved accuracy and recall. These results affirm the effectiveness of NGT-SOA in providing precise and reliable hourly spot price forecasts.
Figure 7: Forecasted prices for the last month of 2023
Optimization of Cloud Data Centers: Our cost-saving model, employing NGT-SOA and incorporating real-time IoT data, achieved a notable 25.31% reduction (CAD 1805.66). The model introduced randomness in server and node capacities, implementing a One-and-Off strategy for efficient server shutdowns during downloads. Nodes with capacities drawn from N(4, 2) GB, optimized for energy efficiency, were utilized. The analysis excluded data execution and transfer costs, underscoring the importance of a comprehensive understanding of cost savings through storage offloading. Despite these limitations, our results exceeded expectations, yielding up to 24.21% cost savings over two weeks for a four-server data center. Scaling to larger data centers (4,000 servers) could potentially yield annual savings of CAD 48.3 million, accounting for potential deviations in actual energy usage. Our findings underscore the impact of increasing server size on computation time in large data centers, discouraging node and server rescaling as it would not significantly influence outcomes. Maintaining consistent power consumption during data download is crucial for reducing storage-related power consumption. Reducing the number of nodes does not change estimated expenses or savings but ensures high throughput. For example, with J_i ~ N(2, 1), Y_N = 3, and j_i ~ N(910, 20), a 25.21% cost saving of CAD 1,000 was achieved, emphasizing the efficiency gains facilitated by IoT integration.
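A Monte Carlo sketch of the One-and-Off offloading study above: node capacities are drawn from a normal distribution N(4, 2) GB (reading the paper's "TN(4.2) GB" nodes that way), and servers shut down during price spikes when their data fits on a node. The price distribution, the 100 GB per server, and the 1.5 Wh per GB-hour draw are illustrative assumptions, so the resulting percentage is not the paper's 25.31% figure.

```python
import random

def simulate_savings(n_servers=4, weeks=2, mean_price=69.84, seed=1):
    """Monte Carlo sketch of the One-and-Off offloading strategy.
    Per hour: draw a spot price and a node capacity, and switch off servers
    whose data fits on the node whenever the price is above its mean."""
    rng = random.Random(seed)
    hours = weeks * 7 * 24
    baseline = with_offload = 0.0
    for _ in range(hours):
        price = max(0.0, rng.gauss(mean_price, 89.55))   # EUR/MWh (dataset stats)
        node_cap_gb = max(0.5, rng.gauss(4.0, 2.0))      # node capacity ~ N(4, 2) GB
        per_server = price / 1e6 * 1.5 * 100             # EUR/h: 100 GB at 1.5 Wh/GB-h
        baseline += n_servers * per_server
        if price > mean_price:                           # offload during price spikes
            servers_off = min(n_servers, int(node_cap_gb))  # one server per GB (toy)
            with_offload += (n_servers - servers_off) * per_server
        else:
            with_offload += n_servers * per_server
    return 100.0 * (baseline - with_offload) / baseline  # percent saved

saving_pct = simulate_savings()
```

Repeating such runs with different seeds mirrors the paper's repeated experiments for capturing variability in the savings estimate.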
Figure 8: Performance metrics results
Table 4 demonstrates the effectiveness of offloading storage to specific nodes, exploring varying storage-node prices and the resulting cost savings. Our results suggest that increasing the standard deviation (std.) significantly reduces the cost savings: for instance, raising it from 0.2 to 0.5 removed roughly half of the resources saved. The NGT-SOA technique proposed here holds substantial potential, especially in logistics and industrial IoT, benefiting sectors such as manufacturing, transportation, and energy with real-time electricity price forecasting for optimized operations and informed decisions. The scalable model offers a promising future for cloud-based resource management systems.
Table 4: Optimized cost results for each node

Standard dev. | Cost saving (%) | Cost saving (EUR)
0.3 | 19.23 | 1450.22
0.7 | 9.22 | 710.22
7 Conclusion
This study introduces a specialized deep learning model for predicting energy prices in Oslo's unique Norwegian energy market, incorporating the transformative potential of the IoT. The focus is on using the upward trajectory of power expenses to enhance energy efficiency in cloud data centers through strategic data offloading. Addressing the challenges posed by fluctuating power prices in the Nord Pool market, the study estimates Ontario's power revenues, specifically the typical spot price, for 2016–2023. Our cost-saving model, rigorously tested across various default settings, demonstrates a significant 65% cost reduction. Innovative data storage technologies, coupled with real-time IoT data integration, yield accurate price projections (MAE 5.42, MSE 2.89), routinely producing energy cost reductions of up to 29.66% in data centers. While these outcomes come from a smaller-scale platform, they suggest even higher savings in larger scenarios. Our framework will evolve by incorporating additional variables and addressing shortcomings, enhancing its applicability to a broader range of real-world situations. Including load and energy forecasts, particularly those leveraging IoT data, holds potential to further enhance forecasting performance in future iterations.
Acknowledgement: The authors would like to thank the Deanship of Scientific Research, Prince Sattam
bin Abdulaziz University, Al-Kharj, Saudi Arabia, for supporting this research.
Funding Statement: The authors extend their appreciation to Prince Sattam bin Abdulaziz University
for funding this research work through the Project Number (PSAU/2023/01/27268).
Author Contributions: The authors confirm contribution to the paper as follows: Study conception
and design: M.A, H.A.A, and N.A; data collection: N.A; analysis and interpretation of results: M.A,
H.A.A, and N.A; draft manuscript preparation: M.A, H.A.A, and N.A. All authors reviewed the
results and approved the final version of the manuscript.
Availability of Data and Materials: The authors confirm that the data supporting the findings of this
study are available within the article.
Conflicts of Interest: The authors declare that they have no conflicts of interest to report regarding the
present study.
References
[1] A. A. Almazroi and N. Ayub, “Multi-task learning for electricity price forecasting and resource management in cloud-based industrial IoT systems,” IEEE Access, vol. 1, no. 1, pp. 1–10, 2023. doi: 10.1109/ACCESS.2023.3280857.
[2] H. Shukur, S. Zeebaree, R. Zebari, D. Zeebaree, O. Ahmed and A. Salih, “Cloud computing virtualization
of resources allocation for distributed systems,” J. Appl. Sci. Technol. Trends., vol. 1, no. 3, pp. 98–105,
2020. doi: 10.38094/jastt1331.
[3] N. R. Moparthi, G. Balakrishna, P. Chithaluru, M. Kolla, and M. Kumar, “An improved energy-efficient cloud-optimized load-balancing for IoT frameworks,” Heliyon, vol. 9, no. 11, pp. 1–13, 2023. doi: 10.1016/j.heliyon.2023.e21947.
[4] Y. Saadi and S. El Kafhali, “Energy-efficient strategy for virtual machine consolidation in cloud environ-
ment,” Soft. Comput., vol. 24, no. 19, pp. 14845–14859, 2020.
[5] A. G. Gad, “Particle swarm optimization algorithm and its applications: A systematic review,” Arch.
Comput. Methods Eng., vol. 29, no. 5, pp. 2531–2561, 2022. doi: 10.1007/s11831-021-09694-4.
[6] S. Albahli, M. Shiraz, and N. Ayub, “Electricity price forecasting for cloud computing using an enhanced machine learning model,” IEEE Access, vol. 8, pp. 200971–200981, 2020. doi: 10.1109/ACCESS.2020.3035328.
[7] N. Ayub et al., “Big data analytics for short and medium-term electricity load forecasting using an AI
techniques ensembler,” Energies, vol. 13, no. 19, pp. 5193, 2020. doi: 10.3390/en13195193.
[8] M. Mishra, J. Nayak, B. Naik, and A. Abraham, “Deep learning in electrical utility industry: A comprehensive review of a decade of research,” Eng. Appl. Artif. Intell., vol. 96, pp. 104000, 2020. doi: 10.1016/j.engappai.2020.104000.
[9] H. Hamdoun, A. Sagheer, and H. Youness, “Energy time series forecasting—Analytical and empirical
assessment of conventional and machine learning models,” J. Intell. Fuzzy Syst., vol. 40, no. 6, pp. 12477–
12502, 2021. doi: 10.3233/JIFS-201717.
[10] P. W. Khan et al., “Machine learning-based approach to predict energy consumption of renewable and
nonrenewable power sources,” Energies, vol. 13, no. 18, pp. 4870, 2020. doi: 10.3390/en13184870.
[11] X. Zhao et al., “A novel short-term load forecasting approach based on kernel extreme learning machine:
A provincial case in China,” IET Renew. Power Gener., vol. 16, no. 12, pp. 2658–2666, 2022. doi:
10.1049/rpg2.12373.
[12] B. D. Dimd, S. Völler, U. Cali, and O. M. Midtgård, “A review of machine learning-based photovoltaic output power forecasting: Nordic context,” IEEE Access, vol. 10, pp. 26404–26425, 2022. doi: 10.1109/ACCESS.2022.3156942.
[13] R. Wazirali, E. Yaghoubi, M. S. S. Abujazar, R. Ahmad, and A. H. Vakili, “State-of-the-art review on
energy and load forecasting in microgrids using artificial neural networks, machine learning and deep
learning techniques,” Elect. Power Syst. Res., vol. 225, pp. 109792, 2023. doi: 10.1016/j.epsr.2023.109792.
[14] N. Abbasabadi, M. Ashayeri, R. Azari, B. Stephens, and M. Heidarinejad, “An integrated data-driven
framework for urban energy use modeling (UEUM),” Appl. Energy, vol. 253, no. 1, pp. 113550, 2019. doi:
10.1016/j.apenergy.2019.113550.
[15] M. Noshy, A. Ibrahim, and H. A. Ali, “Optimization of live virtual machine migration in cloud
computing: A survey and future directions,” J. Netw. Comput. Appl., vol. 110, pp. 1–10, 2018. doi:
10.1016/j.jnca.2018.03.002.
[16] X. Wang, S. X. Wang, Q. Y. Zhao, S. M. Wang, and L. W. Fu, “A multi-energy load prediction model
based on deep multi-task learning and ensemble approach for regional integrated energy systems,” Int. J.
Electrical Power Energy Syst., vol. 126, pp. 106583, 2021. doi: 10.1016/j.ijepes.2020.106583.
[17] L. Melgar-García, D. Gutiérrez-Avilés, C. Rubio-Escudero, and A. Troncoso, “A novel distributed forecast-
ing method based on Information Fusion and incremental learning for streaming time series,” Inf. Fusion,
vol. 95, pp. 163–173, 2023.
[18] H. Peng, W. S. Wen, M. L. Tseng, and L. L. Li, “A cloud load forecasting model with nonlinear changes
using whale optimization algorithm hybrid strategy,” Soft. Comput., vol. 25, no. 15, pp. 10205–10220, 2021.
[19] A. Nespoli, S. Leva, M. Mussetta, and E. G. C. Ogliari, “A selective ensemble approach for accuracy improvement and computational load reduction in ANN-based PV power forecasting,” IEEE Access, vol. 10, pp. 32900–32911, 2022.
[20] C. Deb, F. Zhang, J. Yang, S. E. Lee, and K. W. Shah, “A review on time series forecasting techniques for
building energy consumption,” Renew Sustain Energ. Rev., vol. 74, pp. 902–924, 2017.
[21] B. Alankar, G. Sharma, H. Kaur, R. Valverde, and V. Chang, “Experimental setup for investigating the
efficient load balancing algorithms on virtual cloud,” Sensors, vol. 20, no. 24, pp. 7342, 2020.
[22] A. Katal, S. Dahiya, and T. Choudhury, “Energy efficiency in cloud computing data centers:
A survey on software technologies,” Cluster Comput., vol. 26, no. 3, pp. 1845–1875, 2023. doi:
10.1007/s10586-022-03713-0.
[23] Y. Wen, Y. Chen, M. L. Shao, J. L. Guo, and J. Liu, “An efficient content distribution network architecture using heterogeneous channels,” IEEE Access, vol. 8, pp. 210988–211006, 2020. doi: 10.1109/ACCESS.2020.3037164.
[24] A. Javadpour, G. Wang, and S. Rezaei, “Resource management in a peer to peer cloud network for IoT,”
Wireless Pers. Commun., vol. 115, no. 3, pp. 2471–2488, 2020. doi: 10.1007/s11277-020-07691-7.
[25] L. Nie et al., “Network traffic prediction in industrial internet of things backbone networks: A
multitask learning mechanism,” IEEE Trans. Ind Inform., vol. 17, no. 10, pp. 7123–7132, 2021. doi:
10.1109/TII.2021.3050041.
[26] Nord Pool, “Electricity load dataset,” 2023. Accessed: Jan. 14, 2024. [Online]. Available:
https://www.nordpoolgroup.com/en/
[27] T. Van der Heijden, J. Lago, P. Palensky, and E. Abraham, “Electricity price forecasting in European day-ahead markets: A greedy consideration of market integration,” IEEE Access, vol. 9, pp. 119954–119966, 2021. doi: 10.1109/ACCESS.2021.3108629.
[28] B. N. Oreshkin, G. Dudek, P. Pełka, and E. Turkina, “N-BEATS neural network for mid-term electricity
load forecasting,” Appl. Energy, vol. 293, pp. 116918, 2021. doi: 10.1016/j.apenergy.2021.116918.
[29] Z. Zou, J. Wang, E. N., C. Zhang, Z. Wang, and E. Jiang, “Short-term power load forecasting: An integrated
approach utilizing variational mode decomposition and TCN-BiGRU,” Energies, vol. 16, no. 18, pp. 6625,
2023. doi: 10.3390/en16186625.
Forecasting renewable energy efficiency significantly impacts system management and operation because more precise forecasts mean reduced risk and improved stability and reliability of the network. There are several methods for forecasting and estimating energy production and demand. This paper discusses the significance of artificial neural network (ANN), machine learning (ML), and Deep Learning (DL) techniques in predicting renewable energy and load demand in various time horizons, including ultra-short-term, short-term, medium-term, and long-term. The purpose of this study is to comprehensively review the methodologies and applications that utilize the latest developments in ANN, ML, and DL for the purpose of forecasting in microgrids, with the aim of providing a systematic analysis. For this purpose, a comprehensive database from the Web of Science was selected to gather relevant research studies on the topic. This paper provides a comparison and evaluation of all three techniques for forecasting in microgrids using tables. The techniques mentioned here assist electrical engineers in becoming aware of the drawbacks and advantages of ANN, ML, and DL in both load demand and renewable energy forecasting in microgrids, enabling them to choose the best techniques for establishing a sustainable and resilient microgrid ecosystem.