Article · Publisher preview available

K-Predictions Based Data Reduction Approach in WSN for Smart Agriculture

Authors: Christian Salim and Nathalie Mitton (Inria, Villeneuve d'Ascq, France)

Abstract

Nowadays, climate change is one of the numerous factors affecting the agricultural sector. Optimising the usage of natural resources is one of the challenges this sector faces. For this reason, it could be necessary to locally monitor weather data and soil conditions to make faster and better decisions locally adapted to the crop. Wireless sensor networks (WSNs) can serve as a monitoring system for these types of parameters. However, in WSNs, sensor nodes suffer from limited energy resources. Sending a large amount of data from the nodes to the sink results in high energy consumption at the sensor node and significant use of network bandwidth, which reduces the lifetime of the overall network and increases costly interference. Data reduction is one of the solutions to this kind of challenge. In this paper, data correlation is investigated and combined with a data prediction technique to avoid sending data that could be retrieved mathematically, with the objective of reducing the energy consumed by sensor nodes and the bandwidth occupied. This data reduction technique relies on observing the variation of every monitored parameter as well as the degree of correlation between different parameters. The approach is validated through MATLAB simulations using real meteorological datasets from the Weather Underground sensor network. The results show the validity of our approach, which reduces the amount of data by up to 88% while maintaining the accuracy of the information, with a standard deviation of 2 degrees for temperature and 7% for humidity.
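The decision logic described in this abstract (predict each parameter locally and transmit only what the sink cannot reconstruct, exploiting inter-parameter correlation) can be sketched in a few lines of Python. This is a minimal sketch under assumptions (a naive last-value predictor and illustrative thresholds chosen to mirror the reported accuracy), not the authors' exact K-predictions algorithm, which the full text specifies.

```python
import numpy as np
from scipy.stats import pearsonr

def node_step(hist_t, hist_h, new_t, new_h, eps_t=2.0, eps_h=7.0, r_min=0.9):
    """One sampling step at the sensor node: return the message to
    transmit, or None if the sink can reconstruct both readings.
    The last-value predictor, the thresholds (chosen to mirror the
    2 degC / 7 % RH accuracy reported above) and r_min are assumptions."""
    pred_t, pred_h = hist_t[-1], hist_h[-1]    # naive shared predictor
    r, _ = pearsonr(hist_t, hist_h)            # inter-parameter correlation

    t_dev = abs(new_t - pred_t) > eps_t        # temperature unpredictable?
    h_dev = abs(new_h - pred_h) > eps_h        # humidity unpredictable?

    if not t_dev and not h_dev:
        return None                            # both predictable: send nothing
    if abs(r) >= r_min and t_dev != h_dev:
        # Strongly correlated parameters: send only the deviating one;
        # the sink derives the other from the fitted relation.
        return ("T", new_t) if t_dev else ("H", new_h)
    return ("TH", new_t, new_h)                # otherwise send both

hist_t, hist_h = [20.1, 20.3, 20.2, 20.4], [55.0, 54.0, 55.0, 53.0]
print(node_step(hist_t, hist_h, 20.5, 54.0))  # None -> data reduced
print(node_step(hist_t, hist_h, 26.0, 54.0))  # ('T', 26.0): send one value
```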
Computing (2021) 103:509–532
https://doi.org/10.1007/s00607-020-00864-z
SPECIAL ISSUE ARTICLE
K-predictions based data reduction approach in WSN
for smart agriculture
Christian Salim¹ · Nathalie Mitton¹
Received: 15 September 2020 / Accepted: 23 October 2020 / Published online: 30 October 2020
© Springer-Verlag GmbH Austria, part of Springer Nature 2020
Keywords: Smart agriculture · Data correlation · Data reduction · Data prediction · Pearson coefficient · WSN
Mathematics Subject Classification: 68
Corresponding author: Christian Salim, christian.salim@inria.fr
Nathalie Mitton, nathalie.mitton@inria.fr
1 Inria, Villeneuve d'Ascq, France
... Furthermore, this research provides an estimate of the computational cost of the suggested method. The approaches in [24][25][26][27] require training data, which places an excessive processing burden on the fusion centre, whereas the suggested technique requires no training data and therefore uses minimal computational resources. It can thus be deduced that the computational cost of the suggested method is low in comparison to [24][25][26][27]. To the best of our knowledge, none of the currently available solutions are as efficient and scalable as the presented system. ...
... The entire cost is estimated for the proposed method, whereas the entire cost of the remedy proposed in [27] remains unknown. ...
Article
Full-text available
One central area of current Internet of Things (IoT) research is developing methods for objects to sense their environment autonomously and then link them together so that they can exchange their findings quickly. In response to this urgent need, a new paradigm has emerged that is known as the Cognitive Internet of Things (CIoT). It adds cognitive capability in the form of sophisticated intelligence to the existing IoT. With the generation of huge amounts of data by CIoT applications, there is a compelling need to derive valuable insights from data in a computationally efficient way. Therefore, this research proposes modified plausible reasoning for extracting knowledge from massive heterogeneous datasets. In the first step, the data is passed to total variance regularizers to regularize the variance. Subsequently, the clusters are created with probabilistic clustering, and the plausibility theory is redefined at the cluster and the cluster-member levels. The experimental assessment of environmental data spanning 21.25 years and the cross-validation using a variety of measures demonstrate that the proposed method is more effective than other competing approaches.
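As a rough illustration of the first two stages named in this abstract (variance regularisation followed by probabilistic clustering), the sketch below uses standard scikit-learn components. The Gaussian mixture stands in for the paper's probabilistic clustering and the scaler for its total variance regularizer; both are assumptions for illustration, not the authors' implementation.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Synthetic stand-in for heterogeneous CIoT sensor readings: two regimes.
data = np.vstack([rng.normal(20, 2, (200, 3)),
                  rng.normal(30, 2, (200, 3))])

scaled = StandardScaler().fit_transform(data)   # variance-regularisation stand-in
gmm = GaussianMixture(n_components=2, random_state=0).fit(scaled)

labels = gmm.predict(scaled)            # hard cluster assignment
membership = gmm.predict_proba(scaled)  # soft cluster-member weights, the kind
                                        # of per-cluster quantity a plausibility
                                        # measure could be defined over
print(membership[:3].round(3))
```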
... It is reliable and, compared to [27,28], does not create traffic bottlenecks, transmission delays, or congestion, particularly in areas around gateway nodes. The overall computation time is reduced through the use of decentralized algorithms, in comparison to [32,35,38,40]. Currently, no works are available that propose achieving scalable cognition-type intelligence. ...
... It selects the best-suited copula with the help of an AIC calculation, probabilistically measuring the amount of information for the copula-modelled data, and subsequently finds an interesting pattern. Finally, it designs a reasoning system using a Bayesian network that responds to the query. The proposed reasoning system's capabilities compared with prior work: [30] lacks cost information, whereas the cost is estimated here; [31] leaves the whole cost unknown, whereas the cost is assessed; [32] gives no way to know how much the suggested solution will actually cost, whereas the cost is projected; and the method recommended in [33] cannot be adequately tested without a prototype. ...
Article
Full-text available
Current Internet of Things (IoT) research focuses on inserting cognition into its system architecture and design. Therefore, Cognitive IoT (CIoT) has emerged. CIoT inherits several features and challenges from IoT. Since IoT generates huge amounts of heterogeneous data, a cognitively inspired technique is required to extract meaningful insight from these data in less computation time. Keeping this requirement as the main goal, this research proposes a novel algorithm which executes total variance regularization, probabilistic clustering, and the alternating direction method of multipliers (ADMM) step of robust principal component analysis (RPCA) at the cluster node; the rest of the computation, i.e., copula modelling, measuring the amount of information in each copula-modelled sensory data stream to extract interesting patterns, and Bayesian network formation, is executed at the fusion centre. Experimental evaluation across 21 years of environmental data and cross-validation with different measures reveal its efficacy over competing approaches.
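The cluster-node stage here relies on the ADMM formulation of RPCA, i.e., principal component pursuit: decompose the sensed matrix M into a low-rank part L plus a sparse outlier part S. Below is a textbook numpy sketch of that iteration, with default parameters taken from the standard PCP literature; it is not the paper's code.

```python
import numpy as np

def rpca_admm(M, lam=None, mu=None, n_iter=100):
    """Robust PCA via the ADMM / principal-component-pursuit iteration:
    M = L (low rank) + S (sparse). Textbook sketch, not the paper's code."""
    m, n = M.shape
    lam = lam or 1.0 / np.sqrt(max(m, n))           # standard PCP default
    mu = mu or (m * n) / (4.0 * np.abs(M).sum())
    S = np.zeros_like(M); Y = np.zeros_like(M)
    shrink = lambda X, t: np.sign(X) * np.maximum(np.abs(X) - t, 0)
    for _ in range(n_iter):
        U, s, Vt = np.linalg.svd(M - S + Y / mu, full_matrices=False)
        L = U @ np.diag(shrink(s, 1.0 / mu)) @ Vt   # singular-value thresholding
        S = shrink(M - L + Y / mu, lam / mu)        # isolate sparse outliers
        Y = Y + mu * (M - L - S)                    # dual variable update
    return L, S

rng = np.random.default_rng(0)
base = np.outer(rng.normal(size=50), rng.normal(size=8))   # rank-1 signal
noisy = base.copy(); noisy[5, 3] += 10.0                   # one sparse outlier
L, S = rpca_admm(noisy)    # S recovers the injected outlier, L the signal
```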
... The experimental evaluation of the proposed algorithm is conducted on twenty-one years of environmental data. In order to examine its efficacy, the method is compared with [19], which requires a training dataset. ...
Article
Full-text available
Recent research on the Internet of Things (IoT) focuses on the insertion of cognition into its system architecture and design, which introduces a new field known as Cognitive IoT (CIoT). Therefore, the CIoT inherits several features and challenges from IoT. The Cognitive IoT encompasses billions of devices that generate large amounts of heterogeneous, volatile, and time-dependent data. To ensure the smooth functioning of CIoT applications, meaningful insight must be obtained from the massive amounts of data. Thus, in order to uncover the hidden knowledge from these massive data sets, there needs to be a cognitively intelligent data analysis technique that is computationally efficient and cost-effective. Keeping this in mind, this research proposes inductive reasoning for extracting concepts and patterns from twenty-one years of environmental data. In the first phase of the proposed algorithm, the inductive value is computed for each chunk of the dataset, and it is transformed into a binary dataset for concept lattice generation. Furthermore, a weight assignment is performed for each generated concept, and the minimal inductive-valued concept is selected for inductive reasoning. Following the extraction of the generalized concept, the highest-entropy row is selected by combining its corresponding concept data, and the resulting pattern is referred to as significant. An evaluation of the proposed algorithm on different scales demonstrates its efficiency over competing approaches.
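The final selection step (choosing the highest-entropy row as the significant pattern) reduces to a Shannon entropy computation. A minimal sketch follows, assuming histogram discretisation, which the abstract does not specify.

```python
import numpy as np

def row_entropy(row, bins=8):
    """Shannon entropy of one data row after discretising it into bins."""
    counts, _ = np.histogram(row, bins=bins)
    p = counts[counts > 0] / counts.sum()
    return float(-(p * np.log2(p)).sum())

data = np.random.default_rng(1).normal(size=(5, 100))   # 5 candidate rows
entropies = [row_entropy(r) for r in data]
significant = data[int(np.argmax(entropies))]   # highest-entropy row wins
```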
... Widespread data reduction techniques are employed to maintain acceptable service efficiency in the delivery of sensed data. Therefore, the lifespan of the network is the most important factor in WMSN data minimization [27,35]. Figure 3 shows the data reduction-based IoT architecture [36]. ...
Article
Full-text available
The potential of Internet of Things and wireless sensor network technologies can be used to build a picture of a future intelligent surveillance system. Because of the small size of the sensor nodes and their ability to transmit data remotely, they can be deployed at locations that are difficult or impossible to access. The wireless multimedia sensor network represents a distinct subdomain within the broader scope of wireless sensor networks. It has found diverse applications in the context of future smart cities, particularly in areas such as healthcare monitoring, home automation, transportation systems, and vehicular networks. A wireless multimedia sensor network is a resource-constrained network in which the nodes are small battery-powered devices. In addition, sending the large amounts of data collected by a wireless multimedia sensor network across the Internet of Things network imposes important challenges in terms of bandwidth, storage, processing, energy consumption, and network lifespan. One of the solutions to these kinds of problems is data transmission reduction. Therefore, this systematic literature review surveys various techniques used in data transmission reduction, ranging from redundancy reduction to machine learning algorithms. Additionally, this review investigates the range of applications and the challenges encountered within the domain of wireless multimedia sensor networks. This work can serve as a basic strategy and a road map for scholars interested in data reduction techniques for intelligent surveillance systems using WMSN in Internet of Things networks.
Conference Paper
Water scarcity is a major global problem and directly impacts agriculture. Both population growth and the need for food exacerbate this issue. Therefore, mitigating water scarcity in irrigation systems is one of the most important challenges of the current era, and smart irrigation systems based on the Internet of Things (IoT) play an important role in optimizing water use in agriculture and landscaping. This article proposes an energy-efficient distributed transmission optimization for monitoring smart farming irrigation based on the Kalman–Zlib approaches (EDiTOK) in IoT networks. The EDiTOK approach operates at the sensor device level. It involves a prediction method to determine whether or not to transmit the most recent set of data readings, depending on a similarity threshold value, and then implements a lossless compression method to decrease the size of a given data set before transmitting it to the gateway node. Numerous experiments utilize actual sensor data from farming fields. The results demonstrate that the proposed EDiTOK approach achieves satisfactory results compared to other proposed methods regarding energy use, the quantity of sent readings, the proportion of data reduction, and the accuracy of the data.
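The two device-level stages EDiTOK combines (predict, then losslessly compress only what must actually be sent) can be sketched as follows. The scalar Kalman step and the similarity rule are simplified assumptions with illustrative names and thresholds; only the zlib compression matches the approach by name.

```python
import struct
import zlib

def kalman_step(x_est, p_est, z, q=1e-3, r=0.5):
    """One predict/update cycle of a scalar Kalman filter."""
    p_pred = p_est + q                  # predict error covariance
    k = p_pred / (p_pred + r)           # Kalman gain
    x_new = x_est + k * (z - x_est)     # correct with measurement z
    return x_new, (1 - k) * p_pred

def maybe_send(batch, predicted, sim_threshold=0.9):
    """Transmit a zlib-compressed batch only when the prediction is not
    similar enough to the actual readings; otherwise the gateway keeps
    using its own copy of the prediction (assumed rule, not EDiTOK's)."""
    err = sum(abs(a - b) for a, b in zip(batch, predicted)) / len(batch)
    if 1.0 / (1.0 + err) >= sim_threshold:      # simple similarity in (0, 1]
        return None                             # nothing to transmit
    payload = struct.pack(f"{len(batch)}f", *batch)
    return zlib.compress(payload)               # lossless compression

# Toy usage: predict a short batch with the filter, then decide.
x, p, preds = 20.0, 1.0, []
for z in [20.1, 20.2, 20.3]:
    x, p = kalman_step(x, p, z)
    preds.append(x)
print(maybe_send([20.1, 20.2, 20.3], preds))   # None: prediction close enough
```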
Article
The development of the Internet of Things (IoT) paradigm and its significant spread as an affordable data source has brought many challenges when pursuing efficient data collection, distribution, and storage. Since such a hierarchical logical architecture can be inefficient and costly in many cases, Data Reduction (DR) solutions have arisen to allow data preprocessing before actual transmission. To increase DR performance, researchers are using Artificial Intelligence (AI) techniques and models to reduce sensed data volume. AI for DR on the edge is investigated in this study in the form of a Systematic Literature Review (SLR) encompassing major issues such as data heterogeneity, AI-based techniques to reduce data, architectures, and contexts of usage. The SLR maps the state of the art in this area, highlighting the most common challenges and potential research trends in addition to a proposed taxonomy.
Conference Paper
Irrigation is the most crucial process and one of the world's major consumers of water in agriculture, and its demand has been growing as a result of population growth and the resulting rise in demand for food. The irrigation system is one of the most important components of an agricultural system: the crop's productivity is directly influenced by the efficiency and applicability of the irrigation system, and water scarcity is one of its greatest challenges. The efficient use of land, water, and energy resources depends on Internet of Things (IoT)-based smart agriculture systems. In this article, a distributed data reduction and decision-making (DiDaReD) approach for IoT-based smart farming irrigation systems is proposed. The DiDaReD approach is implemented on two levels: sensor devices and the edge gateway. At the sensor device level, we implement a lightweight scoring method for removing redundant collected soil moisture readings before sending them to the edge gateway. At the edge gateway, a voting technique is applied to the scores of readings received from sensor devices to produce a decision regarding irrigation of the monitored farming field according to its soil moisture state. Finally, since the DiDaReD approach is periodic and works in real time, we implement a decision reduction algorithm to prevent sending the same decision notifications to the actuator, thus saving the energy of the IoT network. Several experiments are conducted using real sensed data from the farming field, and the results show that the proposed DiDaReD outperforms other methods in terms of data reduction ratio, number of sent readings, energy consumption, network lifetime, data integrity, decisions, and total energy consumed by the edge-gateway node.
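A compact sketch of the two DiDaReD levels as described (sensor-side scoring that drops redundant moisture readings, gateway-side voting with decision reduction). The thresholds and the score encoding are assumptions for illustration only.

```python
from statistics import mean

def score_reading(moisture, last_sent, delta=2.0, dry_level=30.0):
    """Sensor level: drop a reading too close to the last sent one;
    otherwise return a coarse dryness score (1 = field looks dry)."""
    if abs(moisture - last_sent) < delta:
        return None                     # redundant, not sent to the gateway
    return 1 if moisture < dry_level else 0

def gateway_decision(scores, last_decision):
    """Edge gateway: majority vote over received scores, then decision
    reduction: an unchanged decision is not re-sent to the actuator."""
    decision = "irrigate" if mean(scores) > 0.5 else "idle"
    return None if decision == last_decision else decision

scores = [s for s in (score_reading(m, 40.0) for m in [25.0, 41.0, 22.0])
          if s is not None]             # 41.0 is dropped as redundant
print(gateway_decision(scores, last_decision="idle"))   # -> 'irrigate'
```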
Article
Full-text available
The rapid growth of the Internet of Things (IoT) has led to its widespread adoption in various industries, enabling enhanced productivity and efficient services. Integrating IoT systems with existing enterprise application systems has become common practice. However, this integration necessitates reevaluating and reworking current Enterprise Architecture (EA) models and Expert Systems (ES) to accommodate IoT and cloud technologies. Enterprises must adopt a multifaceted view and automate various aspects, including operations, data management, and technology infrastructure. Machine Learning (ML) is a powerful IoT and smart automation tool within EA. Despite its potential, there is a need for dedicated work focusing on ML applications for IoT services and systems. With IoT being a significant field, analyzing IoT-generated data and IoT-based networks is crucial. Many studies have explored how ML can solve specific IoT-related challenges. These mutually reinforcing technologies allow IoT applications to leverage sensor data for ML model improvement, leading to enhanced IoT operations and practices. Furthermore, ML techniques empower IoT systems with knowledge and enable suspicious-activity detection in smart systems and objects. This survey paper conducts a comprehensive study of the role of ML in IoT applications, particularly in the domains of automation and security. It provides an in-depth analysis of state-of-the-art ML approaches within the context of IoT, highlighting their contributions, challenges, and potential applications.
Article
Real data plays a fundamental role in determining various features related to the collection site, such as monitoring, controlling, and prediction. Many systems in the Internet of Things (IoT) environment produce millions of data points from sensing nodes, and transmitting every data point is costly in terms of bandwidth requirements, energy consumption, and protecting data from corruption or from threats such as man-in-the-middle attacks. In many environmental applications such as agriculture, the absolute change between consecutive data points is usually very small (we call this a slowly changing environment). So, there is a need for a system that can predict the next data point within a predefined tolerable limit, so that transmitting each data point can be avoided. To address this issue, we propose an Analytical Prediction Algorithm using Estimations (APAE). The algorithm is deployed and runs simultaneously in the three layers of the architecture: the sensing, fog, and cloud layers. The algorithm predicts the next data point sensed by the sensor. If the difference between the actual sensed data point and the predicted data point is beyond the predefined tolerance, the sensed value is sent to the fog node and further to the cloud; otherwise, the estimated value is accepted. We have implemented the proposed algorithm on a real testbed and also tested it on two datasets. We compare the number of data points transmitted with that of a state-of-the-art scheme, and we also highlight the reduction in energy consumption and the high accuracy of our algorithm on the two datasets and the real testbed.
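The transmission rule APAE applies at every layer is a send-on-delta test between the sensed value and the predicted one. A minimal sketch follows, assuming a last-accepted-value predictor; the paper's actual estimator is not reproduced here.

```python
def sense_and_filter(stream, tol=0.5):
    """Send-on-delta sketch of APAE's idea: the node (and, in parallel,
    the fog and cloud layers) predicts the next point and transmits only
    when the prediction error exceeds the tolerance. The last-value
    predictor is an illustrative assumption."""
    last = stream[0]
    yield last                          # first reading is always sent
    for z in stream[1:]:
        if abs(z - last) > tol:         # predicted value = last accepted
            yield z                      # out of tolerance: transmit
            last = z                     # all layers resynchronise on z
        # otherwise every layer keeps using its own prediction

readings = [20.0, 20.1, 20.2, 20.4, 24.0, 24.1, 24.2]
print(list(sense_and_filter(readings)))   # -> [20.0, 24.0]: only the jump is sent
```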
Article
Purpose: Food security is one of the most pressing concerns nowadays, given the increase in world population. The growth in population leads to the conversion of farmland into housing, and unpredictable natural disasters bring down food production. This has increased the adoption of smart agriculture employing the Internet of Things (IoT) and big data solutions.
Methods: Traditional agriculture methods are being replaced by smart IoT technologies. Monitoring, maintenance performance, and cost are controlled using modern technologies. In modern agriculture, aerial imagery and satellites play a significant role. Agriculture-related information, such as water level, soil nutrition levels, soil pH, humidity, and temperature, is measured via an accurate agriculture sensor monitoring network in which computers and phones remotely monitor the crop, and the details are reported to the farmers. Smart agriculture has increased productivity and operational efficiency to a great extent; the IoT combines most of the traditional technologies and thus increases productivity.
Results: In this paper, we present a literature review of IoT-based energy-efficient secured routing protocols applied to the smart agriculture field. We have taken papers from different publishers such as IEEE, Springer, Elsevier, and others and reviewed their limitations and advantages. The reviewed papers include protocols such as MAC, cross-layer, LEACH, multi-hop, and artificial intelligence (AI).
Conclusions: This study will help guide researchers to contribute their work to this trending and needed topic, to enhance the productivity and energy efficiency of agricultural products securely for a sustainable future.
Article
Full-text available
The increase in network size and sensory data leads to many serious problems in wireless sensor networks due to their limited energy. Data prediction methods help reduce network traffic and accordingly increase the network lifetime, especially by exploiting data correlation among the sensory data. Data prediction can also be used to recover abnormal or lost data in case sensor nodes fail to work. Current prediction methods in wireless sensor networks do not make full use of the spatial-temporal correlation between wireless sensor nodes, and thus lead to relatively high prediction error. This paper proposes a novel model for multi-step sensory data prediction in wireless sensor networks. Firstly, we introduce artificial neural networks based on a 1-D CNN (One-Dimensional Convolutional Neural Network) and Bi-LSTM (Bidirectional Long Short-Term Memory) to obtain the abstract features of different attributes from the pre-processed sensory data. Then, these abstract features are used to obtain a one-step prediction. Finally, multi-step prediction is performed by iteratively using historical data and the prediction results of the previous step. Experimental results show that after selecting suitable node combinations in which the spatial-temporal correlation is highlighted, the proposed multi-step predictive model can predict multi-step (short- and medium-term) sensory data, and its performance is better compared with other related methods.
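The model structure described here (a 1-D convolution for per-attribute feature abstraction, a Bi-LSTM for temporal dependencies, then iterative multi-step prediction) can be sketched in PyTorch. The layer sizes, the 24-step window, and the 5-step horizon are illustrative assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn

class CnnBiLstm(nn.Module):
    """Sketch of the 1-D CNN + Bi-LSTM one-step predictor described above."""
    def __init__(self, n_features, hidden=32):
        super().__init__()
        self.conv = nn.Conv1d(n_features, 16, kernel_size=3, padding=1)
        self.lstm = nn.LSTM(16, hidden, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, n_features)  # one-step prediction

    def forward(self, x):                 # x: (batch, time, features)
        h = torch.relu(self.conv(x.transpose(1, 2)))   # (batch, 16, time)
        out, _ = self.lstm(h.transpose(1, 2))          # (batch, time, 2*hidden)
        return self.head(out[:, -1])                   # predict the next step

# Multi-step prediction: feed each one-step output back as input.
model = CnnBiLstm(n_features=3)
window = torch.randn(1, 24, 3)         # last 24 sensed vectors (illustrative)
preds = []
with torch.no_grad():                  # inference only in this sketch
    for _ in range(5):                 # 5-step prediction horizon
        nxt = model(window)            # one-step prediction, shape (1, 3)
        preds.append(nxt)
        window = torch.cat([window[:, 1:], nxt.unsqueeze(1)], dim=1)
```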
Article
Full-text available
In resource-constrained Wireless Sensor Networks (WSNs), optimizing the sampling and transmission rates of each individual node is a crucial issue. A high volume of redundant data transmitted through the network will result in collisions, data loss, and energy dissipation. This paper proposes a novel data reduction scheme that exploits the spatial-temporal correlation among sensor data in order to determine the optimal sampling strategy for the deployed sensor nodes. This strategy reduces the overall sampling/transmission rates while preserving the quality of the data. Moreover, a back-end reconstruction algorithm is deployed on the workstation (sink). This algorithm can reproduce the data that have not been sampled by finding the spatial and temporal correlation among the reported data set and filling the "non-sampled" parts with predictions. We have used real sensor data from a network that was deployed at the Grand-St-Bernard pass located between Switzerland and Italy. We tested our approach using this dataset and compared it to a recent adaptive-sampling-based data reduction approach. The obtained results show that our proposed method consumes up to 60% less energy and can handle non-stationary data more effectively.
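On the sink side, the reconstruction idea (fill the instants the nodes chose not to sample with predictions over correlated readings) can be illustrated with simple temporal interpolation. Here np.interp is a deliberately simple stand-in for the correlation-based reconstruction algorithm the abstract describes.

```python
import numpy as np

t_all = np.arange(100)                        # full sampling grid
t_sampled = np.array([0, 10, 30, 55, 99])     # instants actually reported
v_sampled = np.array([20.0, 20.4, 21.1, 22.0, 20.5])

# The sink fills the non-sampled instants with predictions; linear
# interpolation stands in for the paper's spatial-temporal model.
reconstructed = np.interp(t_all, t_sampled, v_sampled)
print(reconstructed[[5, 20, 70]].round(2))    # values that were never transmitted
```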
Article
Full-text available
Wireless Video Sensor Networks (WVSNs) are composed of small embedded video and camera motes capable of extracting the surrounding environmental information. Those sensor nodes can locally process the information and then wirelessly transmit it to the coordinator and to the sink to be further processed. As a consequence, abundant video and image data are collected. In such densely deployed networks, the problem of data redundancy arises when information is gathered from neighboring nodes. To overcome this problem, one important enabling technology for WVSNs is data aggregation, which is essential for cost efficiency. In this paper, we propose a new approach for data aggregation in WVSNs based on image and shot similarity functions. It is deployed on two levels: the video-sensor node level and the coordinator level. At the sensor node level, the proposed algorithms aim at reducing the number of frames sensed by the sensor nodes and sent to the coordinator. At the coordinator level, after receiving shots from different neighboring sensor nodes, the similarity between these shots is computed to eliminate redundancies and to send to the sink only the frames which meet a certain condition. The similarity between shots is evaluated based on their color, edge, and motion information. We evaluate our approach on a live scenario and compare the results with another approach from the literature in terms of data reduction and energy consumption. The results show that both approaches achieve a significant data reduction to lower the energy consumption, while our approach tends to outperform the other in terms of reducing the energy consumption related to the sensing and transmission processes while guaranteeing the detection of all the critical events at the node and coordinator levels.
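Of the three cues this approach combines (color, edge, motion), the color term is the easiest to sketch: a histogram-intersection similarity between two frames. The numpy version below is a simplified stand-in for the paper's shot-similarity function; the bin count and threshold are assumptions.

```python
import numpy as np

def color_similarity(frame_a, frame_b, bins=32):
    """Histogram-intersection similarity between two grayscale frames:
    1.0 means identical intensity distributions."""
    ha, _ = np.histogram(frame_a, bins=bins, range=(0, 255))
    hb, _ = np.histogram(frame_b, bins=bins, range=(0, 255))
    ha = ha / ha.sum(); hb = hb / hb.sum()
    return float(np.minimum(ha, hb).sum())

rng = np.random.default_rng(0)
f1 = rng.integers(0, 256, (120, 160))     # synthetic grayscale frame
f2 = f1 + rng.normal(0, 2, f1.shape)      # nearly identical shot
print(color_similarity(f1, f2) > 0.9)     # True: redundant, aggregate instead
```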
Article
Pipeline leakages may incur large costs for repair and infrastructure damage, combined with environmental pollution. Consequently, the security and maintenance of pipeline infrastructure are among the major preoccupations of researchers, and the most suitable solutions are based on wireless sensor networks. However, WSNs are susceptible to noise effects, material defects, and malicious attacks from intruders. Therefore, it is essential to identify potential events such as leaks, as well as erroneous readings and malicious attacks such as defective sensors, occurring on the network. For that purpose, we present a distributed one-class classification technique for outlier detection in WSNs. In addition, we improve our classifier by using a centred ellipsoidal technique to classify data, and we demonstrate the value of this improved technique compared to other classifiers. Likewise, we investigate the relationship existing between close neighboring nodes and the correlation between historical observations to identify damage sources. These improvements increase the detection accuracy and decrease the false-alarm percentage while respecting WSN constraints such as energy consumption.
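The centred ellipsoidal decision rule named here is, in essence, a Mahalanobis-distance test: fit a mean and covariance on normal observations and flag points falling outside the resulting ellipsoid. A minimal sketch with an assumed threshold; the distributed and neighbour-correlation aspects of the paper are omitted.

```python
import numpy as np

def fit_ellipsoid(X):
    """Centred ellipsoidal one-class boundary: mean plus inverse
    covariance estimated from normal observations."""
    mu = X.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(X, rowvar=False))
    return mu, cov_inv

def is_outlier(x, mu, cov_inv, radius2=9.0):
    """Mahalanobis test: points outside the ellipsoid are flagged
    (possible leak, fault, or attack). radius2 is an assumed threshold."""
    d = x - mu
    return bool(d @ cov_inv @ d > radius2)

X = np.random.default_rng(2).normal(0.0, 1.0, (500, 2))   # normal readings
mu, ci = fit_ellipsoid(X)
print(is_outlier(np.array([6.0, 6.0]), mu, ci))   # True: flagged as an event
```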
Conference Paper
Nowadays, the agriculture domain faces many challenges in making better use of its natural resources. For this purpose, and given the increasing danger of climate change, there is a need to locally monitor meteorological data and soil conditions to help make quicker decisions better adapted to the crop. Wireless Sensor Networks (WSNs) can serve as a monitoring system for those types of features. However, WSNs suffer from the limited energy resources of the motes, which shorten the lifetime of the overall network. Every mote periodically captures the monitored feature and sends the data to the sink for further analysis, depending on a certain sampling rate. This process of sending large amounts of data causes high energy consumption at the sensor node and significant bandwidth usage on the network. In this paper, a Machine Learning based Data Reduction Algorithm (MLDR) is introduced. MLDR focuses on environmental data for the benefit of agriculture. It reduces the amount of data transmitted to the sink by adding machine learning techniques at the sensor node level while keeping data availability and accuracy at the sink. This data reduction lowers energy consumption and bandwidth usage and enhances the use of the medium while maintaining the accuracy of the information. The approach is validated through MATLAB simulations using real temperature datasets from the Weather Underground sensor network. Results show that the amount of sent data is reduced by more than 70% while maintaining very good accuracy, with a variance that did not surpass 2 degrees.
Article
In wireless sensor networks, missing data is an inevitable phenomenon due to the inherent limitations of the sensor nodes, such as battery power constraints, missing communication links, bandwidth limitations, etc. Missing data adversely affects the quality of data received by the sink node. Since the data acquired by the sensor nodes in a multimodal environmental sensor network are spatially and temporally correlated, these correlations play a pivotal role in missing-data recovery and data prediction. This paper proposes an analytical framework to characterize the correlation between two different pairs of modalities in an environmental sensor network using a set of classical and robust measures of correlation coefficient estimates. Monte Carlo simulation is performed to approximately model sensed environmental data characteristics. Three classical estimates (Pearson's correlation coefficient, Spearman's rank correlation coefficient, and Kendall's tau rank correlation coefficient) and four robust estimates of correlation coefficients are used to establish the correlation between different pairs of sensed modalities in the data characteristics. The efficacy of these estimates is obtained using two performance metrics, mean-squared error (MSE) and relative estimation efficiency (RE). Stationarity analysis among the acquired environmental variables sheds light upon the best estimates of the correlation coefficient, which could be used for prediction of the temperature modality in a known region of slope/stationarity in the data characteristics. The robustness of the correlation coefficient estimates in the presence of outliers, present in the data due to noise, errors, low residual battery power of sensor nodes, etc., is also investigated.
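The three classical estimates compared in this framework are available directly in scipy.stats. The snippet below applies them to a synthetic temperature/humidity pair with one corrupted reading injected, illustrating why the robust estimates are considered at all; the data and the outlier are of course fabricated for illustration.

```python
import numpy as np
from scipy.stats import pearsonr, spearmanr, kendalltau

rng = np.random.default_rng(3)
temp = rng.normal(25, 3, 100)
hum = 90 - 1.5 * temp + rng.normal(0, 2, 100)   # negatively correlated pair
temp[0], hum[0] = 60, 95                        # one corrupted reading

print("Pearson :", pearsonr(temp, hum)[0])   # pulled toward 0 by the outlier
print("Spearman:", spearmanr(temp, hum)[0])  # rank-based, less sensitive
print("Kendall :", kendalltau(temp, hum)[0]) # rank-based, less sensitive
```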
Article
A useful approach to increasing the lifetime of wireless sensor networks is clustering. However, the exchange of messages due to successive and recurrent reclustering burdens the sensor nodes and causes power loss. This paper presents a modified clustering methodology that diminishes the overhead of clustering and message exchanges by effectively scheduling the clustering task. The network is clustered subject to the remaining energy of the sensor nodes. Energy-based parameters determine the cluster head nodes and ancillary nodes, and the member nodes are linked with them. The roles of the cluster head nodes are interchanged depending on the nodes' states. Reclustering is scheduled to achieve minimum energy consumption by calculating the update cycle using a fuzzy inference system. The average sensed data rate of cluster members, the distance of the member nodes from the sink, and the power of the cluster head nodes are taken into account to achieve better energy savings. Cluster member nodes apply machine learning at regular intervals to classify data based on similarity, and the classified data are transmitted to the cluster head with a reduced number of message transfers. The proposed method improves the energy usage of clustering and data transmission.
Article
Environmental monitoring is a practical application where a Wireless Sensor Network (WSN) may be utilized effectively. However, energy consumption has become a major concern in using a WSN, particularly in remote locations without a readily accessible electrical power supply. In general, data transmission among sensor nodes and the gateway can account for a significant fraction of the total energy consumption within a WSN. Hence, reducing the number and duration of transmissions as much as possible while maintaining a high level of data accuracy can be an effective strategy for saving energy. To achieve this objective, a Least Mean Square (LMS) filter is used for a dual prediction scheme in this paper. The dual prediction scheme is data-quality based, allowing both the sensor nodes and the gateway to predict the data simultaneously. Only when the error between the predicted data and the real sensed data exceeds a pre-defined threshold do the sensor nodes send the sensed data to the gateway or another node and consequently update the coefficients of the filter. It is observed that, with this scheme, the total number of transmissions and their overall duration can be effectively reduced, and therefore further energy savings can be realized. With the developed methodologies, at least 62.3% of the total energy for data transmission could be saved while achieving 93.1% prediction accuracy.
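A minimal sketch of the dual-prediction loop just described: identical LMS filters run on the node and on the gateway, with a transmission (and a shared coefficient update) only when the prediction error exceeds the threshold. The filter order, step size, last-value initialisation, and threshold are illustrative assumptions.

```python
import numpy as np

class LmsPredictor:
    """LMS filter run identically by the sensor node and the gateway."""
    def __init__(self, order=4, mu=5e-4):
        self.w = np.zeros(order)
        self.w[0] = 1.0                 # start out as a last-value predictor
        self.mu = mu

    def predict(self, history):
        x = np.asarray(history[-len(self.w):][::-1])
        return float(self.w @ x)

    def update(self, history, actual):
        x = np.asarray(history[-len(self.w):][::-1])
        self.w += self.mu * (actual - self.w @ x) * x   # LMS coefficient update

node = LmsPredictor()
history, sent = [20.0, 20.1, 20.2, 20.3], 0
for z in [20.4, 20.5, 25.0, 25.1]:
    pred = node.predict(history)
    if abs(z - pred) > 0.5:        # error threshold exceeded:
        node.update(history, z)    # transmit; both sides update the filter
        history.append(z)
        sent += 1
    else:
        history.append(pred)       # both sides keep using the prediction
print(sent)   # -> 2: only the readings around the abrupt jump are transmitted
```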