Sensors and Electronic Instrumentation Advances:
Proceedings of the 9th International Conference
on Sensors and Electronic Instrumentation Advances
20-22 September 2023
Funchal (Madeira Island), Portugal
Edited by Sergey Y. Yurish
Sergey Y. Yurish, Editor
Sensors and Electronic Instrumentation Advances
SEIA’ 2023 Conference Proceedings
Copyright © 2023
by International Frequency Sensor Association (IFSA) Publishing, S. L.
E-mail (for orders and customer service enquiries): ifsa.books@sensorsportal.com
Visit our Home Page on http://www.sensorsportal.com
All rights reserved. This work may not be translated or copied in whole or in part without the written permission
of the publisher (IFSA Publishing, S. L., Barcelona, Spain).
Neither the authors nor International Frequency Sensor Association Publishing accept any responsibility or
liability for loss or damage occasioned to any person or property through using the material, instructions, methods
or ideas contained herein, or acting or refraining from acting as a result of such use.
The use in this publication of trade names, trademarks, service marks, and similar terms, even if they are not
identified as such, is not to be taken as an expression of opinion as to whether or not they are subject to proprietary
rights.
ISBN: 978-84-09-53746-4
BN-20230915-XX
BIC: TJFC
Contents
Foreword ........................................................................................................................................................... 5
Fiber Optic Current Sensor Based on 22.5° Faraday Rotator and Polarizing Beam Splitter .................. 6
A. Madaschi, P. Martelli and P. Boffi
Detection of Trafficable Areas in Outdoors with a Downward Looking 2D LiDAR .................................. 9
A. Olivas and F. Torres
Hardware Acceleration of Pulse Analysis using FPGAs in MicroTCA ..................................................... 12
C. Gonzalez, M. Ruiz, A. Carpeño, A. Pinas, D. Cano-Ott, J. Plaza, T. Martinez, D. Villamarin
Advanced Polymer Materials for Real-time Sensing of Inflammation and Infection .............................. 16
M. Hrubý, H. Zhukouskaya and E. Tomšík
Software Defined Radio Based Concept for Extending Orthogonal Multi-tone Time Domain
Reflectometry Method to Analyze Electrical Power Grids ......................................................................... 18
A. Faschingbauer
Traffic Signaling and Cooperative Trajectories based on Visible Light Communication ...................... 23
M. A. Vieira, G. Galvão, M. Vieira, M. Véstias, P. Vieira, and P. Louro
Visible Light: An Identifier (ID) System for Building Guidance ................................................................ 29
M. Vieira, M. A. Vieira, P. Vieira, and P. Louro
Classification of Sports Exercises and Repetition Counting based on Inertial Measurement Data ....... 35
P. Krutz, M. Rehm, Z. Lang, M. Dix and J. Patalas-Maliszewska
Difference in Sensor Placement Position of Insole-type Pressure Transducers ........................................ 40
Y. Uchida, T. Funayama, E. Ohkubo and Y. Kogure
Electrochemical Determination of Cd2+, Pb2+, Cu2+ and Zn2+ in Liquids
using Modified Titanium Dioxide .................................................................................................................. 44
Vorobets V. S., Fomanyuk S. S., Medyk I. A., Kolbasov G. Ya., Karpenko S. V.
Near-Field Microwave Probe Technique for Local Broadband Characterization
of Nanocomposite Materials........................................................................................................................... 48
H. Bakli and M. Makhlouf
Comparison of the Depth Accuracy of a Plenoptic Camera and a Stereo Camera System
in Spatially Tracking Single Refuse-derived Fuel Particles in a Drop Shaft ............................................. 52
M. Zhang, R. Streier, M. Vogelbacher, S. Wirtz, V. Scherer, and J. Matthes
Impact of Solvent on Ammonia Detection Performance of Polyaniline-based Sensors ........................... 58
S. Vassaux, N. Redon, E. A. da Silva and C. Duc
Feasibility of Gait Change Detection using Smart Footwears .................................................................... 60
T. Funayama, Y. Uchida, Y. Kogure, D. Souma, R. Kimura
Exploring the Hidden Complexity: Approximate Entropy and Sample Entropy Analysis
in Pulse Oximetry of Female Athletes ........................................................................................................... 64
A. M. Cabanas, D. Catalán, N. Sáez, C. Flores, and P. Martín-Escudero
Development of a Smart Irrigation System for Apple Fields using a LoRaWAN Network .................... 70
R. Mendicino, S. Tritini, A. Mejia-Aguilar, and R. Monsorno
The use of Azure Cloud Tools for Monitoring Indoor Air Quality ............................................................ 74
L. C. Eduardo, C.R.S. Alexandre and A. S. F. Tercio
Visible Light Communication for Indoors Automated Guidance Vehicles ............................................... 76
P. Louro, M. Vieira, M. A. Vieira
Wind Estimation via UAV Parameters and Artificial Intelligence related to Ultrasonic
Anemometer Measurements .......................................................................................................................... 80
Michael Kurz, Federico Mothes, Markus Kreuzer, Alexander Knoll
Digital Twin-based Models of Human Activities, Localization, and Energy Consumption
of WBAN Network using IMU Sensors ......................................................................................................... 83
Noureddine Boujnah, Rafika Brahmi and Ridha Ejbali
Sensing the Mechanical Properties of AlN Thin Films using Micromechanical Membranes ................. 90
Aditya, T. Sommer, M. Althammer, and M. Poot
Video Stream Processing for an Autonomous Tunnel Drainage Rover ..................................................... 94
A. L. Giordano, T. Schachinger, V. Micic Batka, and B. G. Zagar
An IoT Communication Platform for Interactive Buildings Energy Management System ..................... 99
L. Mihet-Popa
Geospatial Sensor-based Approach to Provide Defibrillators by using Drones in Mountain Areas:
A Study Case in South Tyrol, Italy ............................................................................................................. 104
E. Fajardo-Figueroa, R. Mendicino, S. Tritini, M. van Veelen, G. Vinetti, G. Ristorto, S. Mayrgündter, G.
M. Bianco, L. Meng and A. Mejia-Aguilar
Zinc Tin Oxide Nanostructures Synthesized by the Microwave Hydrothermal Method
Applied to Gas Sensors ................................................................................................................................. 107
R. A. Silva, M. G. Masteghin and M. O. Orlandi
Replication of a DSC Device Using 3D Computational Modelling: Correction of Heat Flow
Diagrams of Selected Geopolymers by Processing the Experimental Data ............................................. 109
V. Kočí
Exploring Sustainable Printed Paper Sensors for Analyzing Cure Behavior and Detecting Cracks
in Composites ................................................................................................................................................ 111
A. Mahendran, N.Gupta, C. Koren and H. Lammer
Exploration of Phage Display Peptides as Novel Sensing Materials for Highly Sensitive
and Selective Biomimetic Optoelectronic Nose .......................................................................................... 114
V. Escobar, C. Hurot, S. Brenet, M. El Kazzy, N. Scaramozzino, R. Mathey, A. Buhot, and Y. Hou
Calibration of a Hail-Impact Sensor based on Piezoelectric Transducers .............................................. 117
F. Blasina, A. Echarri and N. Pérez
Terahertz Sensor System with Dual Mode Operation ............................................................................... 120
Janez Trontelj, Andrej Švigelj, Domen Višnar, Janez Trontelj jr.
Physiological Assistance by Climate Comfort: Measurements and Indicators ....................................... 124
Bernhard Kurz, Christoph Russ
Internet of Things-based Geo-awareness System for Civilian Drones ..................................................... 128
S. Kunze
Hyperspectral Imaging Microscopy for Single-cell-analysis ..................................................................... 133
Wolfgang Kurz, Aaron Flügge Arus, Emre Kariper, Olcay Akgün, Edwin Adisoemarta, Martin Jakobi,
Alexander W. Koch
Development of Bimetallic Zn/Ti-BMOF Thin Film Composite Optical Waveguides
for Ethylenediamin Detection at Ambient Temperature .......................................................................... 136
Patima Nizamidin, Huifang Chen
Routine Measurement and Monitoring System for the Activity of Elderly People with Dementia:
A Systematic Review ..................................................................................................................................... 139
Júlia D. Rodrigues, Pedro Morais and Vítor Carvalho
Virtual Reality and Artificial Intelligence as Tools to Aid the Management of Chronic Pain:
A Comprehensive Literature Review .......................................................................................................... 145
Arthur Gomes, Anabela Marques, Vítor Carvalho and Duarte Duque
Using Machine Learning to Classify Network Abnormalities into Legitimate or Assault
in IoT-based Cyber Physical System ........................................................................................................... 150
Stephen Afrifa, Vijayakumar Varadarajan, Peter Appiahene and Tao Zhang
Vehicle Speed Measurement through Ground Vibrations Induced by Transverse Rumble Strips ..... 154
D. Thanglerdsumpan, P. Wardkein and L. Kirasamuthranon
Static and Dynamic Calibration of Pneumatic Pressure Sensors and Instruments ............................... 160
José Dias Pereira, Octavian Postolache
APHRODITE: Design and Preliminary Tests of an Autonomous and Reusable Photo-sensing
Device for Immunological Test aboard the International Space Station ................................................ 164
L. Nardi, N. Maipan Davis, S. Sansolini, T. B. De Albuquerque, M. Laarraj, D. Caputo, G. de Cesare,
S. R. Shariati Pour, M. Zangheri, D. Calabria, M. Guardigli, M. Balsamo, E. Carrubba, F. Carubia,
M. Ceccarelli, M. Ghiozzi, L. Popova, A. Tenaglia, M. Crisconio, A. Donati, A. Nascetti, M. Mirasoli
Foreword
On behalf of the SEIA’ 2023 Organizing Committee, it is our pleasure to introduce these proceedings,
devoted to contributions from the 9th International Conference on Sensors and Electronic
Instrumentation Advances 2023, held in Funchal (Madeira Island), Portugal. The conference is
organized by the International Frequency Sensor Association (IFSA) in technical cooperation with IFSA
Publishing, S. L. (Barcelona, Spain) and the media partners, the MDPI open access journals ‘Sensors’,
‘Chemosensors’ and ‘Biosensors’ (Switzerland).
The proceedings contain all papers from both oral and poster presentations given at the conference.
We hope that these proceedings will give readers an excellent overview of the important and diverse
topics discussed at the conference.
We thank all authors for submitting their latest work, thus contributing to the excellent technical
contents of the Conference. Especially, we would like to thank the individuals and organizations that
worked together diligently to make this Conference a success, and to the members of the International
Program Committee for the thorough and careful review of the papers. It is important to point out that
the great majority of the efforts in organizing the technical program of the Conference came from
volunteers.
Prof., Dr. Sergey Y. Yurish,
SEIA’ 2023 Conference Chairman
(001)
Fiber Optic Current Sensor Based on 22.5° Faraday Rotator
and Polarizing Beam Splitter
A. Madaschi, P. Martelli and P. Boffi
Politecnico di Milano, Policom Lab, DEIB Department, via Ponzio 34/5, 20133 Milan, Italy
E-mail: andrea.madaschi@polimi.it
Summary: In this work a novel implementation of a polarimetric fiber optic current sensor based on a polarizing beam
splitter and a 22.5° Faraday rotator is presented. The developed sensor employs a compact, pigtailed component realized by
joining a polarizing beam splitter and a Faraday rotator; it does not require any external adjustment to work and shows very
high immunity to external electromagnetic and mechanical interference. The configuration has been experimentally validated
using electrical current values up to 580 A. The experimental results also prove good sensitivity of the sensor, allowing the
measurement of currents down to 100 mA.
Keywords: Faraday effect, Fiber optic current sensor, Verdet constant, Sensor integration.
1. Introduction
Electrical current measurement, especially in
power plants, remains a challenging task. To satisfy
the growing demand for electrical power efficiently,
all the processes involved, from production to
transmission and often the final usage, are moving
towards higher voltages, up to hundreds of kV.
Classical electrical current measurements are usually
based on Current Transformers (CTs). CTs must not
only be adequately shielded against electromagnetic
interference (EMI), but must also fulfil the rigorous
safety standards in terms of insulation required in
high-voltage applications. The complexity and the cost
of CTs consequently increase steeply. Fiber-Optic
current Sensors (FOSs) could potentially be exploited
in several high-voltage applications where
electromagnetic immunity is a mandatory requirement
[1, 2]. Moreover, thanks to the intrinsic dielectric
property of the optical fiber, they can guarantee a very
high degree of insulation at no additional cost in terms
of sensor complexity or production cost. Almost all
the developed solutions rely on the magneto-optic
Faraday effect, which describes the interaction
between light and a magnetic field: exposing a
magneto-optical medium to a magnetic field induces
in it a non-reciprocal circular birefringence directly
proportional to the field magnitude [3, 4].
2. Sensor Configuration
The proposed sensor takes advantage of a new
compact and pigtailed component that encloses a
polarizing beam splitter/combiner directly attached to
a 22.5° Faraday rotator. This component is employed
in the configuration reported in Fig. 1. Its working
principle relies on the basic property of the polarizing
beam splitter (PBS), which divides the incident light
into horizontal (H) and vertical (V) polarization states,
transmitting H and reflecting V. In this configuration a
low-coherence light source must be used in order to
avoid additional noise due to the internal beating
between the forward and the backward beams in the
component. In addition, the component has been
designed to work with unpolarized light as light
source, so that it does not require the use of
polarization maintaining fiber at its input. It is also
self-aligned, hence increases the robustness of the
configuration. The beam outgoing from the PBS gets a
nonreciprocal rotation of 22.5°, thanks to the Faraday
rotator, then it exits from the component by port 3,
propagates in the fiber coil and it is reflected back by
the Faraday rotator mirror (FRM). The light beam
re-entering the component from port 3 has
accumulated a SOP rotation equal to 112.5° with
respect to the SOP of the light at the exit of the PBS.
This is a key feature of the setup, because after the
backward propagation of the light through the 22.5°
Faraday rotator the SOP of the light is rotated with
respect to the PBS transmission axis by an angle
α = 45° (that is, α = π/4 rad), corresponding to the
point of maximum sensitivity and linearity of the PBS
transfer functions. The SOP of the light, after the back
and forth propagation in the fiber coiled around the electrical
conductor, acquires an additional polarization rotation by an angle [5]:

ϑ(t) = 2 V N i(t),   (1)

where V is the Verdet constant of the fiber, N is the number of turns in the coil and i is the current intensity.
The total rotation of the SOP before hitting the PBS in the backward direction is then
α + ϑ(t) = π/4 + ϑ(t), and the intensities of the two beams exiting from the component are, respectively:

I₁ = I₀ cos²(π/4 + ϑ(t)),  I₂ = I₀ sin²(π/4 + ϑ(t))   (2)
Fig. 1. Scheme of the polarimetric fiber optic current sensor.
The detection of both components of the light
makes it possible to obtain a signal fully independent
of fluctuations of the optical power of the source and
of variations of the losses in the circuit. Calculating the
ratio between the difference and the sum of the
intensities of the two output components and applying
the small-angle approximation for ϑ, one obtains:

S = (I₂ − I₁) / (I₂ + I₁) ≈ ϑ(t)   (3)
To detect the electrical current efficiently, the SOP
of the light must remain as linear as possible during
the propagation in the sensing coil. Residual and
bending-induced linear birefringence tends to alter the
SOP of the light in the coil. One of the most effective
solutions for limiting the detrimental impact of
birefringence is the use of a spun high-birefringence
(Hi-Bi) fiber [6]. Spun Hi-Bi fibers, thanks to their
particular spinning process, are able to preserve the
input SOP of the light in the coil in the absence of a
magnetic field.
3. Experimental Results
The unpolarized and low-coherence light source
employed in the experimental tests is the amplified
spontaneous emission of an Erbium-doped fiber
amplifier at a wavelength of 1550 nm. The sensing
head consists of a coil of Hi-Bi fiber with 19 turns and
a diameter of 10 cm. The beams exiting from ports 1
and 2 of the component have been detected by two
identical photodiodes; the two signals were then
acquired and sampled by a National Instruments DAQ
board and finally post-processed. The electrical
current signal to be measured, flowing in a wire
surrounded by the fiber coil, has been generated at a
frequency of 50 Hz by a programmable AC power
source. The electrical current is also monitored by a
current clamp that generates a voltage proportional to
the peak amplitude. A set of measurements has been
carried out to evaluate the performance of the sensor
for current intensities up to 580 A. Fig. 2a reports the
measurements obtained by the clamp together with
those obtained by the FOS, where the 50 Hz tone
amplitude of the Fourier transform of the
post-processed signals obtained by Eq. (3) is scaled to
the current clamp values. The best model found to fit
the FOS and the clamp values is the linear model.
Fig. 2b shows the linear fitting curve and the
corresponding experimental data; each point is the
average of the values corresponding to a current step.
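A minimal sketch of this kind of post-processing, assuming a Hann-windowed FFT and a bin-centred 50 Hz tone (the actual processing chain is not published in the paper, so all parameter choices below are illustrative):

```python
import numpy as np

def tone_amplitude(signal, fs, f0=50.0):
    """Amplitude of the f0 tone of a real signal via the FFT.

    Mirrors the post-processing described above: the Eq. (3) signal is
    Fourier-transformed and the amplitude of the 50 Hz bin is taken as the
    measurement for each current step. A Hann window is assumed here.
    """
    signal = np.asarray(signal, dtype=float)
    spectrum = np.fft.rfft(signal * np.hanning(signal.size))
    freqs = np.fft.rfftfreq(signal.size, d=1.0 / fs)
    k = np.argmin(np.abs(freqs - f0))          # FFT bin closest to 50 Hz
    # Amplitude correction for the Hann window (coherent gain ~0.5).
    return 2.0 * np.abs(spectrum[k]) / (0.5 * signal.size)

# Example: 0.2 s of a noisy Eq. (3)-like signal sampled at 10 kS/s.
fs = 10e3
t = np.arange(0, 0.2, 1 / fs)
s = 1e-3 * np.sin(2 * np.pi * 50 * t) + 1e-5 * np.random.randn(t.size)
print(tone_amplitude(s, fs))                   # ~1e-3
```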
Fig. 2 (a) Electrical current measurement results by
using the FOS and current clamp, (b) fitting curve between
FOS and clamp measurements.
Another set of measurements has been carried out
with the objective of evaluating the performance of the
configuration for low values of current intensity. In
this second set, the range of electrical current analyzed
is between 0 and 3 A with a granularity of 300 mA.
Fig. 3(a) summarizes the measurements obtained by
the FOS, compared with the values measured by the
current clamp.
The standard deviation of the small noise
fluctuations, calculated by averaging the deviations
obtained for all the steps, is 0.034 A, leading to an
accuracy of the sensor of about 0.1 A.
Fig. 3 (a) Fiber optic sensor (FOS) measurement in
range 0-3 A scaled to the corresponding value measured by
current clamp, (b) Fitting curve between FOS and current
clamp measurements.
As in the previous set of measurements, the values
of the FOS are fitted to the electrical clamp values, as
reported in Fig. 3b. In this case too, the response of
the configuration is linear.
4. Conclusions
In this work a novel configuration of a polarimetric
FOS, with excellent performance over a wide range of
electrical current values, has been presented. The
proposed solution does not require
polarization-sensitive components, it is self-stabilized,
and the measurements are independent of the light
source intensity. The experimental results have
confirmed the linear response of the FOS to the
electrical current, as expected from the theory. It has
also been shown that the sensor can measure small
electrical current values, down to 100 mA.
References
[1]. Bohnert, Optical fiber sensors for the electric power
industry, Optics and Lasers in Engineering, Vol. 43,
3-5, 2005, pp. 511–526.
[2]. Farnoosh Rahmatian, Abraham Ortega, Applications
of Optical Current and Voltage Sensors in High-
Voltage Systems, in Proceedings of the IEEE/PES
Transmission & Distribution Conference and
Exposition: Latin America, 2006, pp. 1-4.
[3]. A. Papp and H. Harms, Magnetooptical current
transformer. 1: Principles, Applied Optics, Vol. 19, 22,
1980, pp. 3729–3734.
[4]. K. Bohnert, Fiber-Optic Current Sensor for
Electrowinning of Metals, Journal of Lightwave
Technology, Vol. 25, No. 11, 2007, pp. 3602-3609.
[5]. Yuefeng Qi et al., Novel Fiber Optic Current
Transformer with New Phase Modulation Method,
Photonic Sensors, Vol. 10, 2020, 275–282.
[6]. R. I. Laming et al., Electric current sensors employing
spun highly birefringent optical fibers, Journal of
Lightwave Technology, Vol. 7, 1989, pp. 2084–2094.
(005)
Detection of Trafficable Areas in Outdoors with a Downward
Looking 2D LiDAR
A. Olivas and F. Torres
University of Alicante, Group of Automation, Robotics and Computer Vision, 03690, Alicante, Spain
E-mail: alejandro.olivas@ua.es, fernando.torres@ua.es
Abstract: Detecting and evading obstacles are essential capabilities of mobile robots. In this paper, we use only a 2D Light
Detection and Ranging (LiDAR) sensor to detect the trafficable areas of the terrain by analysing it. The sensor is placed
downward looking in order to detect obstacles of different heights as well as potholes. Firstly, the detected points are segmented
into lines, and a 3D map is generated from them. To classify the lines into ground, obstacles or potholes, different trajectories
of the vehicle are generated, and the lines are classified depending on their height. This process is done dynamically while the
robot is moving, for two reasons. The first is to update the trafficable areas when there are dynamic obstacles. The second is to
analyse the height variation along the trajectory in order to classify sloping terrains correctly. Experiments in simulated
environments show that our method classifies the areas successfully.
Keywords: Mobile robots, LiDAR, Downward looking, Low-cost.
1. Introduction
The perception of the environment is a
fundamental part of robotics. Robots can adapt to
changes and be autonomous because they can detect
and perceive these changes. Perception is even more
important when the robot is in an unstructured
environment, because it has to detect unknown
objects.
Cameras have been used to perceive the
environment and detect obstacles. Nowadays, many
methods use deep learning to detect obstacles. The
obstacles can be classified using a trained
convolutional neural network, so that cars, pedestrians
and other obstacles can be identified [1]. Other
methods deal with obstacle detection in driving
environments using unsupervised learning algorithms
[2].
LiDARs are the other main type of sensor used to
detect obstacles. Mounting a 2D LiDAR horizontally
is the most common way to use it on mobile robots.
It is used to generate 2D maps of the environment
and to locate the robot by comparing the measurements
with the map. Occupancy grid maps [3] discretize
the space into cells, each of which holds a value that
represents the probability of the cell being occupied.
The state of each cell is estimated using a binary Bayes
filter. This is an efficient mapping algorithm assuming
that the pose is known, and it is also robust against
dynamic objects. Used in this way, however, the
sensor can only detect obstacles at the same height as
the sensor or higher.
A 2D LiDAR can instead be placed downward
looking, with the advantage of detecting obstacles of
lower height as well as potholes. Han et al. [4] present
a method to divide the scanned points into segments
by searching for consecutive points with a difference
in height greater than a given threshold. From this
division, obstacles can be detected. Pang et al. [5]
extend this type of method to sloping roads by
calculating the height and the vector of the road
surface, so it can effectively handle changes in the road
condition. Detecting changes in the height of the road
allows the correct identification of the road surface on
ramps and other passable slopes.
In our previous work [6], a 3D map of lines was
built with a downward-looking LiDAR and the lines
were classified using their height. This classification
works in structured indoor environments. However, it
fails in terrains with slopes, and the method has
problems with dynamic obstacles, which were saved
in the 3D map but could not be reconsidered in later
scans. In this paper, we present a classification that
also works in unstructured outdoor environments,
together with a map that can handle dynamic objects.
2. Approach
In this research, the 2D LiDAR sensor is placed
downward looking with some inclination, since the
objective is to detect obstacles or potholes in the
ground. The data are represented in a 3D map whose
global coordinate origin lies on the ground, just below
the sensor at its starting position. To obtain the points
in the global coordinate system, the robot's position is
needed; the localization problem is considered solved.
In the mobile robot used, the localization was
determined by fusing odometry and GPS.
The localization of the robot used in this research
only returns three values: the position of the robot in
x and y, and the heading of the robot θ. The value of z,
which will be called height henceforth, had to be
approximated in order to build the 3D map and
classify the environment correctly. The nearest ground
line to the current position of the robot is used to
estimate it as the mean between the previous height
and the line's height. With this approximation, instead
of using
the line's height directly, the height is smoothed and
there is less error in the estimation. However, there is
still an accumulative error. This can cause problems
when the robot navigates through already explored
areas. To address this, the 3D map, which contains the
detected lines, holds a limited number of lines.
2.1. Line Detection
The point cloud detected in one measurement of
the LiDAR will be segmented into lines, which will be
used to determine which areas are trafficable. A point
pᵢ belongs to a line if it satisfies two constraints:

|pᵢ,z − p_f,z| ≤ δ,   (1)

‖pᵢ − pᵢ₋₁‖ ≤ d,   (2)

where p_f is the first point of the line and pᵢ₋₁ is the
previous point of the scan. Constraint (1) means that
the point has to lie within a threshold δ along the Z
axis around the first point of the line, because large
changes along this axis indicate the presence of an
obstacle or pothole. Constraint (2) bounds the distance
between two consecutive points, because a big gap
between them means that there is no information
about that zone. Segmented lines with fewer than n
points are removed, to filter out spurious short lines.
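As a concrete illustration of constraints (1) and (2) and the n-point filter, the following sketch segments one ordered scan into lines; the input format (an N x 3 array with the height in the last column) is an assumption of this sketch, not a detail given in the paper.

```python
import numpy as np

def segment_lines(points, delta=0.05, d=0.2, n_min=10):
    """Split one ordered LiDAR scan into line segments.

    A point is appended to the current line while it stays within 'delta'
    of the first point's height (constraint (1)) and within 'd' of the
    previous point (constraint (2)); segments with fewer than 'n_min'
    points are discarded. Parameter names follow Table 1; the input format
    (an N x 3 array with z in the last column) is assumed for this sketch.
    """
    lines, current = [], [points[0]]
    for p_prev, p in zip(points[:-1], points[1:]):
        same_height = abs(p[2] - current[0][2]) <= delta      # constraint (1)
        contiguous = np.linalg.norm(p - p_prev) <= d          # constraint (2)
        if same_height and contiguous:
            current.append(p)
        else:
            if len(current) >= n_min:
                lines.append(np.array(current))
            current = [p]
    if len(current) >= n_min:
        lines.append(np.array(current))
    return lines
```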
Then, the lines are refined, because many lines
were segmented only because of the height difference
with respect to their first point. This happens, for
example, on lateral walls. If the normalized vector
from the first point to the last point of a line is equal
to the normalized vector of the next line and to the
vector from the first point of the first line to the last
point of the second line, both lines are grouped into
one line, because they belong to the same straight line.
The result of the refinement can be observed in Fig. 1.
In this refinement, a slight error γ is allowed. The
detected lines are added to the 3D map. As this map
has a limited number of lines, the oldest lines are
erased. Besides, before adding a new line, its nearest
line in the map is searched for and, if it is close
enough, the old one is erased because they are
considered the same line. Therefore, the map does not
store the same line more than once.
Fig. 1. Since the normalized vectors are sufficiently
similar, the two lines have been joined into one.
2.2. Line Classification
The lines are classified dynamically while the
robot is moving, determining where the robot can go
and where it cannot. To do this, some possible
trajectories are first generated in order to analyse
whether there are obstacles or potholes. In this
research a mobile robot with Ackermann geometry is
used, so the generated trajectories fit this type of
vehicle. For each point of the trajectory, the lines of
the map are analysed, determining the distance from
the point to a line by solving equation (3):

p + α·n = f + β·v,   (3)

where p is the point of the trajectory, n is the
normalized vector obtained from the trajectory's
direction, f is the first point of the line, v is the vector
from the first point to the last point of the line, and α
and β are the unknowns. The value of α is the distance
between the line and the robot, and if β is between 0
and 1 the trajectory crosses the line. If the vectors n
and v are almost parallel, the normal of n is used to
get a more realistic distance.
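A minimal numerical sketch of how equation (3) can be solved for α and β, restricted to the horizontal plane (an assumption of this sketch, not a statement from the paper):

```python
import numpy as np

def line_trajectory_intersection(p, n, f, v, eps=1e-6):
    """Solve p + alpha * n = f + beta * v for (alpha, beta).

    p: trajectory point, n: normalized trajectory direction, f: first point
    of the map line, v: vector from the line's first to its last point (all
    2D here; working in the horizontal plane is an assumption of the sketch).
    Returns None when the directions are almost parallel, the case in which
    the method falls back to the normal of n for the distance computation.
    """
    A = np.column_stack((n, -v))             # [n, -v] [alpha, beta]^T = f - p
    if abs(np.linalg.det(A)) < eps:
        return None
    alpha, beta = np.linalg.solve(A, f - p)
    return alpha, beta                        # alpha: distance; 0 <= beta <= 1: crossing

# Example: trajectory heading along +x, line crossing the path 2 m ahead.
print(line_trajectory_intersection(
    np.array([0.0, 0.0]), np.array([1.0, 0.0]),
    np.array([2.0, -0.5]), np.array([0.0, 1.0])))   # (2.0, 0.5)
```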
Using the value of β, the height of the line at that
location can be obtained. If this height is within a
threshold ζ around the height of the robot, the line is
considered ground; otherwise it is considered an
obstacle or a pothole. Ground lines are taken into
account only if β is between 0 and 1. Obstacle lines,
however, are lengthened by a distance equal to the
width of the robot, so the allowed range of β becomes
from −a to 1 + a, where a depends on the magnitude
of v.
The terrain may have slopes, so the z position is
updated during the trajectory using the height of the
nearest ground line, estimating the slope of the
trajectory and analysing the lines correctly.
A 2D occupancy map is constructed using the
classified lines. A region is considered untrafficable
if there is an obstacle, but also if there are not enough
ground lines, since in that case there may be a pothole.
With this map, the robot can navigate avoiding
obstacles and potholes and through environments with
slopes such as ramps. As the map is updated while
navigating, dynamic obstacles are taken into account.
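The following toy sketch illustrates one possible way to update such an occupancy map from classified lines; the data structures and the min_ground rule are assumptions made for the example, not the authors' implementation.

```python
def update_occupancy(grid, cell_of, lines, robot_z, zeta=0.2, min_ground=1):
    """Toy occupancy-map update from classified map lines.

    grid: dict mapping a cell index to 'trafficable'/'untrafficable';
    cell_of: function mapping an (x, y) point to a cell index;
    lines: iterable of N x 3 point arrays from the 3D line map.
    A line is treated as ground if its mean height is within 'zeta' of the
    robot height, otherwise as an obstacle or pothole. Cells touched by an
    obstacle line, or with fewer than 'min_ground' ground lines, are marked
    untrafficable. This is an assumed simplification, not the authors' code.
    """
    ground_hits = {}
    for line in lines:
        is_ground = abs(line[:, 2].mean() - robot_z) <= zeta
        for x, y, _ in line:
            cell = cell_of((x, y))
            if is_ground:
                ground_hits[cell] = ground_hits.get(cell, 0) + 1
            else:
                grid[cell] = 'untrafficable'
    for cell, hits in ground_hits.items():
        if grid.get(cell) != 'untrafficable':
            grid[cell] = 'trafficable' if hits >= min_ground else 'untrafficable'
    return grid
```

For the 0.5 m grid used in the experiments, cell_of could be, for instance, lambda xy: (int(xy[0] // 0.5), int(xy[1] // 0.5)).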
3. Experiments
The method was tested in the Gazebo simulator, in
environments with ramps and obstacles. The
parameters of the line segmentation and classification
were first tuned manually; they are shown in Table 1.
For the occupancy map, we set a resolution of 0.5 m,
which means that each cell covers an area of
0.5 m × 0.5 m. This resolution was chosen considering
the size of our robot, which is 1.08 m long and 1 m
wide, and after checking that a finer resolution did not
notably improve the method despite the increase in
memory use. The maximum number of lines in the 3D
map was set to 200.
Firstly, we test our method on a ramp with an
inclination of 10° and a pothole, shown in Fig. 2. The
map generated after exploring it is shown in Fig. 3.
The limits of the ramp are detected correctly except in
a region at the top left of the image, because the robot
did not get close enough there to detect that it is
untrafficable. The pothole is also detected correctly,
and there are untrafficable cells near it because, when
the robot is turning, the sensor has blind spots inside
the radius of curvature.
Table 1. Parameters for line segmentation and classification.

Parameter   Description                                                               Value
δ           Threshold in the Z axis to consider a point in a line.                    0.05
d           Maximum distance between two consecutive points of a line.                0.2
n           Minimum number of points in a line.                                       10
γ           Maximum allowed difference during the line refinement.                    0.11
ζ           Threshold in the Z axis around the height to consider a line as ground.   0.2
Fig. 2. Simulated sloping ground in Gazebo.
Fig. 3. Occupancy map of the ramp. White cells are
trafficable, black are not and grey are unexplored.
Then, the method has been tested in an environment
with a dynamic obstacle: a walking pedestrian. Fig. 4
shows this simulation and the pedestrian's path. The
occupancy maps at two different moments are shown
in Fig. 5, showing that our method can handle dynamic
objects, as it updates the map when the robot goes
through an explored area.
Fig. 4. Simulation of a dynamic object (pedestrian).
Fig. 5. Occupancy maps at two different moments. The
pedestrian has moved to another position and some cells of
its previous position have been re-classified as trafficable.
4. Conclusions and Future Work
In this paper, we present a novel method to classify
the environment into trafficable and untrafficable
areas. It detects both obstacles and potholes in
unstructured environments, and it is robust to slopes
in the terrain. This is an advantage over past methods,
which have problems with sloping roads, do not take
potholes into account or use more expensive sensors.
The method is also robust against dynamic obstacles,
because it reconsiders the occupancy map when the
robot navigates through explored areas.
This approach uses a 2D LiDAR and other sensors
for the localization of the robot, such as encoders and
GPS. However, the robot has no information for
moving backwards, which is necessary for safety
reasons. Nevertheless, adding such a sensor would
still be more economical than using a 3D LiDAR
sensor.
In future work, we will test the method on the real
robot to analyse the noise in the perception and
localization and how it impacts our approach.
References
[1]. A. Dairi et al, Unsupervised obstacle detection in
driving environments using deep-learning-based
stereovision, Robotics and Autonomous Systems,
Vol. 100, 2018, pp. 287-301.
[2]. G. Prabhakar et al, Obstacle detection and
classification using deep learning for tracking in high-
speed autonomous driving, in Proceedings of the 2017
IEEE region 10 symposium (TENSYMP), 2017,
pp. 1-6.
[3]. H. Moravec and A. E. Elfes, High resolution maps
from wide angle sonar, in Proceedings of the IEEE
International Conference on Robotics and
Automation, 1985, pp. 116 - 121.
[4]. J. Han et al, Enhanced road boundary and obstacle
detection using a downward-looking LIDAR sensor,
IEEE Transactions on Vehicular Technology, Vol. 61,
2012, pp. 971-985.
[5]. C. Pang et al, Adaptive Obstacle Detection for Mobile
Robots in Urban Environments Using Downward-
Looking 2D LiDAR, Sensors, Vol. 18, 6, 2018, 1749.
[6]. A. Olivas González and F. Torres Medina, Detection
and Classification of Obstacles Using a 2D LiDAR
Sensor, in Proceedings of the 5th International
Conference on Advances in Sensors, Actuators,
Metering and Sensing (ALLSENSORS’ 2020), 2020,
pp. 63-69.
(006)
Hardware Acceleration of Pulse Analysis using FPGAs in MicroTCA
C. Gonzalez 1, M. Ruiz 1, A. Carpeño 1, A. Pinas 1, D. Cano-Ott 2, J. Plaza 2,
T. Martinez 2 and D. Villamarin 2
1 Universidad Politécnica de Madrid, Grupo de Instrumentación y Acústica Aplicada, Spain
2 Ciemat, Spain
Tel.: +34 91 0678950, fax: +34 91 0678950
E-mail: c.gonzalezb@alumnos.upm.es
Summary: Real-time signal processing in large-scale scientific experiments requires specialized hardware with high
computational capacity and speed. This paper describes the implementation of a digital pulse shape analysis (DPSA) system
on an FPGA. The signals are acquired at 1 GS/s from a BC501A liquid scintillator. The algorithm consists of an FIR filter and
a constant fraction discriminator (CFD) function to detect the pulses and estimate the time stamp at which each pulse occurs,
a peak detection function to determine the number of pulses per signal and pileups, and finally an energy calculation function
to estimate the pulse energy. The proposed hardware architecture uses the JESD204B specification for high-speed ADCs on a
Xilinx FPGA device. The practical implementation is done with a MicroTCA system, including a NAMC-ZYNQ-FMC
board with a Xilinx ZYNQ UltraScale+ MPSoC. The JESD204B physical and data link layers have been developed in Hardware
Description Language (HDL), while Xilinx High-Level Synthesis (HLS) has been used for the transport and
application layers and the processing blocks. This architecture achieves an analysis time of 53 µs per signal with an FPGA
resource utilization of about 50 %.
Keywords: DPSA, JESD204B, HLS, Hardware acceleration.
1. Introduction
In particle physics and radiation detection,
BC501A liquid scintillators are widely used as
detectors to measure the energy and characteristics of
particles and radiation sources [1, 2]. By harnessing
the power of field-programmable gate arrays (FPGAs)
and hardware acceleration techniques, digital pulse
shape analysis (DPSA) [3] can be performed with
reduced latencies, potentially enabling real-time
applications. However, the feasibility of real-time
analysis is highly dependent on the available FPGA
resources and the complexity of the algorithm used.
This paper proposes a hardware implementation
for DPSA of signals from a BC501A liquid scintillator
digitized at a rate of 1 GS/s with 14-bit resolution,
operating as a real-time analysis system.
The proposed solution is based on a Micro
Telecommunications Computing Architecture
(MicroTCA) chassis [4] that provides physical,
mechanical, electrical and thermal support for a
NAMC-ZYNQ-FMC Advanced Mezzanine Card
(AMC) [5]. The NAMC-ZYNQ-FMC is a SoC-based
AMC whose main component is a Xilinx ZYNQ
UltraScale+ MPSoC [6]. In the programmable logic
(PL) area of the SoC, the JESD204B standard [7] is
implemented to interface the digitizer and the DPSA
application.
The JESD204 specification defines a serial
interface that connects high-speed converters to logic
devices such as FPGAs and ASICs. Analog Devices
provides an open-source IP interface framework
distributed under the GPL2 license to implement the
JESD204 interface and the software to configure all
the hardware elements [8]. The letter B in JESD204B
refers to the second revision of the standard. The
distribution of the hardware elements, operating
system and DPSA application is shown in Fig. 1.
Fig. 1. Hardware elements, OS and DPSA application stack.
As described by Guerrero et al. in [3], the goal of
the DPSA is to extract the parameters of physical
relevance. In the case of the BC501A detector, the
parameters are the time, amplitude, integrals over
various time intervals, and the type of the incident
particle. The DPSA application layer, located at the
top of the stack, fulfills this objective. Within the
Processing System (PS) of the Xilinx ZYNQ
Ultrascale+MP SoC, a Linux Embedded System
enables the execution of the DPSA application, which
coordinates the execution of the kernels and reads the
analysis results. The SoC PL area contains the
hardware functions (kernels) that perform the signal
analysis and achieve the goal of the DPSA.
2. Methodology
2.1. JESD204B Implementation
The JESD204B specification defines four layers
based on functionality (Harris and Fan provide details
in [9] and [10], respectively). The JESD204B
implementation on the Xilinx Ultrascale+ MPSoC
consists of the FPGA design implemented in the PL
using Hardware Description Language (HDL) for the
physical and data link layers and High-Level Synthesis
(HLS) for the transport and application layers. An
embedded Linux distribution running on the PS is used
to configure the peripherals implemented in the PL and
support the host software.
The solution avoids the use of a physical ADC to
implement and validate a processing algorithm in the
FPGA. A JESD204B interface emulates a DAC that
generates 1 GS/s, 14-bit resolution signals.
output of this interface is connected to another
JESD204B interface that emulates the ADC (the
JESD204B of the DAC is connected to the JESD204B
of the ADC with an external loopback). The signals
reproduced by the emulated DAC are stored in a file in
the host system (coming from real captures of the
experiment). These signals are sent from the host to
the global memory accessible by the FPGA PL. Fig. 2
shows the DAC and ADC interfaces of the JESD204B
connected by an external loopback, the implemented
kernels on the PL, and the interactions of the PS and
PL with the global memory.
The PL clock speed is 200 MHz, while the
acquisition rate is 1 GS/s. This means that 5 samples
are acquired per system clock cycle. To bridge this
difference, the JESD204B is implemented to send one
128-bit link-layer data frame per cycle to the DPSA.
The data frame is handled by the transport and
application layers of the JESD204B (RX kernel) so
that the DPSA receives a constant sample stream per
clock cycle. Fig. 3 shows the data stream that couples
the difference between the data acquisition and
processing clocks.
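A deliberately simplified, software-only view of this rate coupling (the real design does it in the JESD204B transport layer on the PL; the array shapes below are illustrative):

```python
import numpy as np

# Toy, software-only view of the rate coupling: the ADC delivers 1 GS/s,
# the PL runs at 200 MHz, so every PL clock cycle carries 5 new samples.
ADC_RATE = 1_000_000_000                   # samples per second
PL_CLOCK = 200_000_000                     # PL clock frequency in Hz
SAMPLES_PER_CYCLE = ADC_RATE // PL_CLOCK   # = 5

samples = np.random.randint(0, 2 ** 14, size=1000, dtype=np.uint16)  # 14-bit data
# Group the raw stream so that each row is the payload of one PL clock cycle;
# the transport layer then feeds these groups to the DPSA as a steady stream.
frames = samples.reshape(-1, SAMPLES_PER_CYCLE)
print(frames.shape)                        # (200, 5)
```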
Fig. 2. ZYNQ-Ultrascale+ implementation. The PL implements the DPSA using the streaming data read with the
JESD204B interface. An external loopback connects the signal generation with the data acquisition.
Fig. 3. Stream from the 1 GS/s ADC to the 200 MHz PL.
2.2. DPSA Application
To achieve the goal described by Guerrero et al. in [3],
a kernel called DPSA with three main functions is
implemented in the PL area of the Xilinx UltraScale+
MPSoC using HLS. These functions are designed to
work in a pipeline.
In the first function (Compute RC & CFD process
in Fig. 4), the signals are low-pass filtered by a 20-tap
FIR filter. This significantly increases the latency (20
operations must be performed per sample), so the filter
is not applied to the whole signal but only to a region
of interest that is found before filtering. The region of
interest is detected by thresholding and starts n
samples before the threshold is triggered (starting
point). The filter parameters and the value of n are set
in the hardware configuration. If no other pulse is
detected, the end point of the region of interest is equal
to the starting point plus a pre-set time interval used
for the energy calculation. If another pulse is detected,
the filter is applied to the entire signal from the
starting point.
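The sketch below mimics this threshold-gated filtering in plain Python for clarity; it is not the HLS kernel, and the parameter names (threshold, n_before, window) are placeholders for the values set in the hardware configuration.

```python
import numpy as np

def filter_region_of_interest(signal, coeffs, threshold, n_before, window):
    """Threshold-gated FIR filtering of a digitized pulse.

    The low-pass FIR filter is applied only to a region of interest that
    starts 'n_before' samples before the first threshold crossing and spans
    'window' samples. Parameter names and values are placeholders for the
    quantities set in the hardware configuration. Returns the filtered
    region and its starting index, or None if no pulse is found.
    """
    above = np.flatnonzero(signal > threshold)
    if above.size == 0:
        return None
    start = max(int(above[0]) - n_before, 0)
    stop = min(start + window, signal.size)
    roi = signal[start:stop]
    filtered = np.convolve(roi, coeffs, mode='same')   # FIR applied to the ROI only
    return filtered, start

# Example on a synthetic pulse with a 20-tap moving-average stand-in filter.
t = np.arange(2000)
pulse = np.exp(-((t - 600.0) ** 2) / 200.0)
roi, start = filter_region_of_interest(pulse, np.ones(20) / 20.0,
                                        threshold=0.5, n_before=50, window=500)
```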
Fig. 4. DPSA application flowchart.
A constant fraction discriminator (CFD) [11] is
applied simultaneously with the FIR filter and only in
the region of interest of the signals. Thus, at the end of
this function, a filtered signal and a CFD signal are
streamed to the next function.
The second function detects the exact point where
the peak occurs (Peak detection process in Fig. 4). This
is the point where the filtered signal crosses the
threshold and the CFD signal crosses the baseline. For
the system to be considered reliable, it is necessary to
interpolate to obtain the exact point where the CFD
signal crosses the baseline. This point is the time at
which the pulse occurs and must be stored as one of
the results of the analysis. The function also
determines the number of peaks per signal and whether
there are pileups.
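A software illustration of the interpolated baseline crossing described above (again an approximation of the HLS kernel, with assumed signal conventions: a positive CFD lobe followed by a negative one):

```python
import numpy as np

def cfd_zero_crossing(filtered, cfd, threshold):
    """Interpolated time stamp of the CFD baseline crossing.

    A pulse is accepted where the filtered signal is above 'threshold';
    within that region the sign change of the CFD signal is located and
    refined by linear interpolation between the two surrounding samples,
    giving a sub-sample time stamp in sample units. Signal conventions
    (positive lobe followed by a negative one) are assumed for the sketch.
    """
    gate = filtered > threshold
    sign_change = (cfd[:-1] > 0) & (cfd[1:] <= 0) & gate[:-1]
    idx = np.flatnonzero(sign_change)
    if idx.size == 0:
        return None
    k = int(idx[0])
    frac = cfd[k] / (cfd[k] - cfd[k + 1])     # linear interpolation weight
    return k + frac
```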
Finally, in a third function, the filtered signal is
integrated over several time intervals and an array of
energies is obtained (Energies Calculation process in
Fig. 4).
In the PS area of the Xilinx UltraScale+ MPSoC,
the host software is implemented in C++,
cross-compiled for an ARM architecture, and runs on
an embedded Linux deployed with PetaLinux. The
functions of the host software are to: read signals from
a file and write them to a buffer that the PL can access;
coordinate the execution of the application layer
functions (kernels) on the PL; and finally, read the
results stored by the kernels in global memory (GM).
Fig. 5. A. Input signal and B. Filtered signal (purple)
and CFD signal (light green).
2.3. Results Analysis
In order to determine the reliability and to obtain
the performance characteristics of the system, an 8 Gb
database of signals was analyzed, showing results
similar to those of an analysis implemented in C++,
which is the starting point of this project. This is
discussed in more detail in the Discussion section.
The analysis of execution times and FPGA
resources used was performed with the Vivado tools.
3. Results
The result is a high-performance DPSA system
that can be connected directly to a 1 GS/s digitizer via
its JESD204B interface. Fig. 6 shows two screenshots
taken from the Vivado analysis tools. Fig. 6 A shows
the percentage of FPGA resource utilization of the
application layer and Fig. 6 B the execution time of
53.476 µs for analyzing a signal.
Fig. 6. A. Percentage of FPGA resource utilization.
B. Application layer execution time.
4. Discussion

In this paper, an MTCA-based high-speed data
acquisition and processing system, including an AMC
with an FPGA implementing a JESD204B interface,
has been implemented. The pulse analysis application
of a scintillator has been developed using hardware
acceleration techniques based on HLS. The results
show the feasibility of introducing this hardware
implementation in data acquisition systems to obtain
real-time results under certain experimental
conditions. It is important to note that the baseline
calculation is not implemented yet and will be added
in a future revision. In order to compare the original
DPSA implemented in C++ with this new
implementation, the baseline used is the one available
in the dataset obtained from real experiments with an
AD14 digitizer (this DAQ device includes specific
firmware providing the baseline).
5. Conclusions
A high-performance embedded system meeting the
DPSA goals has been implemented on a Xilinx ZYNQ
UltraScale+ MPSoC supported by a MicroTCA
architecture. As shown in Fig. 6, a typical analysis
time is about 53 µs and the FPGA resource utilization
is about 50 %.
Using the JESD204B interface to emulate the
behavior of signal generation and data acquisition in
conditions similar to the real experiment allows faster
algorithm validation. The results in terms of execution
time and latency allow us to explore the feasibility of
integrating the solution on real experimental
platforms, using tools that allow us to adapt the
firmware of the acquisition devices.
The DPSA application is developed in HLS,
which significantly reduces the development time.
References
[1]. Jianguo Qin, Caifeng Lai, Bangjiao Ye, Rong Liu,
Xinwei Zhang, Li Jiang, Characterizations of BC501A
and BC537 liquid scintillator detectors, Applied
Radiation and Isotopes, Vol. 104, 2015, pp. 15-24.
[2]. F. Arneodo, P. Benetti, A. Bettini, et al., Calibration of
BC501A liquid scintillator cells with monochromatic
neutron beams, Nuclear Instruments and Methods in
Physics Research Section A: Accelerators,
Spectrometers, Detectors and Associated Equipment,
Vol. 418, Issues 2-3, 1998, pp. 285-299.
[3]. C. Guerrero, D. Cano-Ott, M. Fernández-Ordoñez,
E. González-Romero, T. Martínez, D. Villamarín,
Analysis of the BC501A neutron detector signals
using the true pulse shape, Nuclear Instruments and
Methods in Physics Research Section A: Accelerators,
Spectrometers, Detectors and Associated Equipment,
Vol. 597, Issues 2-3, 2008, pp. 212-218.
[4]. Micro Telecommunications Computing Architecture
Short Form Specification, PICMG, 2006.
[5]. NAT, NAT_AMC_ZYNQ_FMC Technical Reference
Manual.
[6]. Xilinx, ZYNQ Ultrascale+ MPSoC, [Online],
https://www.xilinx.com/products/silicondevices/
soc/zynq-ultrascalempsoc.html, Accessed 31 May
2023.
[7]. J. Harris, What is JESD204 and Why Should We Pay
Attention to It, Analog Devices, 2019.
[8]. JESD204 Interface Framework, Analog Devices,
https://www.analog.com/en/designcenter/
evaluation-hardware-and-software/jesd204-interface-
framework.html
[9]. J. Harris, Understanding layers in the JESD204B
specification: A high speed ADC perspective, Analog
Devices, 2017.
[10]. H. Fan, Quickly Implement JESD204B on a Xilinx
FPGA, Analog Dialogue, 49-02, 2015.
[11]. M. Beuzekom, Identifying fast hadrons with silicon
detectors, Thesis, University of Groningen, 2006.
(007)
Advanced Polymer Materials for Real-time Sensing
of Inflammation and Infection
M. Hrubý, H. Zhukouskaya and E. Tomšík
Institute of Macromolecular Chemistry CAS, Heyrovsky Sq. 2, 162 06 Prague 6, Czech Republic
Tel.: + 420 608 559 641 (mobile), +420 296 809 130 (office) fax: +420 296 809 410
E-mail: mhruby@centrum.cz
Summary: We describe new polymeric materials for use in an electrochemical multisensor, which will provide a
potentiometric response upon contact with selected analytes, such as pH changes, presence of reactive oxygen species, free
iron ions, etc., selectively detecting and distinguishing multiple biomarkers of bacterial and sterile pathologies in real time.
The sensor electrodes also include a non-biofouling layer to minimize interferences. The target application of the new materials
is the integrated potentiometric biosensor for early indication of the presence and the type of inflammation. They are to be
used in devices for in vitro sensing of inflammation-related analytes in body fluids, such as synovial liquid, directly on-site in
the operating room during the surgery. Alternatively, electrodes in such a sensor might be miniaturized and integrated into an
implant with a wireless data transfer device with an external readout to allow in situ sensing without the necessity of invasive
intervention.
Keywords: Polymer, Biosensor, Electrode, PEDOT, Polyoxazoline.
1. Introduction
Total hip and total knee arthroplasty are orthopedic
surgeries whose primary purpose is to restore joint
function in persons affected by osteoarthritis. The
growing life expectancy of the human population also
increases the number of patients in need of such
surgery, which can significantly improve one’s quality
of life.
Unfortunately, orthopedic implants are highly
susceptible to peri-implant sterile inflammation or
microbial infections (prosthetic joint infections, PJI).
These complications, which manifest as pain,
erythema, swelling and discharge from the wound site,
require long hospitalizations and can lead to
osteomyelitis, implant failure, sepsis, multiorgan
dysfunction, amputation or even death.
Early detection of sterile/bacterial inflammation
and its type is of key importance for successful
therapeutic intervention. We describe new polymeric
materials for use in an electrochemical biosensor
simultaneously continuously measuring multiple
inflammation/infection biomarkers. It will provide a
set of potentiometric responses upon contact with
selected analytes, such as pH changes, presence of
reactive oxygen species, free iron ions etc. selectively
detecting and distinguishing these biomarkers of
bacterial and sterile infection in real time.
Such a multisensor is to be used in devices for in vitro
sensing of inflammation-related analytes in body
fluids, such as synovial liquid, directly on-site in the
operating room during the surgery. Alternatively,
electrodes in such a sensor might be miniaturized and
directly integrated into an implant with a wireless data
transfer device with an external readout to allow in situ
sensing without the necessity of invasive intervention.
2. Results and Discussion
2.1. The pH-Responsive Sensoric Layer
The pH significantly decreases in the inflamed
microenvironment and especially in the presence of
bacteria that additionally lower the pH value by their
metabolism [1].
Highly hydrophobic pH-sensing perfluorinated
polyaniline thin films with a water contact angle of ca.
140° and low internal resistance were prepared
through electrochemical polymerization [2]. The
combination of water repellency and electron
conductivity makes superhydrophobic perfluorinated
polyaniline a unique polymer that is suitable as a solid
contact in ion-selective electrodes for in situ
monitoring of pH changes during early stages of
inflammation and septic shock. The superhydrophobic
properties should suppress interactions with interfering
salts and proteins, and the sensitivity towards protons
can be monitored by measuring the phase boundary
potential, which depends on the H+ concentration. The
potentiometric measurements demonstrate a fast
response with a slope of 44.4 ± 0.2 mV per unit pH.
The presence of interfering ions and/or human serum
albumin does not have any significant effect on the
performance of the perfluorinated film. Moreover, it is
demonstrated that the response of the perfluorinated
film is reversible within the biomedically relevant pH
range from 4.0 to 8.5, and stable over time.
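As a small worked example of how such a calibrated potentiometric response can be read out (a sketch under the assumption of a simple one-point calibration, which the paper does not describe):

```python
def ph_from_potential(e_mv, e_ref_mv, ph_ref, slope_mv_per_ph=44.4):
    """Convert a measured phase-boundary potential to a pH estimate.

    Uses the linear potentiometric response reported above (44.4 mV per
    pH unit); e_ref_mv is the potential measured at a known calibration
    pH 'ph_ref'. The one-point calibration is an assumption of this
    sketch, not a procedure described in the paper.
    """
    return ph_ref + (e_ref_mv - e_mv) / slope_mv_per_ph

# Example: calibrated at pH 7.0, a 66.6 mV drop in potential reads as pH 8.5.
print(ph_from_potential(e_mv=53.4, e_ref_mv=120.0, ph_ref=7.0))
```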
2.2. The Reactive Oxygen Species – Responsive
Sensoric Layer
Pathologically high concentrations of reactive
oxygen species (ROS) in inflamed areas and in their
vicinity are the key biomarkers of inflammation.
Among the ROS, hydrogen peroxide (H2O2) is an
eminent example. Its physiological concentrations in
biological systems range from 1 to several hundred
nM, whereas concentrations exceeding this range are
associated with cell damage. Typical concentrations of
H2O2 in an inflamed tissue are orders of magnitude
higher.
We built a robust selective potentiometric sensor of
reactive oxygen species (ROS) (Fig. 1) [3].
Fig. 1. Scheme of the ROS-sensing layer [3].
The sensor consists of a conductive polymer layer
based on polythiophene with an incorporated
porphyrin-metal complex that potentiometrically
detects the presence of ROS. This sensor is covalently
coated with a nonbiofouling layer of poly(2-methyl-2-
oxazoline), which works as a biocompatibilizer but
mainly prevents the sorption of proteins and other
biomacromolecules naturally occurring in organisms,
which could interfere with the ROS signal. We have
shown that our potentiometric sensor shows a rapid
response to hydrogen peroxide, does not experience
interference with bovine serum albumin as a model
serum protein when sensing ROS, is able to fully
reversibly detect ROS with a linear response within a
very wide range of biologically relevant concentrations
and, most importantly, is able to distinguish between
hydrogen peroxide and hypochlorite. We also
performed a head-to-head comparison of two
positional isomers of thienylated porphyrine for sensor
applications and four different coordinated metals (Cu,
Fe, Co and Mn, with Cu and especially Fe shown to be
the most appropriate).
2.3. The Free Iron – Responsive Sensoric Layer
Very often the availability of iron is the limiting
step in pathogenic bacterial infections. To suppress
bacterial growth, the organism actively and strongly
depletes the local availability of free iron when an
infection occurs.
We constructed a sensor for the determination of
Fe2+ and/or Fe3+ ions [4] that consists of a polyaniline
layer as an ion-to-electron transducer; on top of it,
chelating molecules are deposited (which can
selectively chelate specific ions) and protected with a
non-biofouling poly(2-methyl-2-oxazoline)s layer. We
have shown that our potentiometric sensing layers
show a rapid response to the presence of Fe2+ or Fe3+
ions, do not experience interference with other ions
(such as Cu2+), and work in a biological environment
in the presence of bovine serum albumin (as a model
serum protein). The sensing layers detect free iron ions
in the concentration range from 5 nM to 50 µM.
3. Conclusions
Our general “big picture” concept of PJI
(periprosthetic joint infection) detection is based on
several, ultimately miniaturized, sensors; the partial
sensors presented here enable their implementation into
such a multisensor with a shared reference electrode.
More detailed information
can be found in the cited references [1-4].
Acknowledgements
The authors acknowledge financial support from
the Czech Health Research Council (grant # NU20–
06–00424) and from Czech Science Foundation (M.H.,
grant # 21–01090S).
References
[1]. E. Tomsik, K. Gunar, T. Krunclova, I. Ivanko,
J. Trousil, J. Fojt, V. Hybasek, M. Daniel, J. Sepitka,
T. Judl, D. Jahoda, M. Hruby, Development of Smart Sensing
Film with Nonbiofouling Properties for Potentiometric
Detection of Local pH Changes Caused by Bacterial
and Yeast Infections Around Orthopedic Implants,
Advanced Materials Interfaces, Vol. 10, Issue 5, 2023,
Article # 2201878, pp. 1-10.
[2]. E. Tomsik, P. Dallas, I. Sedenkova, J. Svoboda,
M. Hruby. Electrochemical deposition of highly
hydrophobic perfluorinated polyaniline film for
biosensor applications, RSC Advances, Vol. 11,
Issue 31, 2021, pp. 18852-18859.
[3]. T. Urbanek, I. Ivanko, J. Svoboda, E. Tomsik,
M. Hruby, Selective potentiometric detection of
reactive oxygen species (ROS) in biologically relevant
concentrations by a modified metalized
polyporphyrine sensing layer coated with
nonbiofouling poly (2-alkyl-2oxazoline)s, Sensors &
Actuators: B. Chemical, Vol. 363, 2022, Article #
131827, pp. 1-12.
[4]. R. Ismail, I. Sedenkova, Z. Cernochova,
I. Romanenko, O. Pop-Georgievski, M. Hruby,
E. Tomšík, Potentiometric Performance of Ion-
Selective Electrodes Based on Polyaniline and
Chelating Agents: Detection of Fe2+ or Fe3+ Ions,
Biosensors, Vol. 12, 2022, Article # 446, pp. 1-14.
(010)
Software Defined Radio Based Concept for Extending Orthogonal
Multi-tone Time Domain Reflectometry Method
to Analyze Electrical Power Grids
A. Faschingbauer
Deggendorf Institute of Technology, Technology Campus Freyung
Grafenauer Str. 22, 94078 Freyung, Germany
Tel.: + 49855191764-22
E-mail: alexander.faschingbauer@th-deg.de
Summary: This paper presents a concept for extending the Orthogonal Multi-tone Time Domain Reflectometry (OMTDR)
method to a Multi-Domain Reflectometry System (MDRS). OMTDR is implemented on top of Orthogonal Frequency-Division
Multiplexing (OFDM), which enables parallel analysis of the meta data while data communication is running. The proposed
MDRS generates/extracts meta data and analyses it using a multi-modal analysis approach. The goal is to evaluate an existing
OMTDR approach, extend and apply it to a multi-branched complex power line network for detecting weak impedance
changes.
Keywords: Reflectometry, Soft fault, Weak impedance changes.
1. Introduction
Electrical reflectometry methods are used in many
areas, such as detecting defects or arcs in cables,
detecting weak connections, and, indirectly, structural
health monitoring. The literature shows that the accuracy
of electrical reflectometry methods is steadily increasing,
resulting in better defect localization (within a few
centimeters) and defect classification (e.g., weak
impedance changes) [1]. OMTDR is one of the most
promising approaches used for this purpose. It can be
applied to communication lines (with useful signals)
and power lines (DC or AC). If the System Under Test
(SUT) already uses OFDM modulation, these signals
can be directly used to analyse the underlying physical
layer (e.g., copper wires). This makes OMTDR quite
flexible and universal for different use cases. Several
cable types and systems have been investigated.
Unfortunately, most of these studies are at laboratory
scale and, with few exceptions, not fully usable for
real-world problems. The presented concept addresses
this problem: the OMTDR method will be extended
using meta data generated from the data packets and
all of their representations.
2. Related Work
Reflectometry is based on injecting test signals into
a SUT and receiving a kind of echo back from the same
system. Many electrical reflectometry methods exist;
Table 1 lists only some of them, especially those for
detecting soft faults. All methods can be sorted into
two categories: detection of hard faults in wiring
systems (short or open circuit) and detection of soft
faults, i.e. small or weak impedance changes caused
by damaged insulation, bent wires, moisture, or
rust on contacts.
2.1. Coupling Methods
Electrical reflectometry requires an electrical
signal coupling which can be realized differently.
Table 2 gives a short overview of the coupling types
found in the literature.
Table 1. List of electrical reflectometry methods.

Method    Description                                       Reference(s)
BTDR      Binary Time Domain Reflectometry                  [2] [3]
(C)CTDR   (Continuous) Chaos Time Domain Reflectometry      [3] [4]
CTFDR     Cluster Time Frequency Domain Reflectometry       [5]
JTFDR     Joint Time Frequency Domain Reflectometry         [6]
OMTDR     Orthogonal Multi-tone Time Domain Reflectometry   [3]
SSTDR     Spread Spectrum Time Domain Reflectometry         [3] [7]
TDR       Time Domain Reflectometry                         [8] [9]
TFDR      Time Frequency Domain Reflectometry               [3]
TRR       Time Reversal Reflectometry                       [10]
Table 2. Coupling Methods.

Signal Coupling Method            Reference(s)
Direct                            [3] [4] [5] [6] [8] [10]
Direct, Tuneable Resistors        [2] [3]
Direct, Capacitive, Inductive     [3] [7]
Inductive                         [3]
Directional Coupling              [9] [11]
Direct coupling means that the in- and outputs of
the reflectometer have an ohmic connection to the
Cable under Test (CUT). In most cases the type of
CUT is a coaxial or twisted pair cable. Tuneable
resistors are used in combination with the Binary Time
Domain Reflectometry (BTDR) method. Some
reflectometers use a directional bridge which separates
the injected signal from the reflected signal with an
attenuation factor. Also, capacitive or inductive
coupling methods are used. These are more flexible,
because they can be used with a wider range of
different cable types.
2.2. Injected Signal
A very simple implementation of Time Domain
Reflectometry (TDR) injects a voltage pulse with a
specific duration [10] or a 1 ns Gaussian pulse [5].
Other injected signals are also possible, such as simple
binary [2, 4] or pseudo-random signals [3]. Some
articles describe the signal form in detail, e.g., a
48 MHz square-wave pseudo-random signal [3, 7].
Some approaches use a Vector Network Analyzer
(VNA) where the injected signal depends on the device
itself [8].
2.3. Cable Types and Network Complexity
The literature shows research using different cable
types, ranging from wiring systems for data
communication (twisted pair, coaxial cables) [9] to
power lines [3], which is a broad variety.
Network complexity is another important aspect when
analyzing wiring networks. It ranges from simple
end-to-end connections to very complex multi-branched
networks which extend over a wide area [3].
2.4. Coexistence of Useful Signals
Some electrical reflectometry methods are
applicable in systems where useful signals (ac/dc
power, communication signals) are present. Table 3
gives a short overview of whether each method can
handle useful signals or not.
Table 3. Methods with and without coexistence of useful signals.

Method    Useful Signal    Reference(s)
CTFDR     No               [5]
OMTDR     Yes              [3]
SSTDR     Yes              [3] [7]
TFDR      Yes              [3]
TRR       No               [10]
3. Concept
This concept extends OMTDR to a MDRS and
applies it to detect soft faults in coaxial cables and
power lines with connected devices. OMTDR is
usually implemented on top of OFDM. Data frames with
payload are modulated onto multiple sub-carriers and
transformed into the time domain using the Inverse
Fast Fourier Transform (IFFT). This step
typically creates I/Q samples, which are converted
into electrical signals using a DAC. At this point the
signal is ready for transmission. An OMTDR approach
is presented in [12] using a Field Programmable Gate
Array (FPGA) where transmitted samples before the
DAC are directly compared with the received samples
after the ADC in a digital manner. Also, some
postprocessing steps are described [12]. This approach
will be implemented using Software Defined Radio
(SDR) technique (Python + GNU Radio + National
Instruments Universal Software Radio Peripheral
(USRP) X310). The proposed steps will also be
adapted within this concept to verify the lab setup as
well as the data analysis.
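As a minimal numerical sketch of the core comparison step (transmitted versus received I/Q samples), the following Python/NumPy code cross-correlates the two streams to obtain a simple reflectogram; in practice the arrays would come from files recorded by the GNU Radio flowgraph, and the sample rate, variable names and synthetic echo used here are illustrative assumptions only.

import numpy as np

def reflectogram(tx_iq, rx_iq, fs_hz):
    """Cross-correlate transmitted and received I/Q samples.
    Peaks in the magnitude indicate echoes; their lag gives a
    round-trip delay estimate (delay = lag / fs_hz)."""
    corr = np.correlate(rx_iq, tx_iq, mode="full")
    lags = np.arange(-len(tx_iq) + 1, len(rx_iq))
    return lags / fs_hz, np.abs(corr)

# Illustrative use with synthetic data: a direct path plus one weak echo.
fs = 100e6                                   # assumed sample rate
tx = (np.random.randn(4096) + 1j * np.random.randn(4096)) / np.sqrt(2)
rx = np.concatenate([tx, np.zeros(500)]) + \
     0.05 * np.concatenate([np.zeros(300), tx, np.zeros(200)])
delays, mag = reflectogram(tx, rx, fs)
print("strongest correlation peak at lag [samples]:", delays[np.argmax(mag)] * fs)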
3.1. Laboratory Setup
Fig. 1 shows the laboratory setup, which consists
of a SDR as interface between the software and
hardware domain. There shall be two wiring options of
SUT. Option 1 is a single transmission line (coaxial
cable, type RG-58) with two different lengths (100 m
or 500 m) and additionally four different termination
types Z_T with

Z_T ∈ {Z_0, 0, ∞, Z_x},   (1)

i.e., matched, short, open, or another defined impedance
(Z_0 being the characteristic impedance of the cable and
Z_x an arbitrary defined load).
Option 2 is a NYM-type cable, used for power lines at
the 230 V level. This cable shall be used to create a
power line network with several levels of network
complexity. Also, some typical devices shall be
connected to this multi-branched network to get a setup
comparable to real world applications. Suitable
coupling devices shall be selected and used for both
options.
Fig. 1. Proposed Lab Setup.
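For wiring option 1, the expected echo strength of each termination can be sketched with the standard reflection coefficient Γ = (Z_T - Z_0) / (Z_T + Z_0); the sketch below assumes Z_0 = 50 Ω for the RG-58 cable, and the 75 Ω "other" load is only an illustrative choice.

# Sketch: reflection coefficient for the four termination cases of wiring
# option 1, assuming Z0 = 50 ohm (RG-58). The 75 ohm entry is illustrative.
Z0 = 50.0
terminations = {"match": 50.0, "short": 0.0, "open": float("inf"), "other (75 ohm)": 75.0}

for name, zt in terminations.items():
    gamma = 1.0 if zt == float("inf") else (zt - Z0) / (zt + Z0)
    print(f"{name:>14}: Gamma = {gamma:+.2f}")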
3.2. Evaluation
The proposed lab setup with wiring option 1 is used
to evaluate the communication system, the OMTDR
method (hardware and software) as well as the
extended methods. Therefore, an OFDM transmitter
will be implemented using GNU Radio and existing
building blocks. The OFDM receiver will be
implemented in plain Python. This enables a simplified
extraction of all kinds of meta data required at any point
of the signal processing chain. Later, the methods
tested with wiring option 1 shall be applied to wiring
option 2.
4. Extension of the OMTDR Method
Fig. 2 shows a simplified OFDM spectrum (dotted
line). The spectrum typically consists of a defined
number of sub-carriers with a specified modulation type
and purpose. These are depicted below the dotted line.
Not every individual carrier is plotted; instead, the
numbers below the carriers show how many real
carriers of the same type are present in the real
spectrum at the shown position. Corresponding to the
Fast Fourier Transform (FFT) size, 64 subcarriers
are placed. Each carrier can therefore be transformed
into I/Q samples (time domain) using the IFFT, and
back again using the FFT.
Fig. 2. Simplified OFDM frame in the frequency domain
with subcarrier placement schema.
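A minimal sketch of this sub-carrier placement and the transformation into I/Q samples is given below; the exact allocation of data, pilot and zero carriers is illustrative (loosely IEEE 802.11a-like) and not necessarily the one used in the proposed system.

import numpy as np

# Sketch: map QPSK data and constant pilots onto a 64-point OFDM symbol and
# transform it into time-domain I/Q samples with the IFFT. The carrier
# allocation below is illustrative only.
N_FFT = 64
data_idx  = [k for k in range(-26, 27) if k not in (0, -21, -7, 7, 21)]  # 48 data carriers
pilot_idx = [-21, -7, 7, 21]                                             # 4 pilot carriers

bits = np.random.randint(0, 2, size=2 * len(data_idx))
qpsk = ((2 * bits[0::2] - 1) + 1j * (2 * bits[1::2] - 1)) / np.sqrt(2)

sym = np.zeros(N_FFT, dtype=complex)
sym[np.array(data_idx) % N_FFT]  = qpsk      # negative indices wrap to upper bins
sym[np.array(pilot_idx) % N_FFT] = 1.0       # constant pilots
iq = np.fft.ifft(sym) * np.sqrt(N_FFT)       # time-domain I/Q samples
print("I/Q samples per OFDM symbol:", iq.size)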
Fig. 3 shows part of an OFDM packet in the time
domain. At the beginning of each OFDM packet a
cyclic prefix (CP), consisting of 16 I/Q samples, is
sent; this repeats until the end of the packet (EOP) is
reached. After the first CP, two synchronization
words, a packet header and several payload frames are
sent, each separated by a CP. The parts between the CP
blocks correspond to the time-domain representation of
Fig. 2.
Fig. 3. OFDM datagram with frames, separated
by the cyclic prefix.
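The cyclic-prefix structure can be sketched as follows, assuming the 16-sample CP described above is formed, as usual for OFDM, by copying the last samples of a symbol to its front.

import numpy as np

# Sketch: prepend a 16-sample cyclic prefix (the last 16 I/Q samples of the
# symbol copied to its front), matching the packet structure described above.
CP_LEN = 16

def add_cyclic_prefix(symbol_iq, cp_len=CP_LEN):
    return np.concatenate([symbol_iq[-cp_len:], symbol_iq])

symbol = np.fft.ifft(np.random.randn(64) + 1j * np.random.randn(64))
framed = add_cyclic_prefix(symbol)
print(len(symbol), "->", len(framed))        # 64 -> 80 samples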
Fig. 4 depicts the data processing chain. It is used
to generate the raw files for later analysis. The Data
Generator block is used to generate packets, containing
a unique packet number. The packets are then stored in
a binary format into a file. The OFDM Encoder block
reads the binary packets from the foregoing file,
arranges the binary data onto the subcarriers, places
them together and produces I/Q samples, which are
then stored in another file. The SDR Tx/Rx block reads
the I/Q data and transfers it to the USRP, where it
is sent into the SUT. The received signal is a
superposition of the transmitted signal and the
response of the system. This signal is also saved to a
file in I/Q representation. Now, the OFDM Decoder
block detects the start of an OFDM packet,
demodulates and decodes it and saves the data also into
a file. In the Analysis block, several methods are used
to extract additional information from the data saved in
the files. The extraction of meta information from the
frequency domain (Fig. 2) and the time domain (Fig. 3)
is explained in the next sections.
5. Meta Data through Multi Domain Analysis
All analyses are grouped into domains. Each
domain provides at least one meta data source. The
meta data is generated while processing the pipeline
in Fig. 4.
Fig. 4. Data generation and processing pipeline.
5.1. I/Q Domain
The first approach is to apply the OMTDR method
using the I/Q samples of each transmitted and received
packet directly like in [12]. Additionally, specially
designed sequences for extracting parts of the signal
could be used together with correlation.
5.2. Data or Packet Domain
The software OFDM receiver can detect the
start of each packet in the receive stream file as well as
in the transmit stream file. The number of packets sent
is well known; it must stay constant as long as not too
many samples of a packet are damaged during
propagation through the SUT. Additionally, all packets
carry a unique packet number in their payload, so a
packet drop can be detected. The received I/Q samples
will be slightly altered by system noise and especially
by reflections within the system.
Afterwards the corresponding transmit and receive
packets can be analyzed using the approach in [12] to
gain insight into the SUT. Also, the total bit error rate
(BER) of whole packets, the BER of only the pilot
carriers, and the relation between them could be
considered.
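A minimal sketch of this packet-domain meta data extraction is given below: dropped packets are found by comparing the unique packet numbers, and a bit error rate is computed between transmitted and received bits; all names and the synthetic data are illustrative.

import numpy as np

# Sketch: packet-domain meta data -- detect dropped packets from the unique
# packet numbers and compute a bit error rate between tx and rx bits.
def dropped_packets(tx_ids, rx_ids):
    return sorted(set(tx_ids) - set(rx_ids))

def bit_error_rate(tx_bits, rx_bits):
    return float(np.mean(tx_bits != rx_bits))

tx_ids = list(range(100))
rx_ids = [i for i in tx_ids if i not in (17, 42)]        # two packets lost
tx_bits = np.random.randint(0, 2, 1000)
rx_bits = tx_bits.copy(); rx_bits[::250] ^= 1            # a few bit flips
print("dropped:", dropped_packets(tx_ids, rx_ids), "BER:", bit_error_rate(tx_bits, rx_bits))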
5.3. Frame Domain
Each OFDM frame consists of a short sequence,
which is used as a start of frame (SOF), a long
sequence (used for correcting frequency drift) and
several carriers, grouped into pilots and data carriers.
In the case of the IEEE 802.11 standard, short and long
sequence are standardized and therefore well-known
[13, p. 3314, Table I-3]. Typically, the Schmidl & Cox
synchronization technique is used to detect the short
sequence and thus the SOF. It is assumed that other
numeric calculations as well as machine learning
approaches could be helpful to generate meta data out
of these synchronization sequences.
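As an illustration of this step, the following sketch computes the classical Schmidl & Cox timing metric M(d) = |P(d)|^2 / R(d)^2, which peaks where the received stream contains a preamble made of two identical halves; the half-length L = 16 and the synthetic stream are simplifying assumptions, not the exact 802.11 preamble.

import numpy as np

# Sketch: Schmidl & Cox timing metric for start-of-frame detection.
def schmidl_cox_metric(r, L=16):
    M = np.zeros(len(r) - 2 * L)
    for d in range(len(M)):
        P = np.sum(np.conj(r[d:d + L]) * r[d + L:d + 2 * L])
        R = np.sum(np.abs(r[d + L:d + 2 * L]) ** 2)
        M[d] = np.abs(P) ** 2 / (R ** 2 + 1e-12)
    return M

half = np.random.randn(16) + 1j * np.random.randn(16)
stream = np.concatenate([np.random.randn(40) + 1j * np.random.randn(40),
                         half, half,
                         np.random.randn(40) + 1j * np.random.randn(40)])
print("estimated start of frame:", int(np.argmax(schmidl_cox_metric(stream))))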
5.4. Symbol Domain
Each OFDM symbol has several subcarriers. As
visualized in Fig. 2, they can be grouped in zero
carriers, data carriers and pilot carriers. The latter are
typically used to estimate the transmission channel in
communication systems. This information shall be
considered as a meta information source, too. The
information modulated onto the pilots is constant, and
it shall be used to track rapid time variations which
may be caused by the SUT. It is assumed that the time
variation is a constant but system-specific value. The
zero carriers may be used to estimate system-typical
noise. This could be interesting for SUT option 2,
because unshielded cables are used there.
5.5. Frequency Domain
Also, the spectrum of the OFDM signal can be
analyzed. Information like bandwidth, receive power
or the signal-to-noise ratio (SNR) can be calculated and
used as further meta data, as can the relation of
subcarrier amplitudes within each OFDM symbol
compared to the relation of subcarrier amplitudes
across all OFDM symbols in a packet. Also, frequency
components with more power than in the original
transmit signal can appear and can therefore be used as
meta information which may be system-specific.
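A minimal sketch of such frequency-domain meta data is given below: the receive power on the occupied sub-carriers is compared with the power on the (ideally empty) zero carriers to obtain an SNR estimate; the carrier index sets are illustrative assumptions.

import numpy as np

# Sketch: estimate SNR of an OFDM symbol by comparing occupied sub-carriers
# with the zero (guard/DC) carriers. Index sets are assumed, not standardized.
def spectral_snr_db(rx_symbol_iq, used_idx, zero_idx):
    spectrum = np.fft.fft(rx_symbol_iq)
    p_signal = np.mean(np.abs(spectrum[used_idx]) ** 2)
    p_noise  = np.mean(np.abs(spectrum[zero_idx]) ** 2) + 1e-15
    return 10.0 * np.log10(p_signal / p_noise)

used = list(range(1, 27)) + list(range(38, 64))   # occupied bins (assumed)
zero = list(range(27, 38)) + [0]                  # guard/DC bins (assumed)
rx = np.fft.ifft(np.where(np.isin(np.arange(64), used), 1.0 + 0j, 0.0)) \
     + 0.01 * (np.random.randn(64) + 1j * np.random.randn(64))
print(f"SNR ~ {spectral_snr_db(rx, used, zero):.1f} dB")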
6. Expected Results
All of the proposed meta data may be used
individually to get a specific fingerprint of the SUT.
Moreover, combining two or more meta data sources
using specific combination methods could be a
promising approach to detect small impedance
changes.
7. Conclusion and Further Work
The presented concept shows an approach to
extend the OMTDR method to a MDRS, using the
underlying OFDM communication system. The main
goal is to evaluate an existing OMTDR approach,
extend and apply it to a multi-branched complex power
line network for detecting weak impedance changes.
Therefore, a multi-modal analysis is implemented. It is
planned to consider machine learning (ML)
approaches later on. The proposed system is capable of
generating the large amounts of training data required
for ML approaches. The output of trained ML models
may enable the detection of weak impedance changes,
too.
Acknowledgements
This paper was written in the context of the SI-
CM3S project, which is funded by the German Federal
Ministry for Economic Affairs and Energy as part of
the Central Innovation Program for SMEs (ZIM) on
the basis of a resolution of the German Bundestag.
References
[1]. L. Incarbone, F. Auzanneau, W. Ben Hassen,
and Y. Bonhomme, Embedded wire diagnosis sensor
for intermittent fault location, in Proceedings of the
IEEE SENSORS, Nov. 2014, pp. 562–565.
[2]. F. Auzanneau, Natural amplification of soft defects
signatures in cables using binary time domain
reflectometry, IEEE Sensors Journal, Vol. 21, No. 2,
2021, pp. 937-944.
[3]. C. M. Furse, M. Kafal, R. Razzaghi, and Y.-J. Shin,
Fault Diagnosis for Electrical Systems and Power
Networks: A Review, IEEE Sensors Journal, Vol. 21,
No. 2, 2021.
[4]. F. Auzanneau, Binary time domain reflectometry: A
simpler and more efficient way of diagnosing defects
in wired networks, in Proceedings of the IEEE
AUTOTESTCON Conference, Sep. 2018, pp. 1-8.
[5]. M. Franchet, N. Ravot, and O. Picon, Soft fault
detection in cables using the cluster time-frequency
domain reflectometry, IEEE Electromagnetic
Compatibility Magazine, Vol. 2, No. 1, 2013,
pp. 54–69.
[6]. S. Sallem and N. Ravot, Soft defects localization by
signature magnification with selective windowing, in
Proceedings of the 2015 IEEE SENSORS Conference,
Nov. 2015, pp. 1–4.
[7]. E. Benoit, N. K. T. Jayakumar, S. Kingston,
M. U. Saleh, M. Scarpulla, J. Harley, and C. Furse,
Applicability of SSTDR Analysis of Complex Loads,
in Proceedings of the IEEE International Symposium
on Antennas and Propagation and USNC-URSI Radio
Science Meeting, Jul. 2019, pp. 2087–2088.
[8]. M. Kafal, A. Cozza, and L. Pichon, Locating Multiple
Soft Faults in Wire Networks Using an Alternative
DORT Implementation, IEEE Transactions on
Instrumentation and Measurement, DOI: 10.1109/
TIM.2015.2498559, 2016.
[9]. Q. Shi and O. Kanoun, Application of iterative
deconvolution for wire fault location via reflectometry,
in Proceedings of the International Symposium on
Instrumentation Measurement, Sensor Network and
Automation (IMSNA’ 2012), Vol. 1, August 2012,
pp. 102–106.
[10]. L. El Sahmarany, L. Berry, N. Ravot, F. Auzanneau,
and P. Bonnet, Time Reversal for Soft Faults Diagnosis
in Wire Networks, Progress In Electromagnetics
Research, Vol. 31, 2013, pp. 45–58.
[11]. C. Gao, L. Wang, J. Mao, S. Hu, B. Zhang,
and S. Yang, Non-Intrusive Cable Fault Diagnosis
Based on Inductive Directional Coupling, IEEE
Transactions on Power Delivery, Vol. 34, No. 4,
August 2019, pp. 1684–1694.
[12]. W. Ben Hassen, F. Auzanneau, L. Incarbone,
F. Peres, and A. P. Tchangani, Distributed Sensor
Fusion for Wire Fault Location Using Sensor
Clustering Strategy, International Journal of
Distributed Sensor Networks, Vol. 11, No. 4, April
2015, p. 538643.
[13]. IEEE Std 802.11™-2016, IEEE Standard for
Information technology, Telecommunications and
information exchange between systems, Local and
metropolitan area networks, Specific requirements,
Part 11: Wireless LAN Medium Access Control, 2016.
(011)
Traffic Signaling and Cooperative Trajectories
based on Visible Light Communication
M. A. Vieira 1,2, G. Galvão 1, M. Vieira 1,2,3, M. Véstias 1,4, P. Vieira 1,5 and P. Louro 1,2
1 ISEL-Polytechnic Institute of Lisbon, Portugal
2 UNINOVA-CTS and LASI; Lisbon, Portugal
3 NOVA School of Science and Technology, Lisbon, Portugal
4 INESC-ID, IST, Un. de Lisboa, Lisbon, Portugal
5 Instituto de Telecomunicações, IST, Lisbon, Portugal
E-mail: mv@isel.ipl.pt
Summary: Visible Light Communication (VLC) is a promising solution proposed for optimizing traffic signals and vehicle
trajectories at urban intersections. This approach utilizes light communication between connected vehicles (CVs) and
infrastructure to enable coordinated traffic interactions. By leveraging streetlamps, intersection signals, and headlights, VLC
facilitates the transmission of information between CVs and the infrastructure. The system is designed to be flexible and
adaptive, accommodating diverse traffic movements across multiple signal phases. To evaluate the effectiveness of VLC,
simulations are conducted using the SUMO urban mobility simulator. These simulations generate traffic flows and incorporate
VLC mechanisms and relative pose concepts for queueing, requesting, and responding to interactions. To dynamically control
traffic flows and alleviate congestion during peak hours, a deep reinforcement learning algorithm is employed. This algorithm
optimizes traffic by utilizing both Vehicle-to-Vehicle (V2V) and Vehicle-to-Infrastructure (V2I) communications.
Comparisons are made between the traditional trajectory and signal optimization techniques. The results demonstrate the
benefits of VLC in terms of throughput, delay, and the reduction of vehicle stops. In conclusion, VLC presents an integrated
approach that harnesses light communication to optimize traffic signals and vehicle trajectories at urban intersections. Through
simulations and comparisons, VLC proves its effectiveness in enhancing traffic efficiency and reducing congestion, offering
promising insights for future urban traffic management systems.
Keywords: Visible light communication, Optical sensors, Cooperative traffic control, Connected vehicles, Deep reinforcement
learning, SUMO simulation.
1. Introduction
In today's world, communication technology has
become a subject of controversy due to the increasing
overload of radio frequencies and the need for stable
and consistent systems. As a potential solution to this
challenge, Visible Light Communication (VLC)
emerges by utilizing light-emitting diodes (LEDs) as
light sources and photodiodes as photodetectors [1]. By
modulating visible light in time and frequency, VLC
offers a promising alternative for communication
technology [1]. Only light emitting diodes (LED)
lamps can be used for the transmission of visible light
[2]. This functionality has given rise to a novel
communication technology, VLC, where LED
luminaires can be used for high-speed data transfer [3].
VLC is an emerging technology [4] that enables data
communication by modulating information on the
intensity of the light emitted by LEDs. Increasingly,
smart cities can become comfortable, quick, and safe
places to travel. Technology has advanced to the point
where even non-autonomous vehicles are equipped
with sophisticated sensors and computers. That was the
first step on the road to improve road safety.
The application of VLC goes beyond traditional
communication methods and extends to intelligent
traffic control systems. By implementing a real-time
traffic control system, traffic flow can be significantly
improved through effective resource management and
information exchange. This study specifically focuses
on utilizing Visible Light Communication as a means
of transmitting information, providing guidance
services, and delivering specific information to
drivers. In the case of vehicular communications, the
use of VLC is made easier because all vehicles,
streetlights, and traffic lights are equipped with LEDs,
using them for illumination. In this context,
communication and localization are facilitated through
the utilization of streetlamps, traffic signaling, and the
headlights and taillights of vehicles. This approach
allows for the simultaneous use of outdoor automotive
lighting and infrastructure lighting to serve both
illumination and communication purposes [5, 6].
We propose a cooperative I2V2V2I2V system that
supports guidance services. This system employs an
edge/fog-based architecture, which effectively
manages the safe passage of vehicles through
connected intersections. Vehicular Communication
Systems are a type of network in which vehicles and
roadside units are the communicating nodes, providing
each other with information [7]. The main objective is
to optimize traffic safety and efficiency on public
roads through V2V and V2I communications [8-10].
Real-time traffic information is essential for
optimizing traffic light duration. By monitoring the
location, speed, and direction of nearby vehicles,
significant improvements in traffic management can
be achieved.
2. Connected Vehicles and VLC
The central aim is to enhance both safety and
efficiency on public roads through the implementation
of Vehicle-to-Vehicle (V2V) and Vehicle-to-
Infrastructure (V2I) communications. This endeavor
revolves around elevating situational awareness and
curtailing traffic accidents. A key strategy involves
utilizing real-time traffic information to inform
dynamic adjustments in traffic light durations,
ultimately optimizing traffic flow and minimizing
potential hazards.
At the heart of this effort lies the integration of V2V
and V2I communications. By harnessing the real-time
data regarding the location, speed, and direction of
nearby vehicles, a comprehensive and accurate
understanding of the road environment is achieved.
This real-time awareness enables informed decision-
making, leading to substantial enhancements in traffic
management. The envisioned outcome is a marked
improvement in both the safety and efficiency of road
usage, offering a promising path towards a smarter and
more secure transportation landscape.
2.1. Intelligent Control System
To develop the intelligent control system model
that facilitates safe vehicle management through
intersections using V2V, V2I, and I2V
communications, Reinforcement Learning (RL)
concepts were used. RL is a training method based on
rewarding desired behaviors and/or punishing
undesired ones [11-13].
The simulations were agent-based and were carried
out using the Simulation of Urban MObility (SUMO)
tool [14].
Fig. 1 illustrates the reinforcement learning loop,
where the agent receives the current state of the
environment (St) and learns from the feedback of the
action taken (At) on the overall traffic flow. The
agent's action directly affects the traffic light, which
serves as the controller of the intersection. This
iterative process allows the agent to learn from its
actions and improve over time, avoiding negative
situations and focusing on positive outcomes. The
agent's experiences and actions are stored to train a
model and enhance its decision-making capabilities
[15]. The traffic lights in SUMO are controlled by the
learning agent according to its decisions, the overall
flow of traffic is described, and the actions of the
traffic-light control agent are rewarded. The reward (R)
is derived from the accumulated total waiting time of
all the cars at the intersection, captured at agent
steps t-1 and t. The objective of the intersection
manager (IM) is to minimize the total waiting time at
each arm of the intersection. When a vehicle's speed
drops below 0.1 m/s, a queue alert is triggered. The IM
agent must explore new states while simultaneously
maximizing its overall reward. To illustrate this
concept, a dynamic phasing diagram and a state matrix
based on the total accumulated time are provided.
Fig. 1. Illustration of the reinforcement learning’s loop
of action-reward feedback.
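A minimal sketch of the reward signal is given below, assuming the common formulation used with SUMO-based traffic-light agents, in which the reward at agent step t is the decrease of the accumulated total waiting time, R_t = W_(t-1) - W_t.

# Sketch of the reward signal (assumed formulation): the agent is rewarded
# when the accumulated total waiting time at the intersection decreases.
def reward(total_wait_prev, total_wait_now):
    return total_wait_prev - total_wait_now

print(reward(180.0, 150.0))   # waiting time dropped by 30 s -> reward +30
print(reward(150.0, 210.0))   # waiting time grew by 60 s   -> reward -60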
2.2. Scenario, Environment and Architecture
In Fig. 2, a scenario with two traffic signal-
controlled intersections is depicted. The scenario is
designed with an orthogonal topology, organized into
clusters of square unit cells. This layout helps define
the structure and arrangement of the intersections
within the scenario. Each transmitter, X_i,j, carries its
own color, X (Red, Green, Blue, or Violet), as well as
its horizontal and vertical ID position (i, j) within the
surrounding network [16]. In the Proof of Concept
(PoC), it was assumed that the crossroads are situated
at the intersections of line 4 with column 3 and
column 11.
Fig. 2. Simulated scenario with the optical infrastructure.
In Fig. 2, there are two traffic signal-controlled
intersections with four traffic flows. From the West,
there are twenty red ai vehicles with straight movement
and four yellow ci vehicles with left turn only. From
the East, there are green bi vehicles (thirteen with
straight movement and two with left turn). From the South,
there are six orange ei vehicles, with two having a left-
turn approach and four with straight movement. From
the North direction, there are thirteen blue fi vehicles,
with nine going straight and four having a left turn at
both intersections. The road request and response
segments offer a binary choice between turning
left/straight or turning right. The vehicles represent a
percentage of the traffic flow, and their ordering in
terms of priority is determined. The top three requests
are a1, b1, and a2, followed by b2, a3, and c1. In the
seventh, eighth, and ninth places are b3, e1, and a4,
respectively, followed by c2 in the tenth place. The
penultimate request is a5, and the last one is f1. Based
on the assumptions, there are 540 cars approaching the
intersection per hour, with 80 % of them coming from
the east and west directions. Among these cars, it is
assumed that 50% of them will make a left or right turn
at the intersection, while the remaining 50 % will
continue straight.
2.3. Visible Light Communication Link
A Vehicular Visible Light Communication system
(V-VLC) is structured around a transmitter and a
receiver connected via a wireless channel. The
transmitter's role involves generating modulated light,
often utilizing the ON-OFF-keying (OOK) amplitude
modulation technique. Concurrently, the receiver
detects fluctuations in the received light signal [17].
This dynamic system finds implementation in both the
road infrastructure, manifesting as streetlights, and
within the vehicles themselves, taking shape as
headlights.
Within this ecosystem, the environment is
meticulously defined, characterized by a cluster of
square unit cells arranged in an orthogonal
configuration. The cornerstone of this setup is the
deployment of tetra-chromatic white light sources
(WLEDs) positioned strategically at the corners of
each unit cell. These light sources offer distinct data
channels [18].
Functionally, the V-VLC system processes coded
signals as inputs. These signals are transmitted by
transmitters, which can take the form of streetlights or
headlights. Their purpose ranges from vehicle
identification (I2V), communication with traffic lights
(V2I), to facilitating communication between vehicles
(V2V). The encoded signals encapsulate not only the
essential information about the transmitter's position
within the network but also the steering angle (δ)
imperative for guiding the driver's orientation along
their trajectory.
To manage the seamless passage of vehicles
through intersections, a sophisticated interplay of
queue/request/response mechanisms and
temporal/space-relative pose concepts are employed
[19]. These mechanisms ensure efficient traffic
management and orderly vehicular movement at
crossroads.
The coded signals transmitted by the transmitters
are subsequently received and decoded by a PIN-PIN
photodetector. This photodetector boasts light filtering
properties, ensuring the precision and accuracy of data
reception, a pivotal aspect in maintaining the integrity
of the communication process.
2.4. Architecture
Illustrated in Fig. 3 is the implementation of a
hybrid structure seamlessly fusing mesh networking
with cellular technology. At the crux of this
configuration is the "mesh" controller, strategically
positioned at streetlight installations. This controller
assumes the critical role of a message-forwarding
entity, orchestrating the efficient flow of information
among vehicles operating within the mesh network. Its
functionality closely resembles that of router nodes
within a network framework.
Fig. 3. VLC Edge Computing infrastructure.
Complementing the mesh controller is the hybrid
controller, an innovative convergence of mesh and
cellular capabilities. This hybrid entity performs a dual
role: firstly, it functions as a border-router, skillfully
bridging the mesh and cellular domains. Additionally,
this hybrid controller serves as a catalyst for edge
computing functionalities, effectively extending the
computational capacity of the network [20].
This architectural paradigm yields a host of
functionalities. On one hand, it seamlessly supports
edge computing capabilities, empowering the
execution of computations closer to the data source. On
the other hand, it accommodates device-to-cloud
communication (I2IM), ushering in the potential for
real-time data exchange between devices and cloud-
based resources. Simultaneously, the architecture
fosters peer-to-peer communication (I2I), facilitating
direct information exchange among devices.
Embedded within this framework are computing
platforms that hold a pivotal position. These platforms
undertake the essential tasks of processing and
interfacing with sensors and controllers. In essence,
they form the dynamic hub where data is processed,
transformed, and made available for further analysis or
dissemination.
3. Intelligent Traffic Signal Control
3.1. Dynamic Traffic Phasing
In Fig. 4, a visual representation unfolds,
elucidating the sequential progression of phases within
the intersection. This orchestrated flow adheres to a
structured cycle length comprising six distinct phases.
Each of these phases is further intricately subdivided
into 16 discrete time sequences or states, delineating a
comprehensive temporal framework for the
intersection's operation.
Fig. 4. Requested phasing of traffic flows. * Adaptive sequences.
An essential observation to highlight is that states
designated with an asterisk (*) hold a dynamic quality,
representing movable states that adapt in response to
varying traffic demands within the cycle. Specifically,
sequences marked with the numbers "0", "1", and "16"
denote the exclusive pedestrian phase, signaling a
dedicated time interval for pedestrian movement.
Furthermore, the synchronization of the cycle
initiates with sequence "1", marking the
commencement of the orchestrated flow of phases.
Within this cycle, phases one through four are each
allocated sequences within the range "2" to "15". These
sequences within these phases play a pivotal role in
meticulously regulating the traffic flow, ensuring a
structured and efficient movement of vehicles through
the intersection.
3.2. Adaptive V-VLC Traffic Control Evaluation
Using the application programming interface (API)
provided by SUMO, it is possible to interface with
external programs and interact with the simulation
environment. SUMO provides various statistics related
to the overall traffic flow as well as outputs such as
diagrams illustrating the duration of each state or color
of the traffic lights throughout the simulation.
Based on the simulation scenario depicted in Fig. 2, a
state diagram was generated using SUMO simulation.
Fig. 5a and Fig. 5c show the phase diagrams for the
two connected intersections, TL1 and TL2. The
SUMO environment is illustrated in Fig. 5b.
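A minimal sketch of such an interaction through SUMO's Python TraCI interface is shown below; the configuration file name, the monitored edge ID, and the traffic-light ID and phase index are placeholders, not the actual scenario identifiers.

import traci

# Minimal TraCI sketch: step the simulation, read halting vehicles and mean
# speed on a monitored edge, and switch the traffic-light phase.
# "scenario.sumocfg", "E-W_arm" and "TL1" are placeholder identifiers.
traci.start(["sumo", "-c", "scenario.sumocfg"])
for step in range(130):                              # one 130 s cycle
    traci.simulationStep()
    halting = traci.edge.getLastStepHaltingNumber("E-W_arm")
    speed   = traci.edge.getLastStepMeanSpeed("E-W_arm")
    if step == 16:                                   # e.g. leave the pedestrian phase
        traci.trafficlight.setPhase("TL1", 4)        # jump to a later phase (index assumed)
    print(step, halting, round(speed, 1))
traci.close()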
The simulation scenario was adapted from a real-
world environment in Lisbon [21], and it considers the
presence of roads that impact the traffic flow at both
intersections. These roads, referred to as the target
road, have a dynamic influence on the traffic flow, and
the impact of the historical traffic state from other
roads on the target road is limited in time. Here, the E-
W arm was considered as a target road. The
transmission of traffic flow and traffic waves measures
the time duration for which the traffic state of other
roads affects the target road within the same period. As
traffic continuously enters the system, the composition
of the traffic flow on the target road undergoes changes
over time. To improve the traffic flow conditions, a
modification was made to the initially proposed phases
(as shown in Fig. 4).
The modification involved an immediate transition
from the pedestrian phase (Ph0) to the N>S phase
(Ph4), followed by the remaining phases in both
intersections. This change in phase order was found to
enhance the traffic flow conditions in the simulation.
By adjusting the phase sequence and optimizing the
traffic light control strategy based on the simulation
results, it is possible to achieve improvements in traffic
flow, reduce congestion, and enhance overall
intersection performance. In Fig. 6, a comparison is
presented between the queue (halting) and average
speed observed every second in SUMO/VLC
(Simulation of Urban MObility with Visible Light
Communication) for a 130-second cycle. The
simulation assumes a saturation flow of 2500 vehicles
per hour.
Fig. 5. State diagrams resulting for the two coordinated
intersections (TL1 and TL2). At the top, an insert of the
environment and the color phasing is shown; in the middle,
the environment is sketched.
Fig. 6. Average speed and halting along a cycle.
The results demonstrate that on the regulated roads,
there are typically no congested conditions occurring
in each new cycle. The queue of vehicles in the first
section of the demand acts as an integrator, meaning it
accumulates vehicles and its length increases as
vehicles enter the section. This becomes critical when
the queue approaches the capacity of the link road. In
the unsaturated regime, which assumes that all vehicles
in the queue leave the target road by the end of the
sampling time, the queue of vehicles is always zero.
However, when the red light is activated, a maximum
queue of vehicles is generated since all vehicles in the
queue are held back. This analysis provides insights
into the behavior of traffic flow and queue dynamics
within the simulated environment. It demonstrates the
impact of traffic lights and their timing on the
accumulation and dispersal of vehicles at different
sections of the road network. Fig. 7 represents the
reward obtained, after the training, when the network
was tested. The test was performed for the two
intersections scenario with independent crossings. The
results obtained from the experiment indicate that as
the number of action steps increases, the cumulative
rewards become more positive. This demonstrates that
the agent learns to make better decisions over time
during the test. The feasibility and benefits of creating
a dynamic system that can adapt to specific traffic
scenarios are evident in the results. Safety and privacy
are crucial requirements for the V-VLC system.
Fig. 7. Cumulative reward.
To bolster security, upcoming advancements
should prioritize the enhancement of coding
techniques to guarantee that exclusively authorized
receivers can decipher secure request/response
messages. These security measures extend deep into
the fabric of the physical transmission process,
particularly within the Line of Sight (LoS) channel. In
this context, potential eavesdroppers are rendered
passive observers, devoid of any access to transmitted
information.
One promising avenue for augmenting security
entails harnessing the positional data of streetlamps to
deduce the flow of vehicular traffic. This innovative
approach holds the potential to obviate the necessity
for traditional certificates or passwords within the
network. Instead, it paves the way for a paradigm shift
toward statistical secrecy. By weaving this statistical
approach into the security framework, the reliance on
explicit authentication measures can be alleviated,
while concurrently fortifying the layers of protection
against unauthorized access.
In essence, the trajectory for fortifying security
within the system rests upon refined coding
techniques, the intrinsic security attributes of the LoS
channel, and the intelligent utilization of positional
information for traffic flow analysis. This approach
promises to yield enhanced security measures,
ushering in a future where secure communication
flourishes within the framework of advanced vehicular
systems.
4. Conclusions
V-VLC technology integration in connected cars offers
significant improvements to urban traffic networks by
integrating traffic signal control with driving behavior.
This innovative system utilizes a
queue/request/response approach for efficient
intersection management while providing real-time
monitoring of queues and messages. Through detailed
data collection, V-VLC technology enables dynamic
adjustments to traffic light phases and durations,
leading to reduced travel times and minimized waiting
for drivers. Furthermore, V-VLC enhances safety by
directly monitoring crucial areas such as queue
formation, ensuring a safer and more efficient traffic
flow.
Acknowledgements
This research was funded (in part) by the
Portuguese FCT program, Center of Technology and
Systems (CTS) UIDB/00066/2020 and UIDP/00066/2020,
and by IPL/2022/POSEIDON_ISEL.
References
[1]. Zhaocheng Wang, Qi Wang, Wei Huang, Zhengyuan
Xu, Wiley, Visible Light Communications:
Modulation and Signal Processing, Wiley-IEEE Press,
2018.
[2]. Azevedo, I. L., Morgan, M.G., Morgan, F., The
Transition to Solid-State Lighting, Proceedings of the
IEEE, Vol. 97, No. 3, March, 2009, 481-510.
[3]. Schmid, S., Corbellini, G., Mangold, S., and Gross,
T. R., An LED-to-LED Visible Light Communication
system with software-based synchronization, in
Proceedings of the IEEE Globecom Workshops, 2012,
pp. 1264–1268.
[4]. P. H. Pathak, X. Feng, P. Hu, and P. Mohapatra,
Visible Light Communication, Networking and
Sensing: Potential and Challenges, IEEE
Communications Surveys & Tutorials, Vol. 17, No. 4,
2015, pp. 2047-2077.
[5]. Nawaz, T., Seminara, M., Caputo, S., Mucchi, L. and
Catani, J., Low-latency VLC system with Fresnel
receiver for I2V ITS applications, J. Sensor Actuator
Netw., Vol. 9, No. 3, Jul. 2020, p. 35.
[6]. Caputo, S., et al., Measurement-based VLC channel
characterization for I2V communications in a real
urban scenario, Veh. Commun., Vol. 28, Apr. 2021,
Art. No. 100305.
[7]. Yousefi, S., Altman, E., El-Azouzi, R., and Fathy,
M., Analytical Model for Connectivity in Vehicular Ad
Hoc Networks, IEEE Transactions on Vehicular
Technology, 57, 2008, pp. 3341-3356.
[8]. Elliott, D., Keen W., and Miao L., Recent advances in
connected and automated vehicles, Journal of Traffic
and Transportation Engineering, Vol. 6, Issue 2,
pp.109-131, April 2019.
[9]. J. N. Bajpai, Emerging vehicle
technologies & the search for urban mobility solutions,
Urban, Planning and Transport Research, Vol. 4, No. 1,
2016, pp. 83-100.
[10]. Wang, N., Qiao, Y., Wang, W., Tang, S. and Shen,
J. Visible Light Communication based Intelligent
Traffic Light System: Designing and Implementation,
in Proceedings of the Asia Communications and
Photonics Conference (ACP’ 2018), 2018, pp. 1-3.
[11]. Iša, J., Kooij, J., Koppejan, R., and Kuijer,
L., Reinforcement learning of traffic light controllers
adapting to accidents, Design and Organisation of
Autonomous Systems, 2006, pp. 1–14.
[12]. Forbes, J. R. N., Reinforcement learning for
autonomous vehicles, University of California,
Berkeley, 2002.
[13]. Liang, X., Du, X., Wang, G., and Han, Z., A Deep
Reinforcement Learning Network for Traffic Light
Cycle Control, IEEE Transactions on Vehicular
Technology, Vol. 68, No. 2, Feb. 2019,
pp. 1243-1253.
[14]. P. Alvarez Lopez, et al., Microscopic Traffic
Simulation using SUMO, in Proceedings of the 21st
IEEE Intelligent Transportation Systems Conference
(ITSC), 2018, pp. 2575-2582.
[15]. Schepperle, H., Böhm, K., Agent-Based Traffic
Control Using Auctions. In: Klusch, M., Hindriks,
K.V., Papazoglou, M.P., Sterling, L. (eds),
Cooperative Information Agents XI. CIA 2007.
Lecture Notes in Computer Science, Vol. 4676.
Springer, Berlin, 2007.
[16]. Vieira, M. A. et. al., Optical signal processing for a
smart vehicle lighting system using a-SiCH
technology, Proc. SPIE, 10231, Optical Sensors 2017,
102311L, 2017.
[17]. Vieira, M. A., Vieira, M., Louro, P., Vieira, P., Bi-
directional communication between infrastructures
and vehicles through visible light, in Proceedings of
the Fourth International Conference on Applications of
Optics and Photonics, Proc. SPIE 11207, 3 October
2019, 112070C.
[18]. Vieira, M. A., Vieira, M., Vieira, P. and Louro,
P., Optical signal processing for a smart vehicle
lighting system using a-SiCH technology, Proc.
SPIE 10231, Optical Sensors 2017, 2017, 102311L.
[19]. Miucic, R. Connected Vehicles: Intelligent
Transportation Systems, Springer, Cham, Switzerland,
2019.
[20]. Yousefpour, A. et al., All one needs to know about fog
computing and related edge computing paradigms:
A complete survey, Journal of Systems Architecture,
Vol. 98, 2019, pp. 289-330.
[21]. Vieira, P., Vieira, M. A., Queluz, M. A., and Rodrigues,
A., A Novel Vehicular Mobility Model for Wireless
Networks, Wireless Pers Commun, 43, 2007,
pp. 1689–1703.
(012)
Visible Light: An Identifier (ID) System for Building Guidance
M. Vieira 1,2,3, M. A. Vieira 1,2, P. Vieira 1,4 and P. Louro 1,2
1 ISEL-Polytechnic Institute of Lisbon, Portugal
2 UNINOVA-CTS and LASI; Lisbon, Portugal
3 NOVA School of Science and Technology, Lisbon, Portugal
4 Instituto de Telecomunicações, IST, Lisbon, Portugal
E-mail: mv@isel.ipl.pt
Summary: This paper presents an approach that utilizes Visible Light Communication (VLC) to generate landmark route and
alert instructions for supporting people's wayfinding activities. The system comprises ceiling luminaries serving as
transmitters, which transmit map information, alerts, and path messages. Optical receivers collect this information in real time,
providing users with the optimal route to avoid congestion. Tetrachromatic identifier white sources are used for lighting
and different data channels. The data is encoded, modulated, and converted into light signals. Mobile optical receivers capture
data, determine their location, and read transmitted data simultaneously. Bidirectional communication allows users to interact
with the received information. The system calculates the best route considering static or dynamic destinations, including buddy
wayfinding services. Results show that the system enables self-location, travel direction deduction, and efficient navigation
towards static or dynamic locations.
Keywords: Visible light communication, Geolocation, Indoor navigation, Bidirectional communication, Wayfinding, Optical
sensors, Transmitter/receiver.
1. Introduction
In the realm of modern technology, visible light has
emerged as an innovative and versatile tool for creating
identification systems with a unique twist. The concept
revolves around harnessing the power of light signals,
carefully choreographed in timed sequences and
divided into spatial beams, to serve as an Identifier
(ID) system for building recognition. This
groundbreaking approach leverages the capabilities of
LED arrays, enabling bidirectional communication
channels where light takes on the roles of both down-
link and up-link channels.
The implications of this novel technology are vast
and far-reaching. With its ability to encode
information through modulated light signals, it finds
applications in diverse domains such as positioning,
navigation, security, and mission-critical services.
This cutting-edge communication paradigm is known
as Visible Light Communication (VLC), [1] a data
transmission technology that seamlessly integrates into
indoor environments, making use of existing LED
lighting infrastructure with minimal modifications [2,
3]. VLC has become a development direction for
next-generation communication networks owing to its
huge spectrum resources, high security, low cost, and
other advantages [4, 5].
One of the standout features of this technology lies
in its adaptability. By employing white polychromatic
LEDs, a phenomenon known as Wavelength Division
Multiplexing (WDM) comes into play, enabling the
simultaneous transmission of multiple data streams at
different wavelengths. This ingenious approach
effectively increases the data transmission rate,
allowing for more information to be conveyed within
a given timeframe.
A pivotal aspect of this innovative system is the
development of a WDM receiver that capitalizes on
light-controlled filters. This receiver is designed to
decode the intricate information carried by the
multiplexed signals. Through the process of
multiplexing, filtering, and decoding, the encoded
signals are meticulously unraveled, resulting in the
accurate recovery of the transmitted information
[6, 7].
In summary, the convergence of visible light, LED
arrays, and sophisticated data transmission techniques
has birthed a revolutionary approach to identification
and communication within building environments. As
we delve deeper into the intricacies of this technology,
we uncover its transformative potential to reshape
various industries and propel us into an era of more
efficient, secure, and intelligent systems.
In this paper, a VLC based guidance system to be
used by mobile users inside large buildings is
proposed. After the Introduction, in Section 2, a model
for the system is proposed and the communication
system described. In Section 3, the main experimental
results are presented, downlink and uplink
transmission is implemented and the best route to
navigate calculated. In Section 4, the conclusions are
drawn.
2. VLC System Model
The main goal is to specify the system conceptual
design and define a set of use cases for a VLC based
guidance system to be used by mobile users inside
large buildings.
2.1. VLC Emitter and Receiver Modules
The system model is structured around two primary
modules: the transmitter and the receiver, illustrated in
Fig. 1. The functionality of these modules is key to the
successful operation of the technology.
a)
b)
Fig. 1. a) Transmitters and receivers 3D relative positions
and footprints in the square topology; b) Configuration
and operation of the pin/pin receiver.
The first of these modules, the transmitter, assumes
the crucial role of transforming data from the sender
into an intermediary representation in the form of
bytes. These bytes serve as an intermediate step before
they are translated into light signals by the transmitter.
To realize both the communication and the
building illumination, white light tetra-chromatic
sources (WLEDs) are used providing a different data
channel for each chip. The transmitter and receiver
relative positions are displayed in Fig. 1a. Each
luminaire is composed of four polychromatic WLEDs
framed at the corners of a square. At each node, only
one chip is modulated for data transmission (see
Fig. 1a): the Red (R: 626 nm, 25 μW/cm2), the Green
(G: 530 nm, 46 μW/cm2), the Blue (B: 470 nm,
60 μW/cm2) or the Violet (V: 400 nm, 150 μW/cm2). A
fundamental difference between VLC and regular
radio frequency (RF) communication is that VLC
cannot modulate the amplitude or phase of the optical
carrier directly, and it must encode information by
varying the emitted light intensity.
The LED can be dimmed (“off”) when transmitting
data bit '0’ and at its maximum brightness ("on") when
transmitting data bit '1'. This way, digital data is
represented by the presence or absence of a carrier
wave.
The conversion process involves taking the
original data bit stream and feeding it into a modulator.
This modulator employs an ON-OFF Keying (OOK)
modulation scheme, which is a fundamental
modulation technique widely used in communication
systems. The essence of the OOK modulation lies in
its binary nature. In this scheme, the presence of light
signifies a binary "1" and its absence signifies a binary
"0." By using this straightforward approach, the
modulator effectively encodes the data onto the light
signals. As the data bit stream varies, the modulator
toggles the light emission on and off accordingly,
translating the information into a sequence of
light pulses.
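A minimal sketch of this ON-OFF keying step is given below: each data bit is mapped to a block of samples with the LED on ('1') or off ('0'); the number of samples per bit is an illustrative assumption.

# Sketch of the OOK step: each bit becomes a block of samples with the LED
# fully on ('1') or off ('0'). SAMPLES_PER_BIT is an illustrative value.
SAMPLES_PER_BIT = 8

def ook_modulate(bits):
    """Return the LED drive waveform (1.0 = on, 0.0 = off) for a bit stream."""
    return [1.0 if b else 0.0 for b in bits for _ in range(SAMPLES_PER_BIT)]

waveform = ook_modulate([1, 0, 1, 0, 1])     # e.g. the [10101] sync header
print(len(waveform), waveform[:16])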
Through this process, the transmitter converts the
sender's data into light signals that will traverse the
transmission medium. This methodology serves as the
foundation for the subsequent stages of the
communication process, as outlined in the
system model.
The signal propagates through the optical
channel, and a VLC receiver at the reception end of
the communication link is responsible for extracting
the data from the modulated light beam. In the receiving
system, a MUX photodetector acts as an active filter
for the visible spectrum. The integrated filter consists
of a p-i'(a-SiC:H)-n/p-i(a-Si:H)-n heterostructure with
low conductivity doped layers [10] as displayed in Fig.
1b. It transforms the light signal into an electrical
signal that is subsequently decoded to extract the
transmitted information. The obtained voltage is then
processed, by using signal conditioning techniques
(adaptive bandpass filtering and amplification,
triggering and demultiplexing), until the data signal is
reconstructed at the data processing unit (digital
conversion, decoding and decision) [8] [9]. At last, the
message will be output to the users. In order to receive
information from several transmitters, the receiver
must position itself so that the circles corresponding to
the range of each transmitter overlap. This results in a
multiplexed (MUX) signal that acts both as a
positioning system and as a data transmitter. The grid
sizes were chosen to avoid overlap in the receiver from
adjacent grid points. The nine possible overlaps
(#1-#9), defined as fingerprint regions, are also
indicated for the unit square cell in Fig. 1a.
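The idea of positioning by footprint overlap can be sketched as follows: the set of colour channels detected in the MUX signal is mapped to a fingerprint region. The mapping used here is a hypothetical example; the actual assignment of regions #1-#9 is defined by the cell geometry of Fig. 1a.

# Sketch: infer the fingerprint region inside a unit cell from the set of
# colour channels detected in the MUX signal. The mapping is hypothetical.
REGION_OF = {
    frozenset("RGBV"): "#5 (centre, all four footprints overlap)",
    frozenset("RG"):   "#2 (edge between the R and G corners)",
    frozenset("R"):    "#1 (corner, only R visible)",
}

def locate(detected_channels):
    return REGION_OF.get(frozenset(detected_channels), "unknown overlap")

print(locate({"R", "G", "B", "V"}))
print(locate({"R", "G"}))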
2.2. Architecture and Geolocation
In VLC geotracking, geographic coordinates are
generated to provide location information. However,
the usefulness of this feature is further enhanced by
using these coordinates to determine meaningful
locations within a building and guide users through
unfamiliar spaces or towards specific destinations,
such as meeting rooms. To facilitate this process, VLC
employs cells for positioning and a Central Manager
(CM) that oversees and manages the entire system,
including generating optimal routes. Introducing a paradigm-shifting concept, a mesh cellular hybrid structure is unveiled, offering a new approach to network architecture. This framework takes shape in Fig. 2, where the components and their interactions are portrayed.
Fig. 2. Mesh and cellular hybrid architecture.
In addition to the establishment of secure
pathways, the mesh network excels at enabling peer-
to-peer communication, commonly known as I2I
interactions. This architecture is designed to foster
direct information exchange among devices, a process
that assumes significance in a data-driven
environment. The heart of this operation lies in the
ability of each WLED (White Light Emitting Diode)
to emit a unique Visible Light Communication (VLC)
signal, essentially acting as a beacon of identification.
Employing this beacon, the optical receiver adeptly
calculates the user's trajectory, employing a
specialized position algorithm. This calculated indoor
route, denoted as q(x, y, z, δ, t), encapsulates critical
spatial and temporal information, offering users insight
into their movement within the indoor environment. A
user moves from outdoor to indoor and requests
assistance in finding the right track (D2I). They can
customize their points of interest for wayfinding
services. The requested information (I2D) is sent by the emitters at the ceiling to the user's receiver.
This architecture serves two purposes: enabling
edge computing and device-to-cloud communication,
and enabling peer-to-peer communication for
information exchange. The culmination of this
intricate network architecture is marked by the role of
ceiling luminaires, which assume the mantle of routers
or mesh/cellular nodes. Functioning as central hubs
within this ecosystem, they engage in the vital task of
disseminating messages (I2D) that encapsulate the
calculated indoor route. Through their orchestrated
efforts, users are provided with a tangible
representation of their indoor trajectory, a testament to
the architecture's capability to transform data into
actionable insights.
3. Cooperative Guidance System
3.1. Communication Protocol, Coding/decoding
Techniques
In the process of encoding the information, a
modulation scheme known as On-Off Keying (OOK)
was employed. This approach involves the toggling of
the signal between two distinct states, representing
binary values. Furthermore, the transmission was
executed synchronously, adhering to a 64-bit data
frame structure. This frame is partitioned into three
primary segments: Sync, Navigation Data, and
Payload. This breakdown is visually represented at the
upper section of Fig. 4, and its essence is succinctly
captured in the summarized form presented in Table 1.
Table 1. Frame structure.

Block:   Header            Navigation Data                 Payload
Fields:  Synch             x, y, z, pin1, pin2, δ          Wayfinding data    Stop bit
Size:    5 bits (10101)    24 bits (4 bits per field)      34 bits (…….)      1 bit (0)
Frame length = 64 bits
The header block stands as a synchronization
mechanism, encompassing the bit sequence [10101].
This initial segment holds high importance, as it serves
as a consistent marker repeated within each data frame.
This repetition enables the receiver to accurately
pinpoint the commencement of every frame. To
achieve this, a standardized header bit sequence,
specifically [10101], is concurrently applied to all
emitters. This sequence is enacted in an alternating
fashion between "on" and "off" states [10101].
Moving on to the second block, it accommodates
the Identification (ID) data. This ID comprises
4+4+4 bits and conveys the geolocation information
(x, y, z coordinates) of the emitters within the array. To
encode these IDs, a 4-bit binary representation is
adopted for decimal numbers. Notably, the z
coordinate accounts for the floor number, which can be
either positive or negative. To address this, the initial
bit is employed to signify the sign of the floor number
("0" for positive, "1" for negative), with the remaining
three bits encapsulating the numerical value of the
coordinate.
In scenarios requiring bidirectional
communication, user registration is necessitated. This
process involves selecting a username (pin1)
comprising four decimal numbers, with each number
correspondingly linked to an RGBV channel.
Additionally, should buddy friend services be sought, a 4-bit binary code for the meeting (pin2) must be provided.
The last segment, termed the δ block (steering
angle δ), constitutes a 4-bit sequence. This component
completes the user's pose within the time frame
q (x, y, δ, t). This pose is augmented with steering
angle information, encompassing eight possible angles
along the cardinal points. These angles guide the trajectory from a start point to the subsequent destination.
The codes assigned to both pin2 and δ remain
consistent across all channels. In scenarios where
wayfinding services are unnecessary, these final three
blocks assume a value of zero, thus furnishing the user
solely with their own location information.
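As an illustration of the frame layout summarised in Table 1, the following Python sketch assembles one 64-bit frame; it is a sketch under assumptions (helper names are hypothetical, pin1/pin2 are treated as the per-channel 4-bit fields, and the 34-bit payload is simply zero-padded), not the authors' implementation.

# Illustrative assembly of the 64-bit frame described above:
# 5-bit sync header [10101], 24 bits of navigation data
# (x, y, z, pin1, pin2, delta at 4 bits each, with the first z bit
# used as the floor sign), a 34-bit payload and a final stop bit "0".

def encode_coordinate(value):
    """4-bit unsigned coordinate (0..15)."""
    return format(value, "04b")

def encode_floor(floor):
    """Sign bit ('0' positive, '1' negative) plus 3-bit magnitude."""
    sign = "0" if floor >= 0 else "1"
    return sign + format(abs(floor), "03b")

def build_frame(x, y, floor, pin1, pin2, delta, payload_bits):
    header = "10101"
    nav = (encode_coordinate(x) + encode_coordinate(y) + encode_floor(floor)
           + format(pin1, "04b") + format(pin2, "04b") + format(delta, "04b"))
    payload = payload_bits.ljust(34, "0")[:34]   # pad/trim miscellaneous data
    frame = header + nav + payload + "0"         # stop bit
    assert len(frame) == 64
    return frame

# Example: emitter at x = 3, y = 2, floor +1, no wayfinding services requested
print(build_frame(3, 2, 1, pin1=0, pin2=0, delta=0, payload_bits=""))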
The third and concluding block is aptly labeled the
"payload." This segment pertains to a sequence of bits
that doesn't directly contribute to the navigation
service. It encompasses miscellaneous data and
concludes with a stop bit. To decode the information
received via the photocurrent signal as captured by the
photodetector, a crucial step is undertaken. This
process relies on a calibration curve that has been pre-
established to facilitate this mapping [10]. The
calibration curve represents a sequence of bits
meticulously designed to correspond to each
conceivable decoding level. In essence, this calibration
curve serves as a guide, aiding in the establishment of
associations between photocurrent thresholds and
specific bit sequences. In Fig. 3, the MUX/DEMUX signals of the calibrated cell are presented. In the same time frame, a random signal (Payload) is superimposed.
Fig. 3. MUX/DEMUX signals of the calibrated cell. In the same time frame a random signal (Message) is superimposed.
This calibration curve leverages 16 distinct
photocurrent thresholds. These thresholds correspond
to bit sequences meticulously engineered to cover all
sixteen permutations achievable from the four RGBV input channels (2^4 combinations). The brilliance of
this approach lies in its simplicity: by juxtaposing the
calibrated levels (d0-d15) with the assigned four-digit
binary codes for each level, the decoding process
becomes transparent and straightforward. This direct
comparison illuminates the decoding pathway,
rendering the message intelligible.
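The decoding idea can be sketched as follows in Python; the relative channel amplitudes and the nearest-level assignment are illustrative assumptions and do not reproduce the measured calibration of [10].

# Illustrative DEMUX decoding: the summed photocurrent of the four
# RGBV channels is compared against 16 calibrated levels (d0-d15);
# each level maps to a 4-bit code giving the on/off state of the
# R, G, B and V transmitters in that time slot.

def build_calibration(channel_amplitudes):
    """Expected MUX level for each of the 2^4 RGBV on/off combinations."""
    levels = {}
    for code in range(16):
        bits = format(code, "04b")                       # bit order assumed: R, G, B, V
        levels[code] = sum(a for a, b in zip(channel_amplitudes, bits) if b == "1")
    return levels

def decode_sample(mux_value, levels):
    """Assign the sample to the nearest calibrated level and return its 4-bit code."""
    code = min(levels, key=lambda c: abs(levels[c] - mux_value))
    return format(code, "04b")

if __name__ == "__main__":
    cal = build_calibration([0.25, 0.46, 0.60, 1.50])    # assumed relative R, G, B, V amplitudes
    print(decode_sample(1.06, cal))                      # -> "0110", i.e. G and B on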
Fig. 4 exemplifies the MUX received signal and the decoded information that allow VLC geotracking and navigation of user “7261” in successive instants (t0, t1, t2), guiding him along his track. The visualized cells, paths and reference points (footprints) are also shown as inserts.
Fig. 4. Fine-grained indoor localization and navigation in
successive instants. On top, the transmitted channel packets are decoded [R, G, B, V].
Data shows that at t0 the network location of the received signals is R3,2,1, G3,1,1, B4,2,1 and V4,1,1; at t1 the user receives the signal only from the R3,2,1 and B4,2,1 nodes; and at t2 he has moved to the next cell, since the node G3,1,1 was added at the receiver.
Hence, the mobile user “7261” begins his route at position #1 (t0) and wants to be directed to his goal position in the next cell (#9). During the route the navigator is guided to E (code 3) and, at t1, steers to SE (code 2), crosses footprint #2 (t3) and arrives at #9. The ceiling lamps (landmarks) are spread over the whole building
and act as edge/fog nodes in the network, providing
well-structured paths that maintain a navigator’s
orientation with respect to both the next landmark
along the path and the distance to the eventual
destination.
3.2. Multi-person Cooperative Localization
and Guidance Services
In Fig. 5 the MUX synchronized signals received by two users who have requested guidance services at different times are displayed. At the top of the figure, the decoded information is shown and the simulated
scenario is inserted to guide the eyes. At the right-hand side, the request/response information is inserted.
Fig. 5. MUX/DEMUX signals of assigned requests from two users (“3009” and “7261”) at different poses (C4,4,1; #1W and C2,3,-1; #6W) and in successive instants (t1 and t3).
We have assumed that a user located at C2,3,-1 arrived first (t1), auto-identified himself (qi(t1), i = “7261”) and informed the controller of his intention to find a friend for a previously scheduled meeting (code 3). A buddy list is then generated and will include all the users who have the same meeting code. User “3009” arrives later (qj(t3)), sends the alert notification (C4,4,1; t3) to be triggered when his friend is in his floor vicinity (level 1), identifies himself (“3009”) and uses the same code (code 3) to track the best way to his meeting. The “request” message includes, beyond synchronism, the identification of the user (“3009”), his address and orientation, qi(t) (C4,4,1, #1W), and the help requested (Wayfinding Data). Since a meet-up
between users is expected, its code was inserted before
the right track request. Upon receiving this request (t3),
the buddy finder service uses the location information
from both devices to determine the proximity of their
owners (qij (t3)) and provides the best route to the
meeting, avoiding crowded areas. In the “response”,
the block CM identifies the CM [0000] and the next
blocks the cell address (C4,4,1), the user (3009) for
which the message is intended and finally the
requested information: meeting code 3, orientation NE
(code 4) and wayfinding instructions.
The outcomes of our study underscore the
remarkable capabilities of VLC's dynamic LED-
enhanced guidance system. This system not only
furnishes users with precise and dependable route
guidance but also empowers them to navigate
effectively and engage in geotracking activities. As
users traverse large building complexes, VLC
technology offers a groundbreaking solution that steers
them towards optimal routes while delivering
continuous guidance throughout their journey.
The bidirectional communication prowess
embedded within the system introduces a realm of
possibilities for diverse services. Mission-critical
applications can harness the inherent reliability and
low-latency communication offered by the ID system.
This reliability is particularly pertinent in scenarios
requiring swift and trustworthy data exchange. Beyond
this, the system's bidirectional communication
capacities pave the way for an array of consumer-
centric services. For instance, location-based
advertisements can be intelligently deployed,
personalized information can be seamlessly delivered,
and indoor wayfinding functionalities can be fine-
tuned to heighten user experiences within the building
environment.
In essence, our research not only highlights the
technical capabilities of the VLC-enabled guidance
system but also envisions the transformative potential
it holds for users within expansive indoor spaces. The
system's dual nature, combining navigation and
communication, sets the stage for improved efficiency,
enhanced experiences, and novel applications that
redefine how individuals interact with and navigate
through complex building structures.
4. Conclusions
This paper embarks on an exploration of the
potential for Visible Light Communication (VLC) to
revolutionize indoor navigation within vast building
complexes. Tailored to the needs of mobile users, it
introduces a novel guidance system that leverages the
capabilities of VLC technology. Central to this system
is a hybrid mesh cellular architecture, complemented
by the establishment of an encompassing
communication protocol tailored to multi-level
scenarios.
The system acts as a steadfast navigational
companion, seamlessly providing precise route
guidance within complex indoor environments. The
spectrum of functionalities includes navigation
assistance, real-time route tracking, and reliable
guidance for users in motion. The culmination of our
experimental endeavors is reflected in global results,
where the successful localization of mobile receivers
coincides with the simultaneous transmission of data.
A distinctive hallmark emerges in the form of a
cooperative localization mechanism. As regions within
the environment become densely populated, this
mechanism adeptly adapts by autonomously
rescheduling localization tasks. This dynamic feature
orchestrates the dissemination of guidance information
and issues alerts, ensuring a responsive approach to
dynamic environmental conditions.
Acknowledgements
This research was funded (in part) by the Portuguese FCT program, Center of Technology and Systems (CTS) UIDB/00066/2020 and UIDP/00066/2020, and by IPL/2022/POSEIDON_ISEL.
References
[1]. Hassan, N. U., Naeem, A., Pasha, M. A. and Jadoon, T. J., Indoor Positioning Using Visible LED Lights: A Survey, ACM Computing Surveys, Vol. 48, 2015, pp. 1-32.
[2]. Tsonev, D., Chun, H., Rajbhandari, S., McKendry, J., Videv, S., Gu, E., Haji, M., Watson, S., Kelly, A., Faulkner, G., Dawson, M., Haas, H., and O’Brien, D., A 3-Gb/s single-LED OFDM-based wireless VLC link using a Gallium Nitride μLED, IEEE Photon. Technol. Lett., 26, 7, 2014, pp. 637–640.
[3]. O’Brien, D., Le Minh, H., Zeng, L., Faulkner, G., Lee, K., Jung, D., Oh, Y., and Won, E. T., Indoor visible light communications: challenges and prospects, Proc. SPIE 7091, 2008, 709106.
[4]. Hassan, N. U., Naeem, A., Pasha, M. A., Jadoon,
T., and Yuen, C., Indoor positioning using visible led
lights: A survey, ACM Comput. Surv., Vol. 48, 2015,
pp. 1–32.
[5]. Ozgur, E., Dinc, E., Akan, O. B., Communicate to
illuminate: State-of-the-art and research challenges for
visible light communications, Physical
Communication, 17, 2015, pp. 72–85.
[6]. Vieira, M., Louro, P., Fernandes, M., Vieira,
M. A., Fantoni, A. and Costa, J., Three Transducers
Embedded into One Single SiC Photodetector: LSP
Direct Image Sensor, Optical Amplifier and Demux
Device, in Advances in Photodiodes, InTech,
Chap. 19, 2011, pp. 403-425.
[7]. Vieira, M. A., Louro, P., Vieira, M., Fantoni, A. and
A. Steiger-Garção, Light-activated amplification in Si-
C tandem devices: A capacitive active filter model,
IEEE Sensor Journal, 12, 6, 2012, pp. 1755-1762.
[8]. Vieira, M. Vieira, M. A., Louro, P., Vieira, P., Fantoni,
A., Light-emitting diodes aided indoor localization
using visible light communication technology, Opt.
Eng. 57, 8, 2018, 087105.
[9]. Vieira, M. A., Vieira, M., Louro, P., Vieira, P., Bi-
directional communication between infrastructures
and vehicles through visible light, Proc. SPIE 11207,
Fourth International Conference on Applications of
Optics and Photonics, 3 October 2019, 112070C.
[10]. Vieira, M., Vieira, M. A., Louro, P., Fantoni, A.,
Vieira, P. Dynamic VLC navigation system in
Crowded Buildings, International Journal on
Advances in Software, 14, 3-4, 2021, pp. 141-150.
(015)
Classification of Sports Exercises and Repetition Counting
based on Inertial Measurement Data
P. Krutz 1, M. Rehm 1, Z. Lang 1, M. Dix 1 and J. Patalas-Maliszewska 2
1 Chemnitz University of Technology, Institute for Machine Tools and Production Systems,
Reichenhainer Strasse 70, 09126 Chemnitz, Germany
2 University of Zielona Góra, Institute of Mechanical Engineering, 65-417 Zielona Gora, Poland
Tel.: +4937153134950
E-mail: pascal.krutz@mb.tu-chemnitz.de
Summary: Inertial measurement units (IMU) are often used in the field of Human Activity Recognition (HAR) and were also
used in this work with the additional possibility of recording absolute positions by ultra-wideband measurement. Seven
different sports exercises were recorded within the framework of a study, including 21 participants. After data preprocessing,
artificial neural network structures were trained for exercise classification and an optimisation of the hyperparameters was
carried out. The performance of the trained models was compared and validation accuracies of up to 91.6 % were achieved.
Another focus was to implement an algorithm to count the correct completed repetitions independently of the type of activity.
For this purpose, the distance within the samples presented in matrix form was calculated, which made it unnecessary to select
characteristic signal variables specific to the exercise. Despite its comparatively simple structure, the implemented counting
algorithm showed promising results, which could be further optimised by several post-processing routines.
Keywords: Inertial sensors, Human activity recognition, Repetition counting, Machine learning, Artificial neural networks.
1. Introduction
IMU sensors, in combination with broadband
technologies and high-performance algorithms, offer
great potential in the detection, monitoring and
improvement of sports exercises. Especially in times
of a lack of skilled workers and the demands resulting
from the increasing individualisation of personal
leisure time, digital therapy and training methods are
needed for both the physiotherapy and fitness markets.
Typically, methods of supervised learning are
predominantly used in the field of HAR. The most
common approaches in the literature are those based
on Support Vector Machine [1-3], followed by Naive
Bayesian [4, 5] and k-Nearest Neighbour [6]. Less
common are deep learning approaches using
Convolutional Neural Networks (CNN) [7] and Long
Short Term Memory (LSTM) Networks [8].
Unsupervised learning methods are the exception and
have so far only been used to count the number of
repetitions, for example in tennis strokes [9]. The work
is based on preliminary work [10, 11] on supervised
learning of three sports exercises, where high accuracy
was achieved with both the classifiers and the neural
networks used. The workflow included data
acquisition and preprocessing, the training of AI
models, the optimisation of model parameters and a
final evaluation of model performance.
This study aims to choose, optimize and compare
model structures and parameters for supervised
learning using an extended database. The database
consists of recordings of seven different sports
activities performed by 21 participants. Another focus
of this paper is the development of a universal counting
algorithm suitable for counting repetitions of any sport
activity.
2. Data Generation and Preprocessing
Within the study with 21 participants, 7 different
exercises were completed at one sports device as
successive sets. The execution of the different
activities is shown in Fig. 1.
Fig. 1. Recorded exercises on the sports device,
1-support, 2-squats, 3-dips, 4-lunges, 5-pull ups, 6-sit ups,
7-pushups.
The exercises support, squats, dips, lunges, pull-
ups, sit-ups and push-ups were recorded as continuous
recordings with IMUs at the positions chest, hand and
foot. With a set sampling rate of 30 Hz the quantities
linear axial accelerations, rotation rates, orientations
(Euler angles, quaternions, magnetic flux densities),
absolute pressure and the Cartesian position in space
were captured. The time series were labelled during the
recording process by switching the subjects between
pause and activity intervals using a wearable device. In
total, around 230 minutes of exercise data were
recorded, divided into 24 exercise sets, which also
included intervals of stretching and loosening
exercises, labelled as pauses. Signal interruptions that
occurred were linearly interpolated during data pre-
processing and the time series of three IMUs were
synchronised. The distribution of the generated motion
data to the different classes is shown in Table 1. As can
be seen from this, about half of the total amount of data
belongs to the pause intervals. Despite the
comparatively high volume of seemingly unnecessary
data, it was fully utilized for model training in order to
train the ability to distinguish between pause and active
intervals.
Table 1. Distribution of classes in the used database.

class            lunges     dips       pull-ups    squats
# Samples        38997      22675      21736       32288
percentage [%]   9.39       5.46       5.23        7.77

class            push-ups   pauses     sit-ups     support
# Samples        21958      220290     32544       24816
percentage [%]   5.29       53.04      7.84        5.98
3. Training, Optimisation and Comparison of ANNs
The supervised models were trained in Matlab
R2022b. At first, the number of input variables was
reduced to 36 by using only the linear accelerations,
rotation rates, magnetometer data and Euler angles.
The data were also segmented so that the time series could be processed as sequential data in Matlab. To ensure a diverse distribution of unbalanced
data across the training and holdout validation sets, a
segment length of 500 samples was chosen
(corresponding to approximately 16 s of workout
time). The ratio for the validation data holdout was set
to 0.3. Consequently, 582 sequence blocks of training
data and 249 blocks of validation data were generated.
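For orientation, a Python sketch of such a segmentation and holdout split is given below (the authors worked in Matlab; the array names and the random split are assumptions).

# Illustrative segmentation of a multivariate time series into fixed-length
# sequence blocks with a holdout split, mirroring the procedure described
# above (36 channels, 500-sample segments, 30 % holdout).
import numpy as np

def segment(data, labels, seg_len=500):
    """Split (n_samples, n_channels) data into (n_segments, seg_len, n_channels)."""
    n_seg = len(data) // seg_len
    X = data[:n_seg * seg_len].reshape(n_seg, seg_len, data.shape[1])
    y = labels[:n_seg * seg_len].reshape(n_seg, seg_len)
    return X, y

def holdout_split(X, y, ratio=0.3, seed=0):
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X))
    n_val = int(round(ratio * len(X)))
    return X[idx[n_val:]], y[idx[n_val:]], X[idx[:n_val]], y[idx[:n_val]]

if __name__ == "__main__":
    data = np.random.randn(415_500, 36)            # stand-in for the 36 IMU channels
    labels = np.random.randint(0, 8, 415_500)      # 8 classes including pauses
    X, y = segment(data, labels)
    X_tr, y_tr, X_val, y_val = holdout_split(X, y)
    print(X_tr.shape, X_val.shape)                 # (582, 500, 36) (249, 500, 36)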
Subsequently, an LSTM and a CNN network were trained and an optimisation of the hyperparameters was carried out. The CNN network can be described as a temporal convolutional network (TCN), which consists of several CNN blocks connected in series. The time series are scanned in the form of a receptive field, the extent of which is influenced, for example, by the number of blocks [12]. The most accurate
models are compared in Table 2 with the accuracies
achieved and the training times required. The TCN
could be trained more than three times as fast as the
LSTM with slightly better accuracy.
Table 2. Comparison of the trained LSTM and TCN structures with the highest accuracies.

model   training time   accuracy (training)   accuracy (validation)   worst F1-score (class)
LSTM    13 min 26 s     94.2                  90.8                    83.8 (pull-ups)
TCN     3 min 49 s      95.5                  91.6                    83.4 (pull-ups)
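For readers unfamiliar with TCNs, the following PyTorch sketch shows the general structure of such a network, with stacked dilated 1-D convolutions and per-time-step class outputs; it is an illustrative assumption about the architecture, not the authors' exact Matlab model.

# Minimal TCN-style sequence-to-sequence classifier: stacked dilated 1-D
# convolutions scan the 36-channel sequences with a growing receptive field;
# a 1x1 convolution outputs class scores for each of the 8 classes per sample.
import torch
import torch.nn as nn

class TinyTCN(nn.Module):
    def __init__(self, n_channels=36, n_classes=8, width=64, n_blocks=4):
        super().__init__()
        layers, in_ch = [], n_channels
        for b in range(n_blocks):
            dilation = 2 ** b                      # receptive field grows with depth
            layers += [nn.Conv1d(in_ch, width, kernel_size=3,
                                 padding=dilation, dilation=dilation),
                       nn.ReLU(),
                       nn.BatchNorm1d(width)]
            in_ch = width
        self.features = nn.Sequential(*layers)
        self.head = nn.Conv1d(width, n_classes, kernel_size=1)

    def forward(self, x):                          # x: (batch, channels, time)
        return self.head(self.features(x))         # per-time-step class scores

if __name__ == "__main__":
    model = TinyTCN()
    logits = model(torch.randn(8, 36, 500))        # batch of 500-sample segments
    print(logits.shape)                            # torch.Size([8, 8, 500])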
Fig. 2 shows the obtained confusion chart for
validating the TCN. One possible reason for incorrect
assignments remaining after the hyperparameter optimisation lies in the methodology applied for labelling the data. In this study, the test persons independently initiated the beginning and end of an exercise interval by operating a wearable, implying that the activity ranges are not uniformly defined. This can be observed in the false positive and false negative elements of the pause class, which contain significantly more samples than the other classes. This effect was already reduced by the implementation of a three-second countdown between the actuated switch-over point and the beginning of the exercise execution. Nevertheless,
deviations remain, especially in exercises where the
starting position can only be assumed after the
wearable has been activated (e.g. pull-ups and dips).
Fig. 2. Confusion matrix for applying the trained TCN
on the validation data.
Furthermore, it must be taken into consideration
that the subjects were largely unrestricted in their
exercise execution. Different variations were tolerated,
e.g. squats with stretched or bent arms and also
simplified executions (e.g. pull-ups in inclined
posture). Despite the fact that some exercises have a
high degree of similarity (e.g. lunges/squats,
dips/support), inter-activity mismatch is limited. Fig. 3
shows the TCN model predictions for a section of the
validation dataset. There are occasionally false
assignments, which were filtered out by post-
processing. In the smoothed prediction that is also
shown, a sliding window of 50 samples was used to
determine the dominant class in each window, which
slightly improves the overall accuracy of the validation by an additional 0.4 %.
Fig. 3. Temporal course of the TCN model predictions
for a segment of the validation data, comparing model
prediction (non-smoothed/smoothed) and ground truth.
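A possible implementation of this majority-vote smoothing is sketched below (window of 50 samples as in the text; everything else is illustrative).

# Sliding-window smoothing of per-sample class predictions: each sample is
# replaced by the dominant (most frequent) class within a 50-sample window,
# which removes short spurious switches between classes.
import numpy as np
from collections import Counter

def smooth_predictions(pred, window=50):
    half = window // 2
    smoothed = np.empty_like(pred)
    for i in range(len(pred)):
        lo, hi = max(0, i - half), min(len(pred), i + half)
        smoothed[i] = Counter(pred[lo:hi].tolist()).most_common(1)[0][0]
    return smoothed

if __name__ == "__main__":
    raw = np.array([0]*60 + [3] + [0]*40 + [2]*100)    # a single spurious "3"
    print(np.all(smooth_predictions(raw)[:100] == 0))  # outlier removed -> True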
4. Repetition Counting
The execution of repetitive exercises is visible as a
periodic progression of individual measured variables,
which differ in type and curve progression depending
on the respective type of exercise. In order to reduce
the data load and to improve the generalisability of the
implemented algorithms, the strategy was to use only
variables that are independent of the orientation and
position of the subjects. Therefore, only the values of
the axial accelerations and rotation rates were
considered. Consequently, 18 measurement variables
were used to determine the number of completed
repetitions. As the value ranges of the inertial
measurement variables are widely distributed,
symmetrical normalisation was applied to all variables
in the interval [-1...+1].
In order to enable an evaluation of the counting
algorithm without the influence of an upstream model
for classification, the ground truth labels were used for
the development of the counting algorithm. It is also
important to mention that only the activity intervals in
their chronological order from the recording were used
as the data basis for the analysis. This order of data
would also be present in a potential interconnection of
model and counting algorithm, when the latter receives
sequences of predictions by the classification model.
Consequently, 24 sets in total were analysed in series
by the counting algorithm.
The special requirement in this work was to find a
parameter that shows an oscillating course for a variety
of different sports exercises. Such an exercise-
independent parameter can be calculated with the
distance Dis between a pattern segment A and every
segment B of the respective activity as described in
equation (1) and (2). A and B are matrices with the
dimension m x n, where m defines the number of
samples, also called segment length and n the number
of features for one segment.
\[ C = A - B = \begin{pmatrix} a_{11} - b_{11} & \cdots & a_{1n} - b_{1n} \\ \vdots & \ddots & \vdots \\ a_{m1} - b_{m1} & \cdots & a_{mn} - b_{mn} \end{pmatrix}, \qquad A, B \in \mathbb{R}^{m \times n} \tag{1} \]

\[ Dis(A, B) = \sum_{i=1,\,j=1}^{i \le m,\; j \le n} |c_{ij}| = \sum_{i=1,\,j=1}^{i \le m,\; j \le n} |a_{ij} - b_{ij}| \tag{2} \]
The pattern segment was selected in the centre of
each exercise interval to avoid the influence of
transition areas between other classes because, as
previously mentioned, it was not possible to precisely
define the start and end points of an exercise execution.
After the segment length was set to 30 samples, the
distances for all segments of an activity interval were
calculated with a step size of one sample. The obtained
distance values were smoothed by a moving average
filter with a bandwidth of 15. Additionally, the median
was calculated for each interval of smoothed distance
values. The courses of unfiltered/filtered distances and
the median values are presented in Fig. 4 for an
exemplary execution of six pull-ups. The oscillating
distance curves represent the number of completed
repetitions. A characteristic attribute is that the
distance increases significantly at the borders of an
interval to be examined. In addition, the distance in the
area of the pattern segment becomes zero. To count the
repetitions, the number of intersections at the falling
edge of smoothed distance values and the median were
analyzed (red crosses in Fig. 4).
Fig. 4. Calculated distance values for the execution of six pull-ups, marking the counted intersections with the median at the falling edge (N = 30, M = 18).
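The counting principle can be summarised in the following Python sketch, which combines the element-wise distance of Eqs. (1)-(2), the moving-average smoothing and the falling-edge median crossings; the synthetic test signal and all names are illustrative, not the authors' code.

# Exercise-independent repetition counting: the distance between a central
# pattern segment and every other segment of the activity interval oscillates
# once per repetition; repetitions are counted as falling-edge crossings of
# the median of the smoothed distance curve.
import numpy as np

def sliding_distance(data, seg_len=30):
    """data: (n_samples, n_features), normalised to [-1, 1]."""
    centre = (len(data) - seg_len) // 2
    pattern = data[centre:centre + seg_len]                       # pattern segment A
    return np.array([np.abs(pattern - data[i:i + seg_len]).sum()  # Dis(A, B), Eq. (2)
                     for i in range(len(data) - seg_len + 1)])

def count_repetitions(data, seg_len=30, filt=15):
    dis = sliding_distance(data, seg_len)
    smooth = np.convolve(dis, np.ones(filt) / filt, mode="same")  # moving average
    median = np.median(smooth)
    below = smooth < median
    falling = np.where(~below[:-1] & below[1:])[0]                # falling-edge crossings
    return len(falling)

if __name__ == "__main__":
    t = np.linspace(0, 6 * 2 * np.pi, 900)                        # six synthetic repetitions
    data = np.column_stack([np.sin(t + p) for p in np.linspace(0, 1, 18)])
    print(count_repetitions(data))                                # -> 6 (approximately)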
To evaluate the accuracy of the counting algorithm, the deviation rate and the class-wise deviation rate were used, described in (3) and (4). The deviation rate δ is signed and thus enables the assessment of whether too many or too few repetitions Rc have been counted in an exercise set. When determining the class-wise deviation rate δclass for n records, the absolute values of the individual deviation rates are averaged. All deviation rates are expressed relative to the true number of repetitions Rt.
\[ \delta = \frac{R_c - R_t}{R_t} \tag{3} \]
𝛿 ||




 (4)
In order to obtain a final value for the overall performance of the counting algorithm for a larger number of trials, δSUM was calculated as the weighted average of δclass, considering the overall distribution of the m activity-classes, as shown in equation (5). Here, nm is the number of samples of each activity-class and nA the number of all activity-samples.
𝛿∙

 (5)
Table 3 shows the results of experiments for
optimising the counting algorithm by varying the
features and the segment length for distance
calculation. First, the features were varied, using a
segment length of 30 samples. The deviation rates
listed (class-wise and weighted average) suggest that
the accelerations (acc) have a significantly higher
influence for successful repetition counting than the
gyroscope data (gyro). The lowest δSUM was
determined with a combination of rotation rates and
accelerations and was therefore defined as the feature
setting for the further investigations. The lower part of
Table 3 shows the results of varying the segment
length, i.e. the number of samples used to calculate the distance. It was varied in the range of 20 to 70 samples, with the best results obtained for the already used 30 samples and, with an additional minimal improvement of 0.7 % in δSUM, for a segment length of 50 samples.
For the detailed consideration of the class-wise
delta, these two mentioned tests are considered
(highlighted grey in Table 3). In both cases, the δclass
was most significant for the support class with 21.2 %
and 23.9 % respectively. In contrast, the classes squats
and sit-ups were the most reliably counted by far with
δclass between 2.3 % and 3 %.
Table 3. Results of the optimisation experiments, determining the deviation rates δclass for the activities (1-support, 2-squats, 3-dips, 4-lunges, 5-pull ups, 6-sit ups, 7-push-ups) and the overall deviation δSUM.

                  δclass in %                                        δSUM in %
                  1      2      3      4      5      6      7
variable feature selection, segment length = 30
acc + gyro *      21.2   2.8    19.3   15.0   9.1    2.9    7.9      10.8
acc               18.6   8.3    14.6   13.4   7.5    5.2    9.0      10.9
gyro              38.3   3.9    34.8   58.1   28.6   11.7   10.7     27.5
variable segment length, features: acc + gyro
20                19.7   4.4    21.5   23.5   12.1   7.3    9.0      14.0
40                23.9   3.0    19.3   14.2   8.5    2.6    7.3      10.8
50 *              23.9   3.0    15.9   12.4   9.6    2.3    7.3      10.1
60                23.1   3.0    16.7   12.4   14.6   2.6    7.9      10.8
70                26.1   2.8    19.3   13.7   14.6   3.5    9.0      12.0
(* parameterisations referred to in the text as highlighted)
In order to analyse possible causes for the partly very different values of δclass, the deviations for each of the 24 recorded sets were considered for the upper highlighted parameterisation in Table 3 (acc + gyro, segment length 30). Deviation rates δ with a modulus greater than 0.3 were examined in more detail. Typical scenarios for exceeding and falling below the true number of repetitions were identified from different recordings and are presented below.
Fig. 5. Progressions of the distance values for incorrectly counted repetitions, top: fewer repetitions counted than true (δ < 0), bottom: more repetitions counted than true (δ > 0); with respective indication of the type of exercise and the true and counted repetitions.
In Fig. 5 top left, the performance of squats is
shown where the exercise execution was varied within
the interval. As a result, the median is too high in the
immediate vicinity of the pattern segment, so that there
are no intersections with falling edges in this area. The
upper right part of the figure is similar. Here, several
repetitions of dips at the end of the interval were also
no longer detected. In the lower part of Fig. 5, too few
dips were recorded in the left example. Here the
recognisable periodic progression of the distance
values was comparatively short, which suggests only a
short execution within the labelled active interval. In
the fourth example (bottom right), the two completed
repetitions of the support exercise cannot be identified
in the progression. One interpretation is that the exercise was performed very irregularly; it is also possible that the test person was physically overloaded. Finally, with regard to the evaluation of the faulty intervals, it can be stated that the implemented methodology is particularly susceptible to errors when exercises are performed irregularly. The observation that irregular execution is caused by physical overstress of the test subjects is supported by the reliable counting of relatively simple exercises such as squats and sit-ups. A special status is occupied by the
lunges, whose execution (step position of left and right
foot alternately or identical for several repetitions) was
not exactly defined, which indicates that the sensor
attached to only one foot does not contribute to a
periodic progression of the distance values for all
movements.
5. Discussion and Conclusion
In the presented work, a recorded data set with seven
sports activities was examined. The first step was to
classify the completed exercises using supervised
machine learning methods. Two different network
structures (LSTM and TCN) were trained and the
hyperparameters of the models were iteratively
optimised. The TCN model gave a good performance
with 91.6 % validation accuracy using significantly
shorter training times. One way to improve the
accuracy of the prediction is by post-processing the
labels produced during recording. This will enable a
more accurate determination of the pure duration of the
exercise, as it has been shown that the dominant
percentage of misclassifications is caused by confusion
of the break class with other activities. For further
investigations, the interconnection of several
classification levels would be another possible
approach to further increase the model accuracy. For
post-processing of the model predictions, a smoothing
algorithm was applied that determined the dominant
prediction class using a filter width of 50 samples.
Although the overall accuracy of the prediction was only slightly improved, by 0.4 %, the described
procedure can optimise the user experience in possible
applications for real-time recognition.
The second crucial aspect of the study involved
developing a counting algorithm to track completed
exercise repetitions. By calculating distance values for
a pattern segment defined within an exercise interval,
the number of repetitions could be made visible for all
activities on the basis of this one parameter, which
made the class-specific analysis of individual signal
variables unnecessary. In order to finally determine the
number of repetitions in an algorithmic way, the
intersections of the distance values with the median of
all distances of a single exercise phase were evaluated.
For performance evaluation with different
parameterisations (segment length, input features), the
deviation rate was introduced. The implemented
algorithm performed well when counting regular
executions. The key advantage of the approach is its
ability to load a new pattern segment as a reference for
each new exercise interval, resulting in a highly robust
algorithm capable of accommodating different people
and their specific type of exercise execution. However,
it is a precondition that the individual repetitions
within an active exercise are performed very
uniformly, which tends to lead to deviations in the case
of conditionally demanding exercises. Furthermore,
the calculated median of the distances was not always
ideal, so that executions were not counted at all or even
counted twice. Here, it would also be of interest for
further research to develop a more robust processing
method for irregular distance patterns.
Acknowledgements
This publication is funded by the Federal German Ministry
for Economic Affairs and Climate Action.
References
[1]. S. Kautz et. al., Activity recognition in beach
volleyball using a deep convolutional neural network,
Data Mining and Knowledge Discovery, 2017,
pp. 1–28.
[2]. Brock, H., Ohgi, Y., Assessing motion style errors in
ski jumping using inertial sensor devices, IEEE
Sensors Journal, 99, 2017, pp. 1–11.
[3]. Pernek, I. et.al., Recognizing the intensity of strength
training exercises with wearable sensors, Journal of
Biomedical Informatics, 58, 2015, pp. 145–155.
[4]. Connaghan, D. et. al., Multi-sensor classification of
tennis strokes, Sensors, 2011, pp. 1437–1440.
[5]. Schuldhaus, D. et. al., Inertial sensor-based approach
for shot/pass classification during a soccer match, in
Proceedings of the 21st ACM KDD Workshop on
Large-Scale Sports Analytics, Sydney, Australia,
2015, pp. 1–4.
[6]. Groh, B. H. et. al., Classification and visualization of
skateboard tricks using wearable sensors, Pervasive
and Mobile Computing, 40, 2017, pp. 42–55.
[7]. Jiao, L. et al., Multi-sensor Golf Swing Classification
Using Deep CNN, in Procedia Computer Science, 129,
2018, pp. 59–65.
[8]. Rassem, A. et. al., Cross-country skiing gears
classification using deep learning, ArXiv Preprint
ArXiv:1706.08924, 2017.
[9]. Kos, M., Kramberger, I., A wearable device and
system for movement and biometric data acquisition
for sports applications, IEEE Access, 2017,
pp. 6411–6420.
[10]. Patalas-Maliszewska, J. et. al., Inertial Sensor-Based
Sport Activity Advisory System Using Machine
Learning Algorithms, Sensors, 23, 3, 2023, 1137.
[11]. Pajak, I. et. al., Sports activity recognition with UWB
and inertial sensors using deep learning approach, in
Proceedings of the IEEE International Conference on
Fuzzy Systems (FUZZ-IEEE’ 2022), 2022, pp. 1-8.
[12]. Mathworks: Sequence-to-Sequence Classification
Using 1-D Convolutions, https://de.mathworks.com
/help/deeplearning/ug/sequence-to-sequence-
classification-using-1-d-convolutions.html,
19.07.2023.
(016)
Difference in Sensor Placement Position of Insole-type
Pressure Transducers
Y. Uchida 1, T. Funayama 2, E. Ohkubo 1 and Y. Kogure 3
1 Dept. of Life Science, Teikyo University of Science, Adachi-ku, 120-0045 Tokyo, Japan
2 Dept. of Occupational Therapy, Teikyo University of Science, Uenohara-shi, 409-0193 Yamanashi, Japan
3 Professor Emeritus, Teikyo University of Science, Adachi-ku, 120-0045 Tokyo, Japan
Tel.: + 81369101010, fax: + 81369103800
E-mail: uchida@ntu.ac.jp
Summary: A system aimed at detecting changes in physical condition is proposed in this study. To reduce costs, the number
of pressure sensors is limited to four. The system evaluates the changes in the load applied to each sensor on the insoles.
Commercially available insoles are classified into S, M, and L sizes and are cut to fit the shoe size. Consequently, sensors are
not always attached to the appropriate position on the insole, and substantial variations are expected to occur because of
misalignment. We found that the output characteristics differed significantly depending on the position of the toe sensor. In
particular, the toe length varies considerably among individuals, and it is important to adjust the sensor position to suit each
individual. The peak value of the sensor output and the steepest slope value at the subsequent decrease emerged as promising
feature values. The incorporation of machine learning into the output results, including other sensor positions, is expected to
yield more accurate data.
Keywords: Insole, Pressure sensors, Health condition, Arduino nano, Bluetooth, Decision tree, Machine learning.
1. Introduction
Rapid aging is a global problem [1] that is expected
to become a major societal concern not only in Europe
but also in Asia [2]. The number of caregivers for the
elderly is expected to increase due to the declining
birth rate, and the number of caregivers for each
elderly person is expected to decrease, resulting in the
elderly being cared for by the elderly. Among the
primary concerns in caring for the elderly is that they
may become bedridden owing to fractures caused
by falls.
Many studies have employed images and sensors to
prevent falls [3-7]. Insole sensors, which are
commercially available and relatively easy to use, are
often used to prevent falls because the gait state is
greatly influenced by physical conditions. While these
commercial insole sensors were mainly developed for
athletes, they have attracted considerable interest for
their relevance to lower limb dynamics, crucial not
only for runners but also for people of all ages.
Furthermore, there is a growing interest in gait analysis
within rehabilitation centers and facilities dedicated to
the elderly.
We previously reported a sensor that is used as a
reference when determining movement limitation
states using data from the insole sensor pressure
distribution [8-10]. The e-rubber smart insole, known
as FEELSOLE®, was available in three sizes (S, M,
and L) with 2 cm increments. In this case, the output of
the toes seemed low, which raised the question of
whether it was important to accurately determine the
sensor position. Because the tip of each toe has a short
bone, a slight difference in the sensor ground position
may affect the output. There are diseases and disorders
for which information from the toe portion of the insole
is important, including hammertoe, a condition in
which the toes are bent.
Therefore, we fabricated a device in which the
pressure sensor was fixed to the shoe insole with tape
to allow the sensor position to change arbitrarily.
Using this insole, the pressure changes during walking
were examined by changing the position of the toe
sensor, and the effects of different sensor positions
were investigated.
The research was approved by the Ethics
Committee of Teikyo University of Science.
2. Experimental
The pressure distribution screen of a commercially
available smart insole shows the pressure applied to
the sensor in different colors. In an initial experiment
to determine the optimal sensor position, the sensor
was placed as shown in Fig. 1(a) using an
INTERLINK FSR-402 sensor to obtain the foot
pressure distribution for comparison with the
simplified display. The sensor was affixed to four
locations: 15 mm from the toe, 60 mm from the inside
and outside of the foot, and 20 mm from the heel. The
side with the affixed sensor faced downwards. As the
experiment progressed, as shown in Fig. 1(b),
measurements were obtained by changing the sensor
placement position to 80 and 90 mm for the inner and
outer sensor positions, respectively. This adjustment
aimed to achieve a more accurate distribution of foot
pressure. As shown in Fig. 2, the system uses an
Arduino Nano to convert analog signals from the
sensors into digital format and sends them to a PC via
Bluetooth using a Microchip SBDBT with PIC24FJ64GB004. The received signals were
processed by a program created in Processing, and the files were saved. For comparison, the smart insole
FEELSOLE® was used on top of the fabricated
insoles. Because of the cushioning effect of the smart
insole, the output of the homemade insole sensor was
approximately 0.5 V lower. A 9 V battery was used for this prototype because a polymer battery charger was unavailable during the prototype phase owing to supply shortages. A homemade insole sensor was
attached to the right foot for measurement. Sandals
were used instead of shoes, primarily because patients
in Japanese hospitals prefer sandals owing to their ease
of removal and wear. Moreover, this choice facilitates
easy wiring of sensor signals to the Arduino Nano and
Microchip SBDBT attached to the top of the footwear.
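For illustration only (this is not the authors' acquisition software), a Python sketch using pyserial shows how the four analog sensor values streamed over the Bluetooth serial link could be logged; the port name, baud rate and one-comma-separated-reading-per-line format are assumptions.

# Illustrative logger for the insole prototype: reads comma-separated sensor
# values from the Bluetooth serial port and appends them, time-stamped, to a
# CSV file (port name, baud rate and line format are assumptions).
import csv
import time
import serial   # pyserial

PORT, BAUD = "COM5", 115200          # assumed Bluetooth serial port settings

with serial.Serial(PORT, BAUD, timeout=1) as ser, \
        open("insole_log.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["t_s", "toe", "inner", "outer", "heel"])
    t0 = time.time()
    while time.time() - t0 < 60:                     # log for one minute
        line = ser.readline().decode(errors="ignore").strip()
        parts = line.split(",")
        if len(parts) == 4:                          # expect 4 sensor channels
            writer.writerow([round(time.time() - t0, 3)] + parts)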
Fig. 1. Sensor position.
Fig. 2. Measurement system.
During the measurement, the participant walked for
approximately 30 or 60 s, with and without motion
restriction on the right knee joint using a supporter, for
a distance of approximately 3 m. A video was captured
from the front for separate analysis. While walking, the
participant repeatedly performed U-turns and straight
turns. The position of the sensor was varied from the
tip of the insole to examine the relationship between
the sensor position and the presence or absence of
motion restriction in the right knee joint. At 70 mm, the
sensor position was lower than that of the inside and
outside sensors, as shown in the photograph in
Fig. 1(a).
The participant was a male in his 60s. Exercise
restriction was simulated using a supporter in the knee
area.
3. Results
3.1. Output Waveforms from a Self-made Insole
Sensor
With the sensor positioned 15 mm from the tip of
the insole, the measurement results during walking
with and without motion restriction are shown in
Figs. 3 (a) and (b), respectively. The position of the
peak was different in each cycle, and the output voltage
was lower in some cycles. In the absence of motion
restriction, the signal decreased with time.
Assuming that the low-output state below 50 % of
the peak value corresponded to the state at the U-turn,
we compared the difference between the peak
(maximum value) and the minimum value among four
consecutive values in the interval up to the low-output
value before the first U-turn. The interval was used as
a break in this case. The difference tended to be larger with motion restriction.
(a)
(b)
Fig.3. Measurement results during walking without
(a), and with motion restriction (b).
3.2. Comparison of Average Output Voltages
and the Difference between Maximum
and Minimum Output Voltages
The sensor position from the toe was varied from
15 mm to 70 mm. Fig. 4 illustrates two characteristic
examples: the average of the four sensor output
voltages and the difference between the maximum and
minimum output voltages. In the figure, Toe represents
the case without motion limitation, ToeR represents
the case with motion limitation, and labels marked as
dif represent the maximum and minimum voltage
differences.
Fig. 4. Average of four sensor output voltages and the
difference between maximum and minimum voltages.
When the sensor position of the toe was changed
without motion restriction, different results were
obtained. The output was such that the characteristics
shifted to the left, except for the area close to the tip,
but the output voltage at a distance closer to the tip
increased. The trend of the difference between the
maximum and minimum values did not change
significantly with or without motion limitation, or with
different measurements.
3.3. Change in Peak Value with no Motion
Limitation by Self-made Device
A decrease in the output voltage was observed with
the passage of measurement time when the toe sensor
position was changed. The time variation of the peak
value when the toe sensor position was changed is
shown in Fig. 5. The horizontal axis represents the
point evaluated as the peak, and not the time, owing to
the nature of the evaluation software. The peak points
correspond to the measurement time, and at 50 mm, the
values are almost stable, except for the U-turn point.
Fig. 5. Change in peak value with no motion limitation
by self-made device.
3.4. Output from Smart Insole
The output from the smart sensor was obtained for
confirmation because the output from the self-made
insole sensor decreased with time. The results are
shown in Fig. 6. The unit of output from the smart
insole was N. Because the sensor position of the smart
insole did not change, slight variations in amplitude
were observed while the period remained relatively
constant. This differed from the observed output
pattern of the self-made insole sensor.
Fig. 6. Output of the smart insole sensor when the position
of the self-made sensor was changed.
3.5. Classification using a Decision Tree
As reported in SEIA’ 2022 and our previous paper,
the set of peak values and the steepest slope values in
the decreasing portion after the peak reflect the
participant's characteristics. Based on this idea, we
attempted classification using a decision tree with the
output from the sensor at the toe portion and the
maximum slope in the decreasing portion after the
peak. The results are shown in Fig. 7. The restricted
and unrestricted data are circled in red and black,
respectively. It is evident that the two regions can be
classified.
Fig. 7. Classification using a decision tree.
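The feature pair and classifier described above could be reproduced along the following lines, sketched here with scikit-learn and synthetic feature values rather than the authors' data.

# Decision-tree classification of walking cycles using two features per cycle:
# the peak sensor output and the steepest (most negative) slope in the
# decreasing portion after the peak.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def cycle_features(cycle):
    """cycle: 1-D array of toe-sensor voltages for one gait cycle."""
    peak_idx = int(np.argmax(cycle))
    peak = cycle[peak_idx]
    after = np.diff(cycle[peak_idx:])                 # sample-to-sample changes after the peak
    steepest_slope = after.min() if len(after) else 0.0
    return [peak, steepest_slope]

# Hypothetical feature sets extracted from unrestricted (0) / restricted (1) walking
X = np.array([[2.1, -0.30], [2.0, -0.28], [1.6, -0.12], [1.5, -0.10],
              [2.2, -0.33], [1.7, -0.14]])
y = np.array([0, 0, 1, 1, 0, 1])

clf = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
cycle = np.array([0.2, 1.4, 2.1, 1.9, 1.6, 1.1, 0.4])
print(clf.predict([cycle_features(cycle)]))           # -> [0], i.e. classified as unrestricted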
4. Discussion
4.1. Sensor Position and Motion Limitation
Up to a sensor position of approximately 35 mm,
the output voltage was larger when no restriction was applied than when a restriction was applied, probably because, when restricted, the kick of the foot during walking was smaller and the right foot landed flat and evenly. The subject's toes measured 40 mm from the base of the toes to the tips of the feet.
Therefore, the toe sensor was closer to the medial and
lateral sensors at a position of 45 mm or more, such
that the output was almost the same, regardless of the
limitation. This supports the case where the output data
from the sensors on the toe side vary considerably
depending on the sensor position around the base of the
toes. In contrast, minimal changes are observed when the sensor is closer to the center of the foot. In addition,
the range between the maximum and minimum values
increased when there was a restriction because the
landing of the foot at each time during walking varied.
4.2. Comparison of Two Types of Insole Sensors
In this experiment, the choice of sandals allowed easy shifting of the insole, which is believed to have led to frequent changes in the sensor position during the study. This highlights the importance of
therapists ensuring the sensor is accurately fixed to the
desired position to increase the reproducibility of the
experiment.
4.3. Consideration of Classification Methods
Although the number of experiments conducted
was small, we successfully demonstrated that decision-
tree classification based on the set of peak values and
the steepest slope value in the decreasing portion after
the peak could be used to classify the data into two
groups. However, the two regions are not clearly separated near the boundary. Thus, it is necessary to increase the
amount of data and consider a classification method
such as a support vector machine.
5. Conclusions
It is desirable to consider the sensor position
corresponding to the participant’s characteristics so
that the therapist can obtain the desired data using an
insole sensor. However, owing to cost constraints, it is
important to select features that consider variations in
sensor positions. We found that one candidate is the
slope that indicates a decrease after the peak value,
which we have proposed.
Acknowledgments
This work was supported by JSPS KAKENHI, Grant
Numbers JP20K11924 and JP23K11207.
References
[1]. J. D. Sciubba, Population Aging as a Global Issue,
Oxford Research Encyclopedia of International
Studies, Oxford, UK, 2020.
[2]. T. Jayawardhana, S. Anuththara, T. Nimnadi,
R. Karadaanaarachchi, R. Jayathilaka, and
K. Galappaththi, Asian Ageing: The relationship
between the elderly population and economic growth
in the Asian context, PLOS ONE, 18, 4, 2023,
e0284895.
[3]. M. M. Alam, and E. B. Hamida, Surveying Wearable
Human Assistive Technology for Life and Safety
Critical Applications: Standards, Challenges and
Opportunities, Sensors, 2014, 14, pp. 9153-9209.
[4]. X. Qian, H. Cheng, D. Chen, Q. Liu, H. Chen, H. Jiang,
and M-C. Huang, The Smart Insole: A Pilot Study of
Fall Detection, BODYNTES, 2019, LNICST 297,
pp. 37-49.
[5]. S. Subramaniam, S. Majumder, A. I. Faisal, and
M. J. Deen, Insole-Based systems for Health
Monitoring: current Solutions and Research
Challenges, Sensors, 2022, 22, 438.
[6]. S. Usmani, A. Saboor, M. Haris, M. A. Khan and
H. Park, Latest Research Trends in Fall Detection and
Prevention Using Machine Learning: A systematic
Review, Sensors, 2021, 21, 2021, 5134.
[7]. Y. Uchida, T. Funayama, Hori, M. Yuge, N. Shinozuka
and Y. Kogure, Possibility of Detecting Changes in
Health Conditions using an Improved 2D Array Sensor
System, Sensors & Transducers, Vol. 259, Issue 5,
2022, pp. 29-36.
[8]. T. Funayama, Y. Uchida, and Y. Kogure, Assessment
of Walking Condition Using Pressure Sensors in the
Floor Mat, in Proceedings of the Eleventh
International Conference on Global Health
Challenges GLOBAL HEALTH 2022, Valencia, Spain,
November 15, 2022, 70023.
[9]. T. Funayama, Y. Uchida, and Y. Kogure, Detection of
motion restriction with smart insoles, Sensors &
Transducers, Vol. 259, Issue 5, 2022, pp. 61-68.
[10]. T. Funayama, Y. Uchida and Y. Kogure, Step
Measurement Using a Household Floor Mat and Shoe
Sensors, International J. of Advances in Life Science,
Vol. 15, No. 1&2, 2023, pp. 33-34.
(017)
Electrochemical Determination of Cd2+, Pb2+, Cu2+ and Zn2+ in Liquids
using Modified Titanium Dioxide
Vorobets V. S., Fomanyuk S. S., Medyk I. A., Kolbasov G. Ya., Karpenko S. V.
Vernadsky Institute of General and Inorganic Chemistry of the NAS of Ukraine, Department of electrochemistry
and photoelectrochemistry of non-metallic systems, 32/34 Acad. Palladin Str., 03142 Kyiv, Ukraine
Tel.: + 380442252280
E-mail: vorobetsvs@i.ua
Summary: Nanocrystalline Ce- or Y-modified titanium dioxide thin-film electrodes were used to determine the concentration of Pb2+, Cd2+, Cu2+ and Zn2+ ions in liquids by anodic stripping voltammetry (ASV) and stripping photoelectrochemical (SP) methods. In both cases well-defined signals for the studied heavy metal ions (HMIs) were obtained. The optimal conditions to determine the concentration of the studied HMIs by the ASV method have been chosen. It was determined that the magnitude of the analytical signal of Cd, Pb, Cu and Zn depends linearly on their concentration; thus Ce-TiO2 and Y-TiO2 electrodes can be used as indicator electrodes for the determination of the studied HMIs by the ASV method. The prospects of using the SP method to determine the cadmium, lead and copper content in liquid media are shown. The sensitivity of the SP method was 0.01 mg L-1 to Pb2+, 0.1 mg L-1 to Cu2+ and 0.01 mg L-1 to Cd2+.
Keywords: Stripping voltammetry, Stripping photoelectrochemical method, Modified TiO2 electrodes, Heavy metals.
1. Introduction
The global nature of modern environmental
problems requires constant monitoring of toxic
substances in the environment. Heavy metal ions (HMIs), such as lead, mercury, cadmium, zinc, copper and manganese, are among the most challenging pollutants entering natural and anthropogenic environments.
They pose a major risk to human health due to their
toxicity, resistance to biodegradation and the ability to
accumulate in tissues, causing a number of diseases.
On the other hand, such metals as iron, copper, zinc,
molybdenum participate in biological processes and in
certain quantities are recognized as necessary trace
elements for the functioning of plants, animals and
humans. Therefore, the development of sensitive
methods for determining the residual amounts of
heavy metals in waters is an urgent task. To effectively
address this issue, robust, sensitive, and selective
electrochemical sensing techniques are required to
enable the rapid detection of hazardous contaminants.
One of these methods is stripping voltammetry
(SV) [1], based on the ability of elements accumulated
on the working electrode from the analyzed solution to
dissolve electrochemically at a certain potential
characteristic of each element. The registered
maximum current of an element depends on the
concentration of this element. The process of
accumulation on the working electrode takes place at
a certain electrolysis potential for a given time. The
process of electrochemical dissolution of the reduction
products from the electrode surface and the
registration of analytical signals (in the form of peaks)
on the voltammogram is carried out at a certain
voltage. Mercury electrodes [2], mercury-film
electrodes [3], electrodes based on carbon materials [4],
and others [5, 6] are usually used as working electrodes
in SV. The disadvantage of most of these techniques [3-6]
is the complexity of manufacturing the electrodes.
This, together with the toxicity of the mercury used in the
manufacture of the electrodes or added to the background
electrolyte as a solution of Hg(II) ions during
measurements [5], motivates the search for new non-toxic,
chemically inert materials with stable properties.
This work presents the results of using anodic
stripping voltammetry (ASV) and stripping
photoelectrochemical (SP) method for determining the
concentration of HMIs (Pb2+, Cd2+, Cu2+, Zn2+) in
liquids using new electrode materials based on
nanocrystalline Ce- or Y-doped titanium dioxide.
2. Experimental
2.1. Preparation of thin-film Ce/Y-TiO2 electrodes
Nanoscale titanium dioxide was synthesized by the
sol-gel method from titanium (IV) tetraisopropoxide
using Triton X-100 as a pore agent, as reported in our
previous work [7].
2.2. Electrochemical Measurements
The electrochemical measurements were performed
under potentiodynamic conditions using a PC-based
electrochemical setup with the following
characteristics: measured currents 2·10⁻⁹–1·10⁻¹ A,
potential scan rate 0.01–50 mV/s, and working electrode
potential range –4 to +4 V. Platinum was used as the
counter electrode, Ce-TiO2 and Y-TiO2 films on a Ti
substrate as working electrodes, and Ag/AgCl as the reference
electrode. The measurements were made in 0.1 M
hydrochloric acid, 0.35 M formic acid and acetate
buffer (pH = 5.0).
2.3. Photoelectrochemical (PEC) Measurements
The PEC measurements were performed using a
PGSTAT Elins P-8S. Platinum served as the counter
electrode, while Ce-TiO2 and Y-TiO2 films on a titanium
substrate were employed as the working electrode, and
Ag/AgCl was used as the reference electrode. The PEC
study was carried out in 0.1 M KI solutions. The spectral
dependences of the photocurrent quantum yield of the
modified TiO2 films were measured in a quartz cell on a
setup that included an MDR-2 monochromator; the light
source was a DKSSh-500 xenon lamp with stabilized
discharge current.
3. Results and Discussion
3.1. Determination of Cadmium, Lead, Copper
and Zinc by ASV Method
Сadmium, lead, copper and zinc were determined
by the ASV method that involves three main steps:
1. Deposition (electroconcentrating): In this step,
the Pb2+, Cd2+, Cu2+, Zn2+ ions are electrochemically
deposited onto the surface of the working electrode.
The deposition is performed by applying a negative
potential to the working electrode in the presence of
the HMIs (Pb2+, Cd2+, Cu2+, Zn2+), causing them to be
reduced and deposited onto the electrode surface.
2. Stripping (electrodissolution): After deposition,
the potential of the working electrode is scanned in the
positive direction, causing the deposited heavy metal
ions to be oxidized and stripped from the electrode
surface. The resulting current is proportional to the
concentration of HMIs that were deposited on the
electrode surface during the electroconcentrating step.
3. Analysis: By measuring the current produced
during the stripping step, the concentration of HMIs in
the original sample can be quantified using a
calibration curve generated from standard solutions of
known Pb2+, Cd2+, Cu2+, Zn2+ concentrations.
Solutions of 0.1 M hydrochloric acid, 0.35 M
formic acid and acetate buffer (pH=5.0) were used as
background electrolytes. The solutions were stirred
during the preliminary electroconcentrating at
potential of –(1000–1600) mV (vs silver-chloride
reference electrode) for 60–240 s and then the
potential was scanned from -1600 mV to +200 mV. It
was found that the magnitude of the analytical signal
of the HMIs in the studied electrolytes depends linearly on the
concentration of each heavy metal ion;
thus, Ce-TiO2 and Y-TiO2 electrodes can be used as
indicator electrodes for the determination of these HMIs
by the ASV method.
Fig. 1 shows the ASV response to Pb(II) over the
metal ion concentration range from 0.1 to 5.1 mg/L
with 0.1 M HCl used as the background solution.
The Pb(II) peak is clearly seen at about -0.45 V; the
linearization equation in 0.1 M HCl as background
solution is i/mA = -0.02 + 0.52 C/(mg/L), with a
correlation coefficient of 0.992 (Fig. 2, curve 2), and the
sensitivity for the electrochemical analysis of Pb(II) is
0.52 mA per mg/L.
Fig. 1. Polarization curves on Y-TiO2 electrodes in acetate
buffer (pH = 5.0) containing: 1 – background solution;
2 – 0.1; 3 – 0.6; 4 – 1.6; 5 – 3.1; 6 – 5.1 mg/L of Pb(II) ions.
Ee = -1.4 V, te = 120 s.
Fig. 2. Dependence of the analytical signal magnitude
of Pb on its concentration in: 1 – acetate buffer;
2 – 0.1 M HCl; 3 – 0.35 M formic acid.
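To illustrate how such a calibration line can be applied, the short Python sketch below fits peak current against concentration for a set of standards and inverts the fit to estimate an unknown sample; it is a minimal illustration rather than the authors' processing code, and the standard currents are idealized values generated from the Pb(II) calibration in 0.1 M HCl quoted above.

```python
import numpy as np

# Pb(II) standards (mg/L) and their stripping peak currents (mA).
# The currents are idealized here from the reported calibration
# i/mA = -0.02 + 0.52*C in 0.1 M HCl; real standards would be measured.
c_std = np.array([0.1, 0.6, 1.6, 3.1, 5.1])          # mg/L
i_std = -0.02 + 0.52 * c_std                          # mA

# Least-squares calibration line i = b + s*C.
s, b = np.polyfit(c_std, i_std, 1)                    # slope (sensitivity), intercept

# Invert the line for an unknown sample from its stripping peak current
# (the value below is a hypothetical measurement).
i_sample = 1.30                                       # mA
c_sample = (i_sample - b) / s                         # mg/L

print(f"sensitivity = {s:.2f} mA per mg/L, estimated Pb(II) = {c_sample:.2f} mg/L")
```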
In other solutions (acetate buffer (pH = 5.0) and
0.35 M formic acid) used as the background electrolyte,
the magnitude of the analytical signal of Pb also
depends linearly on the Pb(II) ion concentration;
however, the sensitivity to Pb in these solutions is
much lower than in 0.1 M HCl solution (Fig. 2, curves
1 and 3).
The ASV responses to Cd(II) over the
concentration range from 0.001 to 2.0 mg/L are
shown in Fig. 3. The Cd(II) peak is visible at about
-0.7±0.05 V. The linearization equation in acetate
buffer (pH = 5.0) as background solution is
i/mA = -0.04 + 0.26 C/(mg/L), with a correlation
coefficient of 0.996 (Fig. 4, curve 1), and the sensitivity for
the electrochemical analysis of Cd(II) is 0.26 mA per mg/L.
In a solution of 0.35 M formic acid used as the background
electrolyte, the magnitude of the analytical signal of
Cd also depends linearly on the Cd(II) ion
concentration; however, the sensitivity to Cd in this
solution is lower than in acetate buffer (Fig. 4,
curve 2).
It was found that the ASV responses to Cu(II) and
Zn(II) over the concentration range of 0.1 to
5.1 mg/L in acetate buffer at pH = 5.0 are
observed at -0.3±0.05 V for Cu(II) and -0.9±0.05 V for
Zn(II), and that the magnitudes of the analytical signals of
Cu and Zn also depend linearly on the Cu(II) and
Zn(II) ion concentrations.
Fig. 3. Voltammetry of cadmium on 1 % Ce-TiO2 electrodes
in acetate buffer (pH = 5.0) containing Cd(II) ions:
1 – 0; 2 – 0.001; 3 – 0.01; 4 – 0.5; 5 – 1.0; 6 – 1.5; 7 – 2 mg/L.
Ee = -1.5 V, te = 180 s.
Fig. 4. Effect of the nature of the electrolyte on the
concentration dependence of Cd2+: 1 – acetate buffer
with pH = 5.0; 2 – 0.35 M formic acid.
The use of titanium dioxide electrodes modified
with rare-earth elements as working electrodes also
allows simultaneous determination of the
concentrations of heavy metals in their mixtures in
liquids without removing oxygen (unlike the mercury,
amalgam, carbon and other electrodes that are
commonly used for analysis by this method). As
shown in Fig. 5, concentrations of heavy metal ions
(Pb(II), Cd(II), Cu(II), Zn(II)) and O2 were
simultaneously detected under similar conditions. The
results show that the detection (individual or
simultaneous) of the four considered heavy metal ions
and O2 results in well-separated stripping peaks,
respectively observed at -0.45 V, -0.7 V, -0.3 V,
-0.9 V and -0.55 V. It is thus easy to distinguish the
four heavy metal ions and oxygen from the potential
of the stripping peak with our modified electrodes.
Optimal conditions for the determination of lead,
cadmium, copper and zinc in liquids by the stripping
voltammetry are presented in Table 1.
It was found that the sensitivity of the ASV method
to Pb2+ was 0.05 mg/L, to Cu2+ 0.1 mg/L, to Zn2+
0.1 mg/L, and to Cd2+ 0.1 mg/L.
Fig. 5. Voltammograms at 2 % Y-TiO2 electrodes in acetate
buffer (pH = 5.0) with a content of Me ions (Me = Cu, Pb,
Cd, Zn) of 2 mg/L; Ee = -1.4 V, te = 120 s.
Table 1. Optimal conditions for the determination of heavy metals by ASV.

Me   Film              Ee, V   te, s   Eas, V
Cd   Ce-TiO2, Y-TiO2   -1.5    180     -0.7±0.05
Pb   Ce-TiO2, Y-TiO2   -1.4    120     -0.45±0.05
Cu   Ce-TiO2, Y-TiO2   -1.0    120     -0.3±0.05
Zn   Ce-TiO2, Y-TiO2   -1.4    240     -0.9±0.05

Note. Ee – electroconcentrating potential; te – time of electroconcentrating; Eas – analytical signal potential.
3.2. Determination of Cadmium, Lead and
Copper Concentration by SP Method
The stripping photoelectrochemical method
consists in preliminary electrochemical concentration
of heavy metals on electrodes in the form of metals or
metal oxides, followed by measurement of their
photoelectrochemical response (photocurrent or
voltage drop) and comparing it before and after
concentration in different parts of the spectral range.
Analysis of the spectral characteristics of the
photocurrent at different concentrations of Pb2+, Cd2+,
Cu2+ (Figs. 6 and 7) showed that the concentrating of
metal products of heavy metal ions on the TiO2
electrodes leads to a decrease in the photoresponse in
the spectral range of 250-350 nm, which corresponds
to light absorption by the TiO2 film, even with small
amounts of HMIs.
At the same time, the photoresponse signal correlates
with the concentration of heavy metal ions. This is
reflected by straight-line graphs of the dependence of
quantum yield on the logarithm of concentration, as
shown in Fig. 8 and Fig. 9.
The sensitivity of the SP method to Pb2+ was 0.01 mg/L,
to Cu2+ 0.1 mg/L, and to Cd2+ 0.01 mg/L.
Fig. 6. Spectra of the photocurrent quantum yield (η) of titanium
oxide films vs. wavelength (λ, nm) at different concentrations of cadmium in
acetate buffer: TiO2 (background); 0.01; 0.05; 0.25; 1.0 mg/L Cd.
Fig. 7. Photocurrent quantum yield spectra of titanium
dioxide films at different copper concentrations in acetate
buffer: 1 – 0; 2 – 2.0; 3 – 3.0; 4 – 4.0; 5 – 5.0; 6 – 6.0 mg/L of
Cu(II) ions.
Fig. 8. Plot of the dependence of the photocurrent quantum
yield on the concentration of copper ions in acetate buffer.
Fig. 9. Dependence of the photocurrent quantum yield on the
logarithm of the cadmium ion concentration, C, in acetate
buffer.
4. Conclusions
1. New electrode materials containing films of
nanodispersed titanium dioxide modified with rare-
earth elements (Y, Ce) have been synthesized. The
materials are characterized by high sensitivity and
selectivity in determining the concentration of heavy
metals (Pb, Cd, Cu, Zn) and oxygen in liquids.
2. Combined methods for the express analysis of
liquids containing low concentrations of
heavy metals (Pb, Cd, Cu, Zn), based on a combination
of stripping voltammetry and the stripping
photoelectrochemical method, have been developed.
3. Sensor systems for joint determination of
oxygen and heavy metals (lead, cadmium, copper,
zinc) in liquids have been developed.
References
[1]. J. S. Paulo, R. Stradiotto Nelson, Simultaneous
determination of trace amounts of zinc, lead and
copper in rum by anodic stripping voltammetry,
Talanta, Vol. 44, 1997, pp. 185-188.
C. Fernandez-Bobes, M. T. Fernandez-Abedul, et al.,
Anodic stripping voltammetry of heavy metals using a
hanging mercury drop electrode in a flow system,
Electroanalysis, Vol. 10, Issue 10, 1998, pp. 701-706.
[3]. Metrohm, Determination of cadmium and lead by
anodic stripping voltammetry at mercury film
electrode, Application Bulletin 241/2 e, p. 14.
https://www.metrohm.com/ru-ru/applications/AB-
241
M. Rievaj, P. Tomčik, M. Čerňanska, et al., Trace
determination of lead in environmental and biological
samples by anodic stripping voltammetry on carbon
paste electrode, Chem. Anal., Vol. 53, Issue 5, 2008,
pp. 717-723.
[5]. K. C. Armstrong, C. E. Tatum, R. N. Dansby-Sparks
et al., Individual and simultaneous determination of
lead, cadmium, and zinc by anodic stripping
voltammetry at a bismuth bulk electrode, Talanta, Vol.
82, Issue 2, 2010, pp. 675-680.
[6]. L. Jing, G. Shaojun, Z. Yueming, W. Erkang, High-
sensitivity determination of lead and cadmium based
on the Nafion-graphene composite film, Anal. Chim.
Acta, Vol. 649, Issue 2, 2009, pp. 196-201.
V. S. Vorobets, G. Ya. Kolbasov, I. A. Medyk, et al.,
Synthesis, Photo- and Electrocatalytic Properties of
Nanostructured Y-TiO2 Films, Surf. Eng. Appl.
Electrochem. Vol. 57, Issue 5, 2021, pp. 535–541.
(018)
Near-Field Microwave Probe Technique for Local Broadband
Characterization of Nanocomposite Materials
H. Bakli 1 and M. Makhlouf 2
1 Laboratory of Electronics and Electrical Systems Engineering, BP 48 Cherchell, 42006, Tipaza, Algeria
2 Laboratory of Combustion, Detonation and Ballistic, BP 48 Cherchell, 42006, Tipaza, Algeria
Tel.: + 213670261014
E-mail: baklih@yahoo.com
Summary: Local characterization and determination of small dielectric and conductivity contrasts of PVC/Graphene/Fe2O3
nanocomposites using a near-field microwave probe technique is investigated in this paper. The proposed method presents many
advantages such as simplicity, low cost, broadband operation and high sensitivity. An electrical model is proposed to
represent the probe-sample interaction, and an inverse procedure based on a one-port calibration model is used to relate the
measured reflection coefficient to the local properties of the material under test. The permittivity and conductivity of
PVC/Graphene/Fe2O3 nanocomposites are experimentally determined at different test frequencies over a wide frequency band.
Keywords: Near-field microwave probe, PVC/Graphene/Fe2O3 composites, Graphene, Calibration model, Conductivity,
Dielectric permittivity.
1. Introduction
Recently, conductive polymer composites find
large-scale applications as anti-static materials in
printed electronics, supercapacitors, organic solar
cells, biosensors, flexible transparent displays, etc
[1, 2]. These materials are produced by combining
reinforcing charges with a matrix system such as
polymer. This combination of charges and polymer
provides characteristics superior to either material
alone, and such composites are increasingly being used as
replacements for relatively heavy metallic materials. In
addition, conductive composites are highly interesting
for electromagnetic shielding applications. In fact, the
rapid development of electronic gadgets and of commercial,
biological, military and defence systems creates
electromagnetic (EM) noise known as EM pollution.
EM pollution harms human health and
degrades the functioning and durability of electronic
equipment [3, 4]. It has therefore become
essential to develop and characterize composite
materials.
Near-field microwave probe technique (NFMP)
has become a powerful tool for super resolution
imaging and local characterization of various materials
such as dielectrics, semiconductors and metals [5-12].
This high resolution can be achieved thanks to the use
of near-field probes, which have very small dimensions [8-
10]. Nevertheless, these probes exhibit high input
impedance resulting in a poor accuracy of
measurement by using standard 50 Ω measurement
equipment such as conventional vector network
analyzer (VNA). To overcome this problem, these
probes have been associated to resonators to match the
probe high impedance to the impedance of the
microwave measuring instrument. However, the use of
resonators limits severely the frequency band of
operation. Moreover, resonators exhibit a high
unloaded quality factor that drops sharply in the presence of the
material under test. Consequently, the local broad-
band electromagnetic characterization of new
materials such as composites materials with high
sensitivity and accuracy remains a challenge.
In this work, a simple and inexpensive technique is
proposed for the local determination of small
dielectric and conductivity contrasts of conductive
nanocomposites. The working principle is based on the
association of a near field probe and a vector network
analyzer. The idea is to exploit the probe's own
resonant frequencies. In fact, at these frequencies, the
levels of the reflection coefficient magnitude are
around 0, which can ensure high measurement
sensitivity and therefore characterization over a wide
frequency band.
Thus, in section 2, to better understand the
electromagnetic interaction between the microwave
probe and the material under test, the electrical field
distribution is simulated by using HFSS™ from Ansys.
A lumped element model is also developed to
represent the sample-probe interaction. The sample
preparation protocol is described in section 3. In
section 4, the experimental set-up and results are
provided to validate the proposed approach. As a
demonstration different PVC/Graphene/Fe2O3
nanocomposites have been characterized in a large
band of frequency.
2. Probe-sample Interaction
2.1. Simulation of the Probe-sample Interaction
To study the probe-sample interaction, the
distribution of the electric field around the probe tip is
simulated using HFSS™ at the test frequency of
2.45 GHz. As shown in Fig. 1, which represents the
distribution of the electric field when no material is
present (probe in air), the electric field is well
confined around the probe apex.
Fig. 1. HFSS™ simulation of the electric field at the cross-
section along the probe when the tip is in air (f = 2.45 GHz).
From these results, we can conclude that high
resolution and thus local characterization can be
achieved by using this probe.
2.2. RLC Model
Because of the small size of the probe tip in
comparison with the wavelength (near-field region),
the probe–sample interaction can be represented by a
lumped element network in the quasi-static
approximation and thus Zt can be calculated as the
impedance of the whole RLC network (Fig. 2)
Zt = 1/(jωCc) + Zs , (1)
where Cc is the coupling capacitance, Zs is the sample
near-field impedance represented with the bulk sample
capacitance Cs, inductance Ls and resistance Rs. The
capacitance between the sample and the outer
conductor of the coaxial probe Cout can be ignored,
since Cout >> Cc. The capacitance between the tip and
the outer conductor of the open-ended coaxial probe is
represented by the parasitic stray capacitance Cstr.
When the tip-sample separation is much smaller than
the tip size (h << D), the influence of Cstr can be
neglected [11]. Thus, the lumped element model of
such a probe–sample interaction can be simplified as a
coupling capacitance Cc in series with the sample
impedance ZS.
Fig. 2. Lumped element model of tip-sample interaction.
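As a numerical illustration only (the element values below are arbitrary placeholders, not parameters extracted for this probe, and the sample branch is assumed to be a series R-L-C), the simplified tip impedance of Eq. (1) can be evaluated over frequency and converted into the reflection coefficient seen by a 50 Ω measurement port:

```python
import numpy as np

# Hypothetical lumped-element values (placeholders, not fitted to this probe).
Cc = 20e-15      # coupling capacitance, F
Rs = 200.0       # sample resistance, Ohm
Ls = 1e-9        # sample inductance, H
Cs = 50e-15      # sample capacitance, F
Z0 = 50.0        # reference impedance of the VNA port, Ohm

f = np.linspace(1e9, 20e9, 2001)        # frequency sweep, Hz
w = 2 * np.pi * f

# Sample near-field impedance, modelled here as a series R-L-C branch (assumption).
Zs = Rs + 1j * w * Ls + 1.0 / (1j * w * Cs)

# Simplified tip impedance: coupling capacitance Cc in series with Zs, as in Eq. (1).
Zt = 1.0 / (1j * w * Cc) + Zs

# Reflection coefficient and its magnitude in dB.
gamma = (Zt - Z0) / (Zt + Z0)
s11_db = 20 * np.log10(np.abs(gamma))

print(f"deepest |S11| dip: {s11_db.min():.2f} dB at {f[np.argmin(s11_db)] / 1e9:.2f} GHz")
```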
3. Sample Preparation
3.1. Hydrothermal Grafting of Iron Oxide Fe2O3
onto Graphene
Hydrothermal synthesis was chosen because its
nucleation behaviour and homogeneous grain
development make it more favourable than
other techniques. In general,
hydrothermal synthesis is a method for crystallizing
materials from an aqueous solution by controlling
thermodynamic variables such as temperature,
pressure, and concentration. The reactions are usually
carried out in a metal autoclave with a chemically inert
and thermally stable Teflon container.
An aqueous solution of ferric chloride (11.26 g
FeCl3·6H2O in 100 ml H2O) was first prepared
and agitated for 3 hours and 30 minutes until a
homogeneous solution was achieved. The graphene solution
was then added gradually to the first solution. The
used graphene was fabricated by electrochemical
exfoliation process from graphite of electrical storage
devices and treated by microwave irradiation for 15
seconds [6].
After that, a concentrated aqueous solution of
NaOH is slowly added to the previous solution (drop
by drop) while stirring until the pH is around 10.96.
The resulting solution is vigorously agitated for 30
minutes before being placed in an autoclave for three
hours of hydrothermal treatment at 180°C.
3.2. Elaboration of PVC/Graphene/Fe2O3
Nanocomposites
We start by mixing the Graphene/Fe2O3 solution
with DBS (dibutyl sebacate), then sonicating it for
30 minutes to disperse the graphene and prevent
agglomeration. Finally, we add the PVC. The mixture
is heated for 1 hour at 155 °C.
A series of nanocomposites was prepared to
explore the influence of the fillers on the characteristics of
PVC; the compositions developed are summarized
in Table 1.
4. Experimental Results
4.1. Open-ended Coaxial Probe Measurement
PVC/Graphene/Fe2O3 nanocomposite samples
have been first characterized using open-ended coaxial
probe. This technique consists in placing an open-ended
coaxial transmission line in contact with a sample so
that the electrical fields around the end of the probe
will interact with the sample under test. A VNA is then
used to measure changes in the reflection coefficient
affected by the electromagnetic properties of the
sample, referenced to the connector plane.
In Fig. 3, we represent the different magnitude
spectra of the reflection coefficient for the different
PVC/Graphene/Fe2O3 composite materials measured
by using an open-ended probe.
Table 1. Composition of nanocomposite materials.

Composition                 PVC %   DBS %   G %   Fe2O3 %
3 % Fe2O3                   45 %    52 %    0 %   3 %
3 % Graphene                45 %    52 %    3 %   0 %
5 % Graphene                45 %    50 %    5 %   0 %
5 % Graphene + 3 % Fe2O3    45 %    47 %    5 %   3 %
Fig. 3. Measured magnitude spectra of the reflection
coefficient for air and PVC/Graphene/Fe2O3 nanocomposite
materials using the open-ended coaxial probe.
From this graph, it can be noted that it is very
difficult to distinguish between different composite
materials, and thus it can be concluded that measuring
small contrasts in permittivity and conductivity of
nanocomposite materials using this technique is not
possible.
4.2. NFMP Measurement
In Fig. 4, we present the experimental set-up, based
on the association of a near-field probe, which consists of
a commercial coaxial connector (RoHS Multicamp
SMA square flange jack receptacle, 50 Ω) whose
central conductor is lengthened to form a tip with a
diameter of 1.3 mm, with a vector network
analyzer (Keysight PNA 5222A).
Fig. 4. Experimental set-up.
In Fig. 5, we represent the different magnitude
spectra of the reflection coefficient for the different
PVC/Graphene/Fe2O3 composite materials.
Fig. 5. Measured magnitude spectra of the reflection
coefficient for air and PVC/Graphene/Fe2O3 nanocomposite
materials.
This graph shows that the following three resonance
frequencies can be investigated: around 4, 11 and 18.5 GHz,
and that the composite materials are well
differentiated by means of the reflection coefficient
magnitude.
In Fig. 6, we represent the different magnitude
spectra of the reflection coefficient for the different
PVC/Graphene/Fe2O3 composite materials around the
third resonance frequency of 18.5 GHz.
Fig. 6. Measured magnitude spectra of the reflection
coefficient for air and PVC/Graphene/Fe2O3 nanocomposite
materials around 18.5 GHz.
From this graph, one can note that when
concentrations of graphene and/or iron oxide increase,
the resonant frequency shifts and the magnitude of the
reflection coefficient increases. In fact, when the
concentration of nanoparticles increases, the particles’
interaction within the matrix increases. The average
polarization associated with a cluster of particles is
more robust than that of an individual particle, owing to
the larger inclusion dimensions in the composite and thus
the greater interfacial area.
This results in a higher average polarization and,
hence, a larger contribution to the dielectric properties
and conductivity.
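The kind of post-processing implied here, locating each resonance in a measured |S11| spectrum and tracking its frequency shift and depth as the filler content changes, could be sketched as follows; the file names and the two-column (frequency in GHz, |S11| in dB) format are hypothetical placeholders rather than the data of Figs. 5 and 6.

```python
import numpy as np

def resonance_near(freq_ghz, s11_db, f_lo, f_hi):
    """Return (resonant frequency, |S11| at resonance) within a frequency window,
    taking the resonance as the minimum of the reflection-coefficient magnitude."""
    mask = (freq_ghz >= f_lo) & (freq_ghz <= f_hi)
    idx = np.argmin(s11_db[mask])
    return freq_ghz[mask][idx], s11_db[mask][idx]

# Hypothetical measurement files, one per sample, each holding two columns:
# frequency (GHz) and |S11| (dB).
samples = ["air.csv", "3pc_iron_oxide.csv", "3pc_graphene.csv",
           "5pc_graphene.csv", "5pc_graphene_3pc_iron_oxide.csv"]

for name in samples:
    freq, s11 = np.loadtxt(name, delimiter=",", unpack=True)
    f_res, depth = resonance_near(freq, s11, 15.0, 20.0)   # window around ~18.5 GHz
    print(f"{name}: f_res = {f_res:.2f} GHz, |S11| = {depth:.1f} dB")
```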
5. Conclusions
The PVC/Graphene/Fe2O3 nanocomposites have
been characterized using a near-field microwave probe
technique. An electrical model based on lumped
elements was proposed to represent the probe-sample
interaction. A calibration model will be developed and
presented in the final version of this article to retrieve
the materials' electromagnetic properties from the
measured reflection coefficient. The conductivity and
permittivity of these materials will be determined at
different test frequencies over a wide frequency band.
References
[1]. H. Pang, L. Xu, D. X. Yan, and Z. M. Li, Conductive
polymer composites with segregated structures,
Progress in Polymer Science, Vol. 39, Issue 11, 2014,
pp. 1908–1933.
[2]. G. Mook, R. Lange, and O. Koeser, Non-destructive
characterization of carbon-fiber-reinforced plastics by
means of eddy-currents, Composites Science and
Technology, Vol. 61, Issue 6, 2001, pp. 865–873.
[3]. V. Shukla, Review of electromagnetic interference
shielding materials fabricated by iron ingredients,
Nanoscale Advances, Vol. 1, Issue 5, 2019,
pp. 1640–1671.
[4]. A. Tamburrano, D. Desideri, A. Maschio, and
M. Sabrina Sarto, Coaxial waveguide methods for
shielding effectiveness measurement of planar
materials up to 18 GHz, IEEE Transactions on
Electromagnetic Compatibility, Vol. 56, Issue 6, 2014,
pp. 1386–1395.
[5]. A. Tselev, N. V. Lavrik, I. Vlassiouk, D. P. Briggs,
M. Rutgers, R. Proksch, and S. V. Kalinin, Near-field
microwave scanning probe imaging of conductivity
inhomogeneities in CVD graphene, Nanotechnology,
Vol. 23, Issue 38, 2012, p. 385706.
[6]. H. Bakli, M. Moualhi, and M. Makhlouf, High-
sensitivity electrical properties measurement of
graphene-based composites using interferometric near-
field microwave technique, Measurement Science and
Technology, Vol. 33, Issue 4, 2022, p. 045012.
[7]. D. E. Steinhauer, C. P. Vlahacos, F. C. Wellstood,
S. M. Anlage, C. Canedy, R. Ramesh, A. Stanishevsky,
and J. Melngailis, Quantitative imaging of dielectric
permittivity and tunability with a near-field scanning
microwave microscope, Review of Scientific
Instruments, Vol. 71, Issue 7, 2000, pp. 2751–2758.
[8]. M. Farina, D. Mencarelli, A. Di Donato, G. Venanzoni,
and A. Morini, Calibration protocol for broadband
near-field microwave microscopy, IEEE Transactions
on Microwave Theory and Techniques, Vol. 59, Issue
10, 2011, pp. 2769–2776.
[9]. A. Imtiaz and S. M. Anlage, Effect of tip geometry on
contrast and spatial resolution of the Near-Field
Microwave Microscope, Journal of Applied Physics,
Vol. 100, Issue 4, 2006, p. 044304.
[10]. R. A. Kleismit, M. K. Kazimierczuk, and
G. Kozlowski, Sensitivity and resolution of
Evanescent Microwave Microscope, IEEE
Transactions on Microwave Theory and Techniques,
Vol. 54, Issue 2, 2006, pp. 639–647.
[11]. S. M. Anlage, V. V. Talanov, and A. R. Schwartz,
Principles of near-field microwave microscopy
scanning probe microscopy: electrical and
electromechanical phenomena at the nanoscale,
Scanning Probe Microscopy, Vol. 1, S. Kalinin and
A. Gruverman (Eds.), Springer Sci, New York, 2007,
pp. 215–253.
[12]. C. Gao, F. Duewer, and X.-D. Xiang, Quantitative
microwave evanescent microscopy, Applied Physics
Letters, Vol. 75, Issue 19, 1999, pp. 3005–3007.
(020)
Comparison of the Depth Accuracy of a Plenoptic Camera and a Stereo
Camera System in Spatially Tracking Single Refuse-derived Fuel Particles
in a Drop Shaft
M. Zhang 1, R. Streier 2, M. Vogelbacher 1, S. Wirtz 2, V. Scherer 2 and J. Matthes 1
1 Karlsruhe Institute of Technology, Institute for Automation and Applied Informatics,
Hermann-von-Helmholtz-Platz 1, 76344 Eggenstein-Leopoldshafen, Germany
2 Ruhr-University Bochum, Department of Energy Plant Technology,
Universitätstr. 150, 44801, Bochum, Germany
Tel.: + 49 721 608-24919, fax: + 49 721 608-22602
E-mail: miao.zhang@kit.edu
Abstract: With the development of depth cameras over the last decades, several cameras are able to acquire 3D information about
the captured scene, such as the plenoptic camera and the stereo camera system. Because of differences in the principle and
construction of the various depth cameras, each camera type has particular advantages and disadvantages. Therefore, a
comprehensive and detailed comparison of different cameras is essential to select the right camera for the application. Our
research compared the depth accuracy and stability of a stereo camera system and a plenoptic camera by monitoring the settling
processes of various refuse-derived fuel particles in a drop shaft. The particles are first detected using detection approaches,
and the detections are subsequently associated using data association algorithms. The spatial particle
trajectories are obtained by the tracking-by-detection approach, based on which the performances of the cameras are evaluated.
Keywords: 3D Measurement, Plenoptic camera, Stereo camera system, Particle tracking-by-detection, Comparison of 3D
sensors.
1. Introduction
The knowledge of 3D measurements is of
significant importance in various applications, for
instance, object localization and tracking. To achieve
the 3D localization of objects, a 3D sensor is quite
essential. With the development of measurement
techniques, more and more sensors are able to acquire
3D information concerning captured scenes, such as
the time-of-flight (ToF) camera [1], the structured light
camera [2], the stereo camera system [3], and the
plenoptic camera [4]. Given the multiple potential
options for 3D cameras, selecting an appropriate
solution based on using the environment is of interest
to a range of research communities. For instance, in
[5], the consumer depth camera Microsoft Kinect with
a novel depth imaging technique is compared to the
state-of-the-art continuous wave amplitude modulation
ToF cameras by a set of experimental setups for the
purpose of evaluating the respective merits and
drawbacks of the cameras. Further, Chiu et al. [6]
compared 3D reconstruction results based on data
collected by two different types of depth cameras (ToF
and stereoscopic cameras) and commercial 3D
scanning systems to determine the selection of the
depth camera concerning the applications.
As indicated by the review of the related works, the
ToF camera, the structured light camera, and the stereo
camera system have been widely researched regarding
their ability to provide depth information.
Notwithstanding, research on plenoptic cameras, which are
based on the plenoptic function, is still scarce, so there
is insufficient information concerning their comparison
with other depth cameras. In addition, since the
plenoptic camera has been developed in recent
decades, several issues concerning the plenoptic
camera remain problematic, e.g., incomplete
corresponding data processing techniques and custom
imaging hardware [7].
This study contributes to comparing a plenoptic
camera and a stereo camera system in spatially
tracking single refuse-derived fuel (RDF) particles in a
drop shaft. These two camera systems were selected in
accordance with the use case addressed (experimental
setup, depth, resolution, etc.) and are compared
primarily in terms of depth accuracy. Moreover, the
structured light camera and the ToF camera are
basically not adequate for observing small particles
(such as RDF particles), and therefore, only the stereo
camera system and plenoptic camera are considered.
First, the two cameras measure the depth of objects
distributed equidistantly in the drop shaft.
Subsequently, the motions of four RDF fractions in the
drop shaft, viz., wood chips, PE-granule, paper shred,
and confetti, are photographed separately. The
captured images are then used to derive the 3D
trajectory of each RDF particle by image processing
techniques according to the tracking-by-detection
principle. By comparing the 3D trajectories from the
same object, the performances of cameras can be
evaluated.
This paper is structured as follows: in section 2, the
experimental setup is illustrated. Section 3 briefly
demonstrates the applied image processing approaches
to detect and track particles. In section 4, the
corresponding results are discussed. Section 5
concludes the paper.
2. Experimental Setup
Single RDF particles are transported to the upper
part of the drop shaft and then fall through a tube into
the drop shaft, as shown in Fig. 1. As a consequence of
the extra distance provided by the tube, the particles
are accelerated to their terminal velocities before the
recording starts. In order to achieve a high contrast of
the illuminated particle to the background, the inner
wall of the drop shaft is painted black. With 18 LED
modules, the lighting system, located at the top inlet of
the drop shaft, is able to offer a total luminous flux of
104,400 lm. Furthermore, a drawer with tilted panels is
installed at the bottom of the drop shaft, which ensures
one particle within a certain capture period. As
schematically depicted in Fig. 1, the total distance
between the top edge of the drawer and the location of
the cameras is 4.8 m. While the plenoptic camera was
fixed directly above the drop shaft, the two stereo
cameras were mounted on each side above the drop
shaft. Table 1 lists the technical parameters of the
utilized stereo camera system and the plenoptic
camera.
Fig. 1. Schematic of the drop shaft (left) and the utilized
two depth camera systems (right).
The stereo camera system was calibrated with a
checkerboard in accordance with the calibration
method recommended in [8]. Additionally, the
intrinsic and extrinsic parameters entailed for the
subsequent stereo rectification were also determined.
Meanwhile, the plenoptic camera was calibrated based
on the principles demonstrated in [9].
Table 1. Technical details of the 3D cameras utilized in the study.

                                  Stereo camera         Plenoptic camera
Manufacturer                      Baumer                Raytrix
Model                             VLXT-28 M.I           R12
Principle of depth measurement    Stereoscopic camera   Light-field technique
Image resolution                  1920 × 1464 px        2048 × 1536 px
Max. frame rate                   500 fps               330 fps
RDF contains a broad range of fractions, and four
representative fractions, namely wood chips, confetti,
paper shreds, and polyethylene (PE) granules, as
shown in Fig. 2, were selected for the particular
experiment. The particles' actual sizes can be roughly
estimated according to the 1 cm scale at the bottom
right of the figure. Additionally, Table 2 lists the 3D
dimensions of the fuel fractions. In the following, the
paper presents the comparison of the depth cameras
concerning the experiments with the depicted four
fractions.
Fig. 2. Various RDF fractions applied in the experiments.
(a) Wood chips. (b) Confetti. (c) Paper shreds. (d) PE
granules. [10]
Table 2. Physical properties of the experimented RDF fractions.

Fraction      Form          Length (mm)   Width (mm)   Thickness (mm)
Wood chips    Cuboid        5-10          4-7          1
Confetti      Round flake   6             6            0.104
Paper shred   Long flake    25-35         6            0.104
PE granule    Round plate   4             4            2
3. Image Processing Approaches
The spatial trajectory of each single RDF particle
is derived based on the principle of tracking-by-
detection, which identifies objects at first and
associates the detections into trajectories afterward.
Since the measurements refer to single particle
tracking-by-detection, the detection and tracking
process is relatively uncomplicated. The particles are
identified by virtue of binarization and then associated
temporally. The available image processing
approaches for multiple particle tracking-by-detection
using the plenoptic camera are presented in [11], which
include a novel combined detection method and a data
association approach using a linear Kalman filter with
the 2.5D global nearest neighbor approach. The same
image processing procedure can also be used on the
stereo camera system. Notwithstanding, as a
consequence of the higher depth accuracy provided by
the stereo camera system, the processing might be
refined accordingly.
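A minimal sketch of this single-particle detection and temporal association step might look as follows; it assumes grayscale input frames and an arbitrary intensity threshold, and is a simplified illustration rather than the combined detection and Kalman-filter association method of [11].

```python
import numpy as np

def detect_particle(frame, threshold=50):
    """Binarize one grayscale frame and return the centroid (row, col)
    of the bright pixels, or None if nothing exceeds the threshold."""
    mask = frame > threshold
    if not mask.any():
        return None
    rows, cols = np.nonzero(mask)
    return rows.mean(), cols.mean()

def track_single_particle(frames, threshold=50):
    """Associate the per-frame detections of a single particle over time,
    returning a list of (frame index, centroid) pairs."""
    trajectory = []
    for k, frame in enumerate(frames):
        centroid = detect_particle(frame, threshold)
        if centroid is not None:
            trajectory.append((k, centroid))
    return trajectory
```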
4. Results and Discussion
First, in order to obtain an initial estimate of the
accuracy of the two cameras, both cameras measured
the distance to a calibration object simultaneously,
where the object was placed at six different depth
positions. For each captured image, many pixels
belonging to the object are captured, and we take their
mean depth as the representative depth of the
object. Fig. 3 shows the depths of the calibration object
with different ground truth depths measured by both
camera systems, where 200 measured depth points of
each ground truth depth are selected and displayed in
the figure. Additionally, Fig. 4 presents the comparison
result by a boxplot to point out the measured median
distance and its corresponding distribution over time.
The zero point is defined as the upper edge of the drop
shaft scaffold, which is 4.8 m to the bottom. As
revealed by the figure, the stereo camera system
delivers median measured distances smaller than the
ground truth values, whereas the plenoptic camera
system measures the distances that are first smaller and
then larger than the true values. Moreover, the
measurement deviation of the stereo camera system
tends to increase as the measurement distance
increases. Comparatively, the measurement error of
the plenoptic camera decreases at first, is roughly
minimized at around 3 m to 4 m, and then increases.
Compared to 4 m, where the median measurement
deviates from the ground truth value by only 48 mm,
the error at 4.8 m reaches 470 mm. Overall, the stereo
camera shows superior stability and accuracy in
measuring the distance of non-moving objects
compared to the plenoptic camera.
In addition to comparing the differences between
the two depth cameras in measuring fixed calibration
objects, the cameras are also compared for measuring
continuously varying depths of the calibration object,
as schematically depicted in Fig. 5. The varying
distance is measured five times, and the resulting five
trajectories are shown in the figure. Generally, the
distance changes measured by the two cameras are
comparable. As revealed in Fig. 5, the measurements
of the plenoptic camera are accompanied by more
significant fluctuations, especially at the beginning and
the end. Since the drop shaft is 4.8 m high, the
trajectory should end up fluctuating slightly at 4.8 m.
Therefore, we can deduce a larger measured value of
the plenoptic camera at 4.8 m, which indicates an
agreement with the statement provided in Fig. 4.
(a) Stereo camera system
(b) Plenoptic camera system
Fig. 3. Measured depth of the calibration object with
different ground truth depths. The horizontal coordinate is
the frame number, and the vertical coordinate represents the
measured distance.
(a)
(b)
Fig. 4. Boxplot of measured depth. The horizontal coordinate
is the actual distance of the object, and the vertical coordinate
represents the measured distance. (a) Measured distance of
the stereo camera system. The S refers to the stereo camera,
and the blue number is the median of the measured value. (b)
Measured distance of the plenoptic camera system. The P
stands for the measured value of the plenoptic camera, and
the green number is the corresponding temporal median.
(a) Stereo camera system
(b) Plenoptic camera system
Fig. 5. Measured continuously increasing drop distance with
relation to fall time. The horizontal coordinate is the fall time,
and the vertical coordinate represents the drop distance.
As a result of completing the comparison of
measuring the distance of fixed objects and
continuously increasing distances, an initial
impression of the accuracy of the two cameras is
obtained. Thereafter, the trajectories of various RDF
particles derived by the two cameras are also
compared. For each fraction, ten particles were
dropped, detected, and tracked. For clarity, only one
trajectory of each fraction is presented in Fig. 6.
Because the two cameras were triggered
asynchronously and their captured ranges differed,
time differences exist between the captured
corresponding trajectories. In order to achieve a visual
comparison of the trajectories' similarities, the
trajectories captured by the plenoptic camera are
delayed by a certain amount of time. For the purpose
of determining the time, the average value of the first
ten captured depths by the plenoptic camera is
computed. Subsequently, the depth value that is closest
to the average value on the captured trajectory
belonging to the stereo camera system is searched, and
the corresponding time is the delay time. Apparently,
the plenoptic camera provides a longer measured depth
range, especially when tracking smaller particles such
as confetti and plastics. As far as stability is concerned,
the fluctuations of the trajectories with respect to all
fractions provided by the plenoptic camera are much
more substantial than those of the stereo camera
system, as also revealed in the previous experiments.
The measurement fluctuations of the plenoptic camera
are more considerable than the stereo camera system
for measuring both stationary and moving objects.
Moreover, the fluctuations of measuring small objects
are even more significant, which is not conducive to
object tracking. Although the trajectories provided by
the stereo camera system fluctuate more slightly, small
particles can be detected and tracked longer by the
plenoptic camera.
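One way to express this alignment rule compactly is sketched below, under the assumption that both trajectories are available as arrays of time stamps and measured depths; the function name and array layout are illustrative only.

```python
import numpy as np

def delay_time(t_plen, d_plen, t_stereo, d_stereo, n_first=10):
    """Estimate the delay applied to the plenoptic trajectory: average its first
    n_first depths, find the stereo sample whose depth is closest to that average,
    and return the corresponding time offset."""
    d_ref = np.mean(d_plen[:n_first])
    k = np.argmin(np.abs(d_stereo - d_ref))
    return t_stereo[k] - t_plen[0]
```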
(a) Wood
(b) Confetti
(c) Paper shred
(d) PE granule
Fig. 6. Examples of drop distance with relation to fall time
of various particle fractions. The horizontal coordinate is the
fall time, and the vertical coordinate represents the drop
distance.
In addition, the point-based similarity of the
resulting trajectories of each particle is computed
according to dynamic time warping [12], which aims
to find the warping path between two trajectories with
the smallest warping cost. Dynamic time warping can
compute the similarity of two time series (e.g., the
time-depth trajectory), especially for time series with
different lengths and frame rates. Dynamic time
warping warps and distorts the time series
automatically (i.e., localized scaling on the time axis)
so that the two signals are as consistent as possible to
obtain the maximum similarity. Supposing two time
series Q and C with respective lengths n and m, the value of the i-th frame
from Q is denoted as q_i and the value of the j-th frame from C as c_j.
To align these two series, a matrix of dimension n × m is
constructed, with matrix element (i, j) denoting the
Euclidean distance between the two points q_i and c_j.
The smaller the distance, the higher the similarity.
Dynamic time warping finds the minimum of the sum
of the Euclidean distances and recognizes the sum as
the warping cost. The dynamic time warping takes the
time shift into consideration and is thus superior to
simply computing the Euclidean distance between the
corresponding trajectories. Fig. 7 gives an example of
the distance between two trajectories (obtained
signals) using dynamic time warping. The figure above
depicts the originally measured distances of the two
cameras, and the bottom figure presents the
comparison of the distances using dynamic time
warping. The outcome of the comparison is the sum of
the Euclidean distances between corresponding points.
Fig. 7. Schematic of the point-based similarity utilizing
dynamic time warping. The horizontal coordinate is the
captured frame, and the vertical coordinate represents the
measured drop distance.
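For reference, a compact dynamic-programming sketch of the warping cost used here (the minimum cumulative point-to-point distance over all admissible warping paths) is given below; it follows the textbook formulation rather than the specific variant of [12].

```python
import numpy as np

def dtw_distance(q, c):
    """Dynamic time warping cost between two 1-D depth trajectories q and c:
    the minimum cumulative distance over all admissible warping paths."""
    n, m = len(q), len(c)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(q[i - 1] - c[j - 1])            # Euclidean distance in 1-D
            D[i, j] = cost + min(D[i - 1, j],          # insertion
                                 D[i, j - 1],          # deletion
                                 D[i - 1, j - 1])      # match
    return D[n, m]

# Example: two short depth series with different lengths and sampling rates.
q = np.array([0.0, 0.5, 1.1, 1.8, 2.6])
c = np.array([0.0, 0.4, 0.9, 1.2, 1.9, 2.5])
print(f"DTW cost = {dtw_distance(q, c):.2f}")
```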
Table 3 illustrates the computed median distances
of the fractions in accordance with dynamic time
warping. The bottom row of Table 3 indicates the
distance normalized by the number of frames. As
presented in Fig. 7, the trajectory is automatically
patched according to the initial or termination value
after the shifting to ensure the same length, which
gives rise to more considerable distances. Therefore,
the larger the time difference between the two
trajectories, the further the resulting distance will be.
As shown in Table 3, wood chips have a shorter matching
distance as a consequence of their regular and rapid
movement. Compared to wood chips, confetti and paper
shred have a larger windward area and greater wind
resistance. As a result, their falling motions tend to be
more irregular and slower, leading to longer trajectory
distances. This can also be indicated by the trajectory
instances in Fig. 6. Although PE also triggers regular
movement, the difference existing in the captured
range of the two cameras results in a large standardized
warping distance.
Table 3. Median distances of the fractions in mm using dynamic time warping.

                         Wood chips   Confetti   Paper shred   PE granule
Distance                 101212       420171     644593        159781
Standardized distance    241.55       325.32     323.21        491.20
5. Conclusions
The study compared two different 3D
cameras, viz., a stereo camera system and a plenoptic
camera, preliminarily with respect to their depth
accuracy and stability in measuring the depth of fixed
calibration objects, continuously varying depths, and
tracking various single RDF fractions in a drop shaft.
Concerning measuring the depth of fixed objects, the
two cameras are able to provide comparable depth
accuracy. The accuracy of the stereo camera decreases
with increasing distance, whereas the measurement
deviation of the plenoptic camera is nonlinear, which
decreases at first and then rises. Additionally, the
plenoptic camera measures the depth with the presence
of a considerable variation, which shows a negative
impact on the tracking process. The same statement
can also be deduced when measuring the continuously
varying depths. Generally, the two cameras deliver
falling distances with a high agreement between each
other. Nevertheless, the depth stability provided by the
stereo camera is superior. When comparing the
cameras in tracking fuel particles, the point-based
matching distance using dynamic time warping is
introduced to illustrate the similarity between the
measured depth trajectories. The distances of the
fractions with regular and rapid motions, e.g., wood
chips and PE granules, are significantly shorter than
those with long-time motions. Furthermore, with the
plenotpic camera, the small particles are longer visible.
Hence, we can infer a longer observation of the small
particles with the plenoptic camera.
To conclude, the stereo camera system and the
plenoptic camera could provide comparable depth
accuracy. In this regard, the stereo camera system
shows a slight advantage. However, the measurement
stability of the stereo camera system is far superior to
the plenoptic camera. Since the measured depth of the
plenoptic camera is accompanied by considerable
fluctuations, their impacts on further tracking
processes can not be ignored. In several cases, more
sophisticated tracking approaches or post-processing
are essential to deal with the depth fluctuations caused
by the plenoptic camera. On condition that the issues
caused by fluctuations can be tackled, the plenoptic
camera can replace the stereo camera system in
situations where the stereo camera system cannot be
applied, such as when only one opening is available.
Acknowledgements
This study is supported by AiF-German Federation
of Industrial Research Associations (No. 20410N).
References
[1]. T. Ringbeck, M. Albrecht, J. Frey, M. Grothof, H. Heß,
H. Kraft, T. Möller, J. Mosen, and B. Schink, Time-of-Flight
3D camera for autonomous navigation and
industrial automation, in Proceedings of the 11th
International Conference, an event of the AMA
Association for Sensor Technology (R. Lerch, Ed.),
Nürnberg, Germany, May 2003.
[2]. V. L. Tran and H. Lin, A structured light RGB-D
camera system for accurate depth measurement,
International Journal of Optics, Vol. 2018, 8659847.
[3]. R. Orozco, C. Loscos, I. Martin, and A. Artusi, Chapter
4 - multiview HDR video sequence generation, High
Dynamic Range Video, F. Dufaux, P. Le Callet,
R. K. Mantiuk, and M. Mrak, eds. Academic Press,
Cambridge, MA, USA, 2016.
[4]. R. Ng, Digital light field photography, PhD thesis,
Stanford University, 2006.
[5]. B. Langmann, K. Hartmann, and O. Loffeld, Depth
camera technology comparison and performance
evaluation, in Proceedings of the 1st International
Conference on Pattern Recognition Applications and
Methods, SciTePress - Science and Technology
Publications, 2012.
[6]. C.-Y. Chiu, M. Thelwell, T. Senior, S. Choppin, J. Hart
and J. Wheat, Comparison of depth cameras for
threedimensional reconstruction in medicine, in
Proceedings of the Institution of Mechanical
Engineers, Part H: Journal of Engineering in
Medicine, Vol. 233, Jun 2019, pp. 938–947.
[7]. E. M. Hall, B. S. Thurow, and D. R. Guildenbecher,
Comparison of three-dimensional particle tracking and
sizing using plenoptic imaging and digital in-line
holography, Appl. Opt., Vol. 55, Aug. 2016,
pp. 6410–6420.
[8]. Z. Zhang, A flexible new technique for camera
calibration, IEEE Transactions on Pattern
Analysis and Machine Intelligence, 22, 11,
2000, pp. 1330-1334.
[9]. C. Heinze, S. Spyropoulos, S. Hussmann, and C.
Perwass, Automated robust metric calibration
algorithm for multifocus plenoptic cameras, IEEE
Transactions on Instrumentation and Measurement,
65, 5, 2016, pp. 1197-1205.
[10]. M. Zhang, M. Vogelbacher, K. Aleksandrov,
H.-J. Gehrmann, D. Stapf, R. Streier, S. Wirtz,
V. Scherer, and J. Matthes, A Novel Plenoptic Camera-
Based Measurement System for the Investigation into
Flight and Combustion Behavior of Refuse-Derived
Fuel Particles, ACS Omega, 8, 19, 2023, pp. 16700–
16712.
[11]. M. Zhang, M. Vogelbacher, V. Hagenmeyer,
K. Aleksandrov, H.-J. Gehrmann and J. Matthes, 3-D
refuse-derived fuel particle tracking-by-detection
using a plenoptic camera system, IEEE Transactions
on Instrumentation and Measurement, 71, 2022,
pp. 1-15.
[12]. K. K. Paliwal, A. Agarwal, and S. S. Sinha,
A modification over Sakoe and Chiba's dynamic
time warping algorithm for isolated word
recognition, Signal Processing, Vol. 4, 1982,
pp. 329–333.
(021)
Impact of Solvent on Ammonia Detection Performance
of Polyaniline-based Sensors
S. Vassaux, N. Redon, E. A. da Silva and C. Duc
Center for Energy and Environment, IMT Nord Europe, Institut Mines-Télécom,
University of Lille, F-59000 Lille, France
Tel.: + 33327712222
E-mail: sabine.vassaux@imt-nord-europe.fr; nathalie.redon@imt-nord-europe.fr;
edilene.dasilva@imt-nord-europe.fr; caroline.duc@imt-nord-europe.fr
Summary: Ammonia is a polluting gas in the atmosphere. Chemiresistive sensors based on polyaniline
(PAni) can be used to monitor it. In this technology, the doped PAni is dispersed in a solvent such as m-cresol,
N-methyl-2-pyrrolidone or dichloroacetic acid, as reported in the literature. However, these solvents raise
environmental and toxicity issues. This study aims to identify a safer solvent for PAni formulations that still displays
efficient gas sensing performance. Several alternative organic solvents were selected and tested in the formulation
of PAni doped with camphorsulfonic acid (CSA). The morphology of the PAni films was characterized using SEM. The sensing
performance of the materials formulated with the various solvents was evaluated at different ammonia
concentrations. The results show that the solvent influences the surface morphology of the PAni/CSA films, suggesting
different specific surface areas, and that the sensitivity of the sensors to ammonia is in turn impacted by the
solvent. The role of the active-layer morphology in the sensing performance is highlighted.
Keywords: Chemiresistive sensors, Ammonia detection, Polyaniline-based sensors, Sustainable formulation.
1. Introduction
Ammonia is a polluting gas which, at high
concentrations, can cause human illnesses by
inhalation [1]. Hence, ammonia monitoring is crucial
in several industrial and agricultural applications. To
measure ammonia in real time, many sensing methods
exist, such as optical and acoustic methods or solid-state
sensing technologies [1]. Among the latter,
chemiresistive devices based on conductive polymers,
and more specifically on polyaniline (PAni),
offer the advantages of easy customization,
good processability and room-temperature operation
[2]. To manufacture polyaniline-based sensors, the
polymer needs to be dispersed in a solvent. Up to now,
the most popular solvents identified to disperse the
insoluble doped polyaniline have been m-cresol [3],
dichloroacetic acid [4] and chloroform. These solvents
raise environmental and toxicity issues. To reach a
sustainable fabrication of polyaniline-based sensors,
safer solvents must be found. Therefore, the objective
of this study is to identify safer solvents to disperse
conductive polyaniline without drastically impacting
its sensing performance. The influence of the solvent
on the properties of polyaniline films is studied.
2. Materials and Methods
2.1. Materials
Polyaniline-emeraldine (Mw = 65000 g mol-1) and
(+)-camphor-10-sulfonic acid (CSA), chosen as the
doping agent, were provided by Sigma Aldrich. They
were mixed together to obtain a doping rate of 50 %.
The powder mix was then meticulously incorporated
into four different solvents: dichloroacetic acid (DCAA),
acetic acid (AA) and ethanol (E), selected for having the
same carbon number in their chain, and toluene (T),
chosen for its aromatic structure, which is similar
to that of m-cresol. The resulting solutions were
stirred at 700 rpm for 5 days and then were
sonicated for 1 hour. Four "PAni/CSA/solvent"
solutions were obtained, with concentrations
between 20 and 30 g/L. The sensors were
fabricated by the drop-cast technique: 1 to 2.5 µL of the
solutions were deposited onto gold interdigitated
electrodes on a flexible polyimide substrate. The sensors
were dried on a hotplate at 100 °C overnight, and then
for 7 days at 100 °C in an oven under vacuum.
2.2. Methods
The morphology of the PAni films was observed using scanning electron microscopy at an accelerating voltage of 10 kV and a magnification of 1000×. The ammonia detection performance was evaluated at different concentrations (from 50 to 2000 ppb), at a constant temperature (20.4 °C) and a constant relative humidity of 51 %, since a relative humidity of (50 ± 5) % corresponds to the standard atmosphere recommended for testing specimens [5].
The relative responses of all sensors were calculated as a function of the resistance under clean air (R0) and the real-time resistance (R) as follows:
Relative Response (%) = 100 × (R − R0) / R0 (1)
The relative response was plotted against the ammonia concentration measured in the exposure chamber, and the sensitivity, corresponding to the slope of the linear regression, was obtained.
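As an illustration only (not the authors' processing code), the following minimal sketch shows how Eq. (1) and the sensitivity could be computed from recorded resistances; the resistance and concentration arrays are hypothetical.

```python
import numpy as np
from scipy.stats import linregress

# Hypothetical data: clean-air baseline resistance and resistances recorded
# at several ammonia concentrations (in ppm).
R0 = 1.20e5                                             # ohms under clean air
concentration_ppm = np.array([0.05, 0.2, 0.5, 1.0, 2.0])
R = np.array([1.22e5, 1.31e5, 1.48e5, 1.75e5, 2.15e5])  # ohms under NH3

relative_response = 100.0 * (R - R0) / R0               # Eq. (1), in percent

# Sensitivity = slope of the linear regression of response vs. concentration.
fit = linregress(concentration_ppm, relative_response)
print(f"sensitivity = {fit.slope:.1f} %/ppm, R^2 = {fit.rvalue**2:.3f}")
```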
3. Results
Fig. 1 shows considerable variability in morphology among the PAni/CSA films formulated with the different solvents. Uniform and smooth films are obtained for PAni dispersed in dichloroacetic acid (DCAA), while a more porous polymer film is observed for PAni films formulated with acetic acid (AA). Needle-shaped particles result from the toluene evaporation on the polymer film (T), while large aggregated structures are obtained after ethanol (E) evaporation.
The influence of the solvent on the ability of the PAni/CSA sensors to detect ammonia was then studied. Fig. 2 shows that all sensors are sensitive to ammonia and present the behavior commonly observed for PAni-based sensors, characterized by a relative response that increases with the ammonia concentration [1]. Moreover, independently of the solvent, the sensors present good reversibility, characterized by a relative response around 17 ± 5 % of their initial value after 30 minutes of desorption under clean air. An interesting point is that the sensitivity of the sensors is drastically impacted by the solvent. The sensitivities of the DCAA, AA, T and E samples are 5.86, 39.82, 74.34 and 87.28 %/ppm, respectively. These variations can be mainly attributed to differences in specific surface area (Fig. 1). Indeed, given the morphology of the samples, the specific surface area of the E and T samples should be considerably higher than that of the AA and DCAA samples. A higher specific surface area is known to facilitate the interaction of ammonia vapor with the PAni film [6] and to lead to enhanced performance, as suggested by Tanguy et al. [2]. Finally, in this study, the PAni sensor formulated with ethanol displays the best performance, with the highest sensitivity and the best reversibility.
Fig. 1. SEM images of PAni films deposited on a Polyimide
substrate for solutions DCAA, AA, E and T.
Fig. 2. Response toward ammonia concentration from 0 to
2000 ppb of PAni/CSA sensors from solutions DCAA, AA,
E and T (51 % RH and 20.4 °C).
4. Conclusions
The solvent used to disperse the doped polyaniline has a great influence on the morphology and the ammonia-sensing performance of PAni-based films. The microstructure, and presumably the corresponding specific surface area, of the PAni film governs the sensitivity to ammonia without drastically affecting the reversibility of the sensor. Ethanol appears to be a promising safer solvent that preserves the efficient ammonia-sensing capability of PAni films.
Acknowledgements
The authors are grateful to IMT Nord Europe and Groupe TERA for financial support.
References
[1]. D. Kwak, Y. Lei, R. Maric, Ammonia gas sensors: A comprehensive review, Talanta, Vol. 204, 2019, pp. 713-730.
[2]. N. R. Tanguy, M. Thompson, N. Yan, A review on advances in application of polyaniline for ammonia detection, Sensors and Actuators B: Chemical, Vol. 257, 2018, pp. 1044-1064.
[3]. T. Vikki et al., Molecular Recognition Solvents for Electrically Conductive Polyaniline, Macromolecules, Vol. 29, No. 8, 1996, pp. 2945-2953.
[4]. T. E. Olinga, J. Fraysse, J. P. Travers, A. Dufresne, A. Pron, Highly Conducting and Solution-Processable Polyaniline Obtained via Protonation with a New Sulfonic Acid Containing Plasticizing Functional Groups, Macromolecules, Vol. 33, No. 6, 2000, pp. 2107-2113.
[5]. International Organization for Standardization, International Standard ISO 554-1976 (E): Standard atmospheres for conditioning and/or testing – Specifications, 1976.
[6]. Z. Pang, E. Yildirim, M. A. Pasquinelli, Q. Wei, Ammonia Sensing Performance of Polyaniline-Coated Polyamide 6 Nanofibers, ACS Omega, Vol. 6, No. 13, 2021, pp. 8950-8957.
(023)
Feasibility of Gait Change Detection using Smart Footwears
T. Funayama 1, Y. Uchida 2, Y. Kogure 3, D. Souma 4 and R. Kimura 5
1 Faculty of Medical Sciences, Teikyo University of Science, Uenohara-shi, Yamanashi, Japan
2 Faculty of Life & Environmental Sciences, Teikyo University of Science, Adachi-ku, Tokyo, Japan
3 Professor Emeritus, Teikyo University of Science, Adachi-ku, Japan
4 Department of Rehabilitation, Isogo Central Hospital, Yokohama-shi, Kanagawa, Japan
5 Department of Rehabilitation, Seirei Yokohama Hospital, Yokohama-shi, Kanagawa, Japan
Tel.: + 81554634411, fax: + 81554636944
E-mail: funayama@ntu.ac.jp
Summary: This study focused on the potential of utilizing pressure sensors and accelerometers in the field of rehabilitation.
Wearable devices have evolved significantly in recent years. By harnessing these technological devices, the progress of patients
undergoing rehabilitation can be monitored and analyzed. Assessments of the soles and of walking can potentially be used to understand health status, extending beyond mere gait analysis. Rehabilitation robots that not only assist with physical strength but also repetitively guide desirable movements can potentially produce sustained effects, even after their use is discontinued. Therefore, we examined walking before, during, and after robot-assisted walking, using smart insoles and acceleration measurements. The results suggest that smart insoles attached to shoes and accelerometers could potentially detect changes in gait.
Keywords: Smart insole, Accelerometer, Activity measurement, Gait change detection.
1. Introduction
Wearable devices capable of measuring activities,
which have rapidly gained popularity in recent years,
can also be used for the rehabilitation of daily activities
[1–2]. For this purpose, the equipment should be
beneficial, easy to operate, easy to wear, convenient,
and practical for older people and those with
disabilities who have health challenges. Furthermore,
it is crucial that healthcare professionals and caregivers
understand and utilize the data effectively. A plethora
of data can be acquired from digital devices. However,
the more abundant the collected data, the more
complex and challenging it becomes to comprehend its
underlying implications. Furthermore, a higher
number of sensors is often correlated with increased
costs and device operation complexity. Walking speed
is sometimes referred to as the “sixth vital sign” and
can provide insights into the body's state beyond just
locomotion [3]. In recent years, research on smart
insoles has increased [4–11]. We used a smart insole
with pressure sensors placed in four parts of the sole
and an attached accelerometer worn on the shoe to
compare data before, during, and after walking using a
gait-assisting robot.
We report the potential of smart insoles and
accelerometers to detect changes in walking through
robot-assisted rehabilitation. This study was approved
by the Human Research Ethics Committee of the
Teikyo University of Science.
2. Experiment Method
2.1. Devices
We used a wireless smart insole (FEELSOLE®)
that had pressure sensors. It allows for measurements
in four parts (toe, heel, inside, and outside) of each
foot, making a total of eight parts. The insoles must be
calibrated before use. Calibration was performed four
times: with no pressure and no feet in the shoes,
standing on both feet, and standing on one foot on each
side. The sampling frequency was set to 50 Hz. Using ORPHE ANALYTICS, the data were stored in the cloud and downloaded in CSV format.
An ORPHE CORE® accelerometer was used. The data were uploaded to the cloud using the ORPHE ANALYTICS application, and the upload was confirmed by downloading the data in CSV format. The sampling frequency was set to 200 Hz. We used two triaxial accelerometers, one for each leg. The assessments were conducted by attaching the devices to the outer side of each shoe, one on each foot. Data from both the smart insoles and the accelerometers were synchronized using ORPHE ANALYTICS. The robot used in this study was the Orthobot, which is commonly used for walking and gait rehabilitation. When attached to a long-leg brace, the robot steers the lower limbs into favorable movement patterns. The placement of the sensors is depicted in Fig. 1, and the robot equipped for walking is shown in Fig. 2.
Fig. 1. Insole sensors and accelerometers.
Fig. 2. Walking with robot.
2.2. Measurement and Analysis
We measured pre-usage walking, walking while wearing the robot, and post-usage walking using smart insoles with built-in pressure sensors and shoes fitted with accelerometer sensors. The subject performed three walking trials of approximately 20 seconds each while wearing the walking-assist robot. The analysis used data from the third trial. Gait analysis after use was conducted twice: at five minutes and again at twelve minutes after robot use. The 1000th to 6000th data points were analyzed, excluding the periods immediately after the start and just before the end of each measurement.
Peak values were identified from the insole data, representing the points of highest sensor force application for the four parts (heel, toe, inside, and outside) during a single foot-ground contact. The subsequent decrease in the sensor values after each peak was calculated to determine the rate of decrease. The peak values were detected using the find_peaks function from the Python library SciPy. The threshold for peak determination was set at 50 % of the maximum value of each trial for the heel, toe, inside, and outside parts. The rate of decrease indicates the decline in the value following the peak and was calculated from the difference in the weighted averages of adjacent data points; in other words, the difference between the weighted averages at data points x and x+1 was computed. The weighted average was obtained using five values (the target value and the two values on each side). Because of the significant influence of the central region, the weights were set to 40 % for the target value of the waveform, 20 % for the values one point before and after, and 10 % for the values two points before and after. The maximum rate of decrease after each peak was determined, and then, because there are multiple foot-ground contacts within a single walk, the mean of the maximum decrease rates for that walk was calculated. The mean maximum decrease rates were compared across the different parts of the sole: the heel, toe, inside, and outside of the foot. The weighted-average differences after the peaks extracted with find_peaks sometimes had positive values; however, only negative values were used in the calculations.
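For illustration only (this is not the authors' analysis code), the sketch below follows the procedure described above under stated assumptions: a hypothetical one-dimensional pressure trace, a peak threshold of 50 % of the trial maximum, and the 10/20/40/20/10 % weights.

```python
import numpy as np
from scipy.signal import find_peaks

def mean_max_post_peak_decrease(signal, threshold_ratio=0.5):
    """Mean of the maximum post-peak decrease rates for one walking trial."""
    signal = np.asarray(signal, dtype=float)
    # Peaks above 50 % of the trial maximum, as in the text.
    peaks, _ = find_peaks(signal, height=threshold_ratio * signal.max())
    # Five-point weighted average centred on each sample (10/20/40/20/10 %).
    weights = np.array([0.1, 0.2, 0.4, 0.2, 0.1])
    smoothed = np.convolve(signal, weights, mode="same")
    diffs = np.diff(smoothed)                      # value at x+1 minus value at x
    max_drops = []
    for start, stop in zip(peaks, list(peaks[1:]) + [len(diffs)]):
        window = diffs[start:stop]
        negative = window[window < 0]              # only decreases are used
        if negative.size:
            max_drops.append(abs(negative.min()))  # largest decrease after this peak
    return float(np.mean(max_drops)) if max_drops else float("nan")

# Hypothetical insole trace (arbitrary units) for one part of the sole.
rng = np.random.default_rng(1)
trace = np.abs(np.sin(np.linspace(0, 20 * np.pi, 5000))) * 100 + rng.normal(0, 2, 5000)
print(mean_max_post_peak_decrease(trace))
```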
The accelerometer data were examined by taking the absolute value of each axis and by computing the square root of the sum of the squares of the X-, Y-, and Z-axis values. The subject was a man in his 60s who wore the robot on his right leg.
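As a brief, hypothetical illustration of these two quantities (not the authors' code), assuming the accelerations are held in an (N, 3) array:

```python
import numpy as np

# Hypothetical (N, 3) array of X, Y, Z accelerations sampled at 200 Hz.
rng = np.random.default_rng(2)
acc = rng.normal(size=(1000, 3))

mean_abs_per_axis = np.abs(acc).mean(axis=0)     # mean absolute value per axis
magnitude = np.sqrt((acc ** 2).sum(axis=1))      # 3-axis square root per sample
print(mean_abs_per_axis, magnitude.mean())
```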
3. Results
3.1. Insole and Accelerometer Data
An example of the data obtained from the insole
and accelerometer is shown in Fig. 3. The data from
the insole are represented on the Y-axis on the left,
whereas the data from the accelerometer are
represented on the Y-axis on the right.
Fig. 3. Insole and acceleration data.
The acceleration trace shows only the magnitude obtained by taking the square root of the sum of the squares of the values in the three axes (X, Y, and Z). Each color corresponds to a specific area: blue for the left heel, light blue for the left toe, green for the left inside, yellow-green for the left outside, red for the right heel, pink for the right toe, orange for the right inside, and yellow for the right outside. Additionally, left
acceleration is depicted in dark gray, and right
acceleration is depicted in light gray. “L” and “R” in
all figures and tables represent “Left” and “Right,”
respectively. The data from both the left and right
insoles as well as the accelerometers are synchronized.
3.2. Peak Values of the Insoles
Table 1 presents the mean peak values for the four
parts of the left and right insoles. Twelve minutes after
the robot was removed, a decrease was observed in the
outside region, whereas the values from the heel and
inside regions increased.
Table 1. Mean of peak values.

Part        Before   During   5 min after   12 min after
L_heel       118.4    132.4      119.0         122.6
L_toe         70.2     68.2       72.0          72.2
L_inside     122.4    135.7      138.3         133.5
L_outside     75.9     63.5       65.3          73.0
R_heel       108.4     98.8      114.0         129.8
R_toe         85.4     37.5       82.2          79.5
R_inside     135.3    126.3      148.1         152.2
R_outside     79.5     37.8       71.2          65.7
3.3. Rate of Decrease
An example of the post-peak maximum decrease
rates is shown in Fig. 4. These data are from the right
insole prior to robot usage. Because the decrease rates
were expressed as absolute values, higher values
indicated greater decrease rates, suggesting swift shifts
in the center of gravity.
Fig. 4. Post-peak decrease rate (right foot).
Peaks were automatically calculated using the find_peaks function in Python, with the threshold set at 50 % of the maximum value. Consequently, the number of peaks extracted may vary across the different regions of the insole. The X-axis in Fig. 4 represents the peak number. Figs. 5 and 6 show the mean
absolute maximum post-peak decrease rates for each
walking instance. Larger values indicate greater rates
of decrease, signifying rapid shifts in the center of
gravity. These effects were detected not only in the
right foot where the robot was attached but also in the
left foot. The most significant differences in the rate of
decrease were in the inside of both the left and right
feet, and the effects were discernible up to twelve
minutes after robot removal.
Fig. 5. Post-peak decrease rate (left foot).
Fig. 6. Post-peak decrease rate (right foot).
3.4. Accelerometer Data
Accelerometer data were compared based on the
mean of the absolute values. As shown in Fig. 7, the
square root of the sum of the squares of the values for
the three axes increased after robot-assisted walking,
suggesting an overall acceleration of leg movement.
Acceleration in the X- and Y-axes increased during both robot-assisted and post-robot-assisted walking compared with walking before robot use, whereas Z-axis acceleration decreased during robot-assisted walking and twelve minutes afterwards. This may indicate a potential to capture changes in walking patterns. Furthermore, examining the standard deviation (Table 2), the maximum was reached five minutes after robot-assisted walking, and the value remained clearly higher than the pre-robot value twelve minutes afterwards.
Fig. 7. Acceleration data.
Table 2. Standard deviations of the accelerometer data.

                     Left     Right
Before               0.605    0.591
During               0.589    0.491
5 minutes after      0.877    0.903
12 minutes after     0.865    0.832
4. Discussion
After robot-assisted walking, there was a tendency
for the peak values to increase at the heel and inside
part of the foot. Moreover, the inside part exhibited a
significant maximum decrease rate after the peak,
indicating rapid weight transfer. Accelerometer data
revealed differences in the speed and movement
direction after robot-assisted walking. This suggests
the possibility of an increased stride length or greater
leg swing. The effects of robot-assisted walking
influenced both the right foot, to which the robot was attached, and the left foot, based on data from both the insole and the acceleration measurements. Therefore, it may be possible to assess these effects using either the insole or the accelerometer alone.
5. Conclusions
Changes were detected in both the insole and
accelerometer before and after robot-assisted walking
sessions. Data collected twelve minutes after robot
removal also varied. The insoles and accelerometer
sensors attached to the shoes have the potential to
detect changes in walking patterns. In the future,
provided that the use of sensors becomes more
accessible in the field of rehabilitation, they may
contribute to assessing the effectiveness of
rehabilitation training and determining how much of
the effect of rehabilitation persists after its completion.
Acknowledgements
This study was supported by JSPS KAKENHI, Grant
Number JP23K11207.
References
[1]. S. Raghav, S. Mani, A. Anand, S. Pathak, A. Singh,
G. Kandasamy, M. Kumar, Role of Sensor-Based
Insole as a Rehabilitation Tool in Improving Walking
among the Patients with Lower Limb Arthroplasty: A
Systematic Review, Intelligent Systems and Smart
Infrastructure: Proceedings of ICISSI 2022, 2023, 38.
[2]. B. Marques, J. McIntosh, A. Valera, and A. Gaddam,
Innovative and assistive eHealth technologies for
smart therapeutic and rehabilitation outdoor spaces for
the elderly demographic, Multimodal Technologies
and Interaction, 4, 4, 2020, 76.
[3]. A. Middleton, G. D. Fulk, M. W. Beets, T. M. Herter, S. L. Fritz, Self-selected walking speed is predictive of daily ambulatory activity in older adults, Journal of Aging and Physical Activity, Vol. 24, Issue 2, 2016, pp. 214-222.
[4]. T. Funayama, Y. Uchida, Y. Kogure, Detection of
motion restriction with smart insoles, Sensors &
Transducers, Vol. 259, Issue 5, 2022, pp. 61-68.
[5]. T. W. Seo, J. Y. Lee, B. H. Lee, The reliability test of
a smart insole for gait analysis in stroke patients,
Korean Physical Therapy Science, Vol. 29, No.1,
2021, pp. 30-40.
[6]. S. Saidani, R. Haddad, R. Bouallegue, R. Shubair, A
New Proposal of a Smart Insole for the Monitoring of
Elderly Patients, in Proceedings of the 35th
International Conference on Advanced Information
Networking and Applications, Toronto, Canada,
12-14 May 2021, Vol. 2, pp. 273-284.
[7]. E. M. Macdonald, B. M. Perrin, L. Cleel, M. I. C.
Kingsley, Podiatrist-Delivered Health Coaching to
Facilitate the Use of a Smart Insole to Support Foot
Health Monitoring in People with Diabetes-Related
Peripheral Neuropathy, Sensors, Vol. 21, Issue 12,
2021, 3984.
[8]. S. Subramaniam, S. Majumder, A. I. Faisal, M. J. Deen, Insole-Based Systems for Health Monitoring: Current Solutions and Research Challenges, Sensors, Vol. 22, Issue 2, 2022, 438.
[9]. S. Kim, S. Park, S. Lee, S. H. Seo, H. S. Kim, Y. Cha,
et al., Assessing physical abilities of sarcopenia
patients using gait analysis and smart insole for
development of digital biomarker, Scientific Reports,
Vol. 13, Issue 1, 2023, 10602.
[10]. V. Tsakanikas, A. Ntanis, G. Rigas, C. Androutsos,
D. Boucharas, N. Tachos, et al., Evaluating gait
impairment in Parkinson’s disease from instrumented
insole and IMU sensor data, Sensors, Vol. 23, Issue 8,
2023, 3902.
[11]. T. Funayama, Y. Uchida, and Y. Kogure, Step
Measurement Using a Household Floor Mat and Shoe
Sensors, International Journal on Advances in Life
Sciences, Vol. 15, No. 1 & 2, Issue 5, 2023, pp. 33-43.
(024)
Exploring the Hidden Complexity: Approximate Entropy and Sample
Entropy Analysis in Pulse Oximetry of Female Athletes
A. M. Cabanas 1, D. Catalán 1, N. Sáez 1, C. Flores 1, and P. Martín-Escudero 2
1 Departamento de Física, Universidad de Tarapacá, Arica, 1010069, Chile
2 Medical School of Sport Medicine, Faculty of Medicine, Universidad Complutense de Madrid,
Madrid, 28040, Spain
E-mail: acabanas@academicos.uta.cl
Summary: In recent years, the use of Approximate Entropy (ApEn) and Sample Entropy (SampEn) to quantify the complexity
of time-series data in the medical and sports fields has attracted significant attention. These methods have been employed
across various biomedical signals, such as heart rate, to evaluate physiological changes or conditions. This study aims to
examine the application of ApEn and SampEn to temporal series of pulse oximetry and heart frequencies collected from a
cohort of female athletes. Through a comparative analysis, we highlight the differences between ApEn and SampEn in
representing the regularity and unpredictability of these physiological signals. Additionally, we delve into the relationship
between these entropy measures and the athletes' maximal oxygen consumption (VO2,max). Preliminary findings suggest that
while both entropy measures provide valuable insights into the athletes' physiological complexities, SampEn emerges as a
more consistent and dependable metric, especially for shorter data sets. Furthermore, we've uncovered pivotal insights
regarding the relationship between fitness levels and entropy measures, suggesting the potential of these metrics as predictors
or indicators of both athletic performance and cardiovascular health. This study emphasizes the importance of entropy-based
metrics in sports physiology and sets the stage for more specialized interventions for athletes using monitoring devices.
Keywords: Pulse oximeter, Approximate entropy, Sample entropy, VO2,max, Women's response to exercise.
1. Introduction
In recent years, there has been escalating interest in
understanding the factors that influence the
physiological response of women to exercise,
particularly the variations in oxygen saturation.
Research has shown that women tend to exhibit an
early decrease in oxygen saturation during maximal
exercise, experiencing this at lower oxygen intakes
compared to men. Some theories postulate that active,
healthy women might encounter exercise-induced
arterial hypoxia due to anatomical differences in lung
capacity and structure, potentially affecting oxygen
diffusion. However, more recent studies suggest that
the oxygen desaturation observed in women is the
primary limiting factor in achieving peak maximum
oxygen uptake (VO2,max) levels, rather than mere lung
size or capacity [1].
The advent of wearable sensors, which
continuously monitor physiological variables like
heart rate (HR) and oxygen saturation using
photoplethysmography (PPG), offers a promising
avenue to understand exercise effects and to tailor
training interventions. Pulse oximetry, a non-invasive
optical technique, has emerged as a vital tool for
studying oxygenation during physical exertion,
enabling continuous tracking of changes in peripheral
oxygen saturation. Although pulse oximetry has
limitations [2], investigating oxygen saturation
variations through wearable devices like pulse
oximeters can significantly augment our
comprehension of physiological responses to exercise.
This understanding can then be leveraged to
personalize training interventions and optimize
performance monitoring.
Emerging evidence points to the intricate patterns
of variability in SpO2 signals as a treasure trove of
insights into respiratory control and the perception of
breathlessness during hypoxic conditions [3].
Fluctuations in SpO2 are correlated with breathlessness
perception and reflect the exchange of information
with other respiratory variables [4].
The assessment of dynamical system regularity is a
prominent subject in the fields of science and
engineering. Indeed, the ability to discern levels of
complexity within biological data sets has become
increasingly important. This measurement has diverse
applications, such as analyzing medical health
conditions [5,6], identifying anomalies in real-time
network dynamics [7] and predicting earthquakes [8].
Various statistical and mathematical approaches have
been developed to quantify complexity within time
series data. These methods encompass the
Kolmogorov complexity measure [9], the C1/C2
complexity measure [9], and entropy [10].
This growing understanding of physiological time-
series data complexity has paved the way for various
statistical metrics. In this context, Approximate
Entropy (ApEn) and Sample Entropy (SampEn) have
gained prominence in recent times [11,12]. These
measures, originally developed to quantify the
regularity and complexity of time-series data, have
found extensive applications in the biomedical realm,
helping discern patterns and irregularities in various
physiological signals [11, 12].
Pulse oximetry and heart rate data, pivotal
indicators of cardiovascular and respiratory health,
offer a myriad of insights ripe for entropy-based
analysis. In the realm of athletic performance, these
physiological indicators are invaluable. They not only
reflect an athlete's health but also offer a lens into
performance potential and resilience [1]. Female
athletes, given their distinct physiological dynamics,
represent a fascinating demographic for such
investigations, demanding tailored methodologies for
analysis and interpretation [1].
Central to athletic performance, especially in
endurance sports, is the measure of VO2,max — the
maximal oxygen consumption. This parameter
essentially represents an individual's capacity to
transport and utilize oxygen during incremental
exercise, often correlated with cardiovascular fitness
and aerobic endurance [11]. The relationship between
VO2,max and entropy measures of pulse oximetry
and heart frequencies can provide profound insights
into the intricacies of female athletic performance [12].
While both ApEn and SampEn aim to quantify the
unpredictability of fluctuations in a time series, they
are not without their nuances. ApEn has been observed
to have certain biases, especially with shorter data sets,
which can render its results unreliable [13]. SampEn,
developed as an improvement over ApEn, addresses
some of these limitations, offering more consistency
and reliability, especially with short data sets [13].
This article delves into the nuances of ApEn and
SampEn as applied to time-series data from pulse
oximetry and heart rates of female athletes. Our
objective is to offer a statistically robust and consistent
metric of system complexity, elucidate the distinctions
between these entropy measures, and explore their
interplay with VO2,max.
2. Methods
Entropy, a foundational thermodynamics concept
that gauges disorder within a closed system, holds
significance in nonlinear dynamical systems for
assessing complexity. It proves valuable for
scrutinizing time series due to its unconstrained
approach to probability distribution [14]. Shannon
entropy (ShEn) and conditional entropy (ConEn) serve
as fundamental metrics to measure information
quantity and generation rate, respectively [5]. These
underpin other entropy gauges developed to evaluate
time series intricacies. In this framework, entropy
provides researchers the ability to quantify complexity
within relatively short data sets based on meaningful
experimental comparisons to control groups.
Pincus introduced the widely employed
approximate entropy (ApEn) metric as a measure of
regularity to quantify levels of complexity within a
time series [14]. It measures system complexity akin to
entropy and is better suited for analyzing clinical
cardiovascular and other time series data. Another
entropy measure, sample entropy (SampEn), was
established by Richman and Moorman [16]. SampEn
aligns more closely with theoretical expectations than
ApEn across various conditions [16]. This enhanced
accuracy makes SampEn valuable for analyzing
experimental clinical cardiovascular and other
biological time series data.
In our research, we extracted time-series data
pertaining to pulse oximetry and heart rate from a
group of twenty-seven physically active and healthy
female participants. These athletes underwent an
incremental exercise test using a treadmill ergometer.
Data from ergospirometry was meticulously recorded
on protocol sheets, which were later transferred to
anonymized databases. We then employed
Approximate Entropy (ApEn) and Sample Entropy
(SampEn) methods to evaluate the complexity and
regularity of these time-series datasets.
2.1. Entropy-Based Regularity Assessment of
Time Series Data
To quantify the regularity and complexity of our
time-series data, we employed two entropy-based
metrics: Approximate Entropy (ApEn) and Sample
Entropy (SampEn).
ApEn measures the unpredictability of fluctuations in a time-series dataset. Introduced by Pincus [17, 18], it has been applied in various biomedical contexts for its ability to handle short and noisy data. ApEn is robust to noise, works for both stochastic and deterministic processes, and produces non-negative values indicating complexity.
Given a time series of length N, u(1), u(2), …, u(N), the following steps outline its computation:
1. Fix the parameters m (pattern length) and r (similarity criterion).
2. Form the N − m + 1 vectors of length m, x_m(i) = [u(i), u(i+1), …, u(i+m−1)], from the time series. The distance between two such vectors is
d[x_m(i), x_m(j)] = max |u(i+k) − u(j+k)|, with 0 ≤ k ≤ m − 1.
3. For each vector, count the number of vectors that are similar to it within a tolerance r:
C_i^m(r) = (number of j ≤ N − m + 1 such that d[x_m(i), x_m(j)] ≤ r) / (N − m + 1).
4. Compute the regularity measure for patterns of length m as:
φ^m(r) = (N − m + 1)^(−1) Σ_{i=1..N−m+1} log C_i^m(r) (1)
5. The statistical estimator ApEn(m, r, N) is then defined as:
ApEn(m, r, N)(u) = φ^m(r) − φ^(m+1)(r) (2)
On the other hand, SampEn is a refinement of
ApEn, designed to be less dependent on time series
length and more consistent [16]. SampEn addresses
some of the biases and inconsistencies of ApEn.
Indeed, its calculation excludes self-matches, making
it a more unbiased estimator of system complexity.
Given the same time series, the computation involves:
1. Similar to the steps in ApEn, begin with a time
series of length N and construct vectors.
2. However, in counting the number of matches, do
not include self-matches (i.e., exclude the case
j=i).
3. Define the regularity measures for sequences of length m as:
B^m(r) = (N − m)^(−1) Σ_{i=1..N−m} B_i^m(r) and A^m(r) = (N − m)^(−1) Σ_{i=1..N−m} A_i^m(r),
where B_i^m(r) and A_i^m(r) are the fractions of vectors within the tolerance r of x_m(i) for pattern lengths m and m + 1, respectively, computed without self-matches (normalization by N − m − 1).
4. Compute SampEn(m, r, N)(u) as:
SampEn(m, r, N)(u) = −ln [A^m(r) / B^m(r)]
In summary, both ApEn and SampEn measure the
regularity or unpredictability of time-series data, with
SampEn being refined to exclude self-matches. This
distinction in the counting method results in SampEn
generally producing more consistent and reliable
results than ApEn.
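For illustration only (not the authors' implementation), the following minimal NumPy sketch follows the two definitions above; the function names and the synthetic test series are hypothetical, and the series is normalized to unit standard deviation so that r = 0.2 corresponds to 0.2·SD.

```python
import numpy as np

def _phi(u, m, r):
    """phi^m(r) of Eq. (1): mean log-fraction of length-m patterns within
    tolerance r (Chebyshev distance, self-matches included)."""
    x = np.array([u[i:i + m] for i in range(len(u) - m + 1)])
    d = np.max(np.abs(x[:, None, :] - x[None, :, :]), axis=2)
    c = np.mean(d <= r, axis=1)                      # C_i^m(r)
    return np.mean(np.log(c))

def apen(u, m=2, r=0.2):
    """Approximate entropy, Eq. (2): phi^m(r) - phi^{m+1}(r)."""
    return _phi(u, m, r) - _phi(u, m + 1, r)

def sampen(u, m=2, r=0.2):
    """Sample entropy: -ln(A/B), with self-matches excluded from the counts."""
    n = len(u)

    def match_count(templates):
        d = np.max(np.abs(templates[:, None, :] - templates[None, :, :]), axis=2)
        return np.sum(d <= r) - len(templates)       # drop the diagonal (self-matches)

    xm = np.array([u[i:i + m] for i in range(n - m)])         # length-m templates
    xm1 = np.array([u[i:i + m + 1] for i in range(n - m)])    # length-(m+1) templates
    return -np.log(match_count(xm1) / match_count(xm))

# Hypothetical usage on a synthetic series normalized to SD = 1.
rng = np.random.default_rng(0)
series = rng.normal(size=300)
series = (series - series.mean()) / series.std()
print(apen(series, m=2, r=0.2), sampen(series, m=2, r=0.2))
```

With this formulation, the same normalized series can simply be swept over m = 1, 2, 3 and r = 0.1–0.25 to reproduce the kind of parameter study described next.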
To investigate the influence of parameters on
entropy calculations, time series data was analyzed
using various combinations for ApEn and SampEn.
The parameters were “m” (length of data for comparison), with values of 1, 2, and 3, and “r” (sensitivity criterion), set at 0.1, 0.15, 0.20, and 0.25 times the standard deviation of the whole time series. “N” (data length) represents the total count of steps in the series.
In this context, “m” indicates the number of steps
compared in a sequence, while “r” denotes the
acceptable variance in step lengths. For example, with
m=2, two steps are compared, and with r=0.2, step
lengths are considered similar if they vary within 20%
of the series' overall standard deviation. Let us remark
that a practical approach is to define the tolerance as
r = 0.2·SD, where SD represents the standard deviation
of the dataset. This facilitates comparisons between
datasets with varying amplitudes. For the purposes of
this study, every time series was normalized to yield an
SD of 1.
3. Results
Table 1 summarizes the clinical variables of the N = 27 participants in the study. The values presented are
averages, accompanied by their respective standard
deviations to provide an understanding of the
variability within the data.
The temporal evolution of the oxygen saturation SpO2 and of the HR obtained from the ECG signal for two athletes is depicted in Fig. 1. The blue lines correspond to the SpO2 temporal series, while the red and pink lines correspond to the HR dataset in bpm [1]. The continuous lines correspond to an athlete with a good physical fitness condition (VO2,max = 45.7 ml/(kg·min)). The dash-dotted lines correspond to a dark-skinned athlete with an excellent physical fitness condition (VO2,max = 50.9 ml/(kg·min)).
Table 1. Clinical characteristics of the participants.

Subjects: 27              X̄        SD
Age (years)              22.96     6.19
Height (cm)             163.81     6.90
Weight (kg)              57.24     6.70
BMI (kg/m2)              21.31     1.98
HRmax (bpm)             189.81     8.54
VO2,max (ml/(kg·min))    48.90     7.62
Fig. 1. Temporal evolution of the SpO2 and HR time series for two different athletes [1].
Fluctuations have the potential to convey valuable
insights into the overall health and functioning of the
respiratory system. The impact of modifying “m”
(embedding dimension) and “r” (threshold) on
Approximate Entropy (ApEn) and Sample Entropy
(SampEn) was explored for both the HR and SpO2 time series.
Fig. 2 depicts values of SampEn and ApEn for heart rate (HR, represented in red colors) and pulse oximetry (SpO2, represented in blue colors) across three panels, each corresponding to an athlete with a different fitness level based on their VO2,max values.
Panel a) corresponds to an athlete with medium condition (VO2,max < 40). In this case, for both HR and SpO2, the ApEn and SampEn values seem relatively constant across the different values of “r” for the different values of “m”. This consistency suggests that, for athletes with medium fitness levels, increasing the dimensionality (increasing “m”) does not significantly change the entropy estimates across the various threshold values (“r”). Since lower entropy values indicate more regularity and less complexity, this suggests that athletes with medium fitness levels may have more predictable and regular physiological responses.
Panel b) corresponds to an athlete with good condition (40 < VO2,max < 50). We observe that the values of ApEn and SampEn are higher than in panel a) but slightly lower than in panel c). This increase in
entropy values suggests that as fitness levels improve,
the physiological responses exhibit more complexity.
This could be due to the cardiovascular and respiratory
systems' adaptability, as they become better trained to
handle varying physical challenges.
Fig. 2. ApEn and SampEn for three athletes with a)
medium, b) good and c) excellent condition.
Besides, for both HR and SpO2, the ApEn and SampEn values seem relatively constant across the different values of “r” for the different values of “m”. This also suggests that, for athletes with medium fitness levels, increasing the dimensionality does not significantly change the entropy estimates across the various threshold values (r). Lastly, panel c) shows ApEn and SampEn for an athlete with excellent condition (VO2,max > 50). There is a noticeable variability in the ApEn and SampEn values across the different r values, especially for m = 2. This variability could be indicative of the more complex physiological adaptations and responses in athletes with superior fitness levels. Their cardiovascular system might be more adaptable and capable of dynamic responses to different physiological challenges.
Across the panels, SampEn values seem
consistently lower than ApEn values for both HR and
SpO2. ApEn can be biased, especially for short
datasets, as it counts self-matches, leading to higher
values. In contrast, SampEn eliminates this bias by
excluding self-matches, often resulting in lower values
[14, 16].
When examining the complexity and regularity of
physiological time series using Approximate Entropy
(ApEn) and Sample Entropy (SampEn), the choice of
parameters “m” and “r” is crucial. In our case, the
choice of m=2 and r=0.2 is supported by both
empirical and theoretical considerations. As shown in
Fig. 2 when m=2, the entropies provide a stable
measure across different conditions and subjects.
Besides, using m=2 is computationally efficient,
especially for longer time series. As the embedding
dimension increases, the computational demands rise
exponentially. The choice of r=0.2 is based on the idea
of examining patterns that deviate by 20 % of the
standard deviation of the time series. This value is a
common choice in many studies and provides a
balance between sensitivity and specificity.
The choice of m=2 and r=0.2 is also consistent with
many previous studies on physiological time series. It
provides a balance between sensitivity, reliability,
computational efficiency, and discriminative power.
Furthermore, this consistency allows for better comparability across studies and conditions.
Fig. 3 displays the values of ApEn and SampEn for
the specific choices of m=2 and r=0.2. Each dot
represents an individual athlete's entropy values. A
clear trend is observed: as the fitness condition of the
athletes improves, there is an increase in the values of
both ApEn and SampEn. This upward trend signifies
that as athletes achieve higher fitness levels, their
physiological responses become more complex and
less predictable, echoing the intricacies of a well-
trained cardiovascular system.
Additionally, we observe a higher degree of
dispersion in the entropy values among athletes with
excellent fitness conditions. This increased dispersion
may indicate that while all these athletes are at a high
fitness level, the intricacies of their physiological
responses vary more widely. Such variance could be
attributed to multiple factors, including specific
training regimens, genetic factors, or other
physiological aspects that contribute to an athlete's
overall performance and adaptability.
Table 2 presents a detailed comparison of
Approximate Entropy (ApEn) and Sample Entropy
(SampEn) values as a function of the fitness conditions
of the athletes. The values are presented as averages (X̄) alongside their respective standard deviations (SD), providing insights into both the central tendency
and variability of the entropy measures. Athletes with
medium fitness levels have an average ApEn of 0.57
for heart rate (HR), with a standard deviation of 0.13.
This suggests a moderate level of complexity in their
heart rate patterns. For pulse oximetry (SpO2), the
average ApEn is higher at 0.96 with a broader spread
(SD=0.49), indicating greater unpredictability in
oxygen saturation levels for this group. The average
SampEn for HR is 0.33 with an SD of 0.08, indicating
a lower complexity compared to ApEn. The average
SampEn for SpO2 is 0.19, with an SD of 0.04,
reflecting a more regular and predictable oxygen
saturation pattern than its ApEn counterpart.
Fig. 3. a) ApEn and b) SampEn for m = 2 and r = 0.2. The blue dots correspond to the HR data, while the red triangles correspond to the SpO2 data.
Table 2. Approximate Entropy and Sample Entropy average and SD (X̄ ± SD) as a function of the fitness condition of the athletes.

Fitness        ApEn (X̄ ± SD)                SampEn (X̄ ± SD)
Condition      HR            SpO2           HR            SpO2
Medium         0.57 ± 0.13   0.96 ± 0.49    0.33 ± 0.08   0.19 ± 0.04
Good           0.56 ± 0.09   0.90 ± 0.43    0.36 ± 0.12   0.22 ± 0.10
Excellent      0.60 ± 0.09   0.85 ± 0.42    0.40 ± 0.15   0.22 ± 0.11
Furthermore, athletes in good condition show an
ApEn of 0.56 (SD = 0.09) for HR, similar to that of
the medium fitness group. Their SpO2 ApEn is slightly
lower than the medium group at 0.90 with an SD of
0.43. The SampEn for HR increases to 0.36 with an
SD of 0.12. The SpO2 SampEn average is 0.22 with an
SD of 0.10, showing a slight increase in complexity
compared to the medium fitness group.
Lastly, athletes with excellent fitness have an
average ApEn of 0.60 for HR with an SD of 0.09,
marking the highest complexity among the groups.
Their SpO2 ApEn is 0.85 with an SD of 0.42, slightly
lower than the good fitness group. The SampEn for
HR is at its highest for this group, averaging 0.40 with
an SD of 0.15. The SpO2 SampEn remains consistent
with the good fitness group, averaging 0.22, but with
a slightly higher SD of 0.11.
In essence, the table underscores the nuanced
physiological differences that manifest across varying
levels of athlete fitness, captured effectively through
entropy measures. The data also reveals a greater
variability (as seen from the SD values) in entropy
measures, especially in the excellent fitness group.
This might suggest that while these athletes are all
highly fit, there's a diversity in their physiological
responses.
4. Conclusions
The primary focus of this article was to explore the
utility of Approximate Entropy (ApEn) and Sample
Entropy (SampEn) as regularity statistics for female
athletes during incremental exercise. Our results not
only underscore the potential of entropy measures as
diagnostic tools but also illuminate the subtle
physiological differences evident as athletes approach
peak fitness levels.
Athletes possessing medium to good fitness levels
display stable ApEn and SampEn values, signifying a
uniform and predictable physiological response.
Conversely, athletes at the pinnacle of fitness exhibit
greater variability in entropy measures, indicative of
intricate physiological adaptations and a highly
adaptable cardiovascular system.
A notable observation was the bias in ApEn, which consistently produced higher values than SampEn. This disparity originates from the inclusion of self-matches in ApEn calculations. Thus, SampEn emerges as a more reliable measure, particularly for shorter datasets. It is crucial for researchers to account for this bias when employing entropy measures for analysis. Our study sheds light
on the intricacies of physiological time series, with
entropy values offering a window into the complexity
and regularity of physiological responses. This paves
the way for a deeper comprehension of an athlete's
adaptability and overall health. Integrating easily
computable ApEn and SampEn metrics into
monitoring devices has the potential to revolutionize
personalized training interventions. Additionally, the
broad application of these metrics across diverse
research fields promises new avenues for analytical
exploration. Work in this direction is in progress.
Acknowledgements
A.M.C., D.C., N.S. and P.M-E. thank ANID Project SA22I0178 for support. A.M.C. thanks ANID Project SA77210039 for support.
References
[1]. P. Martín-Escudero, A. M. Cabanas, M. Fuentes-Ferrer, M. Galindo-Canales, Oxygen Saturation Behavior by Pulse Oximetry in Female Athletes: Breaking Myths, Biosensors, Vol. 10, Issue 11, 2021, pp. 1-19.
[2]. Division of Industry and Consumer Education (DICE).
United States Food and Drug Administration. FDA.
Pulse Oximeter Accuracy and Limitations: FDA Safety
Communication. 2021.
[3]. Y. Jiang, J. T. Costello, T. B. Williams, N. Panyapiean,
A. S. Bhogal, M. J. Tipton, et al., A network
physiology approach to oxygen saturation variability
during normobaric hypoxia, Experimental Physiology,
Vol. 1, Issue 106, 2021, pp. 151-9.
[4]. A. L. Holder, A. Wong, The Big Consequences of
Small Discrepancies: Why Racial Differences in Pulse
Oximetry Errors Matter, Critical Care Medicine,
Vol. 2, Issue 50 2022, pp. 335-337.
[5]. H. Azami, J. Escudero, Amplitude and Fluctuation
Based Dispersion Entropy, Entropy, Vol. 20,
2018, 210.
[6]. B. Yan, S. He, K. Sun, Design of a network
permutation entropy and its applications for chaotic
time series and EEG signals, Entropy, Vol. 21,
9, 2019, 849.
[7]. S. E. Benkabou, K. Benabdeslem, B. Canitia, Unsupervised outlier detection for time series by entropy and dynamic time warping, Knowledge and Information Systems, Vol. 54, 2018, pp. 463-486.
[8]. A. Ramírez-Rojas, L. Telesca, F. Angulo-Brown,
Entropy of geoelectrical time series in the natural time
domain, Natural Hazards and Earth System Sciences,
Vol. 11, 2011, pp. 219-225.
[9]. Y. Li, Y. Fan, Complexity measure applied to the
analysis of EEG signals, in Proceedings of the 27th
Annual IEEE Conference on Engineering in Medicine
and Biology, Shanghai, China, 1–4 September 2005;
pp. 4610–4613.
[10]. L. Zunino, F. Olivares, F. Scholkmann, O. A. Rosso,
Permutation entropy based time series analysis:
Equalities in the input signal can lead to false
conclusions, Physics Letters, Section A: General,
Atomic and Solid State Physics, Vol. 381, 2017,
pp. 1883-1892.
[11]. A. Delgado-Bonal, A. Marshak, Approximate entropy
and sample entropy: A comprehensive tutorial,
Entropy, Vol. 21, 6, 2019, 541.
[12]. M. Spedding, R. Marvaud, A. Marck,
Q. Delarochelambert, J. F. Toussaint, Aging, VO2
max, entropy, and COVID-19, Indian Journal of
Pharmacology, 54, 2022, pp. 58-62.
[13]. J.M. Yentes, N. Hunt, K.K. Schmid, J.P. Kaipust,
D. McGrath, N. Stergiou, The appropriate use of
approximate entropy and sample entropy with short
data sets, Annals of Biomedical Engineering, Vol. 2,
Issue 41, 2013, pp. 349-365.
[14]. S. M. Pincus, Approximate entropy as a measure of
system complexity, Proceedings of the National
Academy of Sciences, Vol. 6, Issue 88, 1991,
pp. 2297–301.
[15]. J. W. Liou, P. S. Wang, Y. T. Wu, S. K. Lee,
S. D. Chang, M. Liou, ECG Approximate Entropy in
the Elderly during Cycling Exercise, Sensors, Vol. 14,
2022, 22.
[16]. J. S. Richman, J. R. Moorman, Physiological time-series analysis using approximate entropy and sample entropy, American Journal of Physiology – Heart and Circulatory Physiology, Vol. 278, 2000, pp. H2039-H2049.
[17]. S. Pincus, and W. Huang, Approximate entropy -
statistical properties and applications,
Communications in Statistics - Theory and Methods,
Vol. 21, 1992, pp. 3061-3077.
[18]. S. Pincus, Approximate entropy (ApEn) as a
complexity measure, Chaos, Vol. 5, Issue 31, 1995,
pp. 110–117.
(025)
Development of a Smart Irrigation System for Apple Fields
using a LoRaWAN Network
R. Mendicino, S. Tritini, A. Mejia-Aguilar and R. Monsorno
Center for Sensing Solutions, Eurac Research, Via A. Volta, 13/A, Bolzano, Italy
Tel.: +39 0471 055395
E-mail: roberto.mendicino@eurac.edu
Summary: Smart irrigation systems are becoming increasingly important in agriculture because they optimize the management of resources, especially water. Recent climate change events have caused water shortages in many areas, making it critical to improve irrigation efficiency for sustainable agricultural production. We provide a smart irrigation method that enhances irrigation efficiency by means of a multi-platform approach combining a wireless sensor network (WSN) of soil moisture sensors and a phenocam to monitor plant stress. The developed system consists of three main hardware elements: a
LoRaWAN gateway that collects data from the sensor nodes and, thanks to its internet connectivity, allows the second
component, a phenocam, to collect RGB and infrared photos; and, as the last component, the sensor nodes that are distributed
in the field and measure the soil moisture at different positions and depths. In addition to the hardware, a large software
component is part of the system, with databases, dashboards for displaying data in real-time, and analysis tools.
Keywords: Agriculture smart sensors, LoRaWAN network, Soil moisture sensors, Phenocam.
1. Introduction
The United Nations Convention to Combat
Desertification (UNCCD) reported that drought
affected at least 1.5 billion people and cost US$125
billion globally. Forecasts estimate that by 2050,
droughts may affect over three-quarters of the world's
population [1]. In this scenario, systems that monitor
and optimize the use of water are very important. Novel smart sensors, which are able to take decisions, support different communication protocols, optimize power consumption, and maintain good resolution and accuracy, have been introduced in the agriculture domain over the past decades.
In parallel, the capacity to adapt existing sensing
methods to new applications is opening new
opportunities to understand vegetation dynamics,
especially in agriculture. Phenology cameras, or
phenocams, investigate interactions between growth,
phenology, and harvest traits [2].
We propose a system that is easy to replicate and scale up, efficient, economically affordable and cost-effective to implement, applicable to different crops, and open to integrating different types of sensors.
LoRaWAN has already been used in many Internet of Things (IoT) applications in the agriculture domain because of its large communication range, long battery life, high network capacity and, compared to other solutions, better cost-effectiveness [2, 3]. Starting from a commercial LoRaWAN gateway, a series of additional circuits and modifications have been made in order to efficiently integrate the phenocam. Phenocams have been shown to be a good technology for field phenotyping, as they allow investigating interactions between growth, phenology, and harvest traits [4]. A custom node has also been developed, with an emphasis on the power budget. The
nodes have been installed in an apple orchard at
Laimburg (Bolzano), inside the project LIDO
(Laimburg Integrated Digital Orchard) [5], and data
collection immediately began.
2. Design of the System
Fig. 1 shows the hardware and software architecture of the proposed system, which is composed of: a LoRaWAN gateway with an integrated phenocam (mod. NetCam SC H264), IoT nodes with custom electronics to minimize power consumption, a LoRaWAN network server, a database with visualization tools, and soil moisture and temperature sensors.
Fig. 1. Schematic of the hardware components that
compose the system.
The integration of the LoRaWAN gateway with the phenocam has been achieved by adding commercial voltage-regulator circuitry inside the gateway, to correctly power both devices, and a relay to switch on the camera when required.
The communication between the camera and
gateway is guaranteed by Ethernet connectivity, and
the relay is activated by using an internal pin of the
motherboard that, after much reverse engineering, has
been found to be controllable by software. Custom software for the gateway has been developed that switches the camera on, waits for connectivity, and downloads the picture directly through the phenocam API.
The camera has been installed at a height of approximately 3 meters, pointing north to avoid unwanted light scattering, on the same pole where the photovoltaic panel and battery pack are installed.
The IoT nodes use a Pycom LoPy4 as their main microcontroller, which has an integrated LoRaWAN communication module; a 16-bit ADC that acquires data from the sensors via I2C communication; and a custom circuit that generates the correct power supply for the sensors and the microcontroller.
Additionally, the LTC6995-2 component switches the system on every 30 minutes for 30 seconds (the time necessary to acquire and send the new data) and reduces the power consumption to 300 µW during sleep mode. A 3100 mAh battery and a 78×100 mm2 panel, along with its circuit, complete the node.
Fig. 2a shows the block diagram of the system.
Since the ADC module has only four inputs and each
node needs to collect data from three soil moisture
sensors and three temperature sensors, the number of
channels is not enough.
Therefore, since the temperature is less important in this application, it was decided to use the internal ADC of the Pycom LoPy4 for two of the measured temperatures. The LoPy4 is based on the Espressif Systems ESP32, whose ADC has a higher noise level compared to the dedicated module.
Fig. 3. In the top part the sensor node data (temp. & soil
moisture) and in the bottom part an example of the
phenocam images (RGB and IR).
This choice will be reflected in the results, where it
is possible to note that the noise level is higher for the
temperature acquired by the Pycom Lopy4 ADC.
Soil moisture and temperature sensors have been integrated to measure both moisture and temperature. The moisture measurement uses the principle of frequency-domain reflectometry (FDR) sensors, which measure the dielectric constant of the soil in order to estimate the volumetric soil moisture content.
Fig. 2. a) Block scheme of the sensor node, b) architecture
of the back-end system.
The output is linear: 0-2 V corresponds to a volumetric moisture content of 0-100 %, and 0-2 V corresponds to a temperature range of -40 °C to 80 °C.
The sensors have been installed at different depths between 20 and 40 cm to map the temperature and soil moisture as a function of position and depth.
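As a hedged illustration of these linear transfer functions (the function names and example voltages are hypothetical, not taken from the authors' firmware):

```python
def voltage_to_vwc(v_out: float) -> float:
    """Volumetric water content in %, assuming 0-2 V maps linearly to 0-100 %."""
    return (v_out / 2.0) * 100.0

def voltage_to_temperature(v_out: float) -> float:
    """Soil temperature in °C, assuming 0-2 V maps linearly to -40 °C ... +80 °C."""
    return -40.0 + (v_out / 2.0) * 120.0

# Example: 0.68 V -> 34.0 % VWC; 1.05 V -> 23.0 °C
print(voltage_to_vwc(0.68), voltage_to_temperature(1.05))
```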
The LoRaWAN network server chosen is ChirpStack, a platform that allows the registration of many LoRaWAN devices in a network. Moreover, it is possible to create applications that help collect the data coming from the devices and to monitor all packets passing through the network. Via its interface, a decoding script has been configured that receives the data coming from the registered devices and converts it into the correct format before sending it to the API to be stored in the database (Fig. 2b).
The system is configured to use an API that links the network server with the database to save all the incoming data. A time-series (NoSQL) database, InfluxDB, is used to store the devices' time series. The data are displayed on a public dashboard developed using the Grafana platform [6].
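Purely as a sketch of this storage step (assuming the influxdb-client Python package and placeholder connection details; the bucket, measurement, tag names and values are illustrative, not specified by the authors):

```python
from datetime import datetime, timezone
from influxdb_client import InfluxDBClient, Point, WritePrecision
from influxdb_client.client.write_api import SYNCHRONOUS

# Placeholder URL, token, org and bucket.
with InfluxDBClient(url="http://localhost:8086", token="my-token", org="my-org") as client:
    write_api = client.write_api(write_options=SYNCHRONOUS)
    point = (
        Point("soil")                              # measurement name (illustrative)
        .tag("node", "node-01")
        .tag("depth_cm", "20")
        .field("vwc_percent", 34.2)                # decoded volumetric water content
        .field("temperature_c", 23.0)
        .time(datetime.now(timezone.utc), WritePrecision.S)
    )
    write_api.write(bucket="soil-monitoring", org="my-org", record=point)
```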
3. Preliminary Results
Fig. 3 shows the dashboard with the soil moisture information over time and the pictures obtained by the phenocam. Soil moisture data are collected every 30 minutes. Due to the analog nature of the node's power-management circuitry, the measurements are not synchronous.
Conversely, the phenocam acquires one image per hour from 7:00 AM to 5:00 PM. The data are currently displayed on the dashboard as raw data and saved in the database. It is important to note that noise is apparent in the data acquired by the Pycom LoPy4 microcontroller. This is evident in the temperature graph, where the two temperatures acquired directly by the microcontroller are much noisier than those acquired by the ADC board.
Fig. 4. The trend of soil moisture at different depths (20, 30 and 40 cm) and the precipitation data collected by the Laimburg weather station.
Fig. 4 shows the trend of soil moisture at the three different depths of 20, 30 and 40 cm, together with the precipitation data collected by an official weather station. Specifically, the weather station in Laimburg (Bolzano) is used, which belongs to the network of weather stations installed and maintained by the Provincia Autonoma di Bolzano [7]. The station collects several parameters, such as air temperature, air humidity, precipitation, solar radiation, wind speed and direction, atmospheric pressure, and sunshine hours per day. The collected data are organized as daily averages or as single values taken every 5 minutes. In this case, the precipitation parameter has been used for comparison with the soil moisture.
In order to facilitate the synchronization of the data
acquired by the sensors and the official weather station,
the time has been reported as Unix time.
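As an illustration only, with assumed file and column names, the sensor and weather-station records can be aligned on Unix time with a nearest-timestamp join:

```python
# Hedged sketch: align asynchronous sensor readings with the weather-station data.
import pandas as pd

sensors = pd.read_csv("sensor_node.csv")       # assumed columns: unix_time, sm_20cm, ...
weather = pd.read_csv("laimburg_station.csv")  # assumed columns: unix_time, precipitation_mm

sensors["time"] = pd.to_datetime(sensors["unix_time"], unit="s")
weather["time"] = pd.to_datetime(weather["unix_time"], unit="s")

# Nearest-time join with a tolerance, since the node acquisitions are not synchronous
merged = pd.merge_asof(
    sensors.sort_values("time"), weather.sort_values("time"),
    on="time", direction="nearest", tolerance=pd.Timedelta("30min"),
)
```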
Soil moisture peaks mainly correspond to the
events of precipitation given by the rain. Since this data
corresponds to the summer season, other peaks are
present due to artificial irrigation.
The soil moisture level at 40 cm depth is more stable than the others, in particular compared with the one at only 20 cm.
Observing the data, it is possible to note a few hours of difference in the maximum peaks; this is due to the time required by the water to penetrate to greater depths.
Since this season has been characterized by
abundant rains throughout the summer, the soil
moisture data does not show any significant lack of
water or suffering in the plants.
The derived products from the phenocam are:
• Red, Green, and Blue channels (RGB), which are
useful for checking the field condition (orchard
management);
• Near Infrared channel (NIR) to observe details
that are not in the visible range;
• NDVI, the most standard and well-studied index
(-1 unhealthy, +1 healthy) calculated with the formula
NDVI = (NIR – Red) / (NIR + Red);
• reCI, used to have an estimation of the response
to chlorophyll content in leaves nourished by nitrogen,
calculated with the formula ReCI = (NIR / RED) – 1;
• GNDVI, similar to NDVI but using the green
channel of RGB, calculated with the formula GNDVI
= (NIR – Green) / (NIR + Green).
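The indices above can be computed directly from the phenocam bands; a minimal NumPy sketch (band arrays are assumed to be co-registered rasters of equal shape):

```python
import numpy as np

def vegetation_indices(nir: np.ndarray, red: np.ndarray, green: np.ndarray, eps: float = 1e-9):
    ndvi = (nir - red) / (nir + red + eps)       # -1 (unhealthy) .. +1 (healthy)
    reci = nir / (red + eps) - 1.0               # response to chlorophyll content
    gndvi = (nir - green) / (nir + green + eps)  # NDVI variant using the green band
    return ndvi, reci, gndvi
```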
Fig. 5. (a) RGB, (b) NDVI, (c) reCI and (d) NIR products, and the time series (May-August) for an area of interest (red square).
Fig. 5 shows the capacity of the proposed method to integrate raster information. From the management perspective, the RGB product appears to be consistent with the vegetation period. However, NDVI and reCI show a decrease during August. This is most likely due to an intervention in the orchard (pruning) to reduce the vigor of the plant (green biomass), while the fruits consume more of the available nutrients, increasing their volume.
4. Conclusions
This work presents a smart system for monitoring soil moisture in an apple orchard using a multi-sensor approach: a soil moisture/temperature WSN integrated with a phenocam monitoring system and with data coming from the nearest meteorological station. The intention is to observe local variability at the level of the trees due to different soil conditions, water and nutrient assimilation, as well as individual stress conditions.
The system has been running smoothly since installation (May 2023) and the data collected is being analyzed by the agronomist. The results of the analysis will be used to inform future decisions about the crop.
References
[1]. The United Nations world water development report
2021 (https://www.unesco.org/reports/wwdr/2021/en)
[2]. P. Fraga-Lamas, M. Celaya-Echarri, L. Azpilicueta,
P. Lopez-Iturri, F. Falcone, T. M. Fernández-Caramés,
Design and empirical validation of a LoRaWAN IoT
smart irrigation system. MDPI Proceedings, Vol. 42,
No. 1, 2019, p. 62.
[3]. M. Usmonov, F. Gregoretti, Design and
implementation of a LoRa based wireless control for
drip irrigation systems, in Proceedings of the 2nd
International Conference on Robotics and Automation
Engineering (ICRAE’ 2017), 2017, pp. 248-253.
[4]. H. Aasen, N. Kirchgessner, A. Walter, F. Liebisch,
PhenoCams for field phenotyping: using very high
temporal resolution digital repeated photography to
investigate interactions of growth, phenology, and
harvest traits, Frontiers in Plant Science, 11, 2020,
593.
[5]. LIDO - Laimburg Integrated Digital Orchard
(https://lido.laimburg.it/it/)
[6]. Soil moisture - Eurac Research - LIDO, dashboard
Grafana
(https://cssprocapi01.eurac.edu/grafana/public-
dashboards/e07f4785e8bb42d5897a6ae8f2a35cdd?or
gId=1)
[7]. Provincia Autonoma di Bolzano – Alto Adige, Meteo
Alto Adige, Stazione meteo Laimburg
(https://meteo.provincia.bz.it/stazioni-meteo-
valle.asp?stat_stid=1261)
(026)
The use of Azure Cloud Tools for Monitoring Indoor Air Quality
L. C. Eduardo 1, C.R.S. Alexandre 2 and A. S. F. Tercio 3
1 Universidade Estadual Paulista - UNESP, Engineering College - Ilha Solteira, São Paulo, Brazil
E-mail: eduardo.cruz23@etec.sp.gov.br
2 Universidade Estadual Paulista - UNESP, Engineering College - Ilha Solteira, São Paulo, Brazil
E-mail: alexandre.cr.silva@unesp.br
3 Federal University of Catalan - UFCat, Department of Computing, Catalão, Goiás, Brazil
E-mail: tercioas@ufcat.edu.br
Summary: Monitoring air quality in indoor collective environments is important for well-being and health care. This paper
presents an indoor air quality monitoring system using ESP8266 and sensors DHT22, MICS6814, and GP2Y1014AU0F. The
monitoring, storage, and processing of data are done using Azure IoT Hub, IoT Central, Blob Storage, and Power BI cloud
services. The main objective is to monitor air quality through the Internet using as many cloud services as possible, facilitating
data visualization and eliminating the need for hardware infrastructure for data storage, availability, and processing. Tests were
conducted in an indoor environment, reading data such as temperature, humidity, CO, NH3, and NO2. These data can be viewed
via IoT Central, stored as objects and CSV files in Blob Storage, and represented in graphs through Power BI.
Keywords: Monitoring; Indoor air quality; Internet of Things; Cloud computing; Web application.
1. Introduction
Changes in air quality due to human activities can
cause various health problems, making it necessary to
monitor air quality in enclosed environments. Many
papers related to air quality have been developed based
on the Internet of Things (IoT), as discussed in [1-3].
This paper presents an air quality monitoring
system using the ESP8266 microcontroller and sensors
DHT22, MICS6814, and GP2Y1014AU0F.
The collected data is sent to the cloud using Azure
IoT Hub, stored in Blob Storage, and manipulated by
IoT Central and Power BI services.
2. Methodology
This project uses the ESP8266 microcontroller connected to the internet by WiFi to read and send sensor data to the Azure IoT Hub cloud using the MQTT (Message Queuing Telemetry Transport) protocol.
The ESP8266 acts as an MQTT publisher [4],
authenticating with the Azure cloud service using
libraries provided by the cloud service itself. The
microcontroller provides the necessary information,
such as the service subscription, the ID of the device
connected to the IoT Hub, and the connection key
provided by Azure.
The IoT Hub service acts as the MQTT broker,
collecting the received messages and forwarding them
to the IoT Central service, which acts as the MQTT
subscriber and also allows for data visualization
through the internet [5].
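As an illustration of this publisher role, the following hedged sketch uses Azure's azure-iot-device Python SDK rather than the ESP8266 firmware; the connection string and telemetry field names are placeholders:

```python
import json
from azure.iot.device import IoTHubDeviceClient, Message

# Placeholder device connection string provided by Azure IoT Hub
CONN_STR = "HostName=<hub>.azure-devices.net;DeviceId=<device>;SharedAccessKey=<key>"

client = IoTHubDeviceClient.create_from_connection_string(CONN_STR)
client.connect()

telemetry = {"temperature": 24.5, "humidity": 58.0, "co": 1, "nh3": 0, "no2": 3}
msg = Message(json.dumps(telemetry))     # one telemetry message sent over MQTT
msg.content_type = "application/json"
msg.content_encoding = "utf-8"
client.send_message(msg)

client.shutdown()
```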
For storage and visualization of raw data, the Blob
Storage service is used, which stores the data in two
containers: one with the data formatted as objects and
another containing the data in CSV (Comma-Separated
Values) spreadsheet format. Fig. 01 illustrates the
structure and connections between the project's
elements.
Fig. 1. Complete structure of the project
with the connections between its elements.
The data stored in the Blob Storage service can be
accessed by other systems within the Azure service
through HTTP (Hypertext Transfer Protocol) or
HTTPS (Hypertext Transfer Protocol Secure) requests,
as well as externally through APIs (Application
Programming Interfaces) [6].
3. Results
For this paper, the IoT device was deployed in a
home office residential environment from June 12th to
June 16th, 2023. The environment is ventilated by a
fan and is used by only one person.
The data can be viewed almost in real-time through
IoT Central using any personal device with internet
access.
The data stored in a container in the Blob Storage
service as a CSV file is read by the Microsoft Power
BI app, allowing for the creation of charts for each data
point. Table 1 presents the organized data read from
the CSV file.
In Fig. 2, the charts generated by the Power BI app
related to temperature and humidity are presented.
Table 1. Data stored in a CSV file by the Blob Storage service
(daily minima and maxima recorded from 12/06/2023 to 16/06/2023).

      CO   NH3   NO2   PM10   PM2.5   Temperature (°C)   Humidity
Min    1     0     3     11       2                 18         40
Max    1     0     6     18       6                 31         55
Min    0     0     2      6       3                 16         41
Max    1     0     6     14       6                 26         63
Min    0     0     4      9       4                 13         50
Max    1     0     9     11       6                 19         71
Min    1     0     4      5       5                 12         55
Max    1     0     8     10       5                 16         73
Min    1     0     6      6       2                 12         50
Max    3     0     9     10       6                 22         75
Fig. 2. Graph with collected data on temperature
and humidity.
To access the data stored as a CSV file in the Blob Storage service from the Power BI tool, it is only necessary to provide the access key supplied by the cloud service and add it as a data source within Power BI.
This key is available in the container information of the Blob as part of the access URL (Uniform Resource Locator): the access URL is simply copied and added to Power BI in the data source options [6].
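Outside Power BI, the same CSV can be read directly over HTTPS; the sketch below assumes a hypothetical SAS URL and column names:

```python
import pandas as pd

# Hypothetical SAS URL of the CSV file exported to Blob Storage
SAS_URL = "https://<account>.blob.core.windows.net/<container>/telemetry.csv?<sas-token>"

df = pd.read_csv(SAS_URL)                          # pandas fetches the file over HTTPS
print(df[["Temperature", "Humidity"]].describe())  # column names are assumed
```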
Fig. 3 shows the charts generated by the Power BI app using the data stored in the CSV file, related to the gases CO, NO2, NH3, and to the suspended particulate matter levels in the air.
4. Conclusions
This paper presented an IoT system for reading
telemetry data collected by sensors interconnected
through the ESP8266, using Azure cloud services for
storage and monitoring of indoor air quality.
The cloud services facilitated the creation of the
user interface for data visualization and storage, as
well as communication with the Power BI app, used
for data analysis. The main challenge encountered was
selecting the cloud services that met the project's
requirements.
For future implementations, the IoT Edge cloud
service could be added for remote triggering of air
quality control devices such as air conditioners, fans,
or other equipment installed in the environment.
Fig. 3. Data collected on particles and gases.
Acknowledgements
This paper was partially funded by the
Coordenação de Aperfeiçoamento de Pessoal de Nível
Superior - Brasil (CAPES) - funding code 001. The
2nd author has a PET scholarship.
References
[1]. D. Wall, P. McCullagh, I. Cleland e R. Bond,
Development of an Internet of Things solution to
monitor and analyse indoor air quality, Internet Things,
Vol. 14, June 2021, p. 100392.
[2]. R. Mumtaz et al., Internet of Things (IoT) Based
Indoor Air Quality Sensing and Predictive Analytic—
A COVID-19 Perspective, Electronics, Vol. 10, No. 2,
January 2021, p. 184.
[3]. W. A. P. Ferreira, Rede neural ARTMAP Fuzzy
implementada em hardware aplicada na previsão da
qualidade do ar em ambiente interno, dissertação de
doutorado, UNESP - Universidade Estadual Paulista,
Ilha Solteira, 2021.
[4]. Microsoft Learn (https://learn.microsoft.com/pt-br/azure/iot/iot-mqtt-connect-to-iot-hub).
[5]. HiveMQ (https://www.hivemq.com/blog/mqtt-
essentials-part2-publish-subscribe/).
[6]. Microsoft (https://learn.microsoft.com/pt-
br/azure/storage/blobs/storage-blobs-overview).
(028)
Visible Light Communication for Indoors Automated Guidance Vehicles
P. Louro 1.2, M. Vieira 1,2,3 and M. A. Vieira 1,2
1 ISEL-Polytechnic Institute of Lisbon, Portugal
2 UNINOVA-CTS and LASI; Lisbon, Portugal
3 NOVA School of Science and Technology, Lisbon, Portugal
E-mail: paula.louro@isel.pt
Summary: The advent of devices with wireless communication capabilities has generated increased interest in indoor
navigation. Several wireless technologies have been proposed for indoor location, as the traditional Global Positioning System
has a poor performance in a closed space. This research proposes the use of an indoor localization system based on Visible
Light Communication (VLC) to support guidance and operational tasks of Autonomous Guided Vehicles (AGV). The research
is focused on the development of the guidance VLC system, transmission of control data information and decoding techniques.
Trichromatic white LEDs are used as transmitters and photodiodes with selective spectral sensitivity are used as receivers. The
downlink channel establishes an infrastructure-to-vehicle link (I2V) and provides position information to the vehicle. The
decoding strategy is based on accurate calibration of the output signal. Characterization of the transmitters and receivers,
description of the coding schemes and the use of different modulations will be discussed.
Keywords: Visible light communication, Autonomous guided vehicle, Indoor positioning, Guidance, OOK, Manchester.
1. Introduction
Automated guidance vehicles (AGV) are mobile
robots powered by electricity that operate
independently. The navigation of the AGV can follow
a fixed route or a free ranging route. Several
technologies can be used to provide guidance to the
AGVs [1], ranging from magnetic, optical or radio
based technologies. Visible Light Communication
(VLC) is an optical wireless communication
technology operating in the visible range of the
spectrum, that provides several advantages, such as,
license-free spectrum, high-speed, immunity to
electromagnetic radiation and security, among others
[2]. As VLC can also be used in localization applications, in this paper we propose an AGV guidance system based on VLC, able to provide information on the current position of the vehicle and to enable it to navigate autonomously [3].
This system uses white tetrachromatic LEDs operating as optical signal transmitters and a-SiC:H pinpin heterostructures as receivers [4-6]. These
photodetector devices exhibit active filtering,
amplification, and selective sensitivity [7, 8] in the
visible range. Different visible wavelength signals are
encoded in the same optical transmission path, so the
device multiplexes the different optical channels,
performs filtering processes (amplification, switching,
and wavelength conversion), and outputs a
multiplexed signal [9, 10]. The modulated signal
transmitted by each emitter can be recovered by
decoding this signal [11, 12]. For a reliable calibration
curve, each photocurrent level must be accurately
regulated [13, 14]. To decode the multiplexed signal,
we will use On-Off keying modulation and Manchester
codes [15]. This system is intended for indoor
positioning and guidance of autonomous guided
vehicles (AGV) used to transfer materials from pickup
places to drop off places [16] inside an automated
warehouse. The system supports bidirectional communication with Infrastructure-to-Vehicle (I2V) and Vehicle-to-Infrastructure (V2I) links. The I2V link provides indoor localization information inside the warehouse, enabling navigation services [17].
Operational information for operation inside the
warehouse is also transmitted to the AGVs. The links
V2I and V2V provide cooperation services to enhance
system performance [18, 19].
The indoor localization system involves optical
wireless communication, computer-based algorithms,
smart sensor and optical sources network, which
constitutes a transdisciplinary approach framed in
cyber-physical systems.
2. VLC System Description
The VLC transmitter uses commercial RGB LEDs that provide three different wavelengths, centered on the red, green, and blue bands (620 nm, 530 nm and 470 nm), with standard linewidths (ranging from 24 nm to 38 nm), high luminous intensity (in the range from 340 mcd up to 980 mcd) and a wide half-intensity angle (120°). Along the indoor space, LED bulbs provide simultaneous lighting and communication, defining navigation cells (Fig. 1a).
The VLC receiver used in the VLC system is a
photodiode composed of a pinpin heterostructure
based on a-SiC:H/a-Si:H. It exhibits active filtering
and amplification properties and a selective sensitivity,
as its design was tailored to address a wavelength
sensitive device in the visible spectrum (Fig. 1 b).
Fig. 1. a) VLC transmitters; b) spectral sensitivity of the VLC receiver.
In this photodiode, two a-Si:H pins are mounted on
top of one a-SiC:H pin, which allows the device to be
used over the full visible spectrum. The device is
operated under reverse bias to improve collection
efficiency. Steady state optical bias using short visible
wavelength (400 nm) is used to improve amplification
of the longer wavelengths and attenuation of the
short ones.
Specific data codes are needed to define the communication link and the type of message to be transmitted. The structure used for coding the information transmitted over the I2V channel is a word of 64 bits.
Data transmission demands the use of a specific modulation scheme. Here we will use both OOK and Manchester, and a comparative evaluation of the two modulations will be discussed. OOK assigns a different amplitude level to each of the data bits we wish to modulate, held for the whole bit period. Manchester assigns both levels to each bit, one per half of the bit period, and it is the transitions from on to off ("on-off") or from off to on ("off-on") that distinguish between '0' and '1' data bits.
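A toy sketch of the two bit mappings (not the authors' implementation; the Manchester polarity follows the IEEE 802.3 convention, which is an assumption here):

```python
def ook_encode(bits):
    # OOK: one amplitude level held for the whole bit period
    return [1 if b else 0 for b in bits]

def manchester_encode(bits):
    # Manchester: '1' -> low-to-high ("off-on"), '0' -> high-to-low ("on-off"),
    # i.e. two half-period levels per bit, so every bit carries a transition
    return [half for b in bits for half in ((0, 1) if b else (1, 0))]

print(ook_encode([1, 1, 0, 1]))         # [1, 1, 0, 1]
print(manchester_encode([1, 1, 0, 1]))  # [0, 1, 0, 1, 1, 0, 0, 1]
```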
In the test scenario used in this work the AGVs can
move in a linear warehouse configuration with racks
on either side of the aisle for picking up materials, as
displayed in Fig. 2. A matrix notation that specifies the
row and column of the navigation cells is used to
number the cells. The AGV at the initial position
(position 1) is directed to move to an intermediate
position (position 2) and then to the destination
(position 3).
Fig. 2. Layout of the warehouse space including VLC
transmitters and AGV successive positions (initial,
intermediate and destination).
3. VLC System Description
Specific data codes are needed to define the
communication link and the type of message to be
transmitted. In every channel, it was used synchronous
transmission based on a data frame of fixed length.
Synchronization of the frames can be enabled using different approaches. The start-of-transmission (SoT) block is placed at the beginning of the frame and the end-of-transmission (EoT) block at the end. Then a
TYPE block with 4 bits is used to define the type of
message (0000 in request/acknowledge mode, 0011 in
standard/update mode). The complete structure of the
data frame has the format displayed in Fig. 3.
Fig. 3. Data frame structure of the VLC communication
channel.
The block labelled GEO-LOCATION (16 bits)
identifies the cell and footprint. The cell identification
is coded as XXX0YYY0, where XXX addresses the
line and YYY the column of the cell. The footprint is
produced by coding the R and B emitters with four bits
set to 1 and four bits set to 0, while the R' and B'
emitters are coded with two bits set to 1 and two bits
set to 0, and then two bits set to 1 and two bits set to 0.
The 36 bits MESSAGE block addresses specific
instructions transmitted to the user and dependent on
the type of communication mode.
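A small sketch of packing the cell identification into the XXX0YYY0 byte described above (the footprint half of the 16-bit GEO-LOCATION block is not modeled, and the bit-string representation is only illustrative):

```python
def encode_cell_id(line: int, column: int) -> str:
    # 3-bit line, a zero, 3-bit column, a zero -> XXX0YYY0
    assert 0 <= line < 8 and 0 <= column < 8
    return f"{line:03b}0{column:03b}0"

print(encode_cell_id(2, 5))  # '01001010' for the cell in row 2, column 5
```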
4. Modulations Comparison
Fig. 4 displays the photocurrent signal measured inside footprint #1 (the region covered by the four optical signals RGBV) at position 2, using the OOK and Manchester modulations.
The resultant output signal acquired by the receiver device shows different levels of photocurrent that can be assigned to the corresponding optical excitation. As the device exhibits capacitive effects, the photocurrent level exhibits a rising or falling slope, depending on whether the transition occurs from a lower to an upper level, or vice-versa. Consequently, this effect becomes more evident when two or more adjacent bits have the same state. As OOK modulation simply uses '0's or '1's for bit states, the signal is likely to contain many adjacent symbols ('1's or '0's), which reinforces this effect.
Fig. 4. Transmitted data message at the intermediate position (2) using the modulation: a) OOK; b) Manchester.
5. Conclusions
The proposed application focuses on the use of a VLC-based system to support the guidance of AGVs. The physical layer of the communication system was fully characterized, from the transmitters and receivers to the propagation channel modelling and data coding. Two modulation techniques, OOK and Manchester, were used to infer bit decoding performance. As the multiplexed signal results from multiple optical channels, its waveform is complex, and decoding techniques are needed to determine the correct bits transmitted by each optical channel. Improvements to the decoding technique include parity-check error control.
Acknowledgements
This research was funded by UIDB/00066/2020 and UIDP/00066/2020 and by IPL/2022/POSEIDON_ISEL.
References
[1]. Durrant-Whyte, H., Rye, D., Nebot, E., Localization of
Autonomous Guided Vehicles. In: Giralt,
G., Hirzinger, G. (eds), Robotics Research, Springer,
London, 1996.
[2]. Pathak, P. H.; Feng, X.; Hu, P.; Mohapatra, P. Visible
Light Communication, Networking, and Sensing: A
Survey, Potential and Challenges, IEEE
Communications Surveys Tutorials, 17, 2015,
pp. 2047–2077.
[3]. Paula Louro, Manuela Vieira, Manuel A. Vieira,
Bidirectional visible light communication, Opt. Eng.
59, 12, 2020, 127109.
[4]. M. Vieira, M. Fernandes, J. Martins, P. Louro,
A. Maçarico, R. Schwarz, and M. Schubert, Improved
Resolution in a p-i-n Image Sensor by Changing the
Structure of the Doped Layers, Amorphous and
Heterogeneous Silicon Thin Films-2000, Mat. Res.
Soc. Symp. Proc., S. Francisco, April 24-28 USA,
Vol. 609, 2000.
[5]. M. A. Vieira, M. Vieira, V. Silva, P. Louro, Optical
signal processing for indoor positioning using a-SiCH
technology, Proc. SPIE 9891, Silicon Photonics and
Photonic Integrated Circuits V, 98911Z, May 13,
2016.
[6]. P. Louro, M. Vieira, M. Fernandes, J. Costa,
M. A. Vieira, J. Caeiro, N. Neves, M. Barata, Optical
demultiplexer based on an a-SiC:H voltage controlled
device, Phys. Status Solidi C, 7, No. 3–4, 2010,
pp. 1188– 1191.
[7]. P. Louro, V. Silva, I. Rodrigues, M. A. Vieira,
M. Vieira, Transmission of Signals Using White LEDs
for VLC Applications, Materials Today: Proceedings,
3, 3, 2016, pp. 780–787.
[8]. M. A. Vieira, M. Vieira, P. Louro, V. Silva,
A. S. Garção, Photodetector with integrated optical
thin film filters, Journal of Physics: Conference Series,
421, 2013, 01201, http://iopscience.iop.org/1742-
6596/421/1/012011/.
[9]. M. Vieira, M. A. Vieira, P. Louro, MUX/DEMUX SiC
receiver for visible light communications,
Microsystem Technology, Vol. 28, 2022,
pp. 1587–1592.
[10]. Louro, P., Silva, V., Vieira, M. A., Vieira,
M., Viability of the use of an a-SiC:H multilayer
device in a domestic VLC application, Phys. Status
Solidi C, 11, No. 11–12, 2014, pp. 1703–1706.
[11]. P. Louro, M. Vieira, M. A. Vieira, Geolocalization and
navigation by visible light communication to address
automated logistics control, Opt. Eng., 61, 1, 2022,
016104.
[12]. P. Louro, J. Costa, M. Vieira, M. A. Vieira,
Y. Vygranenko, Use of VLC for indoors navigation
with RGB LEDs and a-SiC:H photodetector, Proc. of
SPIE, Vol. 10231, Optical Sensors, 2017, 102310F-2.
[13]. M. Vieira, M. A. Vieira, P. Louro and P. Vieira,
Geolocation and Wayfinding Services Using Visible
Light Communication, Sensors & Transducers,
Vol. 245, Issue 6, October 2020, pp. 49-56.
[14]. M. Vieira, M. Fernandes, P. Louro, A. Fantoni,
Y. Vygranenko, G. Lavareda, C. Nunes de Carvalho,
Image and color sensitive detector based on double
p-i-n/p-i-n a-SiC:H photodiode, Mat. Res. Soc. Symp.
Proc., Vol. 862, 2005, A13.48.
[15]. M. A Vieira, M. Vieira, P. Louro, L. Mateus, P. Vieira,
Indoor positioning system using a WDM device based
on a-SiC:H technology, Journal of Luminescence, 191,
2017, pp. 135-138.
[16]. Faiza Gul, Syed Sahal Nazli Alhady, Wan Rahiman,
A review of controller approach for autonomous
guided vehicle system, Indonesian Journal of
Electrical Engineering and Computer Science, Vol.
20, No. 1, October 2020, pp. 552-562.
[17]. M. A. Vieira, M. Vieira, P. Louro, P. Vieira, Redesign
of the trajectory within a complex intersection for
visible light communication ready connected cars, Opt.
Eng. 59, 39, 2020, 097104.
[18]. M. A. Vieira, M. Vieira, P. Vieira, P. Louro, Vehicle-
to-vehicle and infrastructure-to-vehicle
communication in the visible range, Sensors &
Transducers 218, 12, 2017, pp. 40-48.
[19]. M. A. Vieira, M. Vieira, P. Vieira, P. Louro, Optical
signal processing for a smart vehicle lighting system
using a-SiCH technology, Optical Sensors, 10231,
2017, 102311L.
(029)
Wind Estimation via UAV Parameters and Artificial Intelligence
related to Ultrasonic Anemometer Measurements
Michael Kurz 1, Federico Mothes 2, Markus Kreuzer 3 and Alexander Knoll 4
1 2 3 Munich University of Applied Sciences, Lothstr. 64, 80335 Munich, Germany
Tel.: +4915781711447
E-mail: Michael.kurz@hm.edu
Summary: With rapid advances in the unmanned aerial vehicle (UAV) field and their ever-growing popularity, especially in a wide range of civilian and commercial applications, UAV operations in urban areas become inevitable. Still, low-level flight missions in urban landscapes pose a significant challenge, not least because of turbulence. Therefore, it is necessary for UAVs to receive information about present or upcoming turbulence in close-up range early enough to reduce the risk of critical situations such as a crash. Currently, extensive methods are established to measure wind conditions. They range from flow sensors to ultrasonic anemometers, which mark the state of the art. These wind sensors are either fixed to one location, which leads to spatially limited measurement coverage, or, when mounted on a flying device, influence the flight dynamics due to their bulky appearance. In this paper, a novel approach for UAVs to estimate the 3D/2D wind vector via artificial neural networks from an ultrasonic anemometer and simulated as well as real UAV quadcopter parameters is presented. The research question to be addressed is whether the artificial intelligence approach can complement or even replace costly wind measurement sensors.
Keywords: UAV, Wind speed and orientation estimation, Ultrasonic anemometer, Artificial neural network, Urban air
environment, Quadcopter.
1. Introduction
A new era of transportation is on the horizon, and UAVs are therefore increasingly likely to be integrated into the civil sector as a transportation system. So far, there is no uniform legislation, and integration concepts for the infrastructure are still under development [1]. Until then, major requirements must be fulfilled regarding social factors, system factors, aircraft factors and safety-related factors [2]. The safety factor in particular plays a major role when it comes to public acceptance of UAVs as a transportation system.
Due to their structure and flight dynamics, UAVs are more susceptible to external disturbances than other aircraft. Therefore, strategies and intelligent algorithms must be developed to fulfill all safety-related requirements and increase reliability.
One of those external disturbances is the wind,
which greatly affects UAVs compared to other aircraft
and is characterized by sudden change in speed and
direction [3]. Therefore, a key challenge in the use of
UAVs as transportation systems is to measure and
estimate the surrounding wind in urban areas, to
develop algorithms and flight-plan strategies to keep
the UAV stabilized during the flight and avoid
“turbulent” areas to prevent major catastrophes [4].
Estimating such wind disturbances and using this data
to develop flight strategies or inform flight controls and
other traffic partners can improve safety in general.
To verify this approach and to answer the question,
if artificial neural networks can complement or
supplant wind sensors, an additional UAV mounted
with an external ultrasonic anemometer serves as
benchmark system. Also, measurement data generated
from the ultrasonic anemometer is used as training data
for the neural network.
This paper proceeds as follows. We first describe the external ultrasonic anemometer (Section 2). Section 3 provides preliminary information about the UAV mathematical model. Section 4 presents the wind vector estimation method using artificial neural networks, Section 5 the technical description, and Section 6 the conclusions.
2. Ultrasonic Anemometer
Wind estimation with UAVs is not a new research
field to the science community and has been conducted
with many different methods/approaches. Generally,
those approaches can be classified in two categories:
indirect methods, that measure the response of
the UAV to external disturbance and determine
the required characteristics of the wind directly
from internal control sensors,
or direct methods, that use external wind sensors
deployed on the UAV.
Wind estimation with the ultrasonic anemometer belongs to the direct approach. Compared to conventional anemometers, the ultrasonic anemometer is additionally capable of measuring the wind speed in three dimensions relative to the UAV's heading [5].
In this paper, an ultrasonic anemometer is deployed
on a UAV as seen in Fig. 1.
Such setups do not replicate real-life UAV scenarios; therefore, this paper presents a realistic 3D/2D urban-area wind vector estimation technique
based on the wind-induced spikes in the UAV's flight dynamic parameters, using neural networks.
Fig. 1. Quadcopter with a deployed
Ultrasonic-Anemometer [5]
3. UAV Mathematical Model
Considering the quadcopter to be a symmetrical 6-DOF rigid body, the dynamics (translation and rotation) can be derived by the Newton-Euler formalism. For describing the dynamics, an Earth-fixed frame I and a body-fixed frame B located in the center of gravity (cog) of the UAV are specified (see Fig. 2) [6]. Using the chain rule for a rotating body frame, the laws of motion become:
Fig. 2. Quadrocopter model.
Translation (Newton's second law):

$m I_{3\times 3}\,\dot{V}_B + \omega_B \times (m V_B) = F_B$,  (1)

Rotation (Euler's rotation equation):

$I\,\dot{\omega}_B + \omega_B \times (I \omega_B) = \tau_B$,  (2)

where $I_{3\times 3}$ is the $3\times 3$ identity matrix, $m \in \mathbb{R}$ is the total mass of the UAV, $\dot{V}_B = [\dot{u}, \dot{v}, \dot{w}]^T$ is the linear acceleration, $\dot{\omega}_B = [\dot{p}, \dot{q}, \dot{r}]^T$ is the angular acceleration and $I \in \mathbb{R}^{3\times 3}$ is the moment of inertia. $F_B \in \mathbb{R}^3$ are the propulsion forces and $\tau_B \in \mathbb{R}^3$ the propulsion moments acting on the UAV, all with respect to the body-fixed frame B.
Equations of Motion

As a result, the complete dynamic model governing the UAV's motion, which can therefore be used as input for the neural networks, is as follows:
$\begin{bmatrix}\dot{u}\\ \dot{v}\\ \dot{w}\end{bmatrix} =
\begin{bmatrix} r v - q w\\ p w - r u\\ q u - p v\end{bmatrix} +
\begin{bmatrix} -g\sin\theta\\ g\cos\theta\sin\phi\\ g\cos\theta\cos\phi\end{bmatrix} +
\frac{1}{m}\begin{bmatrix}0\\ 0\\ F\end{bmatrix} +
\frac{1}{m}\begin{bmatrix}F_{x}\\ F_{y}\\ F_{z}\end{bmatrix}$  (3)

$\begin{bmatrix}\dot{p}\\ \dot{q}\\ \dot{r}\end{bmatrix} =
\begin{bmatrix}\frac{I_{yy}-I_{zz}}{I_{xx}}\,q r\\ \frac{I_{zz}-I_{xx}}{I_{yy}}\,p r\\ \frac{I_{xx}-I_{yy}}{I_{zz}}\,p q\end{bmatrix} +
\begin{bmatrix}\tau_{x}/I_{xx}\\ \tau_{y}/I_{yy}\\ \tau_{z}/I_{zz}\end{bmatrix} -
J\begin{bmatrix}\dot{\theta}\\ \dot{\phi}\\ 0\end{bmatrix}$  (4)
4. Turbulence Vector Estimation Using
Neural Networks
In general, an artificial neural network is a computing system modeled on the brain. It consists of many neurons, which receive, process, and transmit signals to each other [7].
Regarding this paper, wind data generated by the ultrasonic anemometer (benchmark system), in addition to the UAV's flight dynamic and control parameters obtained from the simulated and real UAV model, shall serve as network input for the NN training. The target is to estimate the wind speed and direction acting on the UAV's body in order to complement or eliminate additional wind sensing devices.
5. Technical Description
Wind speed and directions measurements in the
field with ultrasonic anemometers are conducted
according to typical coverage path planning for UAVs
in urban environments [8].
Therefore, the UAV follows dedicated waypoints in a fixed manner to measure the wind profile, characterized as turbulence. The waypoints are always located on the lee side (downwind side) of buildings with a height of at least 15 m.
Regarding the estimation process with the LSTM,
the ultrasonic anemometers shall serve as benchmark
but also as reference system that provides the wind data
for training and estimation process of the LSTM
network.
The LSTM itself will be trained at first on the measured wind data in at least one dimension, at a frequency of 35 Hz, but also on UAV parameters such as position drift, tilt angles and rates, rotational speed of the rotors, as well as velocity and acceleration of the drone itself. The measured wind data and the simulation frequency are adapted to each other. The hyperparameters of the LSTM, such as the activation function, number of layers, batch size or input parameters, are constantly adjusted to increase the estimation accuracy compared to the ultrasonic anemometer.
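An illustrative Keras sketch of such a regressor (window length, feature count and layer sizes are assumptions, not the authors' final architecture):

```python
import tensorflow as tf

WINDOW, N_FEATURES = 70, 12   # e.g. 2 s of 35 Hz data and 12 UAV parameters (assumed)

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(WINDOW, N_FEATURES)),
    tf.keras.layers.LSTM(64),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(3),              # estimated wind vector (vx, vy, vz)
])
model.compile(optimizer="adam", loss="mse", metrics=["mae"])
# model.fit(x_train, y_train, validation_split=0.2, epochs=100, batch_size=256)
```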
6. Conclusion
By means of reproducible wind estimation with artificial intelligence, validated against ultrasonic anemometers and state-of-the-art wind estimation sensors in general, a basis for AI wind estimation via LSTM is created. In comparison with measurement data obtained from the ultrasonic anemometer, a reliable wind estimation, especially in urban areas, can be achieved. In the case of urban air traffic in particular, real-time wind estimation without bulky and costly sensor devices will increase the affordability of UAVs used as air taxis for providers and users, as well as safety factors related to control and accidents.
The main contributions of this work are:
- increased safety-related factors regarding the flight dynamics of the UAV by omitting bulky sensors;
- circumvention of the need for specialized tools;
- provision of a cost-effective alternative for sampling wind speed and direction.
The ultrasonic anemometer measurement system
mounted on a UAV as well as the developed LSTM-
algorithm results will be presented.
References
[1]. A. Bauranov and J. Rakas, Designing airspace for urban
air mobility: A review of concepts and approaches,
Progress in Aerospace Sciences, Vol. 125, 2021,
p. 100726.
[2]. Q. Long, J. Ma, F. Jiang, and C. J. Webster, Demand
analysis in urban air mobility: A literature review, J Air
Transp. Manag, Vol. 112, 2023, p. 102436.
[3]. B. H. Wang, D. B. Wang, Z. A. Ali, B. Ting and
H. Wang, An overview of various kinds of wind effects
on unmanned aerial vehicle, Measurement and Control,
Vol. 52, No. 7–8, 2019, pp. 731–739.
[4]. P. Abichandani, D. Lobo, G. Ford, D. Bucci, and
M. Kam, Wind measurement and simulation techniques
in multi-rotor small unmanned aerial vehicles, IEEE
Access, Vol. 8, 2020, pp. 54910–54927.
[5]. W. Thielicke, W. Hübert, U. Müller, M. Eggert, and
P. Wilhelm, Towards accurate and practical drone-
based wind measurements with an ultrasonic
anemometer, Atmos Meas Tech, Vol. 14, No. 2, 2021,
pp. 1303–1318.
[6]. G. P. Falconí, J. Angelov, and F. Holzapfel, Hexacopter
outdoor flight test results using adaptive control
allocation subject to an unknown complete loss of one
propeller, in Proceedings of the 3rd IEEE Conference
on Control and Fault-Tolerant Systems (SysTol’ 2016),
2016, pp. 373–380.
[7]. E. Akgün and M. Demir, Modeling course
achievements of elementary education teacher
candidates with artificial neural networks, International
Journal of Assessment Tools in Education, Vol. 5,
No. 3, 2018, pp. 491–509.
[8]. A. Majeed, S. O. Hwang, A Multi-Objective Coverage
Path Planning Algorithm for UAVs to Cover Spatially
Distributed Regions in Urban Environments,
Aerospace, Vol. 8, No. 11, 2021, pp. 343-377.
(030)
Digital Twin-based Models of Human Activities, Localization, and Energy
Consumption of WBAN Network using IMU Sensors
Noureddine Boujnah 1, Rafika Brahmi 2 and Ridha Ejbali 3
1 engCORE center, Carlow Institute, SETU, Ireland
2,3 Research Team in Intelligent Machine (RTIM), University of Gabes, Tunisia
E-mail: noureddine.boujnah@setu.ie, rafikabrahmi47@gmail.com, ridha_ejbali@ieee.org
Summary: Wireless Body Area Network (WBAN) is a network of sensors collecting data from the human body such as
temperature, blood pressure, heart and brain activity, and many other human physiological variables. Inertial Motion Unit
(IMU) sensors collect acceleration and angular speed data to predict human motion and activities and assist WBAN energy
consumption and positioning. This paper provides the theoretical framework of a WBAN network equipped with IMU sensors
to be integrated into a digital twin. A digital twin is a digital replica of the human and his environment consisting of models
and data and responsible for decision-making. This paper highlights the IMU data's possible usage in the digital twin
framework. Due to the limited capability of the body network controller (BNC), a digital twin can monitor and assist WBAN
networks.
The functionalities of the digital twin developed in this work are human activity prediction, positioning, and a power consumption efficiency model for the WBAN network. These functionalities use the available network and sensor parameters and initial position conditions to build the theoretical model, propose a strategy for power consumption, and recognize human activity using the data-based model. The outputs of this paper are theoretical models for positioning and energy efficiency and the elaboration of a data-based HAR model.
Keywords: Inertial motion unit, Wireless body area network, Digital twins, positioning, Signal to noise ratio, Energy
efficiency.
1. Introduction
Wireless Body Area Network (WBAN) is a small-scale wireless network attached to the human body and dedicated to healthcare monitoring and wellbeing applications. It consists of tiny sensor nodes
attached to the body, measuring physiological signals
such as blood pressure, glucose, temperature, cerebral
signal, and motion [1]. The collected data are processed, encapsulated into small packets and sent to the body network controller (BNC), which is placed in a central position on the body and performs additional tasks such as data compression, aggregation, and transmission. This paper highlights the role of the IMU (Inertial Motion Unit) sensor in the WBAN network and the modeling of the digital twin.
The digital twin is an intelligent copy of the cyber-
physical world with advanced capabilities such as
prediction and decision-making [2].
This paper proposes three main functionalities of a
healthcare digital twin using kinetic data collected by
IMU sensors for human action recognition,
positioning, and energy consumption reduction. We present the training and testing results of HAR classification using LSTM (Long Short-Term Memory) networks, and provide the theoretical framework for sensor positioning and energy consumption in the WBAN.
Fig. 1 describes our approach in this paper.
Fig. 1 The digital twin receives IMU data containing accelerations and angular speed. The digital twin comprises data-driven
activity recognition models and a theory-based prediction model.
The digital twin will assist the WBAN network in
modeling human activities using collected data and
simulating the nodes' position and energy
consumption.
The remainder of this paper is organized as follows: we refer to some previous works in the second section; in the third section, we describe the position model using accelerometer and gyroscope data; in section four, we define the metric used to assess energy efficiency and propose a path selection algorithm for transmitted data; section five is dedicated to modeling human action using realistic data collected with IMUs; finally, section six concludes our paper.
2. Related Works
Several research works have been carried out to highlight the role of data collected by sensors in wireless networks [3]. Inertial sensors (Inertial Measurement Units) are used jointly with a camera to describe accurate 3D human motion in [4]; in [5] the authors model the body posture in conjunction with the transmission mode using a stochastic approach to reduce energy consumption under a link capacity constraint. Recent work in [6] showcases the possibilities of applying a digital twin to indoor healthcare using WBAN and associated sensors and highlights possible deployments and improvements.
Localization using wireless technologies has been used to determine transmission nodes, using theory-based and data-driven techniques to improve accuracy. However, it requires convenient theoretical models and a large amount of data samples; related works on localization can be found in [7] and [8], where the authors propose jointly theoretical and machine learning approaches to find the appropriate model for macro localization.
3. Sensor Position Model
In this paper, we assume each node is equipped with an IMU sensor, and collected data can be stored temporarily and transmitted opportunistically. The accelerometer generates 3x1 time-series vectors $a_l(t)$, measured in m/s², and the gyroscope output is a 3x1 vector describing the angular speed of the sensor (rad/s) around three axes: two axes (Gx) and (Gy) defining the plane parallel to the ground and one vertical axis (Gz), where G is a virtual point on the human body with the simplest motion. The current position of node l, with respect to a fixed reference frame, can be approximated using the relationship between the sensor's position, its speed, its acceleration $a_l$ and the rotational parameters retrieved from the IMU:
$\vec{X}_l(t_n) = \vec{X}_l(t_{n-1}) + h\,\vec{V}_l(t_{n-1}) + \frac{h^2}{2}\,R(\theta(t_{n-1}))\,R(\varphi(t_{n-1}))\,R(\psi(t_{n-1}))\,\vec{a}_l(t_{n-1})$,  (1)

where $t_n$ is the discrete time, $h = t_n - t_{n-1}$, and $\vec{V}_l$ is the speed of sensor l, approximated as

$\vec{V}_l(t_n) = \vec{V}_l(t_{n-1}) + h\,R(\theta(t_{n-1}))\,R(\varphi(t_{n-1}))\,R(\psi(t_{n-1}))\,\vec{a}_l(t_{n-1})$.

The three rotation matrices $R(\theta)$, $R(\varphi)$ and $R(\psi)$ are given by

$R(\theta) = \begin{pmatrix}\cos\theta & -\sin\theta & 0\\ \sin\theta & \cos\theta & 0\\ 0 & 0 & 1\end{pmatrix}$,
$R(\varphi) = \begin{pmatrix}1 & 0 & 0\\ 0 & \cos\varphi & -\sin\varphi\\ 0 & \sin\varphi & \cos\varphi\end{pmatrix}$,
$R(\psi) = \begin{pmatrix}\cos\psi & 0 & \sin\psi\\ 0 & 1 & 0\\ -\sin\psi & 0 & \cos\psi\end{pmatrix}$.

$\theta$, $\varphi$ and $\psi$ are the rotational angles (Euler angles) around (Gz), (Gx) and (Gy), respectively, and are linked to the instantaneous gyroscope output as

$\theta_i(t_n) = \theta_i(t_{n-1}) + h\,\dot{\theta}_i(t_n)$, $\quad\varphi_i(t_n) = \varphi_i(t_{n-1}) + h\,\dot{\varphi}_i(t_n)$, $\quad\psi_i(t_n) = \psi_i(t_{n-1}) + h\,\dot{\psi}_i(t_n)$,  (2)

where $\dot{\theta}$, $\dot{\varphi}$ and $\dot{\psi}$ are the three angular speeds measured by the gyroscope.
Moreover, we need to add further assumptions regarding the rotation order, because $R(\theta)$, $R(\varphi)$ and $R(\psi)$ do not commute with respect to the matrix product; to determine the right order, additional conditions are needed. The acceleration $\vec{a}_l(t_{n-1})$ in Eq. (1) is expressed in the IMU frame at time $t_{n-1}$, and the acceleration and speed can be calculated iteratively using the angular speeds and the initial conditions. The mathematical model at the digital twin can be further improved and simulated taking into consideration that the product $R(\theta)R(\varphi)R(\psi)$ is not commutative.
We can express the instantaneous distance between a sensor and the BNC as a function of transmitter and receiver antenna parameters such as transmitted power, received power and antenna gains.
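A minimal NumPy sketch of this dead-reckoning update, assuming the R(theta) R(phi) R(psi) rotation order, a second-order (h^2/2) position term and known initial conditions:

```python
import numpy as np

def rz(t): c, s = np.cos(t), np.sin(t); return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])
def rx(t): c, s = np.cos(t), np.sin(t); return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])
def ry(t): c, s = np.cos(t), np.sin(t); return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def imu_step(x, v, angles, a_body, omega, h):
    """One integration step; angles = (theta, phi, psi), omega = gyroscope rates."""
    angles = angles + h * omega                          # Eq. (2)
    R = rz(angles[0]) @ rx(angles[1]) @ ry(angles[2])    # assumed rotation order
    a_ref = R @ a_body                                   # acceleration in the fixed frame
    x_new = x + h * v + 0.5 * h**2 * a_ref               # Eq. (1)
    v_new = v + h * a_ref
    return x_new, v_new, angles
```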
4. Energy Reduction
4.1 Determination of Communication Path
In this section we define the criteria used in this paper to establish a link between a node and the BNC with maximized energy efficiency. The algorithm used for instantaneous path determination is distance based and takes as input the distance matrix $D(t_n) = (d_{kl})$, whose elements are given by

$d_{kl}(t_n) = \lVert \vec{X}_k(t_n) - \vec{X}_l(t_n) \rVert$.  (3)
The path selection is performed by minimizing and updating the following cost function:

$g(X) = \lVert X - X_{BNC}(t_n) \rVert - \lVert X - X_{T}(t_n) \rVert$,  (4)

$g(X)$ is iteratively minimized when the selected node is far from the transmitting one and near the BNC, under the constraint $\lVert X_{T}(t_n) - X \rVert \le r$, where $X_i$ is the position of the i-th selected node, $X_{T}(t_n)$ is the position of the transmitting node and $X_{BNC}(t_n)$ is the position of the BNC. r is the node's transmission range; it depends on the WBAN technology, the propagation environment, the sensitivity of the receiver, and the technical requirements of the network. If $\lVert X_{T}(t_n) - X_{BNC}(t_n) \rVert \le r$, there is no need for relays.
The following algorithm, to be implemented in the digital twin, is used for path selection:
Algorithm (communication path selection):
0- Path initialization.
1- Find $X_i = \arg\min_{X} g(X)$ under the constraint $\lVert X_{T}(t_n) - X \rVert \le r$.
2- $i \leftarrow i + 1$.
3- path = [path, $Node_i$].
4- If $Node_i \neq BNC$, find the next relay node; else end.
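A hedged Python sketch of this greedy selection, interpreting the range constraint as relative to the most recently selected node and taking node 0 as the BNC (both are assumptions):

```python
import numpy as np

def select_path(positions: np.ndarray, tx: int, bnc: int = 0, r: float = 0.5):
    """positions: (n_nodes, 3) array of node positions; returns a list of node indices or None."""
    path, current = [tx], tx
    while np.linalg.norm(positions[current] - positions[bnc]) > r:
        candidates = [
            j for j in range(len(positions))
            if j not in path and np.linalg.norm(positions[current] - positions[j]) <= r
        ]
        if not candidates:                    # no relay within range: give up
            return None

        def g(j):
            # g(X) = ||X - X_BNC|| - ||X - X_current||, minimized at each hop
            return (np.linalg.norm(positions[j] - positions[bnc])
                    - np.linalg.norm(positions[j] - positions[current]))

        current = min(candidates, key=g)
        path.append(current)
    if current != bnc:
        path.append(bnc)
    return path
```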
The path selection operation will be performed at the digital twin level and the decision communicated to the BNC: positions and distances are determined using the information collected from the IMUs, the digital twin transmits the selected path to the BNC, and finally the BNC informs all WBAN nodes about the decision. A discussion of the communication procedures and message exchanges between the DT and the BNC is planned for future work.
4.2. Energy Efficiency
Studying energy consumption in a WBAN can help increase battery life. Basic models for energy consumption and evaluation metrics were proposed in [9], which highlights all possible sources of energy dissipation in a WBAN. In this paper we correlate the WBAN energy consumption with the digital twin models to propose fast actions and decisions that reduce energy usage. Each node of the WBAN can be located within a dynamic range of the BNC; the star topology approach, where all nodes transmit to the BNC, has several drawbacks including high power usage, interference, and packet loss. Node relaying will be used to reduce power usage and improve the quality of service. The communication path consists of the
transmitting node, relays and the BNC node. The role
of the relay node is to receive and forward amplified
signal to the next node in the path. In this section, we
aim at maximizing the energy efficiency at the digital
twin given by the formula:
$ee = \frac{SNR(path)}{E(path)}$,  (5)

where $SNR(path)$ is the total signal-to-noise ratio between the transmitter and the destination node (the BNC node); it can be formulated using the SNR cascade rule as

$SNR(path) = \left( \sum_{i \in path} \frac{1}{SNR_i} \right)^{-1}$,  (6)

and $E(path)$ is the total energy, dissipated by the nodes, per communication path, given by

$E(path) = N E_t + E_s + E_{BNC}$,  (7)

with

$E_t = E_b \left( 1 + (\alpha \mu)\, d^2 \right)$,  (8)
where N is the number of nodes per link (one node transmitting and N-1 relaying), W is the system bandwidth, K is the Boltzmann constant, T the temperature in Kelvin, $\rho$ stands for the link data rate expressed in bit/s, $\alpha$ is the free-space attenuation factor determined by the operating central frequency f and the speed of light in a vacuum c, $\mu$ is the amplification ratio of the node's power amplifier (PA), $P_t$ is the transmitted power at the antenna, and $E_b$ is the energy per transmitted bit.
Distances between nodes are determined using the position estimation. The optimization variables are $P_t$ and $\mu$. $E_s$ is the energy used for sensing and data processing per node, $E_{BNC}$ the energy consumed at the BNC, and $E_t$ the total energy for signal transmission and amplification; it depends on the sampling frequency and the amount of data to be processed. Power amplifier gains, frequencies and transmitted power information can be found in [10].
5. Activity Recognition
5.1 Methodology
The digital twin encompasses both theoretical
models for localization, path selection and energy
consumption and data driven techniques for human
action recognition based on IMU data collected.
In our study, we address the problem of human activity recognition (HAR) using deep neural network models based on long short-term memory (LSTM) layers. The time-series data collected by the IMU sensors are pre-processed, segmented into windows sampled at regular intervals, and used to train and test LSTM models. We explored different LSTM architectures, with a variable number of layers, to identify the most accurate and robust model for human activity recognition. The selected LSTM architecture will be implemented on the edge for real-time activity recognition and then integrated into a digital twinning framework. Fig. 2 illustrates the various steps of the proposed method.
Fig. 2. The Human Activity Recognition (HAR) Sensor Framework consists of four steps: (a) Data collection
using smartphones, (b) Preprocessing the raw data, (c) deep learning models for HAR (d) Evaluate each model
using K-fold cross validation.
5.2. Data Collection and Preprocessing:
The dataset was collected from a group of 4 people,
with heights ranging from 1.54 to 1.78 meters. The
IMU data was collected using smartphones fixed to the
waist of each participant. Each person repeatedly performs one of the following activities: standing, walking, sitting and falling. The minimum number of repetitions is 7 for each activity. Each activity was performed for a duration of 6 seconds, and data were recorded at a sampling rate of 100 Hz. The data collection process was simplified using the Physics Toolbox smartphone application, a flexible tool for recording sensor data. All collected data were then transferred to a PC, where they were stored as a CSV file for further pre-processing and analysis. The number of samples of each activity is presented in Fig. 3. After the collection of the IMU data, a variety of pre-processing techniques were applied to the dataset before feeding it into the deep learning models. Preprocessing includes the following steps [8] (a sketch follows the list):
a- Applying 5-point moving average filters for all
samples to smooth and de-noise the signal.
b- Global normalization using the maximum and
minimum values of the recordings to maintain the
magnitude information of each activity.
c- Increase of the number of epochs by using sliding
window overlap. In this work, a sliding window
size of 120 with 50% overlap was used.
d- Data segmentation: the dataset used is split into
70 % for training and 30% for model testing.
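A sketch of steps (a)-(d); the filter length, window size, overlap and split follow the text, while the array shapes and the way windows are labeled are assumptions:

```python
import numpy as np

def preprocess(signal: np.ndarray, labels: np.ndarray, win: int = 120, overlap: float = 0.5):
    """signal: (n_samples, n_channels) IMU array, labels: (n_samples,) activity ids."""
    # (a) 5-point moving average on each channel to smooth and de-noise the signal
    kernel = np.ones(5) / 5.0
    smoothed = np.apply_along_axis(lambda c: np.convolve(c, kernel, mode="same"), 0, signal)
    # (b) global min-max normalization, keeping the relative magnitude of each activity
    norm = (smoothed - smoothed.min()) / (smoothed.max() - smoothed.min())
    # (c) sliding windows of 120 samples with 50 % overlap
    step = int(win * (1 - overlap))
    starts = range(0, len(norm) - win + 1, step)
    X = np.stack([norm[i:i + win] for i in starts])
    y = np.array([labels[i + win - 1] for i in starts])  # label of the window's last sample (assumed)
    # (d) 70 % / 30 % train-test split
    n_train = int(0.7 * len(X))
    return (X[:n_train], y[:n_train]), (X[n_train:], y[n_train:])
```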
Fig. 3. Label distribution.
5.3. Deep Learning Models for HAR
In this work, we used and trained four deep
learning models with LSTM architecture, as presented
in Fig. 4. Each model consists of LSTM cells, a fully
connected layer, and a decision layer offering
4 potential classes.
The LSTM models are trained with a learning rate of 0.0009, a batch size of 1024 and the Adam optimizer. We use Python 3.9 with the TensorFlow and Keras libraries to build the models.
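An illustrative Keras definition of one such architecture (the layer sizes and input shape are assumptions); the learning rate, batch size and optimizer follow the text:

```python
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(120, 6)),           # 120-sample windows, 6 IMU channels (assumed)
    tf.keras.layers.LSTM(64),                        # LSTM cells
    tf.keras.layers.Dense(64, activation="relu"),    # fully connected layer
    tf.keras.layers.Dense(4, activation="softmax"),  # standing, walking, sitting, falling
])
model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=0.0009),
    loss="sparse_categorical_crossentropy",          # integer-encoded activity labels
    metrics=["accuracy"],
)
# model.fit(X_train, y_train, epochs=200, batch_size=1024, validation_data=(X_test, y_test))
```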
The following metrics are used to evaluate the
classification performance of various LSTM
architectures:
- Accuracy = (TP + TN) / (TP + TN + FP + FN)
- Precision = TP / (TP + FP)
- Recall = TP / (TP + FN)
- F1-score = 2 · (Precision · Recall) / (Precision + Recall)
- The confusion matrix (CM): the CM plots the true labels horizontally and the predicted labels vertically.
TP: True Positive, TN: True Negative, FP: False Positive, FN: False Negative.
5.4. Results
The results of the experiments are discussed in this
section. The LSTM networks trained on our dataset
were evaluated using the cross-validation protocol [8].
Fig. 5 illustrates a graph comparing the accuracy
before and after applying the cross-validation to
predict human activity. Details on accuracy, precision, recall, and F1-score for the various LSTM models are given in Table 1 and Table 2.
Fig. 4. Deep Learning model Structures - Model 1: LSTM-1L, Model 2: LSTM-2L,
Model 3: LSTM-2L2, and Model 4: LSTM-3L.
Fig. 5 Accuracy comparison for LSTM models with and without cross-validation for activity prediction.
Table 1. Performance metrics of LSTM networks used in the experiment by 2-fold cross validation protocol.

Architecture | Avg. Accuracy (%) | Avg. Precision (%) | Avg. Recall (%) | Avg. F1-score (%)
Model 1      | 91.90             | 92.12              | 91.87           | 91.85
Model 2      | 93.40             | 93.47              | 92.87           | 93.02
Model 3      | 94.00             | 93.80              | 93.79           | 93.73
Model 4      | 92.80             | 92.78              | 92.34           | 92.53
Table 2. Performance metrics of LSTM networks used in the experiment by 10-fold cross validation protocol.

Architecture | Avg. Accuracy (%) | Avg. Precision (%) | Avg. Recall (%) | Avg. F1-score (%)
Model 1      | 94.65             | 94.49              | 94.52           | 94.75
Model 2      | 95.50             | 95.23              | 95.16           | 95.19
Model 3      | 96.10             | 96.08              | 96.04           | 96.03
Model 4      | 94.15             | 93.94              | 93.89           | 93.87
According to the results in Table 1, Table 2 and Fig. 5, the proposed Model 3 achieves an accuracy of 97.15 % without cross-validation and accuracies of 94.00 % and 96.10 % using 2-fold and 10-fold cross-validation, respectively.
The CMs in Fig. 6 indicate that the model achieves an accuracy of 87.86 % for the class "falling" with 2-fold cross-validation and 91.43 % with 10-fold cross-validation. However, misclassification still exists, particularly for the "sitting" and "falling" classes, and requires more significant data.
Fig. 7 plots the accuracy as a function of the number of epochs for LSTM Model 3 for training and test. The accuracy shows stable behavior after epoch 120, with a value higher than 90 %.
Fig. 6. Normalized Confusion Matrix of model 3 for 2-Fold Cross Validation (a) and 10–fold cross validation (b).
Fig. 7. Training and test of LSTM models 3 for 4 classes HAR and associated accuracy.
6. Conclusions
In this short paper, we propose theory-based and data-based models within the framework of a digital twin and a WBAN. First, we present the theoretical models for the positioning of WBAN sensors and a technique to assess energy consumption using networking parameters and the data collected from IMU sensors. Moreover, a data-driven model is proposed using realistic IMU data, and LSTM models were used for Human Activity Recognition (HAR).
This work will be further improved in the future to
include simulators for human action, data security and
integration of next generation 6G techniques.
(031)
Sensing the Mechanical Properties of AlN Thin Films
using Micromechanical Membranes
Aditya 1, T. Sommer 1,2, M. Althammer 3,1 and M. Poot 1,2,4
1 Department of Physics, TUM School of Natural Science, TU Munich, 85748 Garching, Germany
2 Munich Center for Quantum Science and Technology (MCQST), 80799 Munich, Germany
3 Walther-Meißner-Institut, Bayerische Akademie der Wissenschaften, 85748 Garching, Germany
4 Institute for Advanced Study, Technical University of Munich, 85748 Garching, Germany
E-mail: menno.poot@tum.de
Summary: The current interest in quantum technologies calls for the development of novel materials and hybrid structures.
Understanding the mechanical properties of a material can be a challenge, especially at the nanoscale. We use the
eigenfrequencies of in-house fabricated silicon nitride membranes to extract the stress in a film that is deposited on top. The
high stress results in sharp resonances that can be located precisely so that the mechanical properties of the top layer can be
determined accurately. We highlight this approach using aluminum nitride – an important material for on-chip quantum optics
and optomechanics – grown onto the micromechanical membranes. The detection is done optomechanically by exciting the
modes using piezo actuation and detecting the vibrations in the reflected laser light. For this, different lasers are at our
disposal. The resonances of a wide variety of highly stressed membranes are measured. The frequencies follow the expected
inverse length dependence of a stressed membrane and depend on the thickness of the top layer. Using finite-element
simulations, the stress in the bilayer is determined. A cross-over between compressive and tensile stress is observed as a
function of the AlN thickness.
Keywords: Optomechanics, Film stress, Aluminum nitride (AlN), MEMS, Silicon nitride (SiN), Bilayer membrane.
1. Introduction
Micromechanical membranes provide an important
platform for a wide variety of optomechanical
experiments. This can range from scanning force
microscopy [1], to study cavity optomechanical
backaction [2], the observation of hybridization of
degenerate eigenmodes [3], topological energy transfer
[4], all the way to radiative heat transfer mediated via
Casimir fluctuations [5]. Another important aspect of
these very thin membranes is that they can be used to
sense materials that are placed on top [6]. This
provides an interesting route to measuring the
mechanical properties of a variety of materials that can
be deposited or grown on top of existing high-stress
silicon nitride (SiN) membranes.
We make membranes with and without the second
layer on top and measure their eigenmodes. The
dimensions of our membranes can be varied by design
and typical sizes are 10 s to 100 s of um in width and
length. The thickness is determined by the SiN film
thickness and is typically 100s of nm [3]. We illustrate
that we can determine the stress in an aluminum nitride
(AlN) layer grown on top of a suspended SiN
membrane. AlN is an important material for the
emerging field of quantum technologies, especially
when photonic integrated circuits with nonlinear optics
or optomechanics are involved [7-9].
2. Samples and Setup
The membranes are made on chips that consist of a
330 nm thick SiN layer on top of a 3.3 µm thick silicon
oxide (SiOx) layer. Underneath these two layers is the
silicon carrier. The stoichiometric SiN is grown
commercially using LPCVD and has a tensile stress of
about 1100 MPa. Release holes are defined using
electron-beam or optical lithography followed by
reactive ion etching. The number of holes and their
distance set the size of the membrane. A layout of a
typical chip can be seen in Fig. 1. Holes in the SiN
layer expose the underlying oxide and by immersing
the chip into buffered hydrofluoric acid (BHF), the
exposed SiOx is etched isotropically, resulting in
circularly expanding drums originating at the etch
holes. The etching is continued until these “drums” are
no longer supported by “pillars”, thus forming a
suspended SiN membrane. The finite selectivity of
BHF between SiN and SiOx results in slightly tapered
profiles of the SiN. This is also visible in optical
reflection maps.
Fig. 1. Layout for a 6 × 10 mm chip with 120 membranes
of different sizes indicated in blue.
9
th
International Conference on Sensors and Electronic Instrumentation Advances (SEIA' 2023),
20-22 September 2023, Funchal (Madeira Island), Portugal
91
Next, AlN is grown on top using DC magnetron
sputtering as detailed in Ref. [10]. Using X-ray
diffraction, it is confirmed that the film is of good
crystalline quality and has a c-axis orientation. It is
known from literature that the growth process and the
final thickness influence the stress in the AlN [11]. It
is thus important to have a quick and reliable method
to determine the stress in the deposited film and in the
resulting bi-layer membrane.
The chips are glued onto a piezo element for
actuation and mounted in a vacuum chamber as shown
in Fig. 2.
Fig. 2. Schematic of the measurement setup. Two different
lasers can be used for measuring the driven response of the
bilayer membranes that are mounted on a piezo actuator and
placed in the vacuum chamber. For clarity, the green laser
light path is not shown in full. Adapted from Ref. [3].
In short, a red HeNe laser (633 nm, Melles Griot)
is focused onto one of the membranes – or any other
mechanical resonator - and the reflected light is
collected on a high-speed photodetector (Newport 818-
BB-21). Since the reflection depends on the distance
between the membrane and the highly reflecting Si,
this provides a sensitive way to measure the local
displacement of the membrane and can also be used to
map the eigenmodes [3]. The driven response is
measured using a network analyzer (HP 4396A).
Typically, first overview scans are done and then
zooms of the resonances are taken at higher resolution.
From the resonance peaks, the eigenfrequencies and
quality factors are obtained by fitting a harmonic
oscillator (or Duffing) response to the data. This
procedure is repeated semi-automatically for the
different membranes on each chip. Then the next chip
with a different AlN thickness is inserted and measured
again, resulting in large datasets that are processed
using Matlab scripts. Using these, the responses can be
viewed, fitted, and the fit results can be studied as
function of the membrane parameters. Fig. 3 shows a
selection of four membranes with identical size on the
four different chips. The peaks correspond to the
fundamental (i.e., the (1,1) mode [3]) out of plane
eigenmodes. A clear dependence on the AlN thickness
can be seen: the thicker the AlN, the lower the
resonance frequency. Here it is noted that in the
overviews, the driving power is relatively high and that
the peaks appear somewhat distorted, but care is taken
to have zooms with enough resolution and lower
excitation so that the frequencies can be determined
precisely.
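The authors perform these fits with Matlab scripts; as a hedged stand-in, the following SciPy sketch fits the linear (harmonic-oscillator) amplitude response to a synthetic resonance zoom to extract f0 and Q. The resonance values and noise level are placeholders, not measured data.

import numpy as np
from scipy.optimize import curve_fit

def ho_amplitude(f, a, f0, q, offset):
    # amplitude response of a driven damped harmonic oscillator plus a constant background
    return a / np.sqrt((f0**2 - f**2)**2 + (f0 * f / q)**2) + offset

# synthetic stand-in for a network-analyzer zoom around a 1 MHz resonance (Q = 1000)
f = np.linspace(0.99e6, 1.01e6, 400)
data = ho_amplitude(f, 1e9, 1.000e6, 1.0e3, 0.0)
data += np.random.default_rng(1).normal(0.0, 0.01, f.size)

f0_guess = f[np.argmax(data)]
p0 = (data.max() * f0_guess**2 / 500.0, f0_guess, 500.0, 0.0)   # rough starting values
popt, _ = curve_fit(ho_amplitude, f, data, p0=p0, maxfev=10000)
print(f"f0 = {popt[1]:.1f} Hz, Q = {popt[2]:.0f}")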
Fig. 3. Driven response near the fundamental eigenfrequency for the nominally-same membrane on different chips with different AlN thicknesses. The AlN layer clearly shifts the resonances to lower frequencies.
The readout always works well for SiN-only
membranes, but for bilayer membranes with different
AlN thicknesses it is - unlikely but still - possible that
the derivative of the reflectivity w.r.t. the displacement
vanishes. Hence, alternatively a green (532 nm;
OneFive Katana) and even a violet laser (405 nm, not
shown) can be used to sense the motion. Fig. 4
compares reflectivity maps for two laser wavelengths
and chips with different AlN thicknesses. The
differences in contrast between the membrane and
support are clearly visible. The different lasers (see
Fig. 2) allow selection of the best color for the
optomechanical readout.
Fig. 4. Reflectivity maps for a 3x3 membrane. The top (bottom) row is taken at 633 (532) nm. The left (right) column has 77 (140) nm of AlN grown onto the suspended silicon nitride membrane.
9
th
International Conference on Sensors and Electronic Instrumentation Advances (SEIA' 2023),
20-22 September 2023, Funchal (Madeira Island), Portugal
92
3. Resonance Frequencies
The driven response is measured for all 480 membranes (120 membranes of varying size per chip, as shown in Fig. 1, on 4 chips with different thicknesses t_AlN). The fundamental eigenfrequencies of the same membrane on four different chips were plotted in Fig. 3; from the zooms the exact resonance frequency is determined, and this is plotted against the size of the membrane in Fig. 5. The strongly varying frequency with membrane size L_eff makes it beneficial to convert the frequency to the speed of sound, which, for an idealized membrane, equals the square root of the ratio between the thickness-weighted stress and the thickness-weighted mass density of the bilayer. With the known properties of the SiN layer, the stress in the AlN film can be extracted. In practice, the membrane shape is affected by the hydrofluoric acid release, resulting in a slightly more complex geometry. For the required precision, bandstructure calculations using finite-element simulations are used to connect the speed of sound to the material properties, in particular the thickness dependence of the stress in the AlN thin film.
Fig. 5. Fundamental resonance frequency for membranes of different size with different amounts of AlN on top. The slope of -1 in this log-log plot indicates that f ∝ 1/L_eff, i.e. that the membranes are stress dominated.
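For orientation, a rough numerical sketch of the idealized-membrane relation described above (the actual extraction in the paper uses finite-element band-structure simulations). The material densities are nominal literature values and the example frequency, size and AlN thickness are placeholders, not measured data.

import numpy as np

def sigma_aln_from_f11(f11, L_eff, t_sin=330e-9, t_aln=140e-9,
                       sigma_sin=1.1e9, rho_sin=3.1e3, rho_aln=3.26e3):
    """Extract the AlN stress from the fundamental (1,1) frequency of an ideal
    square bilayer membrane: f11 = c / (sqrt(2) * L_eff), with
    c^2 = (sigma_sin*t_sin + sigma_aln*t_aln) / (rho_sin*t_sin + rho_aln*t_aln)."""
    c2 = (np.sqrt(2.0) * L_eff * f11) ** 2          # speed of sound squared
    return (c2 * (rho_sin * t_sin + rho_aln * t_aln) - sigma_sin * t_sin) / t_aln

# placeholder example: a 50 um membrane with 140 nm AlN resonating at 7 MHz
print(sigma_aln_from_f11(7.0e6, 50e-6) / 1e6, "MPa")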
4. Thickness Dependence of the AlN Film
Stress
The data in Fig. 3 and 5 shows a clear dependence
on the AlN thickness. This is on one hand due to the
additional mass, and on the other hand due to a
difference between the stress in SiN and AlN. The
latter is extracted using finite-element simulations.
Interestingly, Fig. 6 shows that the stress transitions
from compressive to tensile. We believe that this is due
to the complex growth of AlN on SiN [7] and a finer
sweep of the film thickness will be done to study this
in more detail.
Fig. 6. Extracted stress in the AlN film σ_AlN vs. the thickness t_AlN.
5. Conclusions and Outlook
As proof of principle, we have used our SiN
micromechanical membranes to determine the stress in
AlN thin films that were grown on top. Although here
we focused on AlN, our method is not limited to this
material, as in principle any (semi-) transparent
material may be sensed. With slight changes to the
optical setup (specifically: heterodyne detection), even
that requirement may be lifted in the future. The
method is thus applicable to any transparent or
reflecting thin film that can be grown onto our
suspended SiN membranes.
Acknowledgements
We thank P. Soubelet for discussion, J. Röwe for
initial measurements, D. Hoch and J. Röwe for
assistance with nanofabrication, and M. Müller for
growth of the AlN. Funding: DFG EXC-2111-
390814868 and TUM-IAS (DFG & EU FP7 291763).
References
[1]. D. Hälg, T. Gisler, Y. Tsaturyan, L. Catalini, U. Grob,
M.-D. Krass, M. Héritier, H. Mattiat, A.-K. Thamm,
R. Schirhagl, E. C. Langman, A. Schliesser,
C. L. Degen, and A. Eichler, Membrane-based
scanning force microscopy, Physical Review Applied,
Vol. 15, 2021, L021001.
[2]. J. D. Thompson, B. M. Zwickl, A. M. Jayich,
F. Marquardt, S. M. Girvin and J. G. E. Harris, Strong
dispersive coupling of a high-finesse cavity to a
micromechanical membrane, Nature, Vol. 452, 2008,
pp. 72-75.
[3]. D. Hoch, K.-J. Haas, L. Moller, T. Sommer,
P. Soubelet, J. J. Finley and M. Poot, Efficient
Optomechanical Mode-Shape Mapping of
Micromechanical Devices, Micromachines, Vol. 12,
2021, 880.
[4]. H. Xu, D. Mason, L. Jiang, and J. G. E. Harris,
Topological energy transfer in an optomechanical
system with exceptional points, Nature, Vol. 537,
2016, pp. 80–83.
[5]. K. Y. Fong, H.-K. Li, R. Zhao, S. Yang, Y. Wang, and
X. Zhang, Phonon heat transfer across a vacuum
through quantum fluctuations, Nature, Vol. 576, 2019,
pp. 243–247.
[6]. P.-L. Yu, T. P. Purdy, and C. A. Regal, Control of
material damping in high-Q membrane
microresonators, Physical Review Letters, 108, 2012,
083603.
[7]. G. Terrasanta, T. Sommer, M. Müller, M. Althammer,
R. Gross, and M. Poot, Aluminum nitride integration
on silicon nitride photonic circuits: a hybrid approach
towards on-chip nonlinear optics, Optics Express,
Vol. 30, 2022, pp. 8537–8549.
[8]. X. Liu, A. W. Bruch, and H. X. Tang, Aluminum
nitride photonic integrated circuits: from piezo-
optomechanics to nonlinear optics, Advances in Optics
and Photonics, Vol. 15, 2023, pp. 236–317.
[9]. L. Fan, C.-L. Zou, M. Poot, R. Cheng, X. Guo, X. Han,
and H. X. Tang, Integrated optomechanical single-
photon frequency shifter, Nature Photonics, 10, 2016,
pp. 766–770.
[10]. G. Terrasanta, M. Müller, T. Sommer, S. Geprägs,
R. Gross, M. Althammer, and M. Poot, Growth of
Aluminum Nitride on a Silicon Nitride Substrate for
Hybrid Photonic Circuits, Materials for Quantum
Technology, Vol. 1, 2021, 021002.
[11]. L. Xie, H. Zhang, X. Xie, E. Wang, X. Lin, Y. Song,
G. Liu, and G. Chen, Structure and optical properties
of AlN crystals grown by metal nitride vapor phase
epitaxy with different V/III ratios, ACS Omega, Vol. 7,
2022, pp. 23497–23502.
(032)
Video Stream Processing for an Autonomous Tunnel Drainage Rover
A. L. Giordano 1, T. Schachinger 2, V. Micic Batka 2 and B. G. Zagar 1,3
1 Montanuniversität Leoben, Chair of Electrical Engineering, 8700 Leoben, Austria
2 ÖBB Infrastruktur, 1020 Vienna, Austria
3 Johannes Kepler University Linz, 4040 Linz, Austria
E-mail: alessandro-leone.giordano@stud.unileoben.ac.at
Summary: Drainage pipes in tunnels are very complicated to service but still need rather frequent inspections in order to detect the onset of deposited scales stemming from carbonate, which tends to precipitate when the pH value of the mountain water rises with the release of gaseous carbon dioxide (CO2) due to decreasing water pressure. Drainages make it possible to design tunnels that do not have to withstand groundwater pressure. We present a camera-based optical sensor, developed for an autonomously operating tunnel drainage rover, that is able to detect and quantify deposited calcite constricting, more or less severely, the free cross section the drainage rover needs to safely navigate the pipe. The pipe's cross section is imaged via an area-scan camera at a frame rate adapted to the rover's speed, so that a frame is acquired at least every 50 mm of movement. The presented image processing software segments and classifies each frame into the pipe's wall, the pipe's free lumen, any water present, and calcite.
Keywords: Tunnel drainage inspection, Autonomous tunnel rover, Optical sensor, Digital image processing, Calcite deposit
detection.
1. Introduction
In Austria – like in other middle European
countries – railway and motorway tunnels are a very
important part of the national infrastructure. Depend-
ing on many boundary conditions, there are various
design possibilities for these tunnels. One of the most
important factors for the design of the tunnel structure
is its position in relation to the groundwater level. If
the tunnel is below the water table, in many cases the
water pressure has to be relieved by drainages. Particu-
larly in mountainous areas with overburdens of more
than several hundred meters, this is the only way the
tunnel structures can be technically built. Within the
tunnel structure, there are drainages. These drainages
usually have a diameter of 160 to 250 mm. They are
made of thermoplastics. Polypropylene is used in cur-
rent construction projects because of its good
mechanical properties.
In a great number of the tunnels, precipitations
consisting of calcium carbonate occur inside the
drainages [1]. This is either due to the calcium content
of the ground water or especially due to the dissolution
of portlandite of the cement bearing support elements
used in tunneling works. These precipitations are
called “scales”. They can grow slowly (several
centimeters thickness over years) or very quickly
(several centimeters thickness during several weeks)
and have different mechanical properties. If the
drainages are clogged due to these scales, the water
pressure in the surrounding of the tunnel structure
increases. As a result, either the tunnel structure may
suffer damage or water seeps into the tunnel. Both
cases lead to a reduced availability of the tunnel for
railway operation.
Currently, the total length of Austrian Railways’
(ÖBB) tunnels is about 254 km, with about 445 km of
drainages. With the completion of major projects
Brenner Base Tunnel, Koralm Tunnel, Semmering
Base Tunnel and Granitztal Tunnel, a total of about
1,081 km of drainages will have to be maintained in
ÖBB tunnels [2]. In this paper we report on the
development of an inspection rover which can operate
without influence on the railway traffic and
automatically navigate through the rather constricted
drainage pipes, thereby recording the calcite scales,
the water table and measure environmental conditions,
like temperature, the pH-value, and the conductivity of
the runoff water within the pipe, all as functions of the
position along the pipe. Fig. 1, top shows the CAD
model of the tunnel rover, where the frontal green
structure is the video camera to record a continuous
video stream. Fig. 1, bottom shows a photo of the
realized fully functional rover’s prototype still
produced by 3D printing.
2. Autonomous Navigation Software
Since servicing areas within the drainage are spaced between 333 m and at most 500 m apart, the rover needs the capability to advance autonomously through the drainage system while avoiding any obstacles, such as calcite scales thicker than navigable.
It was decided to use a computer vision approach to provide these capabilities, although tactile sensors operating like rodent whiskers, or ultrasonic Time-of-Flight sensors such as those employed by bats, were initially also considered as options. The latter were dropped in favor of the image processing scheme since drainage run-off renders them unsuitable most of the time.
The developed image processing software needs to
look ahead some limited distance, identify any
obstacles there might be (most of the time it simply
will be calcite deposits), determine the free cross section of the pipe, and indicate to the rover's drive either to advance normally, to stop and allow finer-resolution image processing aimed at ultimately deciding whether the rover will be able to pass the obstacle, or to conclude that turning back is the only viable option. Although software meeting these requirements is known to exist for autonomous vehicles already [3], transferring these algorithms into
vehicles already [3], transferring these algorithms into
the less general environment of a drainage pipe and
running them on the very limited processing power
available at the rover’s processing unit is not a
straightforward endeavor.
Besides, the software needs to annotate any
acquired cross section with flags indicating the
severity of the constriction, its position along the
track, and all analogue measurements, like tempera-
ture, pH value, conductivity, etc.
Fig. 1. Top, CAD view of the rover, with the forward-looking camera indicated in green. Bottom, the realised 3D-printed prototype.
3. Processing of the Video Stream
The drainage rover has a length of approximately 500 mm and an overall diameter of 180 mm. It is equipped, besides other measurement hardware, with a miniaturised forward-looking RGB video camera (see Fig. 1, top, in green) with a resolution of 720 by 576 pixels. The optics has a focal length which, given the camera's physical size, offers a strong wide-angle imaging capability but gives rather strongly distorted images (barrel distortion) that need to be equalised for correct geometrical measurements [4].
The rover is designed to autonomously inspect a few kilometres of pipe between automatic battery recharges. During inspection, the task of the rover's image processing is to identify and record calcite deposits, their mass and, ideally, their cross-sectional distribution. Due to the rather limited processing power of the rover's power-saving image and signal processing hardware, designing efficient and effective code is crucial.
Video frames are acquired at the standard rate of
25 frames per second. The rover is advancing at
around 1 m/s, thus each frame covers approximately
4 cm. The scene illumination is achieved by a white
light LED offset from the camera’s axis by 2 cm and
illuminating under a beam angle of 160 ° with a half
value angle of 120 °. The LED’s intensity is set such
as to bring the exposure time down to approximately
5 ms thus keeping image blur in check. To ensure a
sufficiently dense video inspection, each frame of the
recording taken by the rover’s frontal camera is read
in and processed further.
Since calcite deposits can be visually separated from the drainage pipe, as they appear in different colour schemes (Fig. 2, left), a colour segmentation approach is examined. By creating colour-based masks, the frame is segmented into a binary image depicting preferably only calcite, while the drainage pipe itself is blackened out. Converting the incoming frame's colours to their complementary form, as seen in Fig. 2 on the right, simplifies adjusting the image's RGB settings to achieve an enhanced contrast. As the segmentation is colour-based and the changes in contrast are detected by gradient operators, increasing the contrast is a vital step for a more accurate segmentation [5].
Fig. 2. Left, original input frame. Right, frame with
complementary colour and enhanced contrast.
The original input image as well as the one with an
enhanced contrast are both in RGB colour space. Even
though a segmentation based on this very colour space
is possible, due to the irregular lighting conditions
inside the pipe, converting the image to the L*a*b*
colour space [6] resulted in a much better segmenta-
tion. Implementing thresholds for the three L*a*b* parameters, which were optimised using 3 video sequences, and using them to create and apply the mask m1 (Eqn. 1) outputs a binary black-and-white image. Areas that do not fall within the adjusted thresholds are not shown and are blackened out, whereas pixels that lie within the adjusted thresholds' range, preferably only calcite, remain visible and are whitened out (Fig. 3, right):

m1(x, y) = 1, if L*(x, y), a*(x, y) and b*(x, y) all lie within their respective threshold ranges; m1(x, y) = 0, otherwise. (1)
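The paper's implementation uses MATLAB's Image Processing Toolbox; as a hedged OpenCV sketch of the same idea, the frame is inverted to its complementary colours, converted to L*a*b*, and thresholded per channel to obtain m1. The threshold values below are placeholders, not the ones optimised by the authors.

import cv2
import numpy as np

def mask_m1(frame_bgr, lower=(0, 130, 130), upper=(255, 200, 200)):
    inverted = cv2.bitwise_not(frame_bgr)               # complementary colours
    lab = cv2.cvtColor(inverted, cv2.COLOR_BGR2LAB)     # 8-bit L*a*b* representation
    return cv2.inRange(lab, np.array(lower, np.uint8), np.array(upper, np.uint8))

# usage (hypothetical file name): m1 = mask_m1(cv2.imread("frame.png"))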
Due to the wide-angle imaging, there is a higher-than-usual dynamic range in exposure that the processing has to take care of. In particular, problems arise for over-exposed parts of the image, especially areas close to the rover and the slots where the drainage water enters: there the colour, which is defined by the ratio of the a* and b* parameters, tends towards a washed-out white and is hence not masked out, as seen in Fig. 3 on the right.
Fig. 3. Left, complementary image. Right, m1 to mask out the background.
To improve the image segmentation following the error-prone first mask m1, a second mask m2 is implemented and applied to restrict the region of interest (ROI). This is achieved by autonomously cutting off, and hence blackening out, extensive bordering regions (Eqn. 2) so that only the rectangular centre of the image, as seen in Fig. 4 on the right, is considered further:

m2(x, y) = m1(x, y), if (x, y) lies within the central rectangular ROI; m2(x, y) = 0, otherwise. (2)
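A minimal sketch of such a rectangular ROI mask: bordering regions of m1 are blackened out and only the image centre is kept. The border fraction is an illustrative assumption, not the value used on the rover.

import numpy as np

def mask_m2(m1, border_frac=0.2):
    m2 = np.zeros_like(m1)
    h, w = m1.shape[:2]
    dy, dx = int(border_frac * h), int(border_frac * w)
    m2[dy:h - dy, dx:w - dx] = m1[dy:h - dy, dx:w - dx]   # keep the central rectangle only
    return m2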
There are indeed other methods to define an ROI adjusted for each frame, for instance by implementing a circular Hough transform to detect circular structures [7, 8]. Yet, due to the O(N^3) numerical complexity of the circular Hough transformation [9], its use is prohibitive for the given hardware; it is therefore not adopted for the online inspection of the drainage pipe, as it does not allow calcite deposits to be recognised in real time.
Fig. 4. Left, m1. Right, m2 to define the ROI.
Due to the irregular lighting conditions and other
environmental parameters, for instance drainage water
inside the pipe reflecting strongly, certain areas are
masked out even though they, indeed, are calcite struc-
tures. Since the aim is also to quantify the calcite deposit's mass, we are interested in identifying and measuring the convex hull [10] of the calcite layer. Holes within a mask are unphysical and are thus filled using the morphological filling operation [11, 12] to
showcase the entirety of the detected calcite scale as
seen in Fig. 5 on the right. To increase both the speci-
ficity and the sensitivity of the detector as to also avoid
mistakenly whitened drainage slots, a morphological
hit-or-miss operator [13] matched to the shape of these
slots is implemented, which, if found in the binary
image, removes structures corresponding to these very
shapes. Even though this method is efficient to suc-
cessfully mask out the drainage slots, it mistakenly
masks out actual calcite regions as well wherefore the
proper sizing of the implemented shape is crucial.
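A hedged SciPy-based sketch of these two morphological steps (the paper uses MATLAB's toolbox): fill unphysical holes in the mask, then remove drainage-slot-shaped structures with a hit-or-miss operator. The slot template is a placeholder; as noted above, its proper sizing is crucial.

import numpy as np
from scipy import ndimage

def clean_mask(m2):
    filled = ndimage.binary_fill_holes(m2 > 0)          # fill holes inside calcite regions
    slot = np.ones((3, 15), dtype=bool)                 # placeholder slot shape (thin and wide)
    hits = ndimage.binary_hit_or_miss(filled, structure1=slot)
    # grow the hit locations back to the template size and remove those structures
    slots = ndimage.binary_dilation(hits, structure=slot)
    return filled & ~slots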
Fig. 5. Left, m2. Right, image with holes filled by the morphological filling operation.
Subsequently, to aid the rover's navigation, the sizes of calcite obstacles need to be classified as impassable or traversable. This is done by constricting the cross section by the bounding boxes' outlines and checking for sufficient clearance. A single bounding box is a numeric array with 4 entries [x, y, width, height], where x and y define the upper-left corner [14].
To process more than just one bounding box within
each frame (see Fig. 6), every potential bounding box
surrounding whitened sections in the current frame is
added to a data structure. To classify the calcite scales by size, the height and width parameters of each bounding box are taken from the data structure and multiplied to obtain the bounding box area ai, which is then added to a one-dimensional array A (Eqn. 3). If ai exceeds a defined minimum size (here set to 1000 pixels), the rectangle will be depicted in the final image; if not, this box's entry is removed from A. In Eqn. 3 the elements of A, a1, a2, …, an, are the determined areas of the bounding boxes, and n is the total number of detected bounding boxes before removing the ones that are smaller than the defined minimum size:

A = [a1, a2, …, an]. (3)
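A hedged OpenCV sketch of this bounding-box step (the actual code runs in MATLAB): label connected white regions, compute the box areas ai = width x height, and keep only boxes above the 1000-pixel minimum mentioned in the text.

import cv2

def large_boxes(binary_mask, min_area=1000):
    # binary_mask is expected as an 8-bit single-channel image (0/255)
    n, _, stats, _ = cv2.connectedComponentsWithStats(binary_mask, connectivity=8)
    boxes = []
    for i in range(1, n):                               # label 0 is the background
        x = stats[i, cv2.CC_STAT_LEFT]
        y = stats[i, cv2.CC_STAT_TOP]
        w = stats[i, cv2.CC_STAT_WIDTH]
        h = stats[i, cv2.CC_STAT_HEIGHT]
        if w * h >= min_area:                           # a_i = width * height
            boxes.append((x, y, w, h))                  # [x, y, width, height]
    return boxes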
Now, only calcite structures that exceed the size
limit defined by the rectangle’s minimum area (here
1000 pixels or more) are shown being surrounded by
a bounding box as depicted in Fig. 7 on the left.
Fig. 6. Image including all bounding boxes still without
classification.
Further information, such as the current location or the estimated size, can be added to the bounding box's description. If the calcite scale ahead of the rover poses a risk of getting stuck, the video processor can send a halt signal to the rover's drive via the Controller Area Network bus (CAN bus), which will cause a hold and possibly a reversal to the previous docking station to reconsider action. After further inspection, the decision to proceed as usual and drive onwards, or to initiate backtracking, can be made.
On the left in Fig. 7, the final result, the binary image including the bounding box surrounding a white pixel field, is depicted. Since it is not possible to differentiate the white pixels by their distance to the rover, as the video camera used only supports 2D imaging, the binary image is not a reliable source for defining the calcite deposit's volume and can therefore only be used for a coarse volume estimate. This issue might be mitigated by using a Time-of-Flight camera, which essentially captures a 3D image and is consequently helpful for determining the calcite volumes, too [15]. To make the output image more recognisable, the binary image is invisibly layered underneath the original input frame, while the bounding boxes remain visible. Hence, as seen in Fig. 7 on the right, the written output frame now essentially shows the incoming image including an appropriate bounding box correctly indicating a calcite scale.
As depicted in Fig. 7 on the left, calcite deposits ahead of the rover can occur during its inspection tour inside the drainage pipe. In case the deposits reach a size that might put the safe continuation of the inspection at risk, a backtracking firmware is integrated. The aim of the backtracking module is firstly to secure the rover from possible obstacles and secondly to offer a more thorough analysis of the calcite deposit ahead, so as to indicate whether or not backtracking is the only viable option. To decide whether a deposit is potentially unsafe to cross, the area of the bounding box surrounding the frontal calcite deposit is evaluated, as it provides a sufficient estimate of the calcite deposit's size. Once a bounding box's area exceeds the chosen risk threshold (here 6000 pixels), a signal can be sent via the CAN bus to the rover, which then executes a stop routine. To analyse the given calcite deposit more precisely, the rover then backtracks for a defined distance (some cm) and a height-detecting module operating on m2 (seen in Fig. 4) is started. Firstly, all occurring white pixels are counted to determine the calcite's cross-sectional area and additionally estimate the calcite deposit's volume.
Fig. 7. Right, binary image including bounding box. Left,
output image indicating calcite deposit with bounding box.
To further determine the space between the drainage pipe's ceiling and the calcite deposit, and thus essentially whether the rover can pass safely, the drainage slots that are falsely indicated as calcite, and are therefore depicted in the binary image, are utilised. Since the binary image only consists of two values, 0 for black and 1 for white, the image's matrix is filtered row by row to find the first, and therefore highest in elevation, white pixel in the frame. This pixel is part of a drainage slot and is hence associated with the pipe's ceiling. Once found, this column is filtered to discover an accumulation of white pixels indicating a calcite deposit. Subtracting the row coordinate of the highest white pixel from that of the uppermost pixel of this accumulation essentially yields the passing height. This module performs a more precise analysis of the calcite deposit and can therefore better determine whether backtracking is viable or whether continuing onwards is safe without getting stuck. As the matrix analysis to determine the passing height is time-costly and exceeds the given processing power, this module is not a reliable means of detecting calcite structures in time while the rover is driving.
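A rough NumPy sketch of this passing-height idea, under the assumption (consistent with the stated aim, but not spelled out in the text) that the height is taken from the ceiling slot down to the top of the calcite accumulation in the same column; the 5-pixel gap used to skip the slot itself is an illustrative placeholder.

import numpy as np

def passing_height(binary):
    rows, cols = np.nonzero(binary)
    if rows.size == 0:
        return None
    top_row = rows.min()                        # highest white pixel -> ceiling slot
    col = cols[rows == top_row][0]              # scan this column downwards
    white = np.nonzero(binary[:, col])[0]
    below = white[white > top_row + 5]          # skip the slot pixels themselves
    if below.size == 0:
        return int(binary.shape[0] - top_row)   # no deposit found in this column
    return int(below.min() - top_row)           # ceiling to top of the calcite deposit (pixels)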
4. Conclusions and Outlook
High calcium concentrations in drainage water can
cause drainage pipes used for tunnelling systems to
accumulate calcite deposits which, if not handled in
time, increase in size. Once a calcite deposit reaches a
certain size, the drainage system is at risk of clogging
and hence can negatively affect tunnelling operations.
A rover is implemented to autonomously navigate
through the drainage pipes while recording calcite de-
posits as well as other environmental parameters of in-
terest. To successfully recognise calcite deposits along
the drainage pipe, an online image processing system
is implemented on the incoming video stream from the
camera positioned on the rover's front. We demonstrate successful autonomous operation of the rover based on a rather simple and thus power-saving image processing protocol that aims at detecting obstacles ahead of the rover's current position. Since these obstacles are scale deposits, a colour processing scheme proved sufficient, as shown with some major intermediate steps in Fig. 8.
Future work will be directed towards using a Time-of-Flight camera's capabilities to measure more precisely not only the free cross section of the pipe but also the total volume of scales ahead of the rover's position.
Fig. 8. From top left to bottom right, single frame of the
continuous video stream, inverted to complementary
colours, created L*a*b*-segmented binary image, define
ROI, delineate image and fill holes, implement bounding
boxes and classify bounding boxes by size.
Acknowledgements
The authors gratefully acknowledge the partial fi-
nancial support for the work presented within the
framework of the EU Horizon 2020 Programme under
grant GA 101012456, Programme In2Track3 / Shift2-
Rail - CFM.IP3-01-2020 Innovation Action, and ÖBB
- Infrastruktur, Vienna.
References
[1]. Stefanie Eichinger, Assessment and formation mech-
anisms of scale deposits in tunnels of the ÖBB-Infra-
struktur AG – A subproject of the Task Force
Drainage, Geomechanics and Tunneling, 13.3, 2020,
pp. 273–285.
[2]. Tobias Schachinger et al., Current research by ÖBB
Infrastruktur AG on scale monitoring without track
closures, Geomechanics and Tunnelling, 11.3, 2018,
pp. 277–285.
[3]. Christoph Stiller, Promises and Challenges of Auto-
mated Vehicles, plenary lecture, in R. Moreno-Díaz,
F. Pichler, A. Quesada-Arencibia, editors, in
Proceedings of the 17th International Conference on
Computer Aided Systems Theory (EUROCAST’ 2019),
Las Palmas de Gran Canaria, Spain, February 17 - 22,
2019, Springer Notes in Computer Science Book ,
2020, 12013.
[4]. Jean-Yves Bouguet, Camera Calibration Toolbox for
Matlab (1.0), CaltechDATA, 2022.
[5]. The MathWorks, Image Processing ToolboxTM User’s
Guide, Designing and Implementing Linear Filters for
Image Data - Contrast Enhancement Techniques,
2023, pp. 8-69–1-72, URL:
https://de.mathworks.com/help/pdf_doc/images/imag
es_ug.pdf (28. 8. 2023).
[6]. Steven Bleicher, Contemporary Color: Theory and
Use, 3rd edition, Taylor & Francis Ltd, 2023.
[7]. Mark David Jenkins, Tom Buggy, Gordon Morison,
An imaging system for visual inspection and structural
condition monitoring of railway tunnels, in
Proceedings of the IEEE Workshop on Environmental,
Energy, and Structural Monitoring Systems (EESMS),
Milan, Italy, Jul. 24 - 25 2017.
[8]. L. Attard, C. J. Debono, G. Valentino, M. Di Castro,
Vision-Based Tunnel Lining Health Monitoring via
Bi-Temporal Image Comparison and Decision-Level
Fusion of Change Maps, Sensors, 21, 2021, 4040.
[9]. Yonghong Xie, Qiang Ji, A new efficient ellipse de-
tection method, in Proceedings of the International
Conference on Pattern Recognition, 2002.
[10]. David Avis, David Bremner, Raimund Seidel, How
good are convex hull algorithms?, Computational
Geometry, 7, 5–6, 1997, pp. 265–301.
[11]. The MathWorks, Image Processing ToolboxTM User’s
Guide, Code Generation for Image Processing
Toolbox Functions - Generate Code for Object Detec-
tion, 2023, pp. 21-5–21-21, URL:
https://de.mathworks.com/help/pdf_doc/images/imag
es_ug.pdf (28. 8. 2023).
[12]. Pierre Soille, Morphological Image Analysis; Prin-
ciples and Applications, Springer, 2nd edition, 2003.
[13]. Anil K. Jain, Fundamentals of Digital Image Pro-
cessing, Pearson, 2015.
[14]. MathWorks,
https://de.mathworks.com/help/vision/ref/bboxwarp.h
tml#d124e195212 (28.8.2023).
[15]. Infineon Technologies AG, REAL3TM image sensor
family, 3D depth sensing based on Time-of-Flight,
2015, URL:
https://www.infineon.com/dgdl/Infineon-
REAL3+Image+Sensor+Family-PB-v01_00-
EN.PDF?fileId=5546d462518ffd850151a0afc2302a5
8 (28. 8. 2023).
(033)
An IoT Communication Platform for Interactive Buildings Energy
Management System
L. Mihet-Popa
Østfold University College, Department of Engineering, Kobberslagerstredet 5, 1671 Fredrikstad; Norway
Tel.: + 4792271353
E-mail: lucian.mihet@hiof.no
Abstract: More than 40 % of the world's energy is used in buildings, of which about 50 % is wasted because of non-existent or inefficient building energy management systems (BEMS). By using a BEMS, energy consumers are more connected, better controlled, monitored and managed, and the smart building has emerged. This paper deals with the development of an IoT communication platform for building energy management systems. This is done by using an edge/cloud-computing architecture based on communication modules and service providers. We propose an IoT platform based on cloud computing which is able to monitor, in real time, the power consumption and the thermal-battery storage functionality fed by local solar production. The platform will be part of an innovative and integrated BEMS approach that aims at achieving highly efficient management of heterogeneous energy resources, integrating and combining the management of different energy technologies into the BEMS. A lab setup with a real-time interface and data processing, for monitoring and controlling the power generation and consumption for different types of buildings, will be designed to optimize, test and validate the platform.
Keywords: Building energy management system, Interactive buildings, IoT, Cloud-computing, Digitalization.
1. Introduction
Buildings are a major cause of carbon emissions: the building sector is responsible for around 40 % of energy consumption and for about 30 % of CO2 emissions [1, 2].
Three technology factors that drive the feasibility of decarbonization have been identified: the transition to a new energy system, new enabling technologies, and digitalization [3, 4].
Decarbonization is a critical transition that is
affecting all businesses and is reshaping the
foundations of the energy industry. A key driver of this
energy transition is policy, spearheaded on a global
scale by the Paris Agreement [5]. In parallel to this,
two other pillars transforming the energy landscape,
decentralization and digitalization, are significant
enablers of decarbonization, making it more
technically feasible than ever before. These technical
developments are helping overcome some of the
challenges of decarbonization and are expanding the
opportunities for optimization and new business
models.
There is a clear need for accelerating and financing building investments and for leveraging smart, energy-efficient technologies in the building sector. Smart
buildings integrate cutting edge ICT-based solutions
for energy efficiency and energy flexibility, which can
effectively assist in creating healthier and more
comfortable buildings with lower energy consumption
and lower CO2 emissions [6].
1.1. Energy Transition
Energy is the lifeblood of an economy and without
reliable, sustainable and affordable energy, society
cannot prosper. The transition from the old to the new
energy systems is done by integrating the renewable
energy sources into the new energy systems making
decarbonization more feasible.
The energy transition refers to the transition from
energy supply from fossil and nuclear fuels to
renewable energy sources, with the aim of creating a
new, sustainable energy system with almost zero CO2
emissions.
Europe’s ambition is to achieve a secure,
sustainable, and affordable energy system. The
European Commission has set itself the aim of
reducing emissions by 55 % by 2030 and reaching
climate neutrality by 2050 [6]. The European Energy
Transition Readiness Index highlights the key role
infrastructure is playing in driving decarbonization in
the region.
1.2. Enabling Technologies
The current technology revolution is leading to unprecedented shifts in the economy, society, businesses, and individuals. A great effort is being made
worldwide toward clean energy in order to protect the
environment and improve operational efficiencies and
customer services for better grid availability and
reliability.
New enabling technologies contain storage
technology and flexible energy loads that help
overcome the intermittency issues.
Smart buildings, also known as intelligent
buildings or automated buildings, are structures that
leverage advanced technologies and systems to
enhance the efficiency, comfort, safety, and
sustainability of the building and its occupants.
These buildings integrate various interconnected
devices, smart sensors, and automation systems to
optimize operations, improve energy management,
and provide intelligent services.
This energy transition (transformation) is pushed
by several factors, which include electrification,
decentralization, and digitalization.
1.3. Digitalization
Digital technologies comprising communications,
flexibility dispatch systems, smart meters, data
standards, and IT systems constitute a key enabler for
flexibility markets. Digital technology is emerging in
smart homes to facilitate DSM. Digitalization of the
network is an obvious opportunity for costeffective
development and management of the electricity system
with high returns in cost and quality to serve.
Digitalization of energy and building technology
allows better control of hardware, technologies and
software. More than that, digitalization supports electrification and decentralization through better management, which includes automatic control, real-time consumption optimization, and interaction with customers [7, 8].
2. Interactive Buildings and BEMS
Energy consumers are more connected, and the
smart building has emerged: with new technologies,
such as renewable energy and energy storage located
close to or as part of a building’s infrastructure (as well
as digitalization and connectivity of energy
management inside the building), consumers are
becoming smarter and transforming the way they can
operate.
An interactive building is able to turn buildings into
a source of energy, using load management techniques,
optimization within buildings and demand flexibility.
Building automation is one of the most important
requirements for DR (demand response) and DSM
(demand-side management) without which customers
could not respond to price signal in real-time.
A typical traditional building has a Building
Energy Management System (BEMS) that is limited to
HVAC and perhaps lighting, access control, and power
monitoring. It is simply control used to monitor for
problems and do basic controls. Both societal and
technology factors are driving the evolution of BEMSs
from being primarily an HVAC control system to
being more of a smart building system integration
platform for proactive monitoring, control, and
automation.
2.1. Demand Response and Demand-Side
Management Techniques
Demand Response (DR) programs fall into the category of load-management programs that encourage customers to decrease their electricity consumption in response to a change in the price of electricity over time, which, in turn, promotes the economic use of the grid. DR programs help utilities reduce the power
consumed, preserve energy, redistribute power
consumed, enhance system reliability, reduce energy
prices, and increase economic efficiency [7, 8].
Demand-side management (DSM) is a group of programs consisting of the planning, implementation and monitoring activities of an electric utility that are designed to encourage consumers to modify their level and pattern of electricity usage. The main objective is to provide customers with continuous and efficient energy in the long term at the lowest cost. In fact, DSM rests on two main principles for achieving supply-demand balancing: load shifting and energy efficiency.
DR and DSM are two distinct procedures: DR ensures a short-term load response to improve the energy consumption profile, while DSM is applied for long-term planning, such as shifting the load peak over time. Nevertheless, they can be used together in a coordinated way.
2.2. Next-Generation Building Energy
Management Systems (BEMSs)
Building owners and system integrators face
increasing pressure to save more energy, reduce costs,
and maintain availability all while enhancing occupant
experience and well-being. Achieving these objectives is best addressed by a new type of BEMS available today that goes well beyond HVAC controls. These modern next-generation BEMSs benefit stakeholders by being a more open integration platform that uses IoT, cloud computing, data analytics, and artificial intelligence technologies to get more out of the available resources and connected systems.
The building energy management system (BEMS)
is a critical tool for operating a building safely,
efficiently, and reliably. However, the focus on energy efficiency and sustainability, combined with fundamental changes in resident needs and expectations, is straining traditional BEMS implementations and pushing them to grow and evolve. At the same time, advancements in cloud computing, IoT, analytics, and artificial intelligence, together with the increasing demand for energy efficiency and sustainability, changing occupant requirements and expectations, and the emergence of newer IT, IoT, and smart building technologies, are leading the BEMS to become more of a smart building system integration platform for proactive monitoring, control, and automation.
3. IoT Communication Platform-based
Cloud Computing Solutions
The IoT is a relatively new concept in the architecture of communication devices; it connects sensors and devices such as local battery storage, rooftop solar PV, home appliances and smart meters (SMs) through the internet, enabling information gathering and exchange. The IoT is basically composed of the digitization of assets, the collection of data, and computational algorithms
to control the system. Cloud-based control systems
would enable the management of these devices.
The IoT framework is equipped to handle the large volumes of data generated by multiple DERs (distributed energy resources) within future smart energy systems via additional AI-based tools. In this context, cloud-based platforms are key players in storing and intelligently using the data generated by the IoT in a highly flexible manner and in deploying a multitude of applications in parallel.
Cloud platforms bring computational robustness and flexibility for interfacing multiple technologies and data-based algorithms, while providing on-demand parallel computing resources.
Multiple industrial communication protocols
enable device-to-device, device-to-gateway, and
gateway-to-cloud communication, depending on
bandwidth, security, resilience to noise, losses, error
detection or interoperability.
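As one concrete, hypothetical example of a gateway-to-cloud exchange, a smart-meter reading can be pushed as JSON over HTTPS (REST); the endpoint URL and payload fields below are illustrative only, and the platform may equally rely on MQTT or another industrial protocol.

import json
import urllib.request

# placeholder reading from a smart meter on the HAN side
reading = {"meter_id": "sm-01", "power_w": 1830, "timestamp": "2023-09-20T12:00:00Z"}
req = urllib.request.Request(
    "https://cloud.example.org/api/v1/readings",   # placeholder cloud endpoint
    data=json.dumps(reading).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)
with urllib.request.urlopen(req, timeout=10) as resp:
    print(resp.status)                             # expect 200/201 from the cloud service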
4. Design Procedure, Modeling Approach
and Workflow Management
In order to develop an IoT platform and to provide end-to-end solutions for customers' energy management needs, based on demand-side management for energy and cost savings, we started with a detailed modelling approach for an electric water heater (EWH) for DR, based on one- and two-dimensional parabolic Partial Differential Equations (PDEs). EWHs have great potential for implementing DR control strategies because their power consumption is highly correlated with daily load patterns, and they can be used as energy storage devices when needed [9]. The dynamic behavior of EWHs is essential for successfully designing DR controls. We developed the models in the MATLAB and Simulink software tools before integrating them with dSPACE RTI, as can be seen in Fig. 1.
Fig. 1. Workflow management.
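As a loose, generic illustration of this class of parabolic PDE model (not the baseline EWH model of Ref. [9]), a one-dimensional heat-diffusion sketch solved with an explicit finite-difference scheme; all parameter values are illustrative assumptions.

import numpy as np

def simulate_tank(T0=20.0, height=1.2, n=60, alpha=1.5e-7, dt=1.0,
                  steps=3600, heater_zone=slice(5, 15), q_heat=0.003):
    # dT/dt = alpha * d2T/dz2 + q(z, t) along the tank height, explicit scheme
    dz = height / n
    assert alpha * dt / dz**2 < 0.5, "explicit scheme stability condition"
    T = np.full(n, T0, dtype=float)           # water temperature profile (deg C)
    for _ in range(steps):
        lap = np.zeros_like(T)
        lap[1:-1] = (T[2:] - 2 * T[1:-1] + T[:-2]) / dz**2
        T += dt * alpha * lap                 # thermal diffusion
        T[heater_zone] += dt * q_heat         # heating-element source term
    return T

print(simulate_tank().round(1))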
The smart heater we modeled, validated by
measurements, and tested can estimate and regulate its
stored energy (state of charge) based on the
temperature measurement, the inflow of water, and the
heater power.
The state of charge in the tank is the energy left,
which is the initial energy in the tank 𝐸0 plus the
change in energy divided by the maximum energy:
SOC = (E_0 + ∫ (dE/dt) dt) / E_max × 100 [%], (1)

where the initial energy and the maximum energy can be calculated as follows:

E_0 = m_0 · c · T_0, (2)

where m_0 and T_0 are the initial water mass and water temperature, respectively, and

E_max = m_max · c · T_max, (3)

where m_max and T_max are the maximum capacity and maximum temperature of the water tank, respectively.
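A minimal numerical sketch of Eqs. (1)-(3), assuming the specific heat of water c ≈ 4186 J/(kg·K) and illustrative tank parameters; this is not the authors' implementation.

import numpy as np

C_WATER = 4186.0                                  # J/(kg*K), specific heat of water

def soc(dE_dt, dt, m0=150.0, T0=55.0, m_max=200.0, T_max=75.0):
    """State of charge in % from the initial energy E0 = m0*c*T0, the integrated
    change of energy, and the maximum energy Emax = mmax*c*Tmax."""
    E0 = m0 * C_WATER * T0                        # Eq. (2)
    E_max = m_max * C_WATER * T_max               # Eq. (3)
    E = E0 + np.cumsum(dE_dt) * dt                # E0 plus the integral of dE/dt
    return 100.0 * E / E_max                      # Eq. (1)

# example: 2 kW heating for 30 minutes, sampled every 60 s
print(soc(np.full(30, 2000.0), 60.0)[-1])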
The next step was to design and build a controller to minimize the cost of energy consumption for a given price signal while maintaining the user comfort level, as shown in Fig. 2. The comfort parameter quantifies the tradeoff between cost savings and user comfort. For optimal EWH operation states based on price forecasting, we used an ML algorithm developed in Python and integrated it with our previously developed MATLAB-Simulink model in a co-simulation environment.
The controller also incorporates demand-side management features, meaning that it can be optimized through a machine learning algorithm to improve the energy efficiency and reduce the energy consumed by 20 % [10].
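As a hedged, rule-based stand-in for the cost/comfort trade-off described above (the actual controller relies on an ML price forecast), the following sketch heats when the forecast price is in the cheapest fraction of the day, unless the tank temperature would leave an assumed comfort band; all thresholds are illustrative.

import numpy as np

def heater_schedule(price_forecast, T_tank, T_min=45.0, T_max=75.0, cheap_quantile=0.3):
    threshold = np.quantile(price_forecast, cheap_quantile)
    on = np.zeros_like(price_forecast, dtype=bool)
    T = T_tank
    for k, p in enumerate(price_forecast):
        if T < T_min or (p <= threshold and T < T_max):
            on[k] = True
            T += 1.5          # assumed temperature gain per heated interval (deg C)
        else:
            T -= 0.5          # assumed standby/draw loss per interval (deg C)
    return on

# example: 24 hourly day-ahead prices (arbitrary units)
prices = np.array([30, 28, 25, 24, 26, 35, 50, 60, 55, 45, 40, 38,
                   36, 34, 33, 37, 48, 62, 70, 65, 50, 42, 35, 32], float)
print(heater_schedule(prices, T_tank=50.0).astype(int))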
Fig. 2. A basic block diagram representation.
5. Performance Evaluation and Testing
Procedure
The lab setup based on an IoT communication platform is shown in Fig. 3. The setup is based on a real-time interface (RTI) with a graphical user interface (GUI) called dSPACE ControlDesk.
This platform is generated via the MATLAB-Simulink software program and can be used, among other things, for algorithm development and testing. IoT sensor signal conditioning and data analytics can also be performed. The sensor signals are replicated with a Tektronix AFG31000 series generator, and the signals are fed into the control panel of the dSPACE system for RTI-based analysis of the model.
Fig. 3. An experimental lab setup based on an IoT platform
designed for testing different components/devices of a
BEMS.
In parallel with the experimental lab setup, we developed a cloud-computing solution including data from the hot water heater, as shown in Fig. 4. The smart meter monitors the data in real time and sends it to the cloud via a HAN adaptor.
DR can achieve the supply–demand balance by controlling the electricity consumption, which is monitored through the smart meter. The BEMS (aggregator control) makes the decision based on control signals from the service provider after comparing the demand and the PV generation, in order to achieve market balance together with the supply–demand balance.
Fig. 4. IoT platform-based cloud-computing approach.
The platform will include a completely new generation of water heater, capable of storing energy like a thermal battery, that allows control from mobile devices via the myUplink app.
The water heater can function as an energy battery: using information from the smart home system, it can store hot water when electricity is cheapest. Electricity prices often vary a lot over the course of a day, often depending on how heavy the load on the electricity grid is. When electricity is at its most affordable, or when you have access to self-produced electricity via a solar cell system, you can choose for the boiler to raise the temperature in the tank automatically above your usual set point. In other words, you only use energy when it is cheapest. When the water reaches a given temperature, the heater switches off and does not need to be switched on again for many hours. When you draw water at a time when electricity is expensive, the temperature in the water heater gradually drops without the heater switching on. This way you avoid using energy on hot water during this period.
As the boilers can be connected to a smart house system, they can be integrated into the management of the entire house, so that you get the most out of the opportunities this provides.
The proposed platform is based on the most popular standards, Z-Wave and Zigbee, and can be connected to third-party products and applications via a single app.
controlled with apps are currently in great demand, and
this innovation enables them to adapt to both customer
groups and a general desire for a better climate.
Via the app for the smart house system, you can
control the smart water heater together with other
electricity consumption in the home, such as electric
cars and space heating.
6. Conclusions
The goal of this work was to highlight the energy transition, new enabling technologies and the digitalization of energy in interactive buildings, and to present an IoT communication platform developed for testing practical solutions for a BEMS. The envisioned and targeted energy optimization solutions include the development of an innovative Internet of Things (IoT) algorithm with system awareness, considering secure communication and data processing approaches, and reducing the energy demand and the equivalent CO2 emissions. The developed solution will bring novelty based on edge computing and AI and will integrate sensors and building functionalities into one common gateway, giving different service providers the possibility to operate on the same hardware, with open data sources and scalability.
Acknowledgements
This work was supported in part by EEA Grants,
DOITSMARTER and Increased knowledge on RES and
Energy Efficiency.
References
[1]. IEA. Electricity information: Overview. Paris, France,
2021, http://www.iea.org/reports/electricity-
information-overview.
[2]. IRENA. Global Renewables Outlook. Energy
Transformation 2050, International Renewable Energy
Agency, Abu Dhabi, 2020.
[3]. Y. Wu, Y. Wu, J. M. Guerrero, J. C. Vasquez,
Digitalization and decentralization driving transactive
Energy internet: key technologies and infrastructure,
International Journal of Electrical Power and Energy
Systems, 126, A, 2021, 106593.
[4]. What digitalization can do to alleviate the energy
crisis, Siemens report, 2022, Switzerland,
www.siemens.com/smartinfrastructure
[5]. Paris Agreement, Web Portal
(https://climate.ec.europa.eu/eu-action/international-
action-climate-change/climate-negotiations/paris-
agreement_en).
[6]. European Commission services towards the
development of a smart readiness indicator for
buildings, Support for setting up a Smart Readiness
Indicator for buildings and related impact assessment,
December 2017.
[7]. Shady S. Refaat, Omar Ellabban, Sertac Bayhan,
Haitham Abu-Rub, Frede Blaabjerg, Miroslav M.
Begovic, Smart Grid and Enabling Technologies,
Wiley, 2021.
[8]. Salman K. Salman, Introduction to the Smart Grid-
Concepts, Technologies and Evolution, IET, 2017.
[9]. Z. Xu, R. Diao, S. Lu, J. Lian, and Y. Zhang, Modeling
of Electric Water Heaters for Demand Response: A
Baseline PDE Model, IEEE Transactions on Smart
Grid, Vol. 5, No. 5, September 2014, pp. 2203 - 2210.
[10]. A. F. Meyabadi, and M. H. Deihimi, A review of
demand-side management: Reconsidering theoretical
framework, Renewable and Sustainable Energy
Reviews, 80, 2017, pp. 367-379.
(034)
Geospatial Sensor-based Approach to Provide Defibrillators
by using Drones in Mountain Areas: A Study Case in South Tyrol, Italy
E. Fajardo-Figueroa 1, R. Mendicino 1, S. Tritini 1, M. van Veelen 2, G. Vinetti 2, G. Ristorto 3,
S. Mayrgündter 4, G. M. Bianco 5, L. Meng 6 and A. Mejia-Aguilar1
1 Eurac Research, Center for Sensing Solutions, Drususallee 1, 39100, Bozen, Italy
2 Eurac Research, Mountain Emergency Medicine, Drususallee 1, 39100, Bozen, Italy
3 MAVTech s.r.l., Via Ipazia 2, 39100, Bozen, Italy
4 NOI Techpark, Volta-Straße 13/A, 39100, Bozen, Italy
5 University of Rome Tor Vergata, Civil and Computer Science Engineering, Rome, Italy.
6 Technical University of Munich, Chair of Cartography and Visual Analytics, Munich, Germany
Tel.: + 39, fax: + 87654321
E-mail: abraham.mejia@eurac.edu
Summary: The use of geospatial infrastructure supported by novel sensors, autonomous and piloted aerial platforms, and
computational tools is impacting civil applications such as risk management, and is particularly interesting for mountain rescue
operations. Here, we propose a semi-automatic geospatial system able to identify the coordinates of a distress call, map them
across the entire Province of South Tyrol, identify possible areas in which to execute drone maneuvers, and generate a drone
flight plan that accounts for possible obstacles and keeps a relative flight height following regulations or user configuration to
harmonize joint missions (e.g., with helicopters). The hardware component consists of a drone platform (MAVTech), RGB and
thermal cameras, and a light defibrillator. A remote user interface confirms the coordinates by detecting the victim (thermal)
and providing situational awareness (RGB) to the operators, who finally drop the defibrillator.
Keywords: Geospatial system, Drones, Sensor imagers, Defibrillator.
1. Introduction
During the last years, the introduction of drones
into risk management protocols has dramatically
opened new possibilities to assist people in danger [1],
to determine the nature and extent of risk (e.g., natural
hazards [2]), to effectively coordinate first-responder
teams [3], and to provide assistance by means of small
first-aid packages [4].
In addition, novel sensors such as infrared cameras
[5], long-range communication protocols [6], and new
computational methods have contributed to
accelerating traditional rescue operations.
However, operations in hostile environments, such
as mountain scenarios, require complex methods and
highly skilled responders, and the complete chain of
rescue usually depends on weather conditions and on
the optimal performance of the technology. In the end,
time is the main factor to overcome in order to
successfully support any victim.
Therefore, in this work we propose a combined
approach based on geospatial infrastructure (GIS
technology), open data models that are used to map
emergencies in the alpine region of South Tyrol, Italy,
and sensor-based data fusion to optimally fly an aerial
platform, to identify, localize and confirm any victim,
and to provide a basic package that contains one
defibrillator to be used in case of cardiac arrest.
The main novelty of this work relies on two
aspects. First, the semi-automatic GIS system is able to
identify the area of a distress call, map it, suggest a
possible area in which to execute the Unmanned Aerial
Vehicle (UAV) maneuvers (e.g., parking), and create the
flight plan. The file is uploaded on-site and the mission
is performed automatically, supervised by an operator
(due to legislation). Second, the system fuses the data
coming from the distress call and the actual flight plan
(telemetry) and integrates thermal and visible-range
imagery to detect the victim and provide situational
awareness to the pilot, who transmits this information
to the hospital and/or medical assistance.
2. Study Area
For this study, we selected two representative
mountain scenarios in the alpine Province of South
Tyrol, Italy, in the Corvara in Badia municipality
(latitude 46.539885°, longitude 11.896100°, elevation
1800 m a.s.l.), for two different seasons (summer and
winter). The selected areas are heavily visited by
tourists, attracted by the hiking and skiing seasons,
respectively (Fig. 1).
3. Method
As an initial step, we elaborated a cartographic
approach consisting of the identification of suitable
places to perform drone maneuvers for mountain
rescue operations. These places, or drone-ports, have a
relatively small area of operations, with access to
electricity to charge batteries, good phone signal
coverage, and preferably proximity to roads where
ambulances can assist the victim. To elaborate it, we
use the land-use and land-cover products of the
Province of South Tyrol. We used open-access remote
sensing data (Sentinel, digital surface models) and
available public datasets of the Province of South
Tyrol [7]. Then, we overlaid this product with the
areas regulated for flying civil drones in Italy. Although
first responders are allowed to fly drones in
emergencies at any elevation, we suggest keeping to the
civil flight rules, in line with the future U-space
framework, in which any aircraft should maintain a
minimum and maximum elevation to share the volume
of airspace [8]. The suggested approach will overcome
many of these difficulties.
Fig. 1. Corvara in Badia, South Tyrol, Italy.
Fig. 2. Cartographic approach to localize drone-ports to
execute autonomous missions.
Then, to respond to the distress call, we made use of
the GeoResq application, widely used by Italian
mountain rescue teams [9]. GeoResq is based on the
activation of the internal GPS available in most modern
cell phones. Once the distress call is activated, the
system sends the actual position to a central emergency
station. It is also possible to indicate the type of
emergency, such as a cardiac arrest.
With this coordinate, our proposed method
identifies the most suitable drone-port, assuming it is
operative and ready to operate. It is then possible to
create a flight plan whose origin is the parking spot and
whose final point is the coordinate of the victim. The
plan is a sequence of points, each containing a
theoretical model for coordinate, elevation, and speed
with respect to ground (see the sketch below).
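For illustration only, such a plan can be represented as an ordered list of waypoints; the field names and values below are assumptions, not the mission-file format actually used by the system.

# Illustrative waypoint representation for the generated flight plan
# (field names are assumptions; the actual mission-file format may differ).
from dataclasses import dataclass
from typing import List

@dataclass
class Waypoint:
    lat: float          # WGS84 latitude (deg)
    lon: float          # WGS84 longitude (deg)
    agl_m: float        # height above ground level (m), kept within U-space limits
    speed_ms: float     # ground speed (m/s)

def build_plan(droneport: Waypoint, victim: Waypoint, cruise_agl: float = 60.0) -> List[Waypoint]:
    """Very simple two-leg plan: climb at the drone-port, fly to the victim, descend."""
    return [
        droneport,
        Waypoint(droneport.lat, droneport.lon, cruise_agl, 8.0),   # climb
        Waypoint(victim.lat, victim.lon, cruise_agl, 8.0),         # transit leg
        victim,                                                    # descend for AED drop
    ]

# Example around the study area (coordinates of the victim are invented)
plan = build_plan(Waypoint(46.5399, 11.8961, 0.0, 0.0),
                  Waypoint(46.5450, 11.9010, 0.0, 0.0))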
The drone tested in this study is a MAVTech Q4X
equipped with an RGB camera and a disposable
defibrillator delivered by an automatic parachute
system, with an autonomy of 32 min. Its navigation
system includes different accessories, including strobe
lights, RTK GPS, a secondary GPS antenna, and a First
Person View (FPV) camera [10], Fig. 2. In addition, we
used an infrared camera to detect and localize the
victim.
Fig. 2. UAV platform, MAVTech, model Q4X
(courtesy of MAVTech).
4. Preliminary Results
We tested the above-mentioned approach in
summer conditions. Using a simulated distress-call
coordinate, the system elaborated the flight plan while
maintaining the U-space regulations (Fig. 3).
Fig. 3. U-space regulations and UAV flight plan strategy.
We use a telemetry approach based on the sensor
data provided by the UAV's on-board instrumentation,
which consists of an Inertial Measurement Unit (IMU),
a compass, and GPS. These data are collected and
transferred to a central station via a 3G/4G
communication router into a dedicated online
repository. The approach is based on a simplified
Internet of Things scheme able to combine diverse
typologies of information (sensor-based) as well as
actuators (defibrillator provision) [11]. It also includes
raster information provided by the thermal (FLIR Vue
Pro) and RGB cameras.
We fuse the imagery with the simplified IoT
telemetry information (Fig. 4) provided by the UAV to
confirm the position of the victim, using the
approximate height, the coordinates, the attitude of the
drone, and the characteristics of the optics (field of
view) [12].
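A rough illustration of this projection step is sketched below; it assumes a nadir-pointing camera and level flight, ignores lens distortion, and is not the system's actual implementation (all parameter values are placeholders).

import math

def pixel_to_ground_offset(px, py, img_w, img_h, agl_m, hfov_deg, vfov_deg, yaw_deg):
    """Rough nadir-camera projection: pixel offset -> north/east offset in metres.
    Assumes zero pitch/roll; illustration only."""
    # ground footprint of the image at the given height above ground
    ground_w = 2.0 * agl_m * math.tan(math.radians(hfov_deg) / 2.0)
    ground_h = 2.0 * agl_m * math.tan(math.radians(vfov_deg) / 2.0)
    # offset from the image centre in metres (camera frame: x right, y forward)
    dx = (px - img_w / 2.0) / img_w * ground_w
    dy = (img_h / 2.0 - py) / img_h * ground_h
    # rotate by the drone heading to obtain north/east offsets
    yaw = math.radians(yaw_deg)
    north = dy * math.cos(yaw) - dx * math.sin(yaw)
    east = dy * math.sin(yaw) + dx * math.cos(yaw)
    return north, east

# Example: hotspot 100 px right of centre, 60 m AGL, 45 x 35 deg FOV, heading north
print(pixel_to_ground_offset(740, 360, 1280, 720, 60.0, 45.0, 35.0, 0.0))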
Fig. 4. Simplified IoT application with compass, altitude
and ground speed information.
Later, we identify a thermal anomaly (Fig. 5a) and
confirm it on the visible image (Fig. 5b).
Fig. 5. (a) Infrared and (b) visible imagery.
Finally, we determine and confirm the position,
projected from the pixel information, to deliver the AED
to the victim (Fig. 6).
(a)
(b)
Fig. 6. Provision of the AED (a) view from the ground
(victim) and (b) from the drone.
5. Conclusions
We have demonstrated the capacity to improve the
localization of a potential victim in need of a
defibrillator with the support of modern geospatial
tools (GIS) and modern platforms and sensors
(UAV + dedicated payload). The capacity to transfer the
actual information of the drone to central stations
(telemetry) opens opportunities to use such platforms
in emergencies such as earthquakes, eruptions, fires,
and mountain accidents as described here.
Acknowledgements
The research leading to these results has received
funding from the South Tyrol FUSION grant program
under the project “DRONE-AED” funded by Cassa di
Risparmio Foundation and from Eurac Internal Project
“ESLAB@NOI” 2023.
References
[1]. A. Kobaszyńska-Twardowska, J. Łukasiewicz,
P.W. Sielicki, Risk Management Model for Unmanned
Aerial Vehicles during Flight Operations, Materials
(Basel), 15, 7, 2022, p. 2448.
[2]. A. Román, A. Tovar-Sánchez, D. Roque-Atienza,
I.E. Huertas, I. Caballero, E. Fraile-Nuez, G. Navarro,
Unmanned aerial vehicles (UAVs) as a tool for hazard
assessment: The 2021 eruption of Cumbre Vieja
volcano, La Palma Island (Spain), Science of The Total
Environment, Vol. 843, 2022, 157092.
[3]. J. Lemayian, J. Hamamreh, First Responder Drones for
Critical Situation Management, in Proceedings of the
Innovations in Intelligent Systems and Applications
Conference, 2019, pp. 1-6.
[4]. M. J. van Veelen, G. Roveri, A. Voegele, T. Dal
Cappello, M. Masè, M. Falla, I. B. Regli, A. Mejia-
Aguilar, S. Mayrgündter, G. Strapazzon, Drones
reduce the treatment-free interval in search and rescue
operations with telemedical support – A randomized
controlled trial, The American Journal of Emergency
Medicine, Volume 66, 2023, pp. 40-44.
[5]. T. M. Dawdi, N. Abdalla, Y.M. Elkalyoubi, B. Soudan,
Locating victims in hot environments using combined
thermal and optical imaging, Computers and Electrical
Engineering, 85, 2020, 106697.
[6]. G. M. Bianco, R. Giuliano, F. Mazzenga, G. Marrocco,
A. Mejia-Aguilar, LoRa System for Search and
Rescue: Path Loss Models and Procedures in Mountain
Scenarios, IEEE Internet of Things Journal, 8, 3, 2021,
pp. 1985 - 1999.
[7]. Rete civica dell’Alto Adige, GeoCatalogo, 2023.
http://geokatalog.buergernetz.bz.it/
[8]. European Union Aviation Safety Agency (EASA),
Commission Implementing Regulation (EU) 2021/664
of 22 April 2021 on a regulatory framework for the U-
space, 2023.
[9]. Corpo Nazionale Soccorso Alpino e Speleologico,
GeoResq, Google Play, 2023. https://web.georesq.it/
[10]. MAVTec s.r.l., 2023. https://www.mavtech.eu/
[11]. Damaševičius, R., Bacanin, N.; Misra, S., From
Sensors to Safety: Internet of Emergency Services
(IoES) for Emergency Response and Disaster
Management, J. Sens. Actuator Netw., 12, 3, 2023, 41.
[12] Sandino, J., Maire, F., Caccetta, P., Sanderson,
C., Gonzalez, F., Drone-Based Autonomous Motion
Planning System for Outdoor Environments under
Object Detection Uncertainty, Remote Sens., 2021, 13,
4481.
(035)
Zinc Tin Oxide Nanostructures Synthesized by the Microwave
Hydrothermal Method Applied to Gas Sensors
R. A. Silva 1, M. G. Masteghin 2 and M. O. Orlandi 1
1 São Paulo State University, Engineering, Physics and Mathematics Department, Araraquara, Brazil
2 University of Surrey, Advanced Technology Institute, Guildford, United Kingdom
Tel.: +55 1633019892
E-mail: ranilson.angelo@unesp.br
Summary: Semiconductor metal oxide materials with ternary or quaternary structures have attracted attention for the
properties resulting from the structural arrangement of their constituent elements. This work reports the growth mechanism
of zinc tin oxide-based structures synthesized by microwave-assisted hydrothermal methods and their gas sensing performance
for NO2, H2, and CO gases. The materials were characterized by X-ray diffraction (XRD) and scanning and transmission
electron microscopy (SEM and TEM). ZnSn(OH)6, ZnSn(OH)6/ZnSnO3, and Zn2SnO4/SnO2 structures were obtained by
varying the synthesis time (4 h, 12 h, and 24 h, respectively). Gas sensor measurements showed that ZnSnO3 has a lower
detection limit for NO2, with an estimated 12-fold increase in electrical resistance in the presence of 1 ppb NO2. Moreover,
Zn2SnO4/SnO2 exhibited higher selectivity, i.e., a stronger sensor response to NO2 relative to H2 and CO. Hence, the results
demonstrate that ultra-selective and high-performance gas sensor devices can be created through nanostructure growth
engineering.
Keywords: Gas sensor, Semiconductor metal oxide, Nitrogen dioxide, Environmental pollutant, Microwave-
assisted hydrothermal.
1. Introduction
The growing demand for high-performance devices
for detecting gases harmful to humans and the
environment has encouraged research into new
materials with different electrical properties and
unique morphologies. Among the leading gases
monitored nowadays are carbon monoxide (CO),
nitrogen dioxide (NO2), and hydrogen (H2), all coming
mainly from burning fossil fuels. The chemo-resistive
semiconducting metal oxide (SMOx) nanostructures
have been highlighted for gas detection due to their low
cost and scalability. These devices generally present
high sensitivity; however, the selectivity must be
improved for the sensor responses to be reliable.
Tin dioxide (SnO2) and zinc oxide (ZnO) are the
most used gas-sensing materials [1,2]. SnO2
nanostructures in the presence of 100 ppm of NO2 and
CO gases showed sensor responses (Rgas/Rbaseline for
NO2 and Rbaseline/Rgas for CO) of 65 and 1.5,
respectively [3], while “flower”-like ZnO structures
presented Rgas/Rbaseline = 70 and Rbaseline/Rgas = 2
for the same concentrations of the analytes [4].
that SnO2/ZnO heterostructures exhibit promising
properties for gas detection. SnO2/ZnO nanostructures
showed an increase of about 16- and 32-fold in sensor
response for 10 ppm NO2 compared to pure SnO2 and
ZnO [5]. For zinc tin oxide (ZTO) nanostructures such
as ZnSnO3 and Zn2SnO4, an anti-moisture behavior
was observed, i.e., the sensor response was minimally
affected by water vapor [6]. Although ZTO shows
interesting properties so far, few studies are available
for gas sensing due to its challenging synthesis and the
thermodynamic instability of the metastable ZnSnO3
phase. Therefore, this study evaluates the structural
properties and sensing response of ternary compounds
ZnSn(OH)6, ZnSnO3, and Zn2SnO4/SnO2 with unique
morphologies obtained from microwave-assisted
hydrothermal synthesis.
2. Experimental
2.1. Synthesis of the ZnO and ZTO Nanostructures
For the ZnO synthesis, the precursors zinc acetate
dihydrate (Zn(CH3COO)2·2H2O) and sodium hydroxide
(NaOH), in a molar ratio of 1:14, were placed inside a
sealed polytetrafluoroethylene (PTFE) reaction cell.
The synthesis was conducted in a microwave for 20
minutes at 120 °C.
A mixture of ZnO nanostructures and tin (IV)
chloride pentahydrate (SnCl4·5H2O) was used as the
precursor for synthesizing the ZTO nanostructures,
using the procedure previously described, for reaction
times of 4 h, 12 h and 24 h at 170 °C.
2.2. Structural and Morphological
Characterizations
The phase and crystallinity of the synthesized
materials were analyzed by XRD using a Rigaku
RINT2000 diffractometer and by TEM using a Philips
CM200 microscope with a 200 keV beam. The
morphology of the materials was examined in a JEOL
JSM-7500F microscope (FEG-SEM).
2.3. Gas sensing Measurements
Gas sensing measurements have been carried out
using NO2, H2, and CO as analyte gases, with
concentrations ranging from 5 to 100 ppm diluted in
dry synthetic air via mass flow controllers (MKS). The
devices were measured at 150 ˚C, 200 ˚C, and 250 ˚C.
The electrical resistance change was monitored by
applying a voltage of 0.5 V using a source/measure
unit (Agilent 34972A). Sensor responses (signals)
were calculated as Rgas/Rbaseline (for NO2) or
Rbaseline/Rgas (for the reducing gases CO and H2), where
Rgas is the device resistance when exposed to the
analyte gas and Rbaseline corresponds to the steady-state
resistance in the air reference (baseline). Selectivity was
defined as the sensor response in the presence of NO2
relative to the sensor signal when the device was
exposed to the reducing gases, as illustrated in the
sketch below.
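To make the definitions above concrete, the short sketch below computes the response and selectivity figures from illustrative resistance values (the numbers are placeholders, not data from this study).

# Sensor response and selectivity as defined above (placeholder resistance values).
def response(r_baseline: float, r_gas: float, oxidizing: bool) -> float:
    """Rgas/Rbaseline for oxidizing gases (NO2), Rbaseline/Rgas for reducing gases."""
    return r_gas / r_baseline if oxidizing else r_baseline / r_gas

r_air = 1.0e6                                      # baseline resistance in dry air (ohm)
s_no2 = response(r_air, 1.6e9, oxidizing=True)     # resistance rises under NO2
s_h2 = response(r_air, 8.0e5, oxidizing=False)     # resistance drops under H2

selectivity_no2_h2 = s_no2 / s_h2                  # response ratio used as selectivity
print(f"S(NO2) = {s_no2:.0f}, S(H2) = {s_h2:.2f}, selectivity = {selectivity_no2_h2:.0f}")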
3. Results and Discussion
The XRD structural (Fig. 1) and SEM
morphological (Fig. 2) analyses show that after 4 h
only the cubic tin zinc hydroxide phase (ZnSn(OH)6)
with a cuboctahedral shape is obtained, while a
synthesis time of 24 h resulted in a mixture of the zinc
stannate (Zn2SnO4) and tin dioxide (SnO2) phases with
a “silver touch” cacti-like structure. On the other hand,
an intermediate time of 12 h resulted in a tube-like
hollow microstructure obtained from the mixture of
ZnSn(OH)6 and zinc tin oxide (ZnSnO3).
Fig. 1. XRD spectra for samples synthesized for 4 h, 12 h,
24 h, and standard for ZnSn(OH)6, and ZnSnO3.
Fig. 2. SEM images of samples a) 4 h, b) 12 h, and c) 24 h.
Fig. 3a compares the sensor response to reducing
(CO and H2) and oxidizing (NO2) gases, showing the
expected drop in resistance of an n-type semiconductor
in the presence of CO and H2. Selectivity values under
exposure to 100 ppm of analyte gas are shown in
Fig. 3b. All compositions of the ZTO system showed
excellent selectivity for NO2 over H2 and CO, notably
the 24 h sample, in which the sensor response was
~1600 times greater for NO2 than for H2 and
~1500 times greater for NO2 than for CO.
Fig. 3. a) Gas sensing response for the 24h sample device
operated at 200 ˚C. b) Selectivity in the presence of NO2
divided by the sensor signal in the presence of interferent
gases.
4. Conclusions
Hierarchical nano-heterostructures proved to be a
promising approach to improve the required selectivity
of SMOx NO2 sensors. The results showed that the
synthesis process for the ZTO nanostructures allowed
the transition from the ZnSn(OH)6 phase to the
metastable ZnSnO3, which decomposes into
Zn2SnO4/SnO2 heterostructures. Regarding gas
detection properties, an ultra-low NO2 detection limit
was achieved by the ZnSnO3, while the Zn2SnO4/SnO2
presented superior selectivity towards NO2 over the
reducing gases.
Acknowledgments
This work was supported by PROPe – UNESP
public notice 13/2022, CNPq, CAPES, and FAPESP.
References
[1]. S. Das and V. Jayaraman, SnO2: A comprehensive
review on structures and gas sensors, Progress in
Materials Science, Vol. 66, 2014, pp. 112–255.
[2]. Y. Kang, F. Yu, L. Zhang, W. Wang, L. Chen and
Y. Li, Review of ZnO-based nanomaterials in gas
sensors, Solid State Ionics, Vol. 360, 2021, p. 115544.
[3]. P. H. Suman, A. A. Felix, H. L. Tuller, J. A. Varela,
M. O. Orlandi, Comparative gas sensor response of
SnO2, SnO and Sn3O4 nanobelts to NO2 and potential
interferents, Sensors Actuators B: Chemical, Vol. 208,
2015, pp. 122–127.
[4]. M. Chen, Z. Wang, D. Han, F. Gu and G. Guo, High-
sensitivity NO2 gas sensors based on flower-like and
tube-like ZnO nanomaterials, Sensors Actuators B:
Chemical, Vol. 157, 2011, pp. 565–574.
[5]. Z. Zhang, M. Xu, L. Liu, X. Ruan, J. Yan, W. Zhao,
J. Yun, Y. Wang, S. Qin and T. Zhang, Novel
SnO2@ZnO hierarchical nanostructures for highly
sensitive and selective NO2 gas sensing, Sensors
Actuators B: Chemical, Vol. 257, 2018, pp. 714–727.
[6]. L. Du, H. Zhang, M. Zhu and M. Zhang, Construction
of flower-like ZnSnO3/Zn2SnO4 hybrids for enhanced
phenylamine sensing performance, Inorganic
Chemistry Frontiers, Vol. 6, 2019, pp. 2311–231.
(036)
Replication of a DSC Device Using 3D Computational Modelling:
Correction of Heat Flow Diagrams of Selected Geopolymers by Processing
the Experimental Data
V. Kočí
Czech Technical University in Prague, Faculty of Civil Engineering,
Department of Materials Engineering and Chemistry,
Thakurova 7/2077, 166 29 Prague 6, Czech Republic
Tel.: + 420 2 2435 7125
E-mail: vaclav.koci@fsv.cvut.cz
Summary: Calorimetric measurements still struggle with several challenges, among which the treatment of thermal inertia is
one of the most significant, especially in the field of thermal kinetics. Besides neglecting or ignoring its effects, practitioners
usually apply empirical approaches to deal with it. This paper goes beyond that practice, exploiting an advanced method based
on a computational replication of the real experimental device, including all the heat transfer processes involved during the
measuring procedure. An inverse solution of the problem enables a direct identification of the heat source in the sample, which
is the main difference from experimental approaches that rely on a delayed/distorted signal readout outside the sample. The
modelling results revealed that neglecting thermal inertia might lead to a temperature shift of up to 15 °C in the case of the
geopolymers tested.
Keywords: Differential scanning calorimetry, Thermal inertia, Geopolymer, Computational modelling, Systematic errors,
Heat flow evolution, High temperatures.
1. Introduction
Despite the progress that has been made in the
field of calorimetric measurements, there are still
negative factors affecting the measured results.
The application of sophisticated electronic devices is
very helpful and beneficial at first sight.
However, since such a device is treated as a grey box,
the end-user can hardly analyze the measuring
subprocesses to reveal the origin of measuring errors
and subsequently take appropriate actions to eliminate
them. Thermal inertia is one of the most frequently
mentioned factors, whose effect might be further
magnified by the specifics of the device, sample size,
heating rate, sensor positioning and, in particular,
signal processing [1]. End-users usually follow
producers' guidelines and obey recommended setups
and measuring processes that have been empirically
identified to be less affected by systematic
errors [2].
Following up on previous research [3], this
paper goes beyond such practice. A calibrated
computational replication (model) of a selected
differential scanning calorimeter is used to fit the
experimental outputs. Once fitted, the grey-box label
can be “removed” by looking more closely inside the
computational version of the device to analyze the
processes that led to the same outputs as the
experiment itself. In this way, the effect of the sensor
positions, thermal inertia and other sources of error
can be accounted for, enabling correct results to
be extracted.
2. Materials and Methods
2.1. Geopolymers Studied
The geopolymer sample used in this research has
been taken from the previous research [4] and
represents one of the mixtures (labeled as C5 in the
original publication) prepared within the course of
composition optimization. It is based on ceramic
powder (CP) used as the precursor which is activated
using sodium water glass (WG, silicate modulus equal
to 1.6) and NaOH pellets dissolved in water (W).
Standard siliceous sand (S) with the particle size up to
2 mm is used as the filler. The dosage of particular
constituents is summarized in Table 1.
Table 1. Geopolymer composition.
CP (g)    S (g)    WG (g)    NaOH (g)    W (g)
403       1097     175       22          152
2.2. Heat Flow Diagrams
Differential scanning calorimetry has been
exploited to observe the difference in heat evolution
between the reference crucible and the measuring
crucible, in which the geopolymer sample is
accommodated. A LabSys Evo/DSC calorimeter
(Setaram Inc.) was used for this measurement,
exposing 50 µg of the sample to a heating rate of
5 K/min. Using a set of thermocouples combined with a calibrated conversion
function, the heat flow curve can be plotted by
evaluating the difference in heat evolution in the
particular crucibles.
2.3. DSC Model and Data Processing
Regardless of the DSC device type and its data
processing precision level, the common weak point of
all these devices originates from the physical limitations
of thermocouple positioning, as the thermocouples
cannot be inserted directly into the sample. They are
usually attached to the outer surface of the crucible in
which the thermal process takes place. The heat evolved
by the sample must therefore be transferred along a
certain path before it is detected. During this transfer,
some of the heat can be consumed or redistributed, and
the transfer takes some time, which results in signal
delay and distortion. The empirical techniques proposed
to deal with this phenomenon are well known, but their
versatility and quick applicability come at the expense
of accuracy.
Since the mathematical model of the DSC
device/sensor replicates the individual details of the
calorimeter, so that the modelled heat transfer path is
nearly identical to the experimental one (see Fig. 1),
the model represents a powerful tool in which such a
heat function of the reacting sample is sought that
provides the same output (when processed by the
model) as the real experimental device. In other words,
the solution of this inverse problem goes deeper into the
“grey box” by directly identifying the heat source
exactly in the place of its origin. This is the main
difference from the experimental approach, in which the
results rely on the (distorted and delayed) signal readout
from the thermocouples outside the sample. Due to the
page limit, please refer to [4] for more details about the
model principles, construction, calibration and
validation, as well as the exploitation of a selected
advanced search method. A minimal illustration of the
inverse idea is sketched below.
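The following sketch illustrates the inverse idea only: the actual procedure uses the calibrated 3D replica of the calorimeter, whereas here the instrument is replaced by a generic smearing-and-delay kernel, and the parameters of an assumed heat-source function are recovered by least squares.

# Schematic illustration of the inverse idea: find a heat-source function whose
# forward-modelled (smeared, delayed) output matches the recorded DSC signal.
# The real work uses a calibrated 3D model of the calorimeter; here the instrument
# is replaced by a simple convolution kernel purely for illustration.
import numpy as np
from scipy.optimize import least_squares

t = np.linspace(0, 100, 500)                          # time axis (arbitrary units)

def instrument(signal, delay=30, width=15.0):
    """Stand-in forward model: exponential smearing + delay (thermal inertia)."""
    kernel = np.exp(-np.arange(0, 5 * width) / width)
    kernel /= kernel.sum()
    return np.roll(np.convolve(signal, kernel, mode="same"), delay)

def source(params):
    """Parametrized heat source: single Gaussian peak (amplitude, centre, width)."""
    a, c, w = params
    return a * np.exp(-((t - c) / w) ** 2)

true_source = source([1.0, 40.0, 5.0])                # "reality" (unknown in practice)
measured = instrument(true_source) + 0.01 * np.random.default_rng(0).normal(size=t.size)

fit = least_squares(lambda p: instrument(source(p)) - measured, x0=[0.5, 50.0, 10.0])
print("recovered (amplitude, centre, width):", fit.x)  # peak recovered without the delay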
Fig. 1. Computational replication of the DSC sensor
(in the device oriented vertically).
3. Results and Discussion
A brief comparison of the modelling and
experimental outputs (see Fig. 2) confirmed the
expectations: the corrected output is shifted right along
the x-axis by approximately 15 °C. This indicates that
the signal recorded experimentally is linked to
temperature values slightly lower than the actual
temperature of the sample. The shift might be caused
by heat absorption and/or redistribution by the metal
components of the DSC, the crucible walls in
particular. The absolute difference of the detected heat
flows is relatively small, as geopolymers are known
to be stable and resistant at high temperatures. The
evaluation software of the DSC device automatically
applies corrections following given empirical rules, but
the data produced are still not fully cleaned of
systematic errors.
Fig. 2. Heat flow diagrams of the sample studied.
4. Conclusions
This extended abstract presented a computational
approach developed for the correction of systematic
errors typical of calorimetric measurements. A
geopolymer mixture was tested as the studied sample.
The modelling procedure led to a temperature shift
correction of approximately 15 °C. The geopolymer
studied proved to be thermally stable, as no
significant exo/endo reactions were revealed when
exposed up to 10 %. The major heat power peak
accounted for less than 1 W/g.
Acknowledgements
This research has been supported by the Czech
Science Foundation under project No. 22-03474S.
References
[1]. S. Vyazovkin, How much is the accuracy of activation
energy affected by ignoring thermal inertia?,
International Journal of Chemical Kinetics, Vol. 52,
Issue 1, 2020, pp. 23-28.
[2]. V. Kočí, J. Šesták, R. Černý, Thermal inertia and
evaluation of reaction kinetics: A critical review,
Measurement, Vol. 198, 2022, 111354.
[3]. V. Kočí, J. Maděra, A. Trník, R. Černý, Heat transport
and storage processes in differential scanning
calorimeter: Computational analysis and model
validation, International Journal of Heat and Mass
Transfer, Vol. 136, 2019, pp. 355-364.
[4]. V. Kočí, D. Koňáková, V. Pommer, M. Keppert,
E. Vejmelková, R. Černý, Exploiting advantages of
empirical and optimization approaches to design alkali
activated materials in a more efficient way,
Construction and Building Materials, Vol. 292, 2021,
123460.
(038)
Exploring Sustainable Printed Paper Sensors for Analyzing Cure Behavior
and Detecting Cracks in Composites
A. Mahendran, N. Gupta, C. Koren and H. Lammer
Kompetenzzentrum Holz GmbH, WOOD KPLUS, Altenberger Strasse 69, A-4040, Linz, Austria
Tel.: +43 4212 494-8016
E-mail: a.mahendran@wood-kplus.at
Summary: The article focuses on the application of sustainable printed paper sensors for characterizing the cure behavior and
crack detection in fiber-reinforced composites. It highlights the limitations of conventional embedded sensors and introduces
printed paper sensors as an alternative solution. These sensors utilize paper as a flexible and low-cost substrate. The study
employed printed paper sensors with interdigitated electrodes to monitor the curing behavior of resin-impregnated paper. The
feasibility of using printed sensors for crack detection was also investigated. Mechanical tests, including bending and dropping
ball tests, were conducted to evaluate the performance of the printed paper sensors. Resistance change was measured as an
indicator of sensor response under applied loads. The results showed favorable sensing behavior, with the embedded sensor
exhibiting a strong response to applied loads. In conclusion, the study demonstrated that printed paper sensors are suitable for
monitoring the cure behavior of composites and can be used for crack detection under impact loading conditions.
Keywords: Printed paper sensor, Cure monitoring, Crack detection, Sustainable, Composites.
1. Introduction
Fiber-reinforced composites are widely employed
in various structural applications [1]. Ensuring the
safety, reliability, and durability of these composites
necessitates structural health monitoring, which
involves the detection and evaluation of potential
damage, defects, or changes in their condition. Real-
time monitoring of the curing process during
composite manufacturing is also crucial to determine
the time required for complete curing. However,
conventional embedded sensors made from polymeric
substrates are bulky, act as foreign bodies, and can
compromise the mechanical strength of the
composites. The state-of-the-art in micro-crack
detection is electroluminescence imaging, where
conventional electroluminescence was used to inspect
the solar cell cracks [2]. Some authors also proposed
an approach that utilizes a hybrid combination of deep
learning models, including convolutional neural
networks and Bayesian probabilistic analysis, for
robust vision-based crack detection [3].
Printed paper sensors have emerged as a viable
alternative to address these challenges. These sensors
leverage the flexibility and cost-effectiveness of paper
as a substrate, while benefiting from the functionality
provided by printed electronic components [4].
Moreover, printed paper sensors are environmentally
sustainable as they can be decomposed at the end of
their life cycle. Printing techniques like screen
printing, inkjet printing, and flexographic printing
enable the deposition of conductive inks, functional
materials, and sensing elements onto the paper
substrate.
In our recent study, printed paper sensors based on
interdigitated electrodes were utilized to predict the
curing behavior of phenol and melamine formaldehyde
resin-impregnated paper. Additionally, the feasibility
of using printed sensors for crack detection was
investigated. The crack detection system aims to
identify measurable changes that indicate delamination
of the composite structure, typically caused by induced
force or impact. To achieve this, a complex sensor
structure is required, and the "Hilbert Curve," a
rectangular structure with space-filling properties, was
chosen as the most effective and straightforward
design compared to other options [5]. The Hilbert
curve was printed on the paper substrate using silver
and carbon inks for crack detection purposes.
2. Materials and Methods
The interdigitated electrode paper sensor and its
specifications used in this analysis are shown in Fig. 1.
These paper sensors were developed at Wood K Plus,
Austria, and have an electrode width and spacing of
300 µm/300 µm.
The DEA measurements were conducted using a
dielectric analyzer from Novocontrol Technologies
(Germany). A sinusoidal voltage at 10,000 Hz was
applied between the electrodes to generate an electric
field. Electrical properties such as permittivity,
conductivity, specific resistance, and loss factor were
measured as a function of time at that frequency.
The conductivity is calculated from the loss factor and
is directly related to the mobility of the ions and hence
to the cure state of the resin.
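For reference, in dielectric analysis the ionic conductivity is commonly derived from the imaginary part of the permittivity (the dielectric loss); assuming the standard relation is the one applied here,

σ_ion = ω·ε0·ε″ = 2π·f·ε0·ε′·tan δ,  with f = 10,000 Hz,

so that high ion mobility (low cure conversion) appears as a high conductivity, which decreases as the resin network forms.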
To detect cracks, a specific pattern (shown in Fig. 2)
consisting of silver and carbon ink was printed onto a
thin paper substrate. Subsequently, the printed sensors
were embedded within the composites using the
vacuum bag technique. The feasibility of the printed
paper sensors was assessed through mechanical tests,
including bending and dropping-ball tests. The
resistance change corresponding to the applied load
was measured as an indicator of performance (a
minimal interpretation of this readout is sketched
below).
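A minimal interpretation of such a readout might look as follows; the thresholds are illustrative assumptions, not values determined in the study.

# Simple interpretation of the crack-detection readout (thresholds are illustrative):
# a large relative resistance change or an open circuit indicates damage to the trace.
def assess_trace(r0_ohm: float, r_ohm: float,
                 open_circuit_ohm: float = 1e7, damage_ratio: float = 0.10) -> str:
    if r_ohm > open_circuit_ohm:
        return "severe crack (signal lost)"
    dr = (r_ohm - r0_ohm) / r0_ohm          # relative resistance change under load
    return "possible micro-crack" if abs(dr) > damage_ratio else "intact"

print(assess_trace(r0_ohm=120.0, r_ohm=135.0))   # ~12 % increase -> possible micro-crack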
Fig. 1. (a) Sensor design with distances in mm, (b) a silver print of a 300/300 [μm/μm] sensor on paper;
Reprinted with the permission of [4].
Fig. 2. Space-filling Hilbert curve pattern
on the paper substrate.
3. Results and Discussions
The sensing behavior of the developed paper
sensors was analyzed and compared to commercially
available polyimide-based sensors. The experiments
were performed using paper sensors with silver as well
as carbon inks, and with commercial sensors. The
results are shown in Fig. 3. It can be observed from
Fig. 3a that there are some differences in the maximum
conductivities of the different sensors; this difference
is attributed to the different initial capacitances of the
sensors, which may vary due to the wire connections.
To compare the performance of all sensors, the
conductivity results were integrated and normalized, as
shown in Fig. 3b.
It can be observed that all the sensors, including the
commercial sensor and the paper sensors (with silver as
well as carbon ink), indicate the same curing time of the
resin. In the middle of curing the results differ somewhat
because of the different interactions between the resin
and the substrate as well as the ink material.
Nevertheless, the paper sensors served the purpose by
showing the same curing time as the commercial ones.
To evaluate the crack detection sensor's
performance, compressive and tensile loads were
applied. The embedded sensor exhibited a highly
favorable response corresponding to the loads applied
on the composites as shown in Fig. 4.
Following the dropping ball impact test, certain
samples exhibited visible cracks, resulting in a loss of
signal. Other samples only displayed changes in
resistance values. Fig. 5 illustrates how the damage
incurred by the composites led to a loss of
connectivity.
Fig. 3. Dielectric cure monitoring of MF resin-impregnated
paper using different sensors: a) Conductivity vs. time,
b) Conversion factor vs. time.
Fig. 4. Change in resistance with respect to type of applied
load on the printed crack detecting sensor: (a) Tension-
Grey, (b) Compression- Orange.
Fig. 5. Microscopic image of the composite structure
following the drop-ball test, showing the propagation
of cracks within the composite material.
4. Conclusions
The cure monitoring application is well suited for
the printed paper sensors, which exhibit performances
comparable to commercial interdigitated sensors. The
initial study indicates the potential of paper sensors for
crack detection under impact loading conditions.
Notably, the presence of a significant crack resulted in
a loss of signal, while minor cracks caused only
minimal changes in resistance.
Acknowledgements
This work was supported by the Austrian Research
Promotion Agency (FFG) under the project “i³Sense”,
grant number 888361.
References
[1]. Prashanth, S., Subbaya, K.M., Nithin, K. and
Sachhidananda, S. Fiber reinforced composites-a
review, J. Mater. Sci. Eng, Vol. 6, Issue 3, 2017,
pp. 2-6.
[2]. Dhimish, M., Holmes, V., Solar cells micro crack
detection technique using state-of-the-art
electroluminescence imaging, Journal of Science:
Advanced Materials and Devices, Vol. 4, Issue 4,
2019, pp. 499-508.
[3]. Fang, F., Li, L., Gu, Y., Zhu, H., Lim, J. H., A novel
hybrid approach for crack detection, Pattern
Recognition, Vol. 107, 2020, pp. 107474.
[4]. T. Stockinger et al., High porous, ultra-thin paper
sensors – an option for successful sensor integration,
Sensors Actuators A Phys., Vol. 350, 2023, p. 114098.
[5]. Sagan, Hans, Space-filling curves, Springer, New
York, 1994, pp. 9-30.
(041)
Exploration of Phage Display Peptides as Novel Sensing Materials
for Highly Sensitive and Selective Biomimetic Optoelectronic Nose
V. Escobar 1,2, C. Hurot 1, S. Brenet 1, M. El Kazzy 1, N. Scaramozzino 2, R. Mathey 1,
A. Buhot 1 and Y. Hou 1
1
Grenoble Alpes University, CEA, CNRS, IRIG-SyMMES, 17 Rue des Martyrs, 38000 Grenoble, France
2
Grenoble Alpes University, CNRS, LIPhy, 38000 Grenoble, France
Tel.: + 33 4 38 78 94 78
E-mail: vanessa.escobar@cea.fr, yanxia.hou-broutin@cea.fr
Summary: The development of a biomimetic electronic nose that can reliably detect and discriminate between Volatile
Organic Compounds (VOCs) at a level comparable to the biological olfactory system has not yet been achieved and remains a
scientific challenge. The present work aims to bridge the gap between the performance of the Surface Plasmon Resonance
Imaging (SPRI)-based optoelectronic nose and its biological counterpart by integrating novel sensing materials into a
cross-reactive peptide microarray. In particular, novel peptides obtained through phage display were used as selective sensing
materials to test VOCs belonging to the BTEX family (Benzene, Toluene, Ethylbenzene, Xylene). Reliable discrimination of
VOCs with similar chemical structures, including isomers, was achieved even three months after the initial chip preparation.
More than 90 % repeatability was obtained after implementing normalization strategies. We demonstrate that these materials
represent a reliable, biomimetic way to sense and differentiate between structurally similar VOCs.
Keywords: Electronic nose, Volatile organic compounds, Peptide, Phage display, Surface plasmon resonance imaging.
1. Introduction
For decades, biological olfaction has been a source
of inspiration for the development of biomimetic
systems that can detect and recognize odors through
the detection and discrimination of Volatile Organic
Compounds (VOCs) [1]. The remarkable capabilities
of the mammalian sense of smell were elucidated in
1991 [2]. One of the important conclusions drawn is
that the majority of olfactory receptors (ORs) act in a
combinatorial manner, rather than through specific
recognition of odorant molecules. Thus, each aroma is
encoded as a distinct “receptor code”, making it
possible to recognize and discriminate between them.
Because of this, a relatively small number of receptors
located in the epithelium can detect a number of
different odors that is many orders of magnitude larger.
Although the majority of receptors are
“generalists” acting in a combinatorial way, there are
some “specialist” receptors with a higher degree of
selectivity towards certain target molecules [3].
In 2018, our team reported for the first time a
highly sensitive and selective opto-electronic nose
(opto-eN) based on coupling Surface Plasmon
Resonance Imaging (SPRI) and cross-reactive peptide
microarrays [4].
The purpose of the present work is to further
improve the discriminatory power and limit of
detection of our system through the exploration of
novel sensing materials taking inspiration from the
mechanism of action of the human nose. Herein, we
aim to complement the performance of a previously
designed array of cross-reactive peptides through the
addition of peptides with higher selectivity towards
phenyl-containing VOCs belonging to the BTEX
family (Benzene, Toluene, Ethylbenzene, Xylene).
For this purpose, novel peptides were obtained
through phage display, an in vitro technique that exerts
selective pressure on the expression of peptides with
higher affinity to a target through cycles of
high-affinity purification [5] (Fig. 1).
Fig. 1. The five main steps of a phage display experiment.
2. Materials and Methods
Phage Display. For target preparation, gold-coated
glass slides were functionalized with aromatic
hydrocarbons using thiol-gold chemistry. The
Ph.D.™-7 Phage Display Peptide Library (New
England Biolabs, Australia) was used according to the
protocol provided by the kit. Briefly, the phage library
was incubated on non-functionalized gold plates for
a 1 h negative selection. Afterwards, the plates were
discarded and the supernatant containing unbound
phage was incubated on the functionalized gold plates.
After 1 h, the plates were washed 10 times with TBS
buffer containing Tween-20 (10 %), then eluted with
0.2 M glycine buffer. Bacteriophages were sequenced
after four selection rounds. The identified peptides were
synthesized and incorporated onto the sensor microarray.
Sensor Microarray. 19 biomolecules were chosen
as sensing materials, including an internal negative
control (NC) that is known not to be sensitive to VOCs,
ten cross-reactive peptides (P1-P10) with different
physico-chemical properties (neutral, hydrophobic,
hydrophilic, positively and negatively charged) and
eight peptides chosen through phage display
(P11-P18). Peptides were deposited in quadruplicate
onto a gold-coated NBK7 glass prism (Edmund Optics,
U.S.) using a noncontact microspotting robot (Scienion
AG, Germany) and incubated for 18 h to allow the
formation of Self-Assembled Monolayers (SAMs). For
confidentiality reasons, the sequences of these peptides
are not disclosed herein.
Volatile Organic Compounds. Toluene (99.8 %),
m-Xylene (≥ 99 %), p-Xylene (≥ 99 %), o-Xylene
(97 %), 1-butanol (99.9 %), 1-hexanol (≥ 99 %),
cyclohexane (99.5 %) and methylcyclohexane
(≥ 99 %) were purchased from Sigma Aldrich and
handled under a fume hood. Samples were mixed into
mineral oil to promote gradual evaporation.
Surface Plasmon Resonance Imaging. Our SPRI
system is in the Kretschmann configuration, in which
the gold surface is irradiated through the prism,
illuminating the whole peptide microarray, and
variations in reflectivity are captured and measured by
a CCD camera. A fluidic bench composed of a reference
line of dry purified air and a VOC analyte line, each
controlled by a mass-flow controller (El-Flow,
Bronkhorst) and a pressure controller (El-Press,
Bronkhorst), is used for controlled sample injections.
3. Results
Sensor response to different VOCs. Sensorgrams
are obtained by plotting the reflectivity variation
(ΔR%) versus time for VOC injections. At the
equilibrium plateau, replicate responses are averaged
and a distinct equilibrium pattern is obtained for each
VOC. Then, the response is normalized using the
following equation:

ΔR̄i% = ΔRi% ∗ N / Σj ΔRj%,   (1)

where N is the number of analyzed species.
Principal Component Analysis (PCA). Once
normalized, the multidimensional data set is projected
onto a 2D plane using PCA. Fig. 2 shows PCA score
plots demonstrating that the cross-reactive peptides are
not fully able to discriminate between aromatic
ring-containing VOCs (Fig. 2a), contrary to the
peptides obtained by phage display (Fig. 2b), which
achieve better discrimination even while using fewer
sensors. Thus, it is clearly demonstrated that the
introduction of the novel peptides on the chip improves
the performance of our optoelectronic nose (a minimal
sketch of this analysis chain is given after Fig. 2).
(a)
(b)
Fig. 2. PCA score plots for Xylenes and Toluene using
a) cross-reactive peptides (P1-P10) and b) phage display
peptides (P11-P18). Inset: PCA graphs of variables.
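The sketch below outlines the analysis chain referred to above; random numbers stand in for the measured SPRI responses, one plausible normalization (dividing each injection by its mean response over the array) is applied, and scikit-learn provides the PCA.

# Sketch of the discrimination analysis: normalized equilibrium responses of the
# peptide microarray projected onto two principal components. Random numbers
# stand in for measured SPRI responses; not data from this work.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
# rows: replicate injections of 4 VOCs; columns: 8 phage-display peptides (P11-P18)
responses = np.vstack([rng.normal(loc=mu, scale=0.05, size=(5, 8))
                       for mu in (0.2, 0.4, 0.6, 0.8)])

# normalize each injection by its mean response over the array (one plausible scheme)
normalized = responses / responses.mean(axis=1, keepdims=True)

scores = PCA(n_components=2).fit_transform(normalized)
print(scores[:3])   # coordinates of the first three injections in the PCA plane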
4. Conclusions
In the present work, we demonstrate the excellent
discriminatory capabilities of novel materials for the
detection and discrimination of VOCs in the BTEX
family. Thus, for the first time, we demonstrate the
feasibility of coupling highly selective “specialist”
peptides with a cross-reactive array of “generalist”
peptides to further mimic the mechanism of action of
the biological sense of smell and improve the
performance of the optoelectronic nose.
References
[1]. M. El Kazzy, et al., An Overview of Artificial Olfaction
Systems with a Focus on Surface Plasmon Resonance
for the Analysis of Volatile Organic Compounds,
Biosensors, Vol. 11, Issue 8, 2021, 244.
[2]. L. Buck, R. Axel, A novel multigene family may
encode odorant receptors: a molecular basis for odor
recognition, Cell, Vol. 65, Issue 1, 1991, pp. 175-187.
[3]. J. Bohbot, J. Dickens, Selectivity of odorant receptors
in insects, Frontiers in Cellular Neuroscience, Vol. 6,
2012.
[4]. S. Brenet, et al., Highly-Selective Optoelectronic Nose
Based on Surface Plasmon Resonance Imaging for
Sensing Volatile Organic Compounds, Anal. Chem.,
Vol. 90, Issue 16, August 2018, pp. 9879-9887.
[5]. G. P. Smith, V. A. Petrenko, Phage Display, Chem.
Rev., Vol. 97, Issue 2, April 1997, pp. 391-410.
(042)
Calibration of a Hail-Impact Sensor based on Piezoelectric Transducers
F. Blasina, A. Echarri and N. Pérez
Facultad de Ingeniería, Universidad de la República, Julio Herrera y Reissig 565, Montevideo, Uruguay
E-mail: fblasina@fing.edu.uy
Summary: Hail is a climatic phenomenon that poses numerous measurement challenges, hindering the advancement of hail
casting compared to other meteorological phenomena. Reliable hail sensors benefit society because, though sporadic,
hailstorms are highly damaging, endangering both lives and goods that matter for societal well-being. In recent years, automatic
solutions have emerged as viable options, some of which utilize piezoelectric elements. A comprehensive understanding of
these sensors, their potential for improvement, and their limitations is vital for informed technology adoption and enhancement.
This work presents our studies on the design process and calibration of a hail sensor, examining the outreach of an instrument
based on acoustic waves generated by impacts on an exposed structure. Consequently, we developed a disk-shaped hail sensor
equipped with two piezoelectric transducers for determining impact energy. We believe that this work contributes to a better
understanding of the operation of hail sensors based on acoustic impacts.
Keywords: Acoustic, Automatic, Calibration, Design, Hail, Piezoelectric.
1. Introduction
Automatic hail measurement provides the
opportunity to catch up in hail casting capabilities,
which currently lag behind. This work presents the
design of an electroacoustic automatic hail sensor that
measures impact energy and hail frequency and
registers the beginning and end of events. To record the
hail impacts, the sensor uses piezoelectric transducers
integrated into a plate that is deliberately exposed
to the elements.
Hailstorms are spatiotemporally restricted but
intense. The impact energy during hail events causes
considerable losses of material goods [1, 2].
There are reasonable hypotheses for modeling
hailstones: they are taken as homogeneous ice spheres
that reach terminal velocity before impact [3] (a worked
example of the resulting impact energy is sketched
below). The classical hail measurement device, the
hailpad, is a square of foam material covered with an
aluminum sheet, in which hailstones leave dents [4].
Hailpads are single-use and must be collected and
analyzed after a hailstorm, and they do not permit any
temporal analysis. Automatic recording devices have
been developed for a long time, but those using acoustic
signal processing are the state of the art. Acoustoelectric
hail sensors rely on the impact of hailstones against a
mechanical embodiment that is exposed to the storm.
As a consequence of each impact, waves are generated
within the embodiment, which is called an acoustical
cavity. The waves are converted to electrical signals by
means of transducers, for which piezoelectric
diaphragms are an excellent choice: when a strain is
applied to them, they generate a voltage signal.
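For orientation, the short sketch below evaluates the ice-sphere model mentioned above; the drag coefficient and air density are typical assumed values, not parameters from this work.

# Kinetic energy of a hailstone modelled as a homogeneous ice sphere falling at
# terminal velocity (drag coefficient and air density are assumed typical values).
import math

RHO_ICE = 917.0     # kg/m^3
RHO_AIR = 1.1       # kg/m^3 (assumed)
CD = 0.6            # drag coefficient of a sphere (assumed)
G = 9.81            # m/s^2

def impact_energy(diameter_m: float) -> float:
    r = diameter_m / 2.0
    mass = RHO_ICE * 4.0 / 3.0 * math.pi * r ** 3
    # terminal velocity: weight balanced by drag, m*g = 0.5*Cd*rho_air*A*v^2
    v_t = math.sqrt(2.0 * mass * G / (CD * RHO_AIR * math.pi * r ** 2))
    return 0.5 * mass * v_t ** 2            # joules

print(f"2 cm hailstone: ~{impact_energy(0.02):.2f} J")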
2. Sensor Design
We designed a hail sensor prototype focusing on the
characteristics of the acoustical cavity, so that the
energy of the impacts can be estimated regardless of the
point on the plate where the hits occur (Fig. 1). Crucial
considerations include the material, shape, size,
placement of the piezoelectric transducers, and
mounting of the instrument.
Prospective experimentation showed us that simple
shapes make it easier to model the behavior of the
signals. At first, we considered that signals with a
richer frequency spectrum would contain more
information regarding the energy of the impact, so we
experimented with shapes and materials that favored
multiple incidences of the waves on the borders of the
plate before reaching the transducer, such as steel and
aluminum squares and non-regular geometrical shapes.
After careful analysis of the obtained signals, we
concluded that waves with fewer incidences on the
borders are good enough to estimate the energy,
whereas having shorter signals (Fig. 2) is an advantage
when separating the signals from different impacts that
are close in time. In particular, the central symmetry of
a circle helps to decide where to place the piezoelectric
elements. Therefore, the shape of our sensor
embodiment is a 25 cm-diameter disk.
Fig. 1. Left: Hail sensor we designed. Right: Monitoring
interface that uses the information yielded
by the hail sensor.
Some solutions that have recently appeared
commercially, such as the Hailflow HF-4 (ISAW
Products, isaw-products.com/rainflow-rf4/), Hailsens
IOT (HyQuest Solutions, hyquestsolutions.com.au/
products/hardware/meteorology/hailsens-iot-hail-
monitoring-system), and Hail Sensor HDI (Sommer,
sommer.at/en/products/wind-weather/hail-sensor-hdi),
also use disks as resonant cavities. Within the academic
references, more complex shapes such as octagonal
plates [5] and three-dimensional objects [6] were
preferred.
The material of our sensor is acrylic, which, when
the disk is sufficiently thick, has enough weather [7]
and impact resistance to fulfill the requirement of a
lifespan of many years. This material is highly
standardized in composition, affordable, and easy to
work with. The piezoelectric diaphragms can be easily
embedded, yielding a waterproof design. We
constructed a device that is 1.4 cm thick in total, where
the upper layer is 1 cm thick and two thin layers
contain and protect the transducers. The damping of
this material attenuates a typical signal in less than
20 ms, allowing us to discern up to 50 impacts per
second. Metals such as aluminum and steel were
discarded due to their lower damping.
The acrylic plate is fixed with its whole bottom face
resting on a base, with a homogeneous foam that
isolates the mechanical waves produced by the impacts
from other hard volumes. We considered using point
supports, as in a table, but this provokes an undesired
variety of vibration modes depending on the position
of the impact with respect to the stands. Having equal
support over the whole face yields the simplest model
for the waves' behavior.
The principal transducer of the sensor is a single
Murata 7BB-12-9 piezoelectric diaphragm (murata.com/
en-global/products/productdetail?partno=7BB-12-9)
placed at the center of the plate. We use the signals
generated by this transducer to measure the impact
frequency and estimate the impact energy.
Fig. 2. Comparison of signals obtained when using
aluminum and acrylic as materials for the plate. When using
acrylic, the signal is much shorter, which favors
distinguishing impacts that occur close in time.
We noticed the importance of considering the
attenuation of the signals that reach the transducer for
impacts of equal energy but at different distances from
the transducer, which can affect the result by up to
90 %. Therefore, we included an auxiliary transducer,
a piezoelectric ring situated at the edge of the plate,
which is used only to determine the distance of interest
by means of the difference in times of flight between
the transducers and to compensate for the damping.
3. Mathematical Models
The relationship between the energy of the
acquired signals and the impact energy is given by

E_imp = G · E_sig / A(d),   (1)

where E_imp is the quantity of interest, E_sig/A(d)
corresponds to the energy of the signal that the impact
of a stone would generate when hitting the center of the
plate, and d is the distance between the actual point of
impact and the center of the plate. The constant G
converts electric energy into impact energy for the
constructed device, considering its electric circuitry,
and must be calibrated. The function A(d) models the
wave attenuation:

A(d) = α + (1 − α)·e^(−β·d),   (2)

where α and β are parameters that must be calibrated.
For automatically recognizing the distance d, we use
the difference of the times of flight, Δt, of the waves to
the border and to the center of the disk. Modeling the
central piezoelectric as a point element and the
auxiliary piezoelectric as a narrow ring at the edge of
the plate, the relationship is

d = γ − δ·Δt,  with  Δt = t_border − t_center,   (3)

where γ is a parameter that should be approximately
half of the radius of the plate, and δ is a calibrated
parameter, since the speed of propagation of the waves
in the material is nontrivial. The absolute times of
flight from the point of the impact to the border,
t_border, and to the center, t_center, are unknown, but
their difference can be computed by using cross-
correlation or custom signal-processing techniques.
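As a numerical illustration, the sketch below chains the three relations using the calibrated values reported in Table 1; the functional forms follow the reconstruction given above and may differ in detail from the authors' exact formulation, and the signal-energy value in the example is a placeholder.

# Sketch of the estimation chain (1)-(3) using the calibrated values of Table 1.
# The equations follow the reconstruction given above and may differ in detail
# from the authors' exact formulation.
import math

G_CAL, ALPHA, BETA = 5.70, 0.0237, 0.0368   # -, -, 1/mm
GAMMA, DELTA = 44.9, 149.0                  # mm, mm/ms

def distance_mm(dt_ms: float) -> float:
    """Eq. (3): impact distance from the centre, from the time-of-flight difference."""
    return GAMMA - DELTA * dt_ms

def attenuation(d_mm: float) -> float:
    """Eq. (2): attenuation factor, equal to 1 at the plate centre."""
    return ALPHA + (1.0 - ALPHA) * math.exp(-BETA * d_mm)

def impact_energy(e_signal: float, dt_ms: float) -> float:
    """Eq. (1): impact energy from signal energy, corrected for the impact position."""
    return G_CAL * e_signal / attenuation(distance_mm(dt_ms))

# Example: an impact whose wave reaches the edge 0.1 ms after the centre transducer
print(impact_energy(e_signal=0.05, dt_ms=0.1))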
4. Calibration
We performed the calibration by dropping steel
balls that impact with known energy. In this stage, we
considered the bounce energy, using high-speed
camera videos, to accurately estimate the amount of
energy transferred to the plate. The method of dropping
steel balls, usually referred to as Energy Matching, is
in regular use for hail-sensor calibration [8].
To control the impacts, we designed and
constructed a device we named SGran [9]. One
advantage of this device is that it lets us perform up to
seven identical impacts in a row, commanding the
device remotely from a user interface. The device uses
electromagnets to drop the steel balls with zero initial
velocity, achieving excellent impact-point and
impact-energy repeatability, which is crucial for
calibrating (1)-(3).
A detailed paper on the calibration process has been
submitted for review. The values of the calibrated
parameters are given in Table 1, and the obtained
curves are shown in Figs. 3-5.
5. Conclusions
We designed and constructed an acoustoelectric
sensor prototype that automatically estimates the
impact energy of projectiles. The transducers we used
are piezoelectric diaphragms.
Table 1. Values of the calibrated parameters.
Parameter   Value    Unit
G           5.70     dimensionless
α           0.0237   dimensionless
β           0.0368   mm⁻¹
γ           44.9     mm
δ           149      mm/ms
Fig. 3. Calibration curve. Estimated impact energy for a given signal energy, when the impact occurs at the center of the plate. Corresponds to (1) with d = 0.
Fig. 4. Calibration curve. Signal attenuation factor versus distance of the impact point to the center of the plate. Corresponds to (2).
Fig. 5. Calibration curve. Distance of the impact point to the center of the plate, estimated using the difference of the times of flight of the generated wave from the impact point to each of the transducers. Corresponds to (3).
We observed that using a disk for the shape of the
acoustical cavity of the sensor has advantages
regarding simplicity in signal processing.
Since having shorter signals facilitates the
distinction of impacts that occur close in time, we
chose acrylic as the material for the disk.
We present the basics of the calibration process and
the obtained curves, and explain the considerations for
a reliable estimator.
While calibrating, we recorded high-speed camera videos, which showed that the error introduced by not considering the bounce of the projectile is insignificant in most cases.
References
[1]. T. Marshall, S. Morrison, Hail damage to built-up roofing, in Proceedings of the 22nd Conference on Severe Local Storms, 2004, P9.3.
[2]. Y. Yue, L. Zhou, A. Zhu, X. Ye, Vulnerability of cotton subjected to hail damage, PLOS One, Vol. 14, Issue 1, 2019, pp. 1–21.
[3]. R. Schleusener, P. Jennings, An energy method for relative estimates of hail intensity, Bulletin of the American Meteorological Society, Vol. 41, Issue 7, 1960, pp. 372–376.
[4]. A. Long et al., The hailpad: construction and materials,
data reduction, and calibration, Citeseer, 1979.
[5]. M. Löffler-Mang, D. Schön, et al. Characteristics of a
new Automatic Hail Recorder, Atmospheric Research,
Vol. 100, No. 4, 2011, pp. 439–446.
[6]. J. Lane, R. Youngquist, et al. A Hail Size Distribution
Impact Transducer, The Journal of the Acoustical
Society of America, Vol. 119, No. 3, 2006, pp. 47–53.
[7]. L. McKeen, The Effect of UV Light and Weather on
Plastics and Elastomers, 3rd ed. William Andrew,
2019.
[8]. D. Vento, The Hailpad Calibration for Italian Hail
Damage Documentation, Journal of Applied
Meteorology, Vol. 15, 1976, pp. 1018–1022.
[9]. F. Blasina, A. Echarri, N. Pérez, et al., Implementation
and Evaluation of a Hail-impact Simulation Device,
Memoria Investigaciones en Ingeniería, No. 23, 2022,
pp. 135–150.
(045)
Terahertz Sensor System with Dual Mode Operation
Janez Trontelj, Andrej Švigelj, Domen Višnar and Janez Trontelj jr.
Faculty of Electrical Engineering, University of Ljubljana, Trzaska 25, 1000 Ljubljana, Slovenia
E-mail: janez.trontelj1@guest.arnes.si
Summary: New THz applications are rapidly entering our everyday life. THz radiation penetrates harmlessly through many media, for instance textiles, which are not transparent to light, or Styrofoam, which is a perfect thermal insulator and not transparent to visible light but perfectly transparent to the THz spectrum. Most applications rely on measurements and analysis of the THz spectrum.
There are two types of THz measurement set-ups: a time-domain system (TDS) and a frequency-domain system (FDS). The TDS is the more mature measurement approach and typically yields more accurate results, but at the cost of a more expensive and more time-consuming measurement set-up; it is normally performed on optical tables and is not portable. The FDS, on the contrary, is based on solid-state technology, the cost of the measurement equipment is much lower, and it is capable of performing measurements on the spot.
Keywords: THz sensor, THz detection, Compact THz system, Portable THz system.
1. Introduction
We propose a compact THz sensor system that is capable of operating in both time-domain and frequency-domain modes. It consists of an array of single sensors with equal or different properties, providing a separate output for each sensor. This allows grouping the sensor area into patches and detecting the THz spectrum of the bio-response, either from THz radiation reflected from a sample in vivo or from the signal transmitted through a sample in vitro.
The sensor system is compact and is connected to a laptop by USB, which provides the power and simultaneously transfers the signals. It also includes a signal input for synchronization and a square-wave signal output with selectable frequency.
Fig. 1. Photo of compact THz detection system.
The sensor system processes each pixel individually and can therefore be used both in the time domain and in the frequency domain. It can also be used as a THz spectrometer when narrow-band pixels with different central THz frequencies are used in the array. This means that, when illuminated by a THz source with a wide-band spectrum, the different pixels respond differently according to the received spectral content. Such a spectrometer is an enormous advantage over typical TDS systems, whose set-ups can be time consuming, clumsy and inadequate. It works well for food quality control on running industrial conveyor belts, detecting the THz spectral response of unhealthy food.
The detection system has a compact size of 30×30×12 mm, is easy to use, and is simple to program for detecting anomalies in the THz spectra.
2. Hardware
The main objective of the hardware optimization is an affordable cost, so that the system can be used in mass production for different applications such as industrial quality control. Fig. 2 presents a general block diagram of the detection system.
Fig. 2. Block diagram of the detection system.
Sensors are developed and produced at LMFE. They are grouped into an array of four sensors with fixed positions and sensor types. The sensors are of the antenna-coupled microbolometer type.
The developed antennas can be narrow-band, acting as band-pass filters selectively chosen for different applications [1]. We have developed and produced 0.1 THz, 0.3 THz and 0.6 THz narrow-band antennas. In addition, we have developed broadband antennas covering the frequency band from 100 GHz to 1 THz with a reasonably flat response. It turned out that the response of this antenna is acceptable also over a range of several THz. A photo of the antennas is shown in Fig. 3. The energy
absorbed by the antennas is released in the intermediate microbolometer element, which changes its temperature and, accordingly, its resistance [2, 3]. The computer simulation of the microbolometer temperature is shown in Fig. 4.
Fig. 3. Photo of broadband and narrow band antenna.
Fig. 4. Thermal simulation of LMFE microbolometer.
The low-noise amplifier was designed and fabricated in a 0.35 µm analog TSMC process. The layout combines four LNAs, with connections optimized for wiring to the sensors, as shown in Fig. 5.
Fig. 5. Photo of sensor line and integrated LNA.
The amplifier gain can be set to 1000, 500 or 250 by a single control signal. The IC also includes the sensor bias circuit. The bias current can be selected between 150 µA, 200 µA and 250 µA. Fig. 6 shows the noise levels of the analog part of the system. 1/f noise is clearly dominant below 1 kHz. Above 1 kHz the noise level drops below 22 nV/√Hz, as there the thermal noise of the circuit is the only main contributor.
Fig. 6. Measured results of LNA noise including
the sensors.
3. Digital Part
All the signals are processed digitally. The requirements for the analog-to-digital conversion are a 16-bit resolution and a sampling rate of up to 200 ksamples per second. The sampling rate is limited by the USB transfer rate. The upper limit of the THz modulation frequency is defined by the number of sensors: 12.5 kHz for an 8-sensor and 25 kHz for a 4-sensor system.
The integrated microcontroller also detects the trigger input signal and starts the AD conversion of a predefined number of samples. It also generates the signal for the THz source modulation. The frequency range is programmable from 1 Hz up to 10 kHz.
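The stated limits (12.5 kHz for an 8-sensor and 25 kHz for a 4-sensor system) are consistent with sharing the 200 ksamples/s budget across the multiplexed channels and applying the Nyquist criterion; a short check under that assumption:

```python
def max_modulation_hz(total_rate_sps: float, n_sensors: int) -> float:
    """Per-channel Nyquist limit when one ADC running at total_rate_sps is
    shared across n_sensors (assumption: round-robin multiplexing)."""
    return total_rate_sps / n_sensors / 2.0

print(max_modulation_hz(200_000, 8))   # 12500.0 Hz
print(max_modulation_hz(200_000, 4))   # 25000.0 Hz
```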
4. Time Domain Setup
The setup is presented in Fig. 7. It consists of a THz source on the left and the proposed THz sensor system with four THz sensors on the right.
Fig. 7. Time domain setup layout.
The graph in Fig. 8 presents the signals obtained from the setup in the time-domain configuration. Each color represents the output of one THz sensor. The amplitude differences are due to the non-uniform THz beam. The upper graph presents the signals in the time domain and the lower one shows their FFT.
Fig. 8. AM signals from 4 sensors in time-domain
configuration with corresponding FFT.
The time-domain setup allows an instantaneous overview of the measurement scene. It can be used in both transmission and reflection mode.
5. Frequency Domain Setup
In the frequency-domain setup the THz source needs to be frequency modulated. The frequency ramp is linear, with a span of 40 GHz and a sweep time of 50 ms. A half-transparent beam splitter reflects half of the signal directly to the sensors. The other half proceeds to a mirror with 100 % reflection. The frequency-domain setup is shown in Fig. 9.
Fig. 9. Frequency domain setup layout.
This means that the sensor receives a delayed signal caused by the doubled distance between the beam splitter and the mirror. Fig. 10 shows the signals on the sensor.
Fig. 10. Signals present on the sensor in time.
The ramp graph, including the transmitted and received signals, is shown in Fig. 10. The distance D of the mirror can be calculated using Eq. (1):

D = c · |f_t − f_r| / (2 · (B / T)), (1)

where f_t and f_r are the instantaneous transmitted and received frequencies, B is the frequency span of the ramp, T is the sweep time, and c is the speed of light.
Fig. 11 shows the FFT result of the frequency-domain setup. The frequency on the graph is directly proportional to the distance of the metal mirror from the source, although reduced by the distance of the beam reflected from the beam splitter.
Fig. 11. Graph showing FFT of frequency domain
setup result.
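As a numerical illustration of the relation reconstructed in Eq. (1), the following sketch estimates the mirror distance from the dominant beat frequency of one ramp; the ramp parameters are taken from the text (40 GHz span, 50 ms sweep), while the signal handling and names are assumptions.

```python
import numpy as np

C = 3.0e8                 # speed of light, m/s
SPAN = 40e9               # frequency span of the ramp, Hz (40 GHz)
T_SWEEP = 50e-3           # sweep time, s (50 ms)
SLOPE = SPAN / T_SWEEP    # ramp slope, Hz/s

def beat_to_distance(samples, fs):
    """Estimate the path difference from the sampled sensor signal.

    samples: demodulated sensor signal over one frequency ramp
    fs:      sampling rate in Hz
    Returns the distance implied by the dominant beat frequency,
    following D = c * f_beat / (2 * slope).
    """
    windowed = samples * np.hanning(len(samples))
    spectrum = np.abs(np.fft.rfft(windowed))
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / fs)
    f_beat = freqs[np.argmax(spectrum[1:]) + 1]   # skip the DC bin
    return C * f_beat / (2.0 * SLOPE)             # distance in meters
```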
Fig. 12 presents the measured frequency at different positions of the mirror.
Fig. 12. Frequency (Hz) vs. mirror distance D (mm).
6. Conclusion
The proposed sensor system has some unique functions and is intended to be used in many applications:
1. Research into new applications becomes much more user friendly and is opened up to a large number of scientists who have a limited budget for expensive equipment.
2. Fast diagnosis of tissue, both in vitro and in vivo, for different irregularities.
3. Detection of voids and foreign objects in products that are invisible to the naked eye or to visual spectroscopy.
4. Detection of irregularities in vacuum-packaged food products, where no smell detector can be used.
5. Detection of contents in the pockets of garments brought to dry-cleaning facilities; and more.
References
[1]. J. Grade, P. Haydon, D. Van der Weide, Electronics
Terahertz Antennas and Probes for Spectroscopic
Detection and Diagnostics, Journal of Applied Physics
Vol. 125, Issue 15, April 2019, 151602,
[2]. J. Lloyd-Hughes, G. Scalari, A. van Kolck, M. Fischer,
M. Beck, J. Faist, Optics Express, Vol. 17, Issue 20,
2009, pp. 18387-18393.
[3]. K. Hirakawa, Y. Zhang, B. Qiu, T. Niu, R. Kondo,
N. Nagai, K. Kuroyama, Fast and sensitive bolometric
terahertz detection at room temperature through
thermomechanical transduction, in Proceedings of the
44th International Conference on Infrared and
Millimeter and Terahertz Waves (IRMMW-THz’19),
Paris, 1-6 Sept. 2019, pp. 1-3.
(047)
Physiological Assistance by Climate Comfort:
Measurements and Indicators
Bernhard Kurz 1 and Christoph Russ 2
1 Institute for Applied Ergonomics, Siedlerstr. 1, 85716 Unterschleissheim, Germany
2 InsideClimate GmbH, Hilpoltsteinerstr. 1b, 83607 Holzkirchen, Germany
1 Tel.: + 49 89 62489595, Fax: + 49 89 662065
E-mail: be.kurz@ifaerg.de
Summary: With regard to performance and the motivation to perform, in addition to adapted work intensities and social aspects, the environmental conditions (noise, light, climate) in particular must be adapted to human physiological requirements. Here, the ambient climate and the resulting microclimate, which depends on workload, clothing, etc., play the central role, especially in view of increasing global temperatures. These facts underline the importance of the skin microclimate as a comfort or discomfort indicator, as well as its use for preventive diagnostics of emerging critical health conditions such as dehydration or hyperthermia. The measurement of microclimate factors and the determination of suitable comfort indicators under real wearing or working conditions represent a particular challenge. For complete physiological assistance with the aim of minimizing the basic physical load (reactive power), heart rate, core body temperature and skin resistance, for example, must also be recorded, preferably by means of textile or smart sensors. A disadvantage of climate measurements with test persons is the inter- and intraindividual scattering, which can only be compensated by thermophysiological simulation methods based on a reproducible and standardized heat and humidity source. In this way, the degree of discomfort as well as unhealthy working conditions can be evaluated objectively.
Keywords: climate comfort, physiological assistance, comfort assessment, microclimate measurement
1. Introduction
The requirements placed on modern work, protective and sports clothing, and even on seating and lying systems, have changed fundamentally in recent decades. Today's clothing systems are expected to be multifunctional: they must not only fulfill their primary purpose, for example protection against chemical, mechanical or thermal influences, but also meet skin-sensory or haptic demands, hygiene and cleaning requirements, and biomechanical requirements regarding a best fit (see Fig. 1).
Fig. 1. Ergonomic requirements for clothing (green areas with comfort relevance).
2. Climate Comfort and Discomfort
In addition to the requirement-related functionalities, comfort aspects have to be considered, which can be roughly divided into the two areas of biomechanical and climatic comfort. Both comfort sectors show strong interactions and mutual influences with corresponding physiological consequences. For example, a poor shoe fit will produce skin irritation and blisters and, in conjunction
with a warm and damp shoe climate, these effects will even be accelerated, because the mechanical properties of the skin are adversely affected (maceration). In addition, it is known that in the first phases of use of clothing components or of sitting and lying systems, the biomechanical conditions initially determine the wearing comfort or, rather, the discomfort. After some time, however, the climatic factors arise and can sometimes even take on the dominant role. In view of the demand that comfort is ultimately determined by the absence of unpleasant sensations [3, 7], it is also clear that regional discomforts have a decisive influence on the global situation and will directly influence physical and mental performance as well as the motivation to perform.
In combination with ergonomic and skin-sensory
aspects, such as sweat-related adhesive effects or wet
spots, climate comfort plays a central role, since the
global thermophysiological requirements are only
fulfilled by a balanced heat management [2, 4, 9]. This
includes a targeted design of the heat transfer, the
breathability and the moisture transportation of the
textile layers in order to achieve a pleasant local
microclimate (see Fig. 2).
Fig. 2. Heat and humidity profile of a test person
on an office chair.
Given the partly dramatic changes of work situations due to increasing temperatures (global temperature change) and their physiological consequences, the importance of climate comfort will increase further in the future.
3. Physiological Assistance
In the construction, design and assessment of the climaphysiological functions of clothing or body support systems, and incidentally also of actuators and control elements, a multi-stage analysis process is required, i.e.:
- determination of (bio-)physical parameters for the evaluation of thermoregulatory properties (balanced heat management);
- quantification of the comfort-determining microclimate as a (human-)physiological criterion by a standardized testing methodology;
- use of validated prediction models for comfort prognosis;
- selected supplementary validation by tests with test persons.
After numerous investigations by a wide variety of industrial and public institutions [1, 6, 8], the skin microclimate is the decisive multi-dimensional indicator for assessing the thermoregulatory function and, in particular, the climatic and ergonomic wearing comfort. This is confirmed in numerous studies with shoes, gloves, clothing or seating and lying systems [5, 10] by the distinct physiological sensitivity for temperature and humidity perception, which correlates highly with the measured variables temperature and absolute humidity. From this, the approved limits for thermal neutrality of 35 °C and 25 g/kg can be derived, characterized by a balanced heat and sweat control with periodic thermoregulative activities (see Fig. 3). In contrast, increased heat dissipation as well as an insufficient cooling effect are recognizable by additional physiological reactions with increased energy expenditure for heat production, sweating and vasodilation/constriction. This “reactive” power, which is useless for the work process, must be minimized in the sense of physiological assistance in order to counteract
order to counteract
the kidney function impairment due to increased
sweating and the resulting loss of electrolytes
with subsequent interference of muscle functions,
and
the cardiovascular stress due to increased pump
rate.
Fig. 3. Thermoregulative phases.
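A minimal sketch of how a single microclimate reading could be screened against the thermal-neutrality limits quoted above (35 °C, 25 g/kg); the thresholds come from the text, while the classification logic is purely illustrative.

```python
T_NEUTRAL_MAX_C = 35.0      # microclimate temperature limit from the text
AH_NEUTRAL_MAX_GKG = 25.0   # absolute humidity limit in g/kg from the text

def classify_microclimate(temp_c: float, abs_humidity_gkg: float) -> str:
    """Crude screening of one sensor reading against the thermal-neutrality
    limits; a real assessment uses the spatial distribution measured by the
    SweatLog sensor array."""
    if temp_c <= T_NEUTRAL_MAX_C and abs_humidity_gkg <= AH_NEUTRAL_MAX_GKG:
        return "thermally neutral"
    if temp_c > T_NEUTRAL_MAX_C and abs_humidity_gkg > AH_NEUTRAL_MAX_GKG:
        return "discomfort (heat and moisture build-up)"
    return "discomfort (temperature or humidity above limit)"

print(classify_microclimate(33.8, 18.2))   # thermally neutral
print(classify_microclimate(36.4, 27.9))   # discomfort (heat and moisture build-up)
```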
Finally, these facts underline the importance of
skin microclimate as a comfort or discomfort indicator
as well as its preventive diagnostics for emerging
critical health conditions such as dehydration or
hyperthermia, especially with the help of additional physiological parameters such as core temperature, skin resistance and heart rate, acquired e.g. by means of smart textiles.
3.1. Microclimate Measurement
In addition to a suitable, miniaturized and nearly
non-retroactive sensor technology, a key aspect of
microclimate measurement close to the skin is the
detection of its distribution by means of sensor arrays.
Automated cluster analyses are used to determine
area-specific parameters without being dependent on a
single sensor position or sensor orientation. The
SweatLog measurement system offers sensor mats
(see Fig. 4) for use on surfaces such as seats or
mattresses, as well as sensor grids for clothing
analyses, for example with shoes, jackets or headwear.
Fig. 4. SweatLog sensor array as sensor mat.
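The area-specific evaluation by automated cluster analysis can be illustrated with a generic sketch that groups per-sensor temperature and humidity readings into climate zones using k-means; this is only an illustration of the idea (array size and values are invented), not the actual SweatLog processing.

```python
import numpy as np
from sklearn.cluster import KMeans

# Illustrative readings from a 4 x 8 sensor mat:
# column 0 = temperature (deg C), column 1 = absolute humidity (g/kg)
rng = np.random.default_rng(0)
readings = np.column_stack([
    rng.normal(33.0, 1.5, 32),   # temperatures
    rng.normal(15.0, 5.0, 32),   # absolute humidities
])

# Group the sensor positions into climate zones, independent of any single
# sensor position or orientation, then report per-zone mean values
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(readings)
for zone in range(3):
    mean_t, mean_ah = readings[labels == zone].mean(axis=0)
    print(f"zone {zone}: T = {mean_t:.1f} C, AH = {mean_ah:.1f} g/kg")
```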
Fig. 5 shows the results of a proband test with two different safety vests, one (blue bars) with PCM (phase change material), the other without (red bars). The tests were conducted at various room temperatures (25 °C, 30 °C and 35 °C) on a bike ergometer adjusted to an 80 W physical load. The results confirm the cooling effect of the PCM material and its physiological consequence for the resulting heart rate, which rises above the lactate threshold (resting pulse plus 40) only when wearing the vest without the cooling technology.
Fig. 5. Safety vests: microclimate (temperature T, absolute humidity (AH) and relative humidity (RH)) and heart rate (HR) from a proband test at different ambient temperatures (25 °C, 30 °C, 35 °C).
Thus, the assessment of the thermoregulatory situation can be derived from the obvious relationships between:
- objective physiological reactions (dehydration, rise in core temperature, heat exhaustion);
- subjective comfort perceptions (hot spots, wet spots, clothing stickiness);
- and the correlating microclimate parameters temperature and absolute humidity.
3.2. Standardized Heat and Sweat Source
A disadvantage of measurements with test persons
is the not inconsiderable inter- and intraindividual
scattering, which requires many test persons for
reproducible and statistically reliable statements. With
the help of thermophysiological simulation methods,
which are used as a reproducible and standardized heat
and humidity source [11], the assessment of the degree
of discomfort can be achieved also from the
measurement of temperature and absolute humidity of
the simulated microclimate.
To replicate human physiological evaporation as closely as possible and to combine heat and humidity emissions, the SWEATOR technology has been developed [4]. This technology is based on a water-filled, heat-controlled hollow body with a special water-vapor-permeable membrane coating. The test specimens can be manufactured in different shapes (see Fig. 6), with different permeable membranes and with or without different surface perforations. The heating and water circulation are controlled by a touch-screen control unit. In the boundary layer between the SWEATOR surface and the material probe, the microclimate distribution is measured with a SweatLog sensor array (see Section 3.1). The test trials take place under climatically defined room conditions, including additional convection if necessary. The SWEATOR technology thus enables realistic test conditions and non-destructive testing of ready-made products.
Fig. 6. SWEATOR torso with test jacket, SWEATOR foot
and SWEATOR head with safety helmet.
Fig. 7 shows the results of simulation tests with the SWEATOR head in various protective caps. The differences in the textile layer used and in the perforation of the protective cap are assigned as follows:
- REF: cotton/jersey textile, no impact protection;
- Blue: identical to REF but with a perforated plastic (ABS) shield;
- Black: identical to REF but with a non-perforated ABS shield.
Fig. 7. Bump caps: microclimate (temperature T, absolute humidity (AH), relative humidity (RH)) from the SWEATOR test at 21 °C ambient temperature.
The effect of the impact shield (ABS plastic) is clearly recognizable for Blue as well as Black, with higher temperatures and higher humidities compared to the reference cap (REF). The perforation of the ABS shield (Blue) results in a more favorable microclimate than with the Black cap, but still in the discomfort range, because of a high microclimate temperature in the case of Black and increased absolute humidity in the case of Blue and Black. Compared to any proband test, such simulation methods are a cost-efficient, highly reproducible and meaningful alternative.
4. Conclusions
Climatic comfort has physiological relevance, influences performance and the motivation to perform, and has to be differentiated into global and regional comfort as well as discomfort perceptions. In order to achieve a balanced heat and moisture management, the thermoregulative reactive power must be minimized or avoided, because of the associated cardiovascular stress or kidney function impairments.
Comfort perception, especially in clothing components or on seating/lying surfaces, can be quantified at a high correlation level by measuring the temperature and absolute humidity in the skin microclimate, recorded as a climate distribution by several sensors or sensor arrays (SweatLog system).
To compensate for the inter- and intraindividual influences on the resulting microclimate, simulation processes like the SWEATOR technology are preferable, but they must ensure a thermophysiological adjustment, high reproducibility and, of course, easy handling.
Selected research projects in the field of work clothing components (bump helmets, protection vests, etc.) demonstrate the applicability of the required climate measurement and climate simulation technology. The results confirm the informative value of microclimate indicators for comfort or discomfort perception and, maybe even more importantly, for critical physiological situations.
References
[1]. Y. Epstein, D. S. Moran, Thermal Comfort and Heat
Stress Indices, Industrial Health, 44, 2006, pp. 388-398.
[2]. R. F. Goldman, B. Kampmann, Handbook on Clothing
2nd ed., in Proceedings of the International Conference
on Environmental Ergonomics (ICEE), 2007,
pp. 2/1-2/19.
[3]. T. H. E. Hertzberg, Seat Comfort, WADC Technical
Report, 1958, pp. 30-56.
[4]. B. Kurz, Ch. Russ, Climate Comfort and Product
Testing, Technical Textiles, 4/5, 2020, pp. 172-174.
[5]. B. Kurz, S. Langenmeir, C. Zimmermann,
W. Uedelhoven, M. Rottenfusser. Klimamanagement
im Schuh, Orthopädieschuhtechnik, 11, 2012,
pp. 42-51.
[6]. A. Psikuta, L. C. Wang, R. M. J. Rossi, Prediction of
the Physiological Response of Humans Wearing
Protective Clothing Using a Thermophysiological
Human Simulator, J. Occup. Environ. Hyg., 10, 4,
2013, pp. 222-232.
[7]. A. Ulherr, K. Bengler, Bewertung von Sitzen – Eine
kritische Betrachtung von Komfort und Diskomfort
Modellen, Z. Arb. Wiss., 72, 2019, pp. 104-110.
[8]. K. H. Umbach, Die physiologische Funktion der
Bekleidung, P. Knecht (ed.), Funktionstextilien, High-
Tech-Produkte bei Bekleidung und Heimtextilien,
2003, pp. 43-56.
[9]. L. Wang (ed.). Performance Testing of Textiles,
Woodhead Publishing of Elsevier, Cambridge, 2016.
[10]. C. Zimmermann, W. Udelhoven, B. Kurz, K. -J. Glitz.
Thermal comfort range of a military cold protection
glove: database by thermophysiological simulation,
Eur J Appl Physiol, 104, 2008, pp. 229-236.
[11]. DIN EN ISO 11092:2014-12. Textilien -
Physiologische Wirkungen – Messung des Wärme-
und Wasserdampfdurchgangswiderstands unter
stationären Bedingungen (sweating guarded hotplate
test), 2014.
(048)
Internet of Things-based Geo-awareness System for Civilian Drones
S. Kunze
Deggendorf Institute of Technology, Institute for Applied Informatics,
Grafenauer Str. 22, 94078 Freyung, Germany
Tel.: +49 8551 91764-33, fax: +49 8551 91764-69
E-mail: stefan.kunze@th-deg.de
Summary: With the increasing adoption of civilian drones, technical solutions for managing the traffic in the lower airspace
are becoming increasingly important. Pure transponder solutions which only allow monitoring of the movement will not be
sufficient for safe drone operation in the future. Instead, the situation in the lower airspace must be continuously analyzed,
based on transponder data from the drones as well as information from various other sensors and data sources. Whenever
potentially dangerous situations are identified, a means to interact with the affected drones (either by updating the autopilot or
warning and guiding the human) is also essential. In the SIMULU project, such a system is prototypically implemented with
a strong focus on the scalability and modularity of the approach. The implementation is presented in this paper.
Keywords: Unmanned aerial systems, Drones, U-Space, Internet of Things.
1. Introduction
In recent years, numerous new applications for
civilian drones have emerged. However, with the
increasing numbers of unmanned aerial systems
(UAS), the need for regulating the lower airspace also
has become a pressing issue. In the European Union
(EU) the term U-Space is used for UAS traffic
management (UTM). Transponders will be an
important part of any future UTM implementation. But
for reliable and safe drone operation a way to interact
with the UAV (unmanned aerial vehicle) or its pilot is
needed. In the SIMULU project, a prototypical
geo-awareness system (GAS) for civilian drones is
implemented. It features a bi-directional
communication link between the UTM system and the
UAS. This allows interaction with the drone’s pilot or
autopilot in case of dangerous situations. In order to
identify potentially dangerous situations, not only the
transponder messages from the connected drones but
also information from various other sensors and
information sources are taken into account. For the
prototype, Internet of Things (IoT) techniques and
protocols are used. In this paper, the implementation of
SIMULU’s geo-awareness system is presented.
2. Related Work
A central aspect of UTM systems is the transponder
which allows the drone to transmit its position (and
other telemetry data) to the UTM. In manned aviation,
this is done using the Automatic Dependent
Surveillance – Broadcast (ADS-B) standard. The
feasibility of ADS-B with reduced transmit power for
UAV applications has been shown by Duffy and Glaab
[1]. However, the ADS-B frequency band is becoming
increasingly congested [2]. Hence, it doesn’t seem to
be a suitable choice for widespread use in UTM
systems. The authors of [3] propose an ADS-B-like
communication using various communication
channels like 4G, Zigbee, or LoRa. They further
present a prototypical UTM system using the
ADS-B-like transponders [4]. This system also
incorporates a UTM-based detect and avoid
mechanism. It recognizes when two drones are
approaching one another in the same airspace. “Traffic
alerts to pilots via personal mobile phones will be sent
by voice calls from UTM controller for avoidance” [4].
The Single European Sky ATM Research
Programme (SESAR) demonstration project
EuroDRONE implements some key UTM features of
Europe’s U-space programme. It did include a
hardware box that serves as a transponder and for flight
clearance. However, a means for interacting with the
drone or its pilot during the flight was not part of the
project [5].
The Commission Implementing Regulation EU
2021/664 which is the regulatory framework for the
U-space, is currently being passed into the national
laws of the member states. It is stated that a U-space
service provider must be able to alert the UAS operator
and update or withdraw a flight authorization if it is at
risk. Examples of this are conflicts with manned air
traffic or the detection of non-cooperative drones [6].
While cooperative drones regularly transmit their
position to the UTM via their transponders,
uncooperative drones won’t do so. However, the
existence of these uncooperative drones, which don’t adhere to the regulations, should also be accounted for when designing UTM applications, as they can
severely impact safe drone operation. This threat has
created a demand for civilian drone detection systems.
A wide range of methods (acoustical, optical and
infrared-based, radio frequency-based, and radar) has
been researched [7]. Some detection systems for
civilian drones are already commercially available.
For the design of the proposed system, privacy is
also important. All future U-Space implementations must also adhere to the regulations of the EU’s General
Data Protection Regulation (GDPR). A UTM system
handles sensitive and personal data. For example, the
drone owners and pilots must be registered in the
system. Anonymization of the data can be an important
step to assure sufficiently high levels of privacy and
security. An overview of various levels of
anonymization based on [8] is shown in Table 1.

Table 1. Levels of data anonymization, in accordance with [8].
Identifiable data: Contains obvious identifiers of individuals.
Reversibly pseudonymized: Obvious identifiers are removed and replaced with a code; reidentification is possible with a key.
Pseudo-anonymized: Directly identifiable information is removed, and no key exists to map the records back to the respective individuals. However, linkage to other available data sets could enable reidentification.
Irreversibly anonymized data: Reidentification of individuals is no longer possible.
Anonymous data: Data has been collected on an anonymous basis or has been aggregated, and reidentification of individuals is not possible.
3. Implementation
The goal of the SIMULU project is to collect
relevant data from all connected drones (in the form of
regular transponder messages), various other sensors,
and information sources. All this data is fused to
monitor the air traffic in the lower air space and
identify potentially dangerous situations. The system
shall be able to interact with the UAS to restore safe
flying conditions. The implementation of the
geo-awareness system is based on the previously
published concept [9]. The project focuses on a
scalable solution based on open interfaces that can
either run in the cloud or on-premises. The backend
communication is mostly based on MQTT. The system
architecture is shown in Fig. 1 and consists of the
following components:
Fig. 1. System architecture.
GAS Central System: This component handles
the incoming transponder messages from all connected
drones and retrieves other relevant data from the
connected sensors and services. By fusing the data
from the different sources, a clear picture of the current
situation is created. Based on the fused data, potential
dangers are identified. Such potentially dangerous
situations may include but are not limited to UAVs on
a collision course, UAVs in the flight path of a
pre-planned and approved mission of another drone, or
UAVs in a no-fly zone. These no-fly zones can either
be static (e.g., the area around airports) or dynamic.
The latter ones can be set up to temporarily close
specific parts of the lower air space for civilian drone
traffic. This allows the UTM service to account for
events like mass gatherings (e.g., demonstrations,
concerts, etc.) or police and rescue service operations.
For drones guided by autopilot, the mission may be
updated directly. This way the drone will
automatically change its course, speed, or altitude to
evade the danger. In case the UAV is controlled by a
human pilot, warnings and instructions are transmitted
to the user interface (UI) of the pilot. The prototype in
SIMULU is implemented using Node-RED and
provides a dashboard view for the UTM controller.
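One of the checks described above, a transponder position falling inside an active dynamic no-fly zone, can be sketched as follows; the zone definition, coordinates and field names are illustrative, not the project's data model.

```python
from shapely.geometry import Point, Polygon

# Illustrative dynamic no-fly zone (lon/lat polygon) with an altitude ceiling
no_fly_zone = {
    "polygon": Polygon([(13.54, 48.80), (13.56, 48.80),
                        (13.56, 48.82), (13.54, 48.82)]),
    "max_alt_m": 0.0,          # 0 m ceiling = completely closed airspace
    "active": True,
}

def violates_zone(lon, lat, alt_m, zone=no_fly_zone):
    """True if a transponder position lies inside an active no-fly zone
    above the allowed altitude; one of the checks the central system could
    run on every incoming position report."""
    if not zone["active"]:
        return False
    inside = zone["polygon"].contains(Point(lon, lat))
    return inside and alt_m > zone["max_alt_m"]

# Example position report
print(violates_zone(13.55, 48.81, 95.0))   # True -> trigger warning / replanning
```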
MQTT Broker: This is the central component of
MQTT communication, as the clients don’t
communicate with each other directly. Instead,
information is published to the broker on a specific
topic. All other clients that have subscribed to this
topic will receive a copy of the message from
the broker.
Depending on the Quality of service (QoS) setting,
the level of delivery guarantee for messages can be set
[10]: For QoS 0 a best-effort mechanism without any
acknowledgment is used. The sender will send each
message only once. Thus, message loss is possible. If
QoS 1 is selected, the recipient will reply with a
PUBACK message. If the sender doesn’t receive this
PUBACK, it will retransmit the message. With QoS 2
it is assured that the message is received exactly once.
For this, a four-part handshake is used. While QoS 0
provides the fastest way of communication, messages
might get lost. The GAS uses this setting for the
regular transponder messages of the drone. The system
can cope with losing some transponder updates. On the
other hand, this keeps the overhead for most of the
communication very small. A higher QoS level
increases the overhead but assures the message
delivery. For this reason, the GAS uses these settings
for transmitting important data, like mission updates or
warnings for pilots.
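A minimal sketch of this QoS policy with the paho-mqtt client (QoS 0 for routine transponder messages, QoS 2 for safety-critical downlink data); the broker address, topic names and payload fields are illustrative assumptions, not the project's actual configuration.

```python
import json
import paho.mqtt.client as mqtt

client = mqtt.Client(client_id="gas-central-demo")   # paho-mqtt 1.x style constructor
client.connect("broker.example.local", 1883)         # illustrative broker address
client.loop_start()

# Routine transponder update: QoS 0, occasional loss is acceptable
transponder_msg = {"drone_id": "UAV-0001", "lat": 48.81, "lon": 13.55,
                   "alt_m": 95.0, "ts": 1694959200}
client.publish("simulu/transponder/UAV-0001",
               json.dumps(transponder_msg), qos=0)

# Safety-critical mission update: QoS 2 assures exactly-once delivery
mission_update = {"drone_id": "UAV-0001",
                  "waypoints": [[48.812, 13.551, 100.0], [48.815, 13.554, 100.0]]}
client.publish("simulu/mission_update/UAV-0001",
               json.dumps(mission_update), qos=2)
```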
UAV adapter: This device is the part of the GAS
that interfaces the UAV. It consists of a radio module
and an embedded PC which is connected to the drone’s
flight controller. Through this connection, it has access
to the sensor data (e.g., GPS position, etc.) of the
drone, as well as its autopilot (if available). From this
information, the UAS adapter creates transponder
messages (based on the MAVLink protocol), which
are regularly transmitted to the central system. The
update interval of the transponder messages can be
configured in the software.
The prototype of the UAV adapter is implemented
for a quadcopter using a Pixhawk Cube Orange flight
controller. In addition to the existing setup required for
operating the drone, the components of the UAV
adapter are integrated into the UAS as shown in Fig..
The second telemetry port of the Pixhawk is connected
to a Raspberry Pi embedded PC. Through this
interface, the same MAVLink messages that are sent
to the ground control station (GCS) are also available
on the Raspberry Pi. From this information, the
transponder messages are created and then transmitted
to the central system of the GAS. For this radio
transmission, RFD868 radio modules are used. These
modules operate in the license-free 868 MHz ISM
frequency band. This radio link provides a
bidirectional channel between the UAV and the central
system. It is used for all GAS-related communication,
like transponder messages and flight plans in the
uplink (UAV to UTM), but also for warnings and
mission updates in the downlink (UTM to UAV). The
prototype of the geo-awareness system uses the same
radio modules that are used for the telemetry link
between the drone and GCS. The power supply for the
Raspberry Pi and the additional radio module is
provided by the drone’s battery.
Fig. 2. UAV Adapter for drone with Pixhawk.
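A sketch of how the Raspberry Pi side of the UAV adapter could turn MAVLink telemetry into transponder messages; the serial device, baud rate, message fields and drone ID are assumptions for illustration, not the project's implementation.

```python
import json
import time

from pymavlink import mavutil

# Connect to the Pixhawk's second telemetry port
# (device path and baud rate are assumptions for illustration)
mav = mavutil.mavlink_connection("/dev/serial0", baud=57600)
mav.wait_heartbeat()

def build_transponder_message(drone_id="UAV-0001"):
    """Assemble one transponder message from the latest MAVLink position report."""
    msg = mav.recv_match(type="GLOBAL_POSITION_INT", blocking=True, timeout=2)
    if msg is None:
        return None
    return json.dumps({
        "drone_id": drone_id,
        "lat": msg.lat / 1e7,                  # MAVLink sends degrees * 1e7
        "lon": msg.lon / 1e7,
        "alt_m": msg.relative_alt / 1000.0,    # mm above home -> m
        "ts": int(time.time()),
    })

while True:
    payload = build_transponder_message()
    if payload is not None:
        print(payload)      # in the adapter this would go to the RFD868 radio link
    time.sleep(1.0)         # 1 s update interval, as used in the evaluation flights
```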
Alternatively, a 4G/5G modem could be used for
communication utilizing the public cellular network.
In this case, MQTT could also be used for the
transponder messages. The device is powered by the
drone’s battery. The UAS adapter can also be
implemented as a pure hook-on device, with its own
power supply and sensors. In this case, no direct
interaction with the drone is possible. Instead, only
messages to the pilot or GCS may be sent. However,
this type of device can be retrofitted to any UAV.
User Interface: The UI displays relevant
information for the pilot. It shows the position and
status (telemetry and other sensor data) of the pilot’s
own drone. The map also shows the position of other
nearby UAVs, (dynamic) no-fly zones, and the flight
paths of upcoming registered flights. The prototype in
SIMULU is implemented in Node-RED. A screenshot of the UI is shown in Fig. 3. The information can either
be relayed through a radio link from the central system
via the UAV adapter, or the UI can subscribe directly
to the relevant MQTT topics if it has an Internet
connection.
Fig. 3. Screenshot of user interface.
The UI can also provide an audio output to improve
the usability. This way the pilot can concentrate on the
drone in the air, rather than permanently checking
the display.
External sensors: For the safe operation in the
lower airspace not only the transponder messages from
the connected cooperative drones are relevant. The
threat posed by non-cooperative drones (i.e., drones
that don’t comply with the regulations) must also be
considered. In the SIMULU system, a radar-based
solution was used as an exemplary detection system.
Information on detected signals (non-cooperative
drones, but also bird swarms) is published to the
MQTT broker. Another example of an external type of
sensor that can provide valuable information is a
weather station, as especially the wind speed is an
important factor for safe drone operation. Using the
modular MQTT-based IoT approach, the system can
easily be extended with further sensors.
Services: The SIMULU system also features
several services that can provide further important
information for the situation analysis. Several
prototypical services are implemented in the project.
They can either be interfaced via MQTT or an
OpenAPI interface.
The drone register provides detailed information on
the UAV itself, like type, weight, and payload
capacity. It also contains information on the owner of
the UAS. Finally, each drone gets a unique ID which
is used to identify it in the communication with the
UTM. Since this is partially sensitive data, there are
two different APIs for accessing the information. The
uav-model-api gives access to generic data on the
drone type, while the uav-object-api gives access to
sensitive data.
The pilot register contains information on the user
of the drone. It also allows the pilots to upload
certifications and permits. This data is highly sensitive
and is not accessed by the geo-awareness system.
However, it is required for other aspects of a full UTM
implementation.
In the flight register, information on planned
missions and approved flights is stored. This data
serves as the basis for the geo-awareness system to
clear flight corridors for pre-approved flights. For example, human pilots may be warned that a planned flight will lead through the nearby airspace. By informing the pilot of temporary altitude restrictions, a safe passage can be assured.
Another service keeps track of existing no-fly
zones. Besides the static no-fly zones (e.g., at airports),
the SIMULU project also considers dynamic ones.
Since these can’t be hardcoded into the firmware of the
drone, it’s up to the GAS to inform the connected UAS
of these dynamic no-fly zones. The positions, altitude
limitations and time limits of all the respective no-fly
zones can be accessed from this service.
Privacy considerations: Any future U-Space
system must be compliant with the EU’s GDPR. While
the proposed prototypical system is not
GDPR-compliant, some privacy considerations were
nevertheless taken into account. Access to the services
is based on a user’s role. For example, in the pilot’s UI
only a drone icon is shown. The icon can be dependent
on the type of UAV (fixed-wing or multi-copter), as
this information is relevant for the pilot. Other
information, like the owner of the drone, is not relevant
to the functionality of the UI. In the central system of
the GAS all connected drones are identified by their
unique ID. This is necessary to obtain information
from the drone register on the type of drone, weight,
autopilot capability etc. However, a UTM operator
using the monitoring system does not need to know
who the drone with a given ID belongs to. For other
U-Space-related services, like billing or handing out
fines for infractions, the owner of the drone can be
identified by the drone’s ID. Depending on the user
role, the user rights can be scaled, so that they only
have access to the data that is relevant for them.
In the radio transmissions of the UAV adapter and
the central system, only the drone ID is sent. This
follows the concept of pseudonymization as described
in [8], as the sensitive private data can only be obtained
with the corresponding user rights.
4. Test & Evaluation
The prototypical implementation of the SIMULU system was tested during real drone flights, covering features such as automatically updating an autopilot mission during the flight to evade no-fly zones or other drones.
With the following exemplary test flight, the basic
functions of the GAS can be summarized nicely. After
turning on the drone, it automatically begins sending
transponder messages (containing its position and
status) to the GAS. Once, the autopilot mission is
uploaded from the GCS to the flight controller, the
mission details are automatically passed along to the
GAS as well. After taking off, the drone began to
follow the preplanned path. At that time, a dynamic
no-fly zone was activated. In Error! Reference
source not found.a. the original waypoints of the
drone and the no-fly zone are illustrated. The central
system of the GAS recognized that the original mission
of the drone would lead through the prohibited area.
Therefore, new waypoints for the autopilot were
calculated and transmitted to the drone. The drone
successfully updated its autopilot mission midflight
and followed the new waypoints, as shown by the track
of the drone’s flightpath in Error! Reference source
not found.b.
Fig. 4. Mission update.
The test flight also included a second, manually
piloted drone, which was positioned in the flight path
of the original drone. The system reacted to this by
warning the pilot of the second drone and imposing an
altitude limit on him. At the same time the mission of
the first drone was updated again, to increase the
altitude and allow a safe passage of the two drones.
To better test and evaluate the behavior of the GAS
in dangerous situations, emulated drones were also
used for the evaluation process. These drones were
emulated with hardware in the loop (HiL) using AirSim or purely in software. Additionally, a radar
emulator was used to replay the previously recorded
track of real drones. The radar measurements were
performed in cooperation with the project partner
Fraunhofer IOSB at their facility in Karlsruhe.
Afterward, the recorded radar tracks can be replayed by publishing the radar data to the MQTT broker with the
original timing. These detections are treated as
uncooperative drones by the GAS and the emulated
drones in the area are warned and guided away
from them.
For the performed evaluation flights and
simulations, the update interval for the transponder
messages was always set to one second. For the
performed tests this provided a good balance between
the temporal resolution of the data, the congestion of
the radio channel, and system performance. However,
the implementation of dynamic transponder intervals
might increase the performance of the system, especially when the number of drones increases sharply. The frequency of the regular transponder messages could increase with the threat level (green / yellow / red) of the UAS. It could also be increased only in specific cases (e.g., UAVs on a collision course) where a better temporal resolution is beneficial. On the other hand, for drones that are flying alone and far away from any potential risks, the update period could be considerably lengthened.
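The dynamic transponder interval suggested here could look like the following sketch; the mapping from threat level to update period uses arbitrary placeholder values.

```python
def transponder_interval_s(threat_level: str, isolated: bool) -> float:
    """Illustrative policy: shorter intervals for higher threat levels,
    longer ones for drones flying alone and far from any potential risk."""
    base = {"green": 1.0, "yellow": 0.5, "red": 0.2}[threat_level]
    return base * 5.0 if (isolated and threat_level == "green") else base

print(transponder_interval_s("green", isolated=True))   # 5.0 s
print(transponder_interval_s("red", isolated=False))    # 0.2 s
```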
5. Conclusions & Future Work
The test and evaluation of the SIMULU
geo-awareness system has shown the functionality of
the design. In contrast to traditional drone transponder
designs with information flow only from drone to
UTM, the central aspect of the proposed system is a
bidirectional communication link. During the
evaluation, the monitoring of the drone traffic, the
recognition of potentially dangerous situations by the
central system, as well as updating of the autopilot
mid-flight worked well. In the case of manually piloted
drones, the GAS provides a way to interact with human
pilots in a simple way during the flight. Providing live
support to human pilots will become increasingly
important with the rising number of UAVs and UAS
applications. Many pilots use their drones for
recreational purposes and may not do so on a regular
basis. They may also not be familiar with the latest
regulations. Due to these factors, human pilots may
quickly become overburdened in dangerous situations.
In conclusion, the SIMULU project has shown the
value of a bidirectional channel for UTM-related
communication. It provides a means to directly interact
with or influence the drones in the U-Space, which can
significantly increase safety in the lower airspace.
However, there are several steps to be taken in the
future work. The next step towards a real-world
application is increasing the number of drones as well
as the area covered by the system. So far, the
integration into the autopilot has only been
implemented for one system (Pixhawk flight
controller). Other flight controller systems should be
incorporated into the system as well. In terms of
communication, the radio modules could be replaced
by a 4G/5G-based solution. This would eliminate the
need for a dedicated communication infrastructure.
The usability of the user interface could be
increased by using augmented reality to implement a
head-up system for the pilot. This would keep the focus
of the pilot on the airborne drone while providing
important information at the same time.
To improve the situation analysis and
recommendations of the GAS, further services could
be connected. Examples are the integration of weather
services, or a service that links the U-space to manned
aviation. One application for this could be to clear
flight paths and landing spots for rescue helicopters.
Additionally, the path planning for updating autopilot
missions is still quite rudimentary. More
sophisticated algorithms that also consider the flight
trajectories of different drone types could be
used instead.
Acknowledgements
The work presented in this paper is part of the
SIMULU project, which was funded by the German
Federal Ministry for Digital and Transport.
References
[1]. B. Duffy, L. Glaab, Variable-power ADS-B for UAS,
in Proceedings of the IEEE/AIAA 38th Digital Avionics
Systems Conference (DASC’19), San Diego, USA,
September 2019, pp. 1-6.
[2]. A. Baltaci, E. Dinc, M. Ozger, A. Alabbasi, C. Cavdar,
D. Schupke, A Survey of Wireless Networks for Future
Aerial Communications (FACOM), IEEE
Communications Surveys & Tutorials, Vol. 23, Issue 4,
2021, pp. 2833–2884.
[3]. C. E. Lin, C.-S. Hsieh, C.-C. Li, P.-C. Shao, Y.-H. Li,
Y.-C. Yeh, An ADS-B Like Communication for UTM,
in Proceedings of the Integrated Communications,
Navigation and Surveillance Conference (ICNS’19),
Herndon, USA, April 2019, pp. 1-12.
[4]. C. E. Lin, T. Chen, P. Shao, Y. Lai, T. Chen, Y. Yeh,
Prototype Hierarchical UAS Traffic Management
System in Taiwan, in Proceedings of the Integrated
Communications, Navigation and Surveillance
Conference (ICNS’19), April 2019, pp. 1-13.
[5]. V. Lappas, et al., EuroDRONE, A European UTM
Testbed for U-Space, in Proceedings of the
International Conference on Unmanned Aircraft
Systems (ICUAS’20), Athens, Greece, September 2020,
pp. 1766-1774.
[6]. Acceptable Means of Compliance and Guidance
Material to Regulation (EU) 2021/664 on a Regulatory
Framework for the U-Space, Annex to ED Decision
2022/022/R, Issue 1, European Union Aviation Safety
Agency (EASA), December 2022.
[7]. A. Holland Michel, Counter-Drone Systems, Center for
the Study of the Drone at Bard College, Feb. 2018,
http://dronecenter.bard.edu/counter-drone-systems/
[8]. K. N. Vokinger, D. J. Stekhoven, M. Krauthammer,
Lost in Anonymization – A Data Anonymization
Reference Classification Merging Legal and Technical
Considerations, Journal of Law, Medicine & Ethics,
Vol. 48, Issue 1, 2020, pp. 228-231.
[9]. S. Kunze, A. Weinberger, Concept for a
Geo-Awareness-System for Civilian Unmanned Aerial
Systems, in Proceedings of the 31st International
Conference Radioelektronika, April 2021, pp. 1-6.
[10]. MQTT Version 5.0, OASIS Standard, Organization for
the Advancement of Structured Information Standards,
2019.
(049)
Hyperspectral Imaging Microscopy for Single-cell-analysis
Wolfgang Kurz, Aaron Flügge Arus, Emre Kariper, Olcay Akgün, Edwin Adisoemarta,
Martin Jakobi and Alexander W. Koch
Technical University of Munich, Institute for Measurement Systems and Sensor Technology,
Arcisstraße 21 80333 Munich, Germany
Tel.: + 4989 289-23354
E-mail: w.kurz@tum.de
Summary: Hyperspectral imaging (HSI) is an imaging technology that can capture images of a wide range of electromagnetic
wavelengths, providing high-resolution spectral data that can be used to identify subtle differences in a scene. This technology
is being used in a wide range of applications, including remote sensing, agriculture, astronomy, forestry, food quality, and
cultural heritage preservation. Due to recent advances in this technology, it is also possible to scan objects in the micro- and
nanometer range, so-called hyperspectral microscopy. This
processes and providing spectral information, as conventional microscopy only presents an image in the visible range. The
spectral information can differentiate certain materials, which is usually difficult to achieve using conventional microscopy.
Alternatively, it may help identify the cellular structures of a single cell or cell culture and avoid the challenging process of
labelling for fluorescence microscopy. This work presents the modification of a hyperspectral imaging setup for microscopy.
Keywords: Hyperspectral imaging, Hyperspectral microscopy, Spectral analysis, Spectral fingerprint.
1. Introduction
Hyperspectral imaging (HSI) collects and
processes information across the electromagnetic
spectrum [1]. It combines spectroscopy and digital
photography. Hyperspectral imaging aims to acquire
the spectrum of each pixel in an image of a scene to
discover objects, identify materials, or detect processes
[2]. Hyperspectral cameras use specialized hardware to
capture hundreds of bands for each pixel, which can be
interpreted as a full spectrum. Therefore, an object of
interest is scanned, and an image for every wavelength
is created. These images are stored in a so-called
hypercube and allow the analysis of the spectral
distribution of a single pixel (Fig. 1) [3].
Fig. 1. Hypercube of a hyperspectral imaging scan and its
pixel analysis [3].
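The hypercube structure and per-pixel spectrum extraction described above can be sketched with a few lines of NumPy; the array dimensions, wavelength range and the optional flat-field correction are illustrative assumptions.

```python
import numpy as np

# Illustrative hypercube: 128 x 128 spatial pixels, 300 spectral bands
hypercube = np.random.rand(128, 128, 300)          # stand-in for a real scan
wavelengths = np.linspace(470.0, 850.0, 300)       # nm, matching the LED range

def pixel_spectrum(cube, row, col, white_ref=None, dark_ref=None):
    """Return the (optionally reflectance-calibrated) spectrum of one pixel."""
    raw = cube[row, col, :].astype(float)
    if white_ref is not None and dark_ref is not None:
        # Common flat-field correction: (raw - dark) / (white - dark)
        raw = (raw - dark_ref) / np.maximum(white_ref - dark_ref, 1e-9)
    return raw

spectrum = pixel_spectrum(hypercube, 64, 64)
print(wavelengths[np.argmax(spectrum)])   # wavelength of the strongest response
```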
The entire electromagnetic spectrum of the
inspected object is used to find so-called fingerprints,
which are unique spectral signatures [4]. With these
distinctive marks, objects can be differentiated due to
their spectral features in the non-visible
electromagnetic spectrum range. Therefore,
hyperspectral imaging is used in many fields of
application like remote sensing, agriculture,
astronomy, forestry, food quality and cultural heritage
preservation, and biomedical imaging. The field of
application of this technology is shifting more and
more from macroscale down to micro- and nanoscale,
where additional spectral information can help with the
analysis and characterization in molecular biology or
of nanoparticles where conventional microscopy
mostly presents only spatial information [5].
2. Hyperspectral Microscope Setup
Section 2 presents the hyperspectral imaging setup
built for the micro- and nanoscale to perform
microscopic analysis.
Fig. 2 shows the schematic representation of the
hyperspectral imaging microscope. The system
includes a stage controller, an optical path with the
optical components, a hyperspectral imaging camera
with a spectral range of 325-1056 nm, and a data
processing unit (Matlab GUI). A broadband LED light
source is used as illumination source (wavelength
range 470-850 nm).
The light (“green” path) from the light source is
partially reflected (50:50) by the beam splitter 1 and
then projected onto the microscope objective as
illumination. The light that is reflected (in yellow)
from the sample travels through the objective and the
beam splitter 1 and is then focused by the cylindrical
achromatic lens through a slit onto a diffraction grating
inside the spectrograph to disperse the light into its
constituent wavelengths.
Fig. 2. Schematic setup of the hyperspectral microscope [6].
The light reflected by the beam splitter 2 is used for
visual observation. When the area of interest is
positioned, the second beam splitter is removed to
increase the amount of light reaching the sensor of
the camera.
The motorized positioning platform scans the object of interest in the y-direction. The scans were
performed with a 63x plan-apochromat objective, reaching a total system resolution of 0.3 µm.
3. Hyperspectral Microscopic Scan
Fig. 3 shows a successful scan of a dead single cancer cell and a gold nanoparticle and displays
representative spectra on the right side. The colors in the left image represent the reflectance intensity,
with yellow standing for the strongest and blue for the weakest reflectance. The marks on the left side indicate
the pixel positions from which the spectra on the right side were taken. The orange and purple marks represent
the center and the border of the cancer cell, the yellow mark the gold particle, and the blue mark the
background spectrum. It is apparent that the gold particle reflects the light, resulting in a higher intensity
peak, whereas the cell absorbs more light, resulting in a valley; the border absorbs less strongly because the
cell structure becomes shallower there.
4. Conclusions
The hyperspectral microscope used in this project proved capable of high-resolution scans down to the
nanometer scale and of distinguishing the objects in a scanned scene by their spectral distribution. The
system therefore enables imaging of nanoparticles, single cells, cell cultures, and their processes in
multiple spectral bands, which can be used to further analyze a sample’s components and structure.
One potential application is the diagnosis of cancer.
By using hyperspectral imaging to scan tissue samples,
it may be possible to identify cancer cells based on
their spectral signatures. Detecting foreign matter in a biological sample could also be possible, as the
spectrum clearly indicates a different specimen, whereas an image in the visible range alone could make the
differentiation difficult unless an experienced expert conducts the analysis.
The promising results obtained demonstrate the
great potential of a hyperspectral microscope system.
Robust and reliable algorithms for image processing and segmentation are still needed and can
lead to significant advances in the understanding of processes on this scale.
Fig. 3. Nanoparticle and single-cell scan (left) with 63x magnification and its representative spectra (right).
References
[1]. C.-I Chang, Hyperspectral Imaging: Techniques for
Spectral Detection and Classification, Springer US,
2003.
[2]. P. Geladi, H. F. Grahn, Techniques and Applications of
Hyperspectral Image Analysis, John Wiley & Sons Ltd,
2007.
[3]. L. Giannoni, F. Lange, I. Tachtsidis, Hyperspectral
imaging solutions for brain tissue metabolic and
hemodynamic monitoring: past, current and future
developments, J. Opt., Vol. 20, 2018, 044009.
[4]. G. Lu, B. Fei, Medical hyperspectral imaging: A
review, Journal of Biomedical Optics, Vol. 19,
Issue 1, 2014, 010901.
[5]. N. Hagen, M. W. Kudenov, Review of snapshot
spectral imaging technologies, Optical Engineering,
Vol. 52, Issue 9, 2013, 090901.
[6]. X. Dong, Hyperspectral Imaging Microscopy for
Atomic Layer Mapping of Two-Dimensional
Materials, Technical University of Munich (TUM),
2021.
(052)
Development of Bimetallic Zn/Ti-BMOF Thin Film Composite Optical
Waveguides for Ethylenediamine Detection at Ambient Temperature
Patima Nizamidin * and Huifang Chen
State Key Laboratory of Chemistry and Utilization of Carbon Based Energy Resources; College of Chemistry,
Xinjiang University, Urumqi, 830017, Xinjiang PR China
Tel.: 086-15899445904
E-mail: patima207@aliyun.com
Summary: In this work, to improve the gas-sensing selectivity of a monometallic titanium metal-organic framework
(MOF, Ti-MOF), zinc and titanium bimetallic (Zn/Ti) nodes were introduced into the framework to produce specific adsorption
sites through their synergistic effect. A graphene-like Zn/Ti bimetallic MOF (BMOF) thin film was fabricated on a titanium
dioxide (TiO2) film composite optical waveguide (COWG) substrate using a solvothermal method, and its gas-sensing performance
toward amines and acidic gases at ambient temperature (20 °C) was investigated. The Zn/Ti-BMOF developed its graphene-like
structure with uniform small pores (D = 60 nm) after 5 h of continuous growth at 150 °C. The Zn/Ti-BMOF thin-film COWG
exhibited a higher refractive index and a positive response to ethylenediamine (EDA) and H2S when co-existing with the same
concentration (100 ppm) of other amines and acidic gases. It is postulated that charge-transfer and hydrogen-bond
interactions occur when the Zn/Ti-BMOF film COWG adsorbs EDA and H2S, respectively, owing to the Lewis acid-base nature
of the C=N bond in the Zn/Ti-BMOF. The presented sensor shows a wide detection range (100 ppm to 100 ppt) and a fast (2 s),
stable response to EDA.
Keywords: Bimetallic Zn/Ti-BMOF, Composite optical waveguides, Response selectivity, Ethylenediamine.
1. Introduction
Metal-organic frameworks (MOFs) have attracted great attention from researchers for decades due to their
tunable structure and the reversible host-guest interaction between MOFs and the analyte. The desired
framework structure can be controlled by adjusting the organic ligand, the central metal, the solvent, and the
pH value, yielding surface modifiability, uniform pore distribution, and excellent physico-chemical activity,
which makes MOFs ideal candidates for gas separation/storage, sensors, and devices [1].
Recently, in the fields of gas adsorption and gas sensing, investigations on the design and synthesis of
bimetallic metal-organic frameworks (BMOFs) have increased rapidly owing to their improved adsorption
selectivity and structural stability compared to monometallic MOFs, which arise from synergistic
functionalities. Y. Xue and co-workers [2] designed a series of Cu/Bi-BMOFs for the selective adsorption of
CO2. The synergistic Co-Mn-MOF-74 [3] exhibits excellent adsorption characteristics toward NOx. Our research
group has developed a monometallic titanium metal-organic framework {[Ti2-(TpA)2-NDI]n} based COWG that is
highly sensitive and multiselective to ethylenediamine (EDA), followed by nitrogen dioxide (NO2), methylamine,
and trimethylamine, upon exposure to 15 types of benzenes, amines, and acidic gases [4]. In this
study, in order to improve the response selectivity of the Ti-MOF film COWG, zinc and titanium bimetallic
(Zn/Ti) nodes were introduced into the framework, and a Zn/Ti-BMOF was prepared for the first time. The optical
gas-adsorption performance toward amines (ammonia, methylamine, dimethylamine, trimethylamine, EDA) and acidic
gases (H2S, SO2, NO2, HCl, CO2) was investigated using a self-assembled OWG sensor testing system (Fig. 1).
In addition, the gas-adsorption mechanism was studied and discussed with respect to the optical gas-adsorption
behaviour of the Zn/Ti-BMOF film COWG.
Fig. 1. Schematic view of the OWG sensor platform.
2. Method
Film fabrication: To promote efficient film growth, a 40 nm-thick titanium dioxide (TiO2) film was fabricated
on a tin-diffused glass slide [4] (TiO2-COWG). The precursors Zn(NO3)2·6H2O, TiCl4, and NDI were then mixed in
a molar ratio of 1:1:1 and stirred at room temperature for 1 h. The resulting mixture was transferred to a
cylindrical reactor. Simultaneously, the TiO2-COWG substrate was placed
in the reactor. The Zn/Ti-BMOF film was grown at 150 °C for 5 h. Finally, the as-grown film was
removed, dried under a nitrogen gas flux, and stored in a vacuum desiccator for further investigation.
Gas-sensing performance: The obtained Zn/Ti-BMOF films were fixed on the OWG sensor testing platform (Fig. 1),
a laser beam with a fixed wavelength of 520 nm was introduced by a prism coupler, and the carrier-gas (dry air)
flow rate was controlled at 30 ml min-1. For each test, the Zn/Ti-BMOF film COWG was first exposed to dry air
to record the baseline output light intensity, denoted as Iair. Subsequently, the film was exposed to a given
amount of the analyte gas, and the output light intensity, Igas, was recorded simultaneously. The optical
signal (ΔI, the variation in the output light intensity) over the testing time was transmitted to a computer
by the photomultiplier tube and recorded. The procedure was performed at ambient temperature (20 °C) and normal
pressure (0.92 atm). The test gases (amines and acidic gases) at different concentrations were prepared
according to the procedure described in [4], and their concentrations were determined using gas detection
tubes with a detection range of 2-200 ppm.
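As an illustration only, the sketch below computes a relative response from a recorded intensity trace; the normalization (Iair - Igas)/Iair is an assumption for the example, since the paper states only that the variation ΔI in the output light intensity was recorded over time.

```python
import numpy as np

def relative_response(trace, baseline_slice, exposure_slice):
    """Relative change in output light intensity upon gas exposure.
    The (Iair - Igas) / Iair normalization is an illustrative choice."""
    i_air = np.mean(trace[baseline_slice])    # baseline recorded in dry air
    i_gas = np.mean(trace[exposure_slice])    # recorded during analyte exposure
    return (i_air - i_gas) / i_air

# Synthetic 1 Hz intensity trace: 30 s dry air, 30 s analyte gas, 30 s recovery.
trace = np.concatenate([np.full(30, 1.00), np.full(30, 0.82), np.full(30, 0.99)])
print(relative_response(trace, slice(0, 30), slice(30, 60)))  # about 0.18
```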
3. Results and Discussion
The chemical composition and morphology of the Zn/Ti-BMOF were confirmed by Fourier transform infrared
spectroscopy (FTIR), X-ray photoelectron spectroscopy (XPS), and field emission scanning electron microscopy
(FESEM). The XPS spectrum (Fig. 2a) shows the characteristic binding peaks of zinc (Zn), carbon (C), nitrogen
(N), and oxygen (O), without Ti binding peaks owing to its lower content. The FTIR characteristic peaks
(Fig. 2b) located at 3430 cm-1 (O-H), 3074 cm-1 (N-H), 1649 cm-1 (C=O), 1269 cm-1 (O-C-O), and 737 cm-1 (C-H)
indicate the successful construction of the Zn/Ti-BMOF. The FESEM image revealed a homogeneous graphene-like
structure on the surface of the Zn/Ti-BMOF film with an average pore size of 60 nm (Fig. 2c).
Fig. 2. (a) Full XPS spectrum, (b) FTIR spectrum, (c) FESEM image of Zn/Ti-BMOF; (d) Selective response of
Zn/Ti-BMOF film OWGs to different gases at 100 ppm; (e) FTIR spectra, (f) Absorbance, (g-h) FESEM images of
Zn/Ti-BMOF films after exposure to EDA and H2S; (i) Response diagram of the Zn/Ti-BMOF film OWG to different
concentrations of EDA gas.
In the gas-sensing process, the selectivity of the Zn/Ti-BMOF film COWGs for the adsorption of amines and
acidic gases was evaluated. The test results (Fig. 2d) show that the Zn/Ti-BMOF film COWG exhibited the
largest adsorption response for EDA, followed by H2S, upon exposure to 100 ppm of amines and acidic gases.
Compared with the monometallic Ti-MOF, the response selectivity was improved due to the synergistic effects of
the Zn and Ti metals. The FTIR spectra (Fig. 2e) of the Zn/Ti-BMOF film before and after exposure to EDA and
H2S show that EDA mainly interacts with the C=N side of NDI and enhances the intramolecular hydrogen bond; in
addition, the O-H (3430 cm-1) and C=O (1649 cm-1) peaks are blue-shifted by approximately 11.4 and 21.7 cm-1,
respectively, which may be caused by the n-π transfer of the lone-pair electrons of the nitrogen atom to the
organic ligand NDI. After H2S gas adsorption, a host-guest hydrogen-bond interaction occurs, reducing the
strength of the O-H (3430 cm-1) bond vibration. These charge-transfer and hydrogen-bond interactions are
believed to originate from the Lewis acid-base nature of the C=N bond in the Zn/Ti-BMOF [5]. Meanwhile, the
host-guest interactions induce structural stacking of the film surface (Fig. 2f-g) and a reduction in porosity,
and thus an increase in the film refractive index (Fig. 2h), which results in the decrease in output light
intensity. Further study showed that, when the analyte gas concentration was reduced to 1 ppm, the Zn/Ti-BMOF
film COWG exhibited a fast (2 s) and unique response to EDA gas over a wide detection range (1 ppm to 100 ppt)
(Fig. 2i) without interference from the other amines and acidic gases.
4. Conclusions
In this work, we present the fabrication and gas-sensing characteristics of a Zn/Ti-BMOF film COWG. The
Zn/Ti-BMOF developed its graphene-like structure after 5 h of continuous solvothermal growth, resulting in
uniform small pores (D = 60 nm), a higher refractive index, and an improved selective response to EDA and H2S
co-existing with 100 ppm of other amines and acidic gases. When the Zn/Ti-BMOF film COWG adsorbs EDA and H2S
gases, charge-transfer and hydrogen-bond interactions are likely to occur, owing to the Lewis acid-base nature
of the C=N bond in the Zn/Ti-BMOF. The presented sensor can probe a wide concentration range (1 ppm to 100 ppt)
of EDA.
Acknowledgements
This study was supported by the National Natural
Science Foundation of China (number 22164019) and
Natural Science Foundation of Xinjiang Uyghur
Autonomous Region (number 2021D01C032).
References
[1]. S. Kevat, B. Sutariya, V.N. Lad, Microfluidics-assisted,
time-effective and continuous synthesis of bimetallic
ZIF-8/67 under different synthesis conditions, Journal
of Materials Science, 58, 12, 2023, pp. 5219-5233.
[2]. Y. Xue, C. Li, X. Zhou, et al., MOF-Derived Cu/Bi Bi-
metallic Catalyst to Enhance Selectivity Toward
Formate for CO2 Electroreduction, ChemElectroChem,
9, 4, 2022, e202101648.
[3]. Z. Wu, Y. Shi, C. Li, et al., Synthesis of Bimetallic
MOF-74-CoMn Catalyst and Its Application in
Selective Catalytic Reduction of NO with CO, Acta
Chimica Sinica, 77, 8, 2019, pp. 758-764.
[4]. P. Nizamidin, C. Guo, Q. Yang, et al., Surface-
modified Ti-MOF/TiO2 membrane and its gas-sensing
characteristics, Surface Innovations, 11, 6-7, 2023,
pp. 365-376.
[5]. Z. Meng, K.A. Mirica, Covalent organic frameworks
as multifunctional materials for chemical detection,
Chemical Society Reviews, 50, 24, 2021, pp. 13498-
13558.
(056)
Routine Measurement and Monitoring System for the Activity of Elderly
People with Dementia: A Systematic Review
Júlia D. Rodrigues, Pedro Morais and Vítor Carvalho
2AI – School of Technology, IPCA, Campus do IPCA, 4750-810, Barcelos, Portugal
Tel.: +351 253 802 260
E-mails: juliadoriarodrigues@gmail.com, vcarvalho@ipca.pt; pmorais@ipca.pt
Summary: This systematic literature review was conducted to explore systems for measuring and monitoring the activity
routine of older people with dementia. The review identified and analyzed relevant studies that evaluate the
effectiveness and usability of these systems in improving the quality of life and well-being of this population. A survey was
carried out using the following databases: B-On, Google Scholar, PubMed and Science Direct, within the time frame between
2018 and 2023, so that only the most recent studies were considered. The techniques under study or already developed were
presented, along with their results, the characteristics of the studies, and the characteristics of the technologies. The
final sample included 8 articles describing different measurement and monitoring tools. Gaps were identified in this final
sample, as well as fields that still need to be addressed. Future research should focus on developing and improving solid, safe
and viable tools that can help improve patients' quality of life and facilitate the work of their caregivers.
Keywords: Elderly, Dementia, Routine, Activities, Monitoring, Measurement, Systematic review.
1. Introduction
Dementia is a broad term covering several illnesses that affect millions of individuals
worldwide, particularly among the elderly population aged 65 years and older [1]. It is characterized by
memory loss, cognitive impairment, behavioral
changes, and functional limitations. Such symptoms
can compromise and prevent the performance of
activities of daily living [2].
In addition, according to the World Health
Organization, in 2021 about 55 million people were
living with dementia [3] and this number is likely to
grow more and more with the increase of the elderly
population. If the predictions come true, the number of people with dementia could reach the 139 million
mark in the next 30 years [4, 5]. In this sense, it is
important to find new solutions or strategies that can
help these older people to perform their daily activities
without needing so much attention or help from their
relatives and caregivers, or at least to improve their
treatment with information provided on their daily
activities. On the other hand, it is important that the
people responsible for the care of these patients can
also have tools that help them to monitor these
activities and facilitate this care. Therefore,
recognizing and understanding the specific needs and
challenges faced by the target group is crucial for
developing adapted interventions and support systems.
In this sense, this research sought to understand how systems that can monitor the daily activities of elderly
people with dementia have been developed, what they are capable of identifying, and how effective they are in
improving the quality of life of these elderly people and their caregivers.
Through this systematic analysis of the existing literature, the review provides an overview of the
current state of the art, identifying gaps and limitations of existing studies and highlighting areas for
further research and improvement. The results of this review
serve as a basis for future evidence-based decision-
making in healthcare settings, guiding the
development and implementation of more effective
and adapted measurement and monitoring systems.
This paper is organized in five sections. Section 2 describes the literature review methodology; Section 3
presents the results obtained; Section 4 draws the discussion; and, finally, Section 5 presents the paper's
final considerations and remarks.
2. Methodology
The literature review commenced by conducting a
comprehensive search and analysis of pertinent
studies, articles, and reports detailing various systems
employed within this particular context. This review
also encompassed an exploration of associated
challenges, requirements, and the daily experiences
encountered by the target demographic.
2.1. Search Strategy
Adhering to the methodological rigor stipulated by
the PICO (Population, Intervention, Comparison, and
Outcome) framework [6, 7], this strategy was
meticulously employed to enhance the transparency,
precision, and lucidity of the review process [8]. In this
context, the demographic focus group comprises
elderly individuals affected by dementia, thus
constituting the population of interest, although it is
believed that the results can also be used for future
studies with younger age groups, but who are affected
by the same disease.
The intervention component concerns the
implementation of measurement and monitoring
systems, meticulously designed to supervise and track
the intrinsic aspects of the daily routines of this
population cohort. In the sphere of comparison, an
initial inclusion criterion involved studies that cast a
comparative lens on different measurement and
monitoring methodologies or that evaluated the
effectiveness of these systems against prevailing
standard care paradigms or alternative interventions.
Ultimately, the most important outcomes were to assess the operational effectiveness of the systems
used and to elucidate their role in improving the quality of life, daily functioning and holistic well-being of
older people affected by dementia.
2.2. Databases, Keywords and Filters
With the aim of an exhaustive exploration of the
literature, a selected set of keywords was used. This list
of keywords embraced a spectrum of interconnected
concepts that captured the essence of the topic in
question. Variations of terms including, but not limited
to, "routine measurement", "monitoring systems",
"assessment systems", "older people", "dementia",
"activities" and "technologies" were incorporated.
Through strategic integration with Boolean operators
(AND, OR), these keywords were inserted into the
search queries, adapted to each particularity of the
chosen databases. This strategic orchestration aimed to
obtain articles that comprehensively addressed the
effectiveness, usability and repercussions of these
systems on the designated population.
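To make the query construction concrete, the sketch below composes one possible Boolean search string from the keyword groups listed above; the exact strings submitted to each database are not reported here, so the grouping is only an example.

```python
# Illustrative composition of a Boolean query from the keyword variants above;
# the grouping and the exact terms per database are assumptions for this example.
population = ['"older people"', 'elderly', 'dementia']
systems = ['"routine measurement"', '"monitoring systems"', '"assessment systems"']
focus = ['activities', 'technologies']

query = " AND ".join(
    "(" + " OR ".join(group) + ")" for group in (population, systems, focus))
print(query)
# ("older people" OR elderly OR dementia) AND ("routine measurement" OR ...) AND ...
```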
In addition, in order to rigorously carry out this
literature review, an approach involving the use of four
different databases was adopted. This broad spectrum
included PubMed, Google Scholar, ScienceDirect and
the Portuguese platform B-on, ensuring that a wide
range of relevant studies were included in
combination. Also, a strict focus on relevance and timeliness was maintained, concentrating predominantly on
articles published in English and Portuguese, reflecting the primary language proficiency of the research
team. To safeguard timeliness and contextual appropriateness, a publication date filter was applied, covering
articles published between 2018 and 2023. This time period was selected with
the highest precision to cover the latest advances,
aligning the review with the contemporary overview of
knowledge and technological progress in the field.
A total of 252 articles relevant to the scope were
initially identified and exported to the Zotero platform
for organization [9]. Following a reading of the titles
and a review of the inclusion and exclusion criteria,
68 articles were selected for a reading of the abstracts
and conclusions. Subsequently, after a thorough
review of abstracts and conclusions, the articles were
refined in accordance with the criteria outlined in the
forthcoming section.
2.3. Selection Criteria
The definition of inclusion or exclusion criteria for
articles discovered through the designated keywords
was carried out with the aim of exclusively covering
materials that have an immediate link to the intended
thematic area and that remain relevant in the
contemporary scene.
Therefore, the inclusion criteria were structured to
cover articles that were accessible to the research team
and that fit into a time period ranging from 2018 to
2023, as previously referred, thus promoting
congruence with the current environment. In addition,
a key prerequisite involves the peer-reviewed nature of
the articles, underlining a commitment to
academic rigor.
Also, one of the main inclusion criteria was the
selection of articles that provided a specialized
exploration in the field of monitoring and
measurement systems for daily activities. At the same
time, studies that focused on treatment modalities
without patient data acquisition, such as musical
therapies, were omitted. On this path, the exclusion
criteria operated with a precision that aimed to filter
out studies carried out on cohorts that go beyond the
sphere of the elderly, as well as those that focus on
pharmacological or non-technological interventions.
Articles written in languages other than English and
Portuguese are part of this exclusion list, as are
duplicate studies considered redundant in the
analytical context. Systematic reviews, meta-analyses, editorials, journal articles, magazine articles and book
chapters were also excluded, as were duplicates. Together, these criteria ensure a precise and exhaustive
selection process.
2.4. Quality Criteria
The selection process for the articles in this study
was also based on a set of quality criteria, with the aim
of guaranteeing the integrity and credibility of the
research results. First, priority was given to studies
with a representative sample of elderly people with
dementia. However, it was realized that large samples
became difficult in some studies, due to the lack of
necessary authorization from the elderly themselves or
their guardians. Therefore, studies that did not need a
very large sample to prove their effectiveness were
also included, as well as studies that were applied only
to caregivers and healthy people.
In addition, the quality criteria gave priority to
studies that incorporated meticulously validated and
reliable measurement systems, guaranteeing the
solidity of the data collected. In addition, the
evaluation process embraced studies that employed
methodologies capable of minimizing potential bias,
both in terms of selection and confounding factors.
Strategies such as randomization and pairing were
considered, supporting the commitment to
methodological robustness and rigour in the search for
meaningful knowledge.
3. Results
After the research process and an exhaustive evaluation, considering all of the established criteria, a
collection of 8 articles was selected for final analysis. These carefully selected articles are presented in
the following subsection, organized in Table 1, which provides a comprehensive overview of each article.
3.1. Characteristics of the Selected Studies
This section presents a detailed overview of the
articles chosen. Table 1 contains vital information,
including the titles of the articles, the main
characteristics of the studies and the types of
technology used. The articles were also identified in
the table for future reference in the review. The aim of this tabulated presentation is to provide a clear
reference point for future research and reviews, ensuring easy accessibility to the relevant details. The
following subsection looks in depth at the conclusions and results of these studies.
Table 1. Selected articles.
A1 [10]. Audio Based Action Recognition for Monitoring Elderly Dementia Patients. Type of technology: audio capture and action recognition with deep learning. Characteristics: usage of two hardware devices (a Raspberry Pi 4B and a ReSpeaker sensor), a convolutional neural network, and the MQTT protocol to transmit the data to the backend platform.

A2 [11]. Development of a Sensor-Based Behavioral Monitoring Solution to Support Dementia Care. Type of technology: wearable sensor technology. Characteristics: study carried out with 5 healthy people, using a smartwatch and a smartphone with Google APIs; the research focused on obtaining location, step count and activity recognition data.

A3 [12]. Evaluating the Use of Daily Care Notes Software for Older People with Dementia. Type of technology: daily notes system via mobile devices and desktops. Characteristics: the survey was carried out using the Yammer software application, with 18 caregivers, 2 team leaders and 13 patients of a residential home, as well as other health professionals who also had access to the notes.

A4 [13]. Geofencing Technology in Monitoring of Geriatric Patients Suffering from Dementia and Alzheimer. Type of technology: geofencing surveillance system. Characteristics: patient monitored by a GPS device; if the monitored area is exceeded, the distance between the elderly person and the nearest caregiver is calculated using the Haversine method.

A5 [14]. Monitoring behavioral symptoms of dementia using activity trackers. Type of technology: personalized behavioral symptom monitoring with activity trackers. Characteristics: 9 elderly people and 8 caregivers participated in the study; two smartwatch models were used to collect data on steps per minute, heartbeats per minute and minutes of sleep; the data sought to evaluate behavioral changes related to dementia in participants in Cognitive Stimulation Therapy using the Eva robot.

A6 [15]. Monitoring Behaviors of Patients with Late-stage Dementia Using Passive Environmental Sensing Approaches: A Case Series. Type of technology: unobtrusive activity-sensing technologies to track behavioral markers. Characteristics: the platform gathered participant data, including sensors under mattresses, movement sensors and actigraphy devices; other participants used Emerald sensors; data were correlated with medication and assessed by caregivers and doctors.

A7 [16]. Portable EEG monitoring for older adults with dementia and chronic pain - A feasibility study. Type of technology: pain monitoring through a portable EEG device. Characteristics: use of a portable headband for EEG measurement; sample of 5 elderly people in an institution; the device is connected to a smartphone or another device capable of receiving data; the device was placed on different days, monitoring the level of pain perceived by the subjects.

A8 [17]. The feasibility of a vision-based sensor for longitudinal monitoring of mobility in older adults with dementia. Type of technology: vision-based sensor for tracking. Characteristics: study with 18 institutionalized elderly people; the setup included a Microsoft Kinect, a laptop computer, a radio frequency identification reader and two circularly polarized UHF antennas; walking sequences were recorded, analyzed, and gait parameters extracted.
Among the articles identified, a trend emerges in
which three of them [12, 16, 17] mainly investigate the
feasibility of using the technologies mentioned, rather
than the details of the systems themselves. Although
these studies do not focus precisely on the architecture
of each system, they do shed light on the operational
dynamics of such technologies and their potential
acceptance by patients. This analysis, which
encompasses the evaluation of these aspects, aligns
with the focus of this review.
Studies A2, A5, A6, and A7 utilized devices
directly connected to the patients. Study A3 was
unique in its use of a technology managed solely by
caregivers. Additionally, studies A1, A4, A6, and A8
implemented remote monitoring technologies,
eliminating the necessity for the elderly to
carry a device.
In terms of monitoring methodologies, the research
landscape reveals different facets. Study A1 is the only
one to propose audio capture and subsequent
recognition for activity assessment. Studies A2 and A4
converge on geolocation strategies adapted to elderly
people with dementia. With similar objectives, A2 and
A5 orchestrated pedometer technology to assess
ambulatory activity. Expanding the research
repertoire, A2 emerged as a singular effort to delineate
activities through sensor-activated inference. Instead
of automated processes, A3 introduced a caregiver-
mediated annotation system, facilitated by a
smartphone app. Both A2 and A6 orchestrated the
acquisition of sleep patterns, while A5 ventured into
heart rate monitoring. A6 and A8 used motion sensors to obtain data, while the A8 study also tracked the
gait of each patient. Interestingly, A7 made unique use of a portable EEG device to discern pain levels.
3.2. Presentation of Results
While limited in scope, the initial study (A1) [10] achieved a noteworthy 98 % accuracy in recognizing basic
activities from the captured audio. Ongoing enhancements are warranted, including broadening the set of
recognized activities and minimizing the sensor dimensions.
The second study (A2) [11], although successful in tracking behavior at home and identifying patterns that can
aid predictive models and personalized interventions for dementia care, still needs to return these data to the
health system in order to prove their real effectiveness in improving individuals' quality of life.
Furthermore, results obtained with healthy people may not transfer directly to people facing the difficulties
that dementia imposes.
The A3 study [12], although not giving details of
the system adopted, shows the successful adoption of
the software by care providers. Positive results include
an increase in the volume of daily care notes, improved
readability, greater accessibility and better preparation
for activities. Usability considerations are highlighted,
with the versatility of the software being crucial to
overcoming barriers. Furthermore, the alignment of
policies and technologies remains a challenge. Finally,
as this is a study that relies on manual data input, it still
needs research and evaluation of the best ways to
maintain the reliability of the information collected.
The fourth study (A4) [13] used the Haversine
distance calculation and a notification system to
manage proximity between the care provider and the
patient. Although generally successful, two trials failed
due to inaccuracies in the distance calculation. The
dependence of the mobile device system on GPS
signals led to lower accuracy. A further disadvantage
emerged: the system does not monitor immobile
patients. Suggested future research includes alerts for dangerous areas and notifications of patient
immobility, as well as improving the effectiveness of the GPS.
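For reference, the Haversine method mentioned in A4 computes the great-circle distance between two GPS fixes. The sketch below is a generic implementation with placeholder coordinates and an illustrative 200 m geofence radius; it does not reproduce the parameters or code of the original study.

```python
import math

EARTH_RADIUS_M = 6371000.0  # mean Earth radius in metres

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two GPS points (Haversine formula)."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2
    return 2 * EARTH_RADIUS_M * math.asin(math.sqrt(a))

# Hypothetical geofence check: alert when the patient is more than 200 m
# from the nearest caregiver (coordinates below are placeholders).
patient = (41.5381, -8.6151)
caregiver = (41.5369, -8.6178)
if haversine_m(*patient, *caregiver) > 200.0:
    print("Patient outside the safe radius - notify the caregiver")
```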
The A5 study [14] demonstrates the successful
adoption of activity monitors by patients and
caregivers for the assessment of dementia-related
symptoms. Activity monitors provide valuable data on
behavioral symptoms such as depression and apathy.
Incorporating new sensors or raw data could improve
the monitoring of other symptoms. Challenges include
accuracy due to the limitations of the devices. Future
research could explore advanced sensors and
environmental data. Accelerometer raw data could
estimate physical agitation and spatio-temporal gait
parameters related to the risk of depression and
dementia. The raw data would also allow for better
monitoring of mobility and sedentary activity. The
article also suggests incorporating microphones to
assess socialization and including waterproof devices
that are easier to use.
The sixth article selected (A6) [15] is a case series demonstrating the potential of unobtrusive
activity-detection technology for monitoring people with dementia in their natural living environments. The
technology implemented offered continuous, privacy-preserving data with relevant clinical implications,
including the identification of behavioral changes and medication effects. The study also
identified the need for further research to validate
behavioral markers, guarantee privacy and establish
transparency.
A7 study [16] suggests that changes in pain-related
brain regions are associated with changes in
spontaneous brain activity, which may indicate pain
processing. Portable EEG devices, such as the one
used in the study, could serve as a low-cost,
non-invasive tool for assessing pain in people with
dementia. However, the small sample size and
technical limitations of the study warrant further
research for robust clinical applications. The feasibility
of obtaining EEG data from residents with dementia
was demonstrated, opening up the possibility of EEG
being a biomarker of pain for those who are unable to
communicate. Finally, the article indicates that larger
studies are needed to validate these results and
compare portable EEG with medical-grade EEG for
pain assessment.
Finally, the last study (A8) [17] demonstrates the
feasibility of using the vision-based tool to
longitudinally monitor gait in people with dementia.
The tool successfully collected and analyzed gait data,
providing information on walking patterns and gait
measurements. This approach offers potential to
improve falls risk assessment in long-term care by
detecting changes in mobility patterns that may
precede health events. Although portable sensors have
limitations, vision-based systems such as AMBIENT
are promising. Opportunities for improvement
presented include refining participant identification
methods, improving sensor placement and automating
data processing.
4. Discussion
Emerging technologies designed to measure and
monitor activities hold great promise for improving
care for older people with dementia, offering
transformative improvements in care approaches and
alleviating difficulties for caregivers. Although each
study presents its own insights and conclusions, a
common thread emerges: technology has the power to
improve patient outcomes, simplify processes and
provide valuable data for informed decision-making.
The technologies examined in this analysis
highlight the diverse range of applications in the field
of elderly care. Although the analysis has maintained a
specific focus on activity measurement and monitoring
systems with data capture capabilities, it is clear that
there is a dearth of comprehensive experience and
studies in this area. It is undeniable that certain
technologies need significant improvement, especially
given the specific needs of individuals with dementia.
From the discomfort caused by conventional smartwatches [14] to the mobility limitations that arise [13],
these examples show that the specialized obstacles still requiring resolution are well understood.
It is important to note that one of the studies under
review focused in particular on the dyadic relationship
between people with dementia and their caregivers
[12], highlighting the key role of caregiver
involvement [18]. This underlines the emphasis of
research on the need to adequately prepare and
integrate caregivers in the assessment and training of
these technologies.
Finally, in the larger context, these technologies have
the potential to launch a new era of personalized and
economically efficient dementia care, facilitating
timely interventions and simultaneously contributing
to the reduction of hospitalization rates [15].
5. Final Remarks
The integration of technologies presents a
promising avenue for improving the well-being of the
geriatric population affected by dementia.
Technological implementations show favourable
results by providing auxiliary support to the elderly
and their caregivers in the field of daily activities,
while facilitating the assessment and optimization of
medical interventions. Given the rapid pace of
demographic ageing, the increasing prevalence of
dementia and the concomitant expense of providing
care, monitoring technologies are set to play a key role
in the daily lives of older people with dementia and
their caregivers.
Empirical evidence underlines the satisfactory
implications of monitoring paradigms, while at the
same time highlighting the need for improvement. The
use of pre-existing devices that interact seamlessly
with common smartphones and smartwatches is clearly
an economical and effective resource. However, the
emergence of more sophisticated modalities, such as
audio recognition systems, requires greater investment
in research to discern their effectiveness. Given the
lack of comprehensive experimental research with
substantial cohorts of participants, and recognizing the
profound implications for older people affected by
dementia and their caregivers, the impetus for further
exploration in this field is clearly appropriate.
Acknowledgements
This work was financed by national funds, through FCT - Foundation for Science and Technology and FCT/MCTES,
under the projects UIDB/05549/2020, UIDP/05549/2020, CEECINST/00039/2021 and LASI-LA/P/0104/2020. It was also
funded by the Innovation Health From Portugal,
co-funded from the "Mobilizing Agendas for Business
Innovation" of the "Next Generation EU" program of
Component 5 of the Recovery and Resilience Plan
(RRP), concerning "Capitalization and Business
Innovation", under the Regulation of the Incentive
System "Agendas for Business Innovation".
References
[1]. C. Webster, What is dementia, why make a diagnosis
and what are the current roadblocks?, World Alzheimer
Report, 2021.
[2]. J. C. Maia, et al., Tecnologias assistivas para idosos
com demência: revisão sistemática, Acta Paul. Enferm.,
Vol. 31, Issue 6, 2018, pp. 651–658 (in Portuguese).
[3]. World Health Organization, https://www.who.int/
news-room/fact-sheets/detail/dementia
[4]. M. Matsangidou, et al., ‘Bring me sunshine, bring me
(physical) strength’: The case of dementia. Designing
and implementing a virtual reality system for physical
training during the COVID-19 pandemic, International
Journal of Human-Computer Studies, Vol. 165, 2022,
102840.
[5]. T. Fulmer, N. Li, Age-Friendly Health Systems for
Older Adults With Dementia, The Journal for Nurse
Practitioners, Vol. 14, Issue 3, 2018, pp. 160-165.
[6]. H. Donato, M. Donato, Etapas na Condução de uma
Revisão Sistemática, Acta Médica Portuguesa, Vol. 32,
2019, 227 (in Portuguese).
[7]. C. Okoli, A Guide to Conducting a Standalone
Systematic Literature Review, CAIS, Vol. 37, 2015.
[8]. A. Pollock, E. Berge, How to do a systematic review,
International Journal of Stroke, Vol. 13, Issue 2, 2018,
pp. 138-156.
[9]. E. Linares-Espinós, et al., Methodology of a systematic
review, Actas Urológicas Españolas (English Edition),
Vol. 42, Issue 8, 2018, pp. 499-506.
[10]. D. K. Basuki, R. Zull Fhamy, M. I. Awal, L. Hakim
Iksan, S. Sukaridhoto, K. Wada, Audio Based Action
Recognition for Monitoring Elderly Dementia Patients,
in Proceedings of the International Electronics
Symposium (IES’22), 2022, pp. 522-529.
[11]. J. R. Thorpe, B. H. Forchhammer, A. M. Maier,
Development of a Sensor-Based Behavioral
Monitoring Solution to Support Dementia Care, JMIR
Mhealth Uhealth, Vol. 7, Issue 6, 2019, e12013.
[12]. N. Maiden, K. Pitts, K. Pudney, K. Zachos, Evaluating
the Use of Daily Care Notes Software for Older People
with Dementia, International Journal of Human-
Computer Interaction, Vol. 35, Issue 7, 2019,
pp. 605-619.
[13]. E. R. Pratama, F. Renaldi, F. R. Umbara, E. C. Djamal,
Geofencing Technology in Monitoring of Geriatric
Patients Suffering from Dementia and Alzheimer, in
Proceedings of the 3rd International Conference on
Computer and Informatics Engineering (IC2IE’20),
2020, pp. 106-111.
[14]. J. Favela, D. Cruz-Sandoval, A. Morales-Tellez,
I. H. Lopez-Nava, Monitoring behavioral symptoms of
dementia using activity trackers, Journal of Biomedical
Informatics, Vol. 109, 2020, 103520.
[15]. W.-T. M. Au-Yeung, et al., Monitoring Behaviors of
Patients With Late-Stage Dementia Using Passive
Environmental Sensing Approaches: A Case Series,
The American Journal of Geriatric Psychiatry, Vol. 30,
Issue 1, 2022, pp. 1-11.
[16]. L. Pu, K. M. Lion, M. Todorovic, W. Moyle, Portable
EEG monitoring for older adults with dementia and
chronic pain – A feasibility study, Geriatric Nursing,
Vol. 42, Issue 1, 2021, pp. 124-128.
[17]. E. Dolatabadi, Y. X. Zhi, A. J. Flint, A. Mansfield,
A. Iaboni, B. Taati, The feasibility of a vision-based
sensor for longitudinal monitoring of mobility in older
adults with dementia, Archives of Gerontology and
Geriatrics, Vol. 82, 2019, pp. 200-206.
[18]. A. E. Harper, et al., A Systematic Review of Tools
Assessing the Perspective of Caregivers of Residents
With Dementia, J. Appl. Gerontol., Vol. 41, Issue 4,
2022, pp. 1196-1208.
(057)
Virtual Reality and Artificial Intelligence as Tools to Aid the Management
of Chronic Pain: A Comprehensive Literature Review
Arthur Gomes 1, Anabela Marques 2, Vítor Carvalho 1 and Duarte Duque 1
1 2Ai – School of Technology, IPCA, Barcelos, Portugal
2 CHEDV - Centro Hospitalar de entre Douro e Vouga, E.P.E.
Tel.: +351 253 802 260
E-mail: a24200@alunos.ipca.pt, anabela.marques@chedv.min-saude.pt, vcarvalho@ipca.pt, dduque@ipca.pt
Summary: Chronic pain is a health issue affecting millions of people, for which conventional treatments demand the use of
medication and considerable discipline from patients. Emerging technologies based on virtual reality (VR) and artificial
intelligence (AI) have the potential to improve the efficiency of such treatments and pain management techniques. To
understand which chronic pain management practices based on these technologies are the most promising, as well as the gaps
in their development, this study aimed to produce a literature review on the subject. We employed a systematic search strategy
for peer-reviewed studies, published in English from 2018 to 2023, targeting relevant electronic databases and
applying a combination of keywords and Boolean operators. VR has been shown to distract individuals from their pain, promote
relaxation, and enhance engagement in therapeutic activities, while AI algorithms have demonstrated the potential to personalize
treatment plans. However, further research is necessary to establish the optimal protocols, guidelines, and cost-effectiveness of
these interventions.
Keywords: Virtual reality, Artificial intelligence, Chronic pain, Review, Therapy.
1. Introduction
Chronic pain has emerged as a pervasive and
complex health challenge, affecting millions of
individuals worldwide. Characterized by its
persistence and endurance beyond the normal healing
process, it exerts a substantial toll on both physical
well-being and overall quality of life. Conventional
methods of pain management encompass
pharmaceutical interventions and physical therapy,
aiming to alleviate discomfort and restore
functionality. However, the multifaceted nature of
chronic pain often demands a more comprehensive and
innovative approach.
In recent times, the integration of emerging
technologies has stimulated a paradigm shift in chronic
pain management. Virtual reality (VR) and artificial
intelligence (AI) are at the forefront of this
transformative journey, offering promising avenues for
addressing the intricate aspects of chronic pain that
extend beyond the scope of traditional treatments. VR,
which immerses users in simulated environments, and
AI, which processes complex data to derive insights,
have garnered attention as potential complementary
tools in the holistic treatment of chronic pain [1].
As the understanding of this problem deepens, so
does the appreciation for treatments that encompass a
wider spectrum of approaches. In this context, the
integration of VR and AI aligns with the growing
emphasis on personalized and patient-centric care.
These technologies have the potential to empower
individuals by providing tailored interventions that
target not only the physical manifestations of pain but
also the psychological and emotional dimensions that
often accompany it. By engaging patients in immersive
experiences that distract from pain or facilitate
relaxation, VR offers a novel dimension to
management strategies.
Furthermore, AI-driven systems, leveraging data
analytics and machine learning, hold promise in
optimizing pain assessment and treatment plans. These
technologies can decode intricate patterns in pain
experiences, enabling healthcare providers to
customize interventions that align with the unique
needs of each patient. Such tailored approaches are a
marked departure from the one-size-fits-all
methodologies of the past.
The integration of VR and AI into chronic pain
management is not only evidence of the evolving
landscape of healthcare but also a reflection of the
growing synergy between technology and medicine.
These advancements carry the potential to enhance
treatment outcomes, reduce reliance on traditional
pharmacological interventions, and alleviate the
burden of chronic pain on individuals and healthcare
systems alike. However, as with any novel approach,
thorough investigation, empirical validation, and
ethical considerations remain crucial to realizing the
full potential of VR and AI in this domain.
The objective of this study is to explore the current
state of knowledge regarding the use of VR and AI in
the treatment and relief of chronic pain, by conducting
a comprehensive literature review. Specifically, the
review aims to identify existing research, highlight
identified problems, assess the efficacy of different
interventions, and identify promising practices to
tackle this persistent global health challenge.
To achieve these objectives, a comprehensive search strategy was employed, targeting the relevant electronic
databases B-On and PubMed, where the combination of keywords and Boolean operators “chronic pain management or
chronic pain relief or chronic pain control or chronic pain reduction AND virtual reality or vr or augmented
reality or artificial intelligence or ai or a.i.” was used to select the applicable studies. The search was
also limited to peer-reviewed articles, book chapters, and conference papers published in English between 2018
and 2023. As a result of applying the search criteria, 139 studies were found, 32 of which were deemed eligible
and included in the review, according to the flow diagram [2] presented in Fig. 1.
Fig. 1. PRISMA flow diagram for new reviews which
included searches of databases.
The paper is organized in three sections. Section 2 presents the selected studies included in the literature
review performed, and Section 3 draws the final comments and remarks of the study.
2. Overview of the Included Studies
The 32 eligible studies included in this review encompassed a wide range of research designs, which
can be divided into two major groups: (i) Clinical Trials and Pilot Studies on VR/AI Interventions; and
(ii) Literature Reviews, Meta-Analyses, and Perspectives on VR/AI in Pain Management.
The use of artificial intelligence in chronic pain assessment and treatment was explored in only one of
the included studies, in which D. Kringel et al. [3] demonstrated the potential of AI algorithms to identify
pain patterns and personalize treatment plans based on individual patient data, including genetic information.
Machine learning techniques were employed to analyze large datasets and develop predictive models
for pain outcomes. This AI-based approach showed promise in improving pain assessment accuracy and
optimizing treatment strategies.
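As a rough, hedged illustration of the kind of pipeline described (machine-learning models trained on patient data to predict pain outcomes), the sketch below uses scikit-learn on entirely synthetic placeholder features; it does not reproduce the data, features, or models of [3].

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Synthetic placeholder dataset: 200 "patients", 5 generic features, and a
# binary pain-outcome label loosely tied to the first feature.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = (X[:, 0] + rng.normal(scale=0.5, size=200) > 0).astype(int)

# Cross-validated accuracy of a generic classifier as a stand-in for the
# predictive models described in the study.
model = RandomForestClassifier(n_estimators=100, random_state=0)
scores = cross_val_score(model, X, y, cv=5)
print(round(scores.mean(), 3))
```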
On the other hand, the vast majority of the studies
explored diverse aspects of using VR as an
intervention method in the management and treatment
of chronic pain.
2.1. Clinical Trials and Pilot Studies on VR/AI
Interventions
This group includes studies that involve conducting
clinical trials or pilot studies to evaluate the
effectiveness of VR/AI interventions for managing and
treating chronic pain.
Most of the studies were designed as Randomized
Controlled Trials (RCT), dividing participants into a
control group, a VR intervention group, and, in some
cases, a traditional treatment group, so the results
collected during or after the interventions could
be compared.
N. Tuck et al. [4] included patients aged 18 to 70 years with a broad range of musculoskeletal pain conditions
in their trial, while other studies chose to observe the effects of the interventions in a narrower scope of
participants, such as pediatric patients [5] or people who suffer from very specific conditions (e.g., pain
associated with a disease of rheumatological origin [6], endometriosis-related pelvic pain [7], and chronic
joint pain associated with hemophilic arthropathy [8]).
L. Riera et al. [6] and F. J. Perales et al. [9] explored not only the impact of a VR system for pain
management but also the use of binaural acoustic stimulation during the interventions, which could create an
even more immersive environment, finding that the combination of VR and binaural-beat sessions produced a
greater decrease in the perception of pain.
H. Fu et al. [10] studied the management of chronic pain through guided meditation enhanced by the use of
VR systems. Through the measurement of electroencephalographic activity, the study detected that
the sessions altered neurophysiological brain signals, which were not necessarily associated with pain.
B. D. Darnall et al. [11] evaluated the feasibility of
a self-administered skills-based VR program,
concluding that using VR for pain management was
associated with positive outcomes, since participants
reported minimal adverse effects, high satisfaction
with the VR experience, and improvements in various
pain-related measures such as pain intensity, pain
interference, sleep, mood, and stress, increasing their
overall well-being.
A wide range of VR devices and applications, body
sensors, and software have been utilized in pain
management and rehabilitation research. These
technologies encompass VR headsets produced by
different manufacturers, VR applications available on the market, and others developed specifically for the
studies, tailored to the patients' needs and the circumstances arising from their medical condition.
In this context, P. Brown et al. [12] compared the results achieved by patients subjected to an immersive
and interactive experience with those of patients subjected to a non-immersive controlled experience with no
interaction. The researchers concluded that
technological immersion alone is not enough to induce
VR-analgesia. It is the kind of content and the degree
of presence generated by this content that seems to
facilitate the analgesic response.
In qualitative research on cancer patients suffering from chronic pain and their perceptions of VR therapy,
B. M. Garrett et al. [13] found that participants
tended to fall into two preference categories: those
favoring contemplative and meditative environments,
and those engaging in cognitive problem-solving
scenarios. Notably, a particular problem-solving game
was found to worsen pain for some individuals,
indicating that many current VR applications, designed
for recreation, might not be suitable for pain
management due to their content, while designs
integrating positive environments, natural elements,
and soothing soundscapes could potentially yield
better results.
A. Griffin et al. [5] developed a specific VR
intervention application to improve mobility and
manage chronic pain among pediatric patients, which
was well-received and deemed acceptable, feasible,
and engaging by the participating youth. Qualitative
feedback from patients and parents highlighted the
engagement in previously unattainable activities, the
distraction provided by VR, and the perception of
therapies with VR as enjoyable and fun.
Several studies reported positive outcomes in using
VR for reductions in opioid consumption, pain
severity, and pain interference, while improving
patients’ daily physical activity, strength, sitting and
standing tolerances, and confidence.
Moreover, participants generally found the VR
experiences engaging, and adherence to VR-based
treatments was noted, while adverse effects (e.g., cybersickness, nausea) were uncommon. This
indicates that VR can be a well-received and feasible
option in pain management strategies.
However, due to the lack of long-term follow-up in
most of the studies, it is not possible to properly assess
if the positive effects observed are sustained in time or
if they are limited to a short period.
Nevertheless, a study conducted by F. S. de Vries,
R. T. M. van Dongen, and D. Bertens [14] offers
insightful observations in this regard. Notably, their
study revealed that beyond the treatment phase, a
significant portion of the participants experienced a
continued reduction in pain intensity. This finding
implies the potential for sustained benefits even after
the active intervention period, highlighting the need for
further investigations that delve into the durability of
these effects over extended periods.
2.2. Literature Reviews, Meta-Analyses, and
Perspectives on VR/AI in Pain Management
This group encompasses studies that review
existing literature, perform meta-analyses, and provide
perspectives on the use of virtual reality in pain
management.
In this context, P. D. Austin [1] developed a
scoping review about the analgesic effects of VR for
people with chronic pain. The study covered a range of
primary and secondary chronic pain conditions and
used a variety of different computer screen and headset
protocols, including gaming, mindfulness, exercise,
relaxation, and proprioceptive skills.
The researcher concluded that VR can be an effective analgesic intervention for people with chronic pain,
given user satisfaction, a lack of side effects such as cybersickness, and relief of comorbid symptoms, a
conclusion shared by L. Goudman et al. [15]. However, it was pointed out that VR intervention study methods
remain heterogeneous and exploratory in design, and thus the findings should be interpreted with caution.
Q. Huang et al. [16], A. Pourmand et al. [17], and B. Mallari et al. [18], in turn, cautioned that there is insufficient evidence to support a lasting analgesic effect in chronic pain, although they identified a potential to reduce pain in patients with chronic pain during VR exposure.
Along the same lines, C. Tack [19] stated that it has not yet been demonstrated whether immersive VR can provide benefits beyond those seen with passive distraction for the treatment and management of chronic low back pain, while S. Grassini [20] noted that the positive impact of VR therapy on chronic pain was
comparable to that observed with other pain
management interventions, such as physical exercise
and laser therapy.
M. Alqudimat et al. [21] focused on the utilization
of immersive technologies, primarily VR, for the
management of pediatric chronic pain. The study
notes that while physical activity for children with this condition often poses challenges due to discomfort, engaging in such activities through immersive technology distracts them from pain, which could potentially improve pain tolerance. Furthermore,
it highlights that incorporating features such as music,
cues, rewards, and performance metrics in technology
can motivate children to engage in painful movements.
The research also indicates that several pilot
studies in pediatric populations demonstrated the
feasibility and potential benefits of VR in managing
pediatric chronic pain, findings that should be confirmed by further studies. In this sense, it was reported that the INOVATE-Pain consortium has outlined guiding
principles for VR interventions in pediatric chronic
pain with suggested measures and study procedures.
F. Rousseaux et al. [22] examined immersive VR as a tool for hypnosis techniques applied to the
management of pain, reporting that the combination of
Virtual Reality Hypnosis (VRH) with daily hypnosis
exercises led to a notable reduction of 36 % in pain
intensity and 33 % in pain unpleasantness for a patient
with chronic neuropathic pain, as demonstrated across
33 VRH sessions. Similarly, another patient
experiencing pain related to gluteal hidradenitis
observed a significant decrease in pain intensity within
just two days following VRH intervention, resulting in
a remarkable 62 % reduction.
In their study, S. O’Connor, A. Mayne, and B.
Hood [23] concluded that the utilization of VR in
mindfulness practices holds the potential to enhance
chronic pain management. However, the review’s
outcomes were constrained in significance due to the
presence of weak study designs and small sample sizes.
To further assess its efficacy in enhancing overall
health and various outcomes, future research should
prioritize the meticulous co-design and thorough
testing of VR-based mindfulness applications,
engaging closely with individuals enduring
chronic pain.
2.3. Challenges and Limitations
Despite the promising findings, several challenges
and limitations were identified across the studies. One
major limitation was the lack of standardized protocols
and guidelines for using VR and AI in chronic pain
management. The studies varied in terms of the VR
interventions used, the AI algorithms employed, and
the outcome measures assessed, making it difficult to
compare and generalize the results.
A notable limitation encountered during this
literature review was the scarcity of available articles
and studies addressing the utilization of AI as a tool for
the management and treatment of chronic pain. The
dearth of research in this specific domain restricted the
depth of analysis and comprehensive understanding
that could be achieved, leaving certain aspects of this
innovative approach relatively unexplored within the
scope of the review.
Many studies faced limitations due to small sample
sizes, also impeding the generalization of findings.
Future research should prioritize larger multicenter
randomized clinical trials with diverse populations to
enhance the external validity of results. Addressing the
variability of patient experiences with VR programs
and considering previous VR exposure in statistical
analyses could further refine the understanding
of outcomes.
The temporal aspect of VR interventions was often
constrained, with evaluations conducted immediately
after sessions. Future studies should consider longer
follow-up periods to assess both immediate and
sustained effects. Incorporating placebo or active
control groups can help establish the true impact of VR interventions and discern their most effective elements.
Some studies faced limitations related to the use of
VR applications that were not specifically designed for
the targeted condition. Future research should consider
developing or adapting applications tailored to the
specific pain type or context, enhancing the
intervention's relevance and effectiveness.
Several studies reported higher attrition rates than
anticipated, with reasons ranging from comorbid
mental health issues to scheduling conflicts and
pandemic-related disruptions. Strategies to improve
recruitment and retention rates, such as providing more
options for sessions or incorporating patient
preferences, should also be explored.
Challenges related to blinding assessors and
participants were noted, introducing potential bias.
Implementing measures to blind assessors, even
partially, and adopting rigorous methodologies can
mitigate this limitation.
Lastly, technical issues such as inconsistent Wi-Fi,
software timeouts, and lack of “kiosk modes” affected
immersion and user experience, so future studies
should focus on improving technical reliability and
user-friendly interfaces to optimize the intervention’s
effectiveness.
3. Final Remarks
The results of this review suggest that both VR and
AI hold promise for enhancing the management and
treatment of chronic pain, even though there is a small
number of research encompassing the use of AI.
Specifically taking into consideration the studies based
on VR interventions, which are broad and
heterogeneous, evidence shows that such use can be
effective as supplementary strategies across various
chronic pain conditions.
The studies reviewed show that VR interventions
can serve as distractions from pain owing to their
immersive quality and as supportive aids to
physiotherapy exercises. However, their efficacy
varies based on individual factors such as patients’ age,
movement constraints, and the severity of underlying
diseases. Additionally, the level of customization
within VR programs to cater to specific treatment
requirements also plays a role in determining
their success.
Nevertheless, it is possible to conclude that further
research is needed to establish the optimal protocols,
guidelines, and cost-effectiveness of these
interventions. Notably, it becomes imperative for
future studies to extend their focus towards
incorporating longer follow-up periods. By doing so,
these studies can better evaluate and discern not only
the immediate effects, but also the sustained impacts
of the interventions being explored.
With continued advancements in technology and
research, VR and AI may become integral components
of chronic pain treatment and management strategies
in the future.
Acknowledgments
This paper was funded by the project “NORTE-01-
0145-FEDER-000042”, supported by Northern
Portugal Regional Operational Programme
(Norte2020), under the Portugal 2020 Partnership
Agreement, through the European Regional
Development Fund (ERDF).
References
[1]. P. D. Austin, The Analgesic Effects of Virtual Reality
for People with Chronic Pain: A Scoping Review, Pain
Medicine, Vol. 23, Issue 1, 2022, pp. 105-121.
[2]. M. J. Page, et al., The PRISMA 2020 statement: an
updated guideline for reporting systematic reviews,
British Medical Journal, Vol. 372, 2021, n71.
[3]. D. Kringel, et al., A machine-learned analysis of human
gene polymorphisms modulating persisting pain points
to major roles of neuroimmune processes, European
Journal of Pain, Vol. 22, Issue 10, 2018,
pp. 1735-1756.
[4]. N. Tuck, et al., Active Virtual Reality for Chronic
Primary Pain: Mixed Methods Randomized Pilot
Study, JMIR Formative Research, Vol. 6, Issue 7,
2022, pp. 1-12.
[5]. A. Griffin, et al., Virtual Reality in Pain Rehabilitation
for Youth With Chronic Pain: Pilot Feasibility Study,
JMIR Rehabilitation and Assistive Technologies,
Vol. 7, Issue 2, 2020, e22620.
[6]. L. Riera, et al., Advances in the Cognitive Management
of Chronic Pain in Children through the Use of Virtual
Reality Combined with Binaural Beats: A Pilot Study,
Advances in Human-Computer Interaction, Vol. 2022,
2022, pp. 1-10.
[7]. B. Merlot, et al., Pain Reduction with an Immersive
Digital Therapeutic in Women Living with
Endometriosis-Related Pelvic Pain: At-Home Self-
Administered Randomized Controlled Trial, Journal of
Medical Internet Research, Vol. 25, 2023, e47869.
[8]. R. Ucero-Lozano, et al., Approach to Knee Arthropathy
through 180-Degree Immersive VR Movement
Visualization in Adult Patients with Severe
Hemophilia: A Pilot Study, Journal of Clinical
Medicine, Vol. 11, Issue 20, 2022, 6216.
[9]. F. J. Perales, et al., Evaluation of a VR system for Pain
Management using binaural acoustic stimulation,
Multimedia Tools & Applications, Vol. 78, Issue 23,
2019, pp. 32869-32890.
[10]. H. Fu, et al., Virtual Reality-Guided Meditation for
Chronic Pain in Patients With Cancer: Exploratory
Analysis of Electroencephalograph Activity, JMIR
Biomedical Engineering, Vol. 6, Issue 2, 2021,
pp. 1-21.
[11]. B. D. Darnall, et al., Self-Administered Skills-Based
Virtual Reality Intervention for Chronic Pain:
Randomized Controlled Pilot Study, JMIR Formative
Research, Vol. 4, Issue 7, 2020, e17293.
[12]. P. Brown, et al., Virtual Reality as a Pain Distraction
Modality for Experimentally Induced Pain in a Chronic
Pain Population: An Exploratory Study,
Cyberpsychology, Behavior, and Social Networking,
Vol. 25, Issue 1, 2022, pp. 66-71.
[13]. B. M. Garrett, et al., Patients perceptions of virtual
reality therapy in the management of chronic cancer
pain, Heliyon, Vol. 6, Issue 5, 2020, e03916.
[14]. F. S. de Vries, R. T. M. van Dongen, D. Bertens, Pain
education and pain management skills in virtual reality
in the treatment of chronic low back pain: A multiple
baseline single-case experimental design, Behaviour
Research and Therapy, Vol. 162, 2023, 104257.
[15]. L. Goudman, et al., Virtual Reality Applications in
Chronic Pain Management: Systematic Review and
Meta-analysis, JMIR Serious Games, Vol. 10, Issue 2,
2022, pp. 1-21.
[16]. Q. Huang, et al., Using Virtual Reality Exposure
Therapy in Pain Management: A Systematic Review
and Meta-Analysis of Randomized Controlled Trials,
Value in Health, Vol. 25, Issue 2, 2022, pp. 288-301.
[17]. A. Pourmand, et al., Virtual Reality as a Clinical Tool
for Pain Management, Current Pain & Headache
Reports, Vol. 22, Issue 8, 2018, pp. 53-60.
[18]. B. Mallari, et al., Virtual reality as an analgesic for
acute and chronic pain in adults: a systematic review
and meta-analysis, Journal of Pain Research, Vol. 12,
2019, pp. 2053-2085.
[19]. C. Tack, Virtual reality and chronic low back pain,
Disability & Rehabilitation: Assistive Technology,
Vol. 16, Issue 6, 2021, pp. 637-645.
[20]. S. Grassini, Virtual Reality Assisted
Non-Pharmacological Treatments in Chronic Pain
Management: A Systematic Review and Quantitative
Meta-Analysis, International Journal of
Environmental Research and Public Health, Vol. 19,
2022, 4071.
[21]. M. Alqudimat, et al., State of the Art: Immersive
Technologies for Perioperative Anxiety, Acute, and
Chronic Pain Management in Pediatric Patients,
Current Anesthesiology Reports, Vol. 11, Issue 3, 2021,
pp. 265-274.
[22]. F. Rousseaux, et al., Hypnosis Associated with 3D
Immersive Virtual Reality Technology in the
Management of Pain: A Review of the Literature,
Journal of Pain Research, Vol. 13, 2020,
pp. 1129-1138.
[23]. S. O’Connor, A. Mayne, B. Hood, Virtual
Reality-Based Mindfulness for Chronic Pain
Management: A Scoping Review, Pain Management
Nursing, Vol. 23, Issue 3, 2022, pp. 359-369.
(058)
Using Machine Learning to Classify Network Abnormalities
into Legitimate or Assault in IoT-based Cyber Physical System
Stephen Afrifa 1,2, Vijayakumar Varadarajan 3,4,5, Peter Appiahene 2 and Tao Zhang 1
1 Department of Information and Communication Engineering, Tianjin University, Tianjin 300072, China
2 Department of Computer Science and Informatics, University of Energy and Natural Resources,
Sunyani 00233, Ghana
3 School of Computer Science and Engineering, University of New South Wales,
Sydney, NSW 2052, Australia
4 International Divisions, Ajeenkya D. Y. Patil University, Pune 412105, India
5 School of Information Technology, Swiss School of Business Management, 1213 Geneva, Switzerland
Tel.: + 233247498261, + 919942057843
E-mails: afrifastephen@tju.edu.cn, v.varadarajan@unsw.edu.au
Summary: The use of Internet of Things (IoT) devices is a result of the massive number of messages carried through the
internet. One of the most serious IoT threats is the botnet attack, which aims to commit profitable and effective
cybercrimes. Three machine learning (ML) techniques are used in this study to create a system that detects legitimate or
malicious communication in linked computer networks. The findings showed that random forest (RF) performed the best, with
a coefficient of determination (R2) of 0.9958. Although the models produced considerable results, it was recommended that RF
be utilized when detecting legitimate or malicious networks in IoT-connected computer networks. This research is critical for
making educated decisions for long-term growth in industrial and educational networks.
Keywords: Machine learning, IoT, CPS, Deep learning, Network traffic.
1. Introduction
Cyber Physical Systems (CPS) are a synergistic
blend of computing, networking, and physical
processes. CPS models a wide range of systems,
including intelligent critical infrastructures [1]. Indeed,
the broad integration of CPS in vital infrastructures has
increased their significance in assuring economic
development, and as a result, their security and
resilience have become important in many aspects of
modern life. Researchers and developers discovered a
means to fulfill Industry 4.0 goals by combining
machine learning (ML) approaches with industrial
processes that include intelligence [2]. The Internet of
Things (IoT) is made up of millions of physical objects
that are linked to the Internet through a network and
execute tasks autonomously with little human
intervention [3]. The proliferation of Internet of Things
(IoT) technologies and their integration with CPS
allows for improved monitoring, control, and
administration of these systems [4]. However, due to
the nature of the integrated IoT devices, such systems
are becoming increasingly vulnerable to component
failures as well as security assaults, since the
underlying communication protocols employed might introduce new vulnerabilities into the system.
We examine how to distinguish component failures
(which alter regular network behavior) from network
assaults by evaluating channel characteristics in a
computer networked system using unique machine
learning models. The following are the primary
contributions of this study, as adopted from [5, 6]:
1. We present an overarching Machine Learning
(ML) framework that makes use of data gathered from
computer networked IoT devices.
2. We use a simulation framework as well as a
real-time testbed to assess the proposed framework for
a variety of different classification algorithms.
3. We give cutting-edge contemporary
performance assessment criteria for evaluating the
performance of machine learning algorithms.
2. Related Works
Many researchers have conducted experiments to evaluate the resilience of CPS systems in IoT devices
using ML models. To begin with, Khan et al. [7]
proposed an innovative and safe architecture with a
standardized process hierarchy/lifecycle for dispersed
small and medium sized enterprises (SMEs) based on
collaborative blockchain, internet of things (IoT), and
artificial intelligence (AI) with machine learning (ML)
techniques. They designed "B-SMEs", a blockchain with an IoT-enabled permissionless network topology that provides solutions for cross-chain platforms. The B-SMEs handle the registration of participating SMEs,
day-to-day information administration and exchange
between nodes, and analysis of partnership exchange
related transaction data before they are recorded on the
blockchain immutable storage. In the IoT, Al-Wesabi
et al. [8] developed a Pelican Optimization Algorithm
with Federated Learning Driven Attack Detection and
Classification (POAFL-DDC). For IoT threat
detection, the POAFL-DDC approach used
decentralized on-device data. Last but not least,
Alghamdi and Bellaiche [9] provided a multi-pronged
classification strategy for a deep ensemble-based
intrusion detection system (IDS) employing Lambda
architecture. To distinguish between malicious and
benign traffic, binary classification uses Long
Short-Term Memory (LSTM), whereas multi-class
classification employs an ensemble of LSTM,
Convolutional Neural Network, and Artificial Neural
Network classifiers to detect the kind of attack.
3. Materials and Methods
3.1. Data Collection
It is difficult to find an adequately tagged dataset
for botnet identification. This study is based on a
publicly available dataset from the FigShare data
repository, which may be downloaded at
https://doi.org/10.6084/m9.figshare.21769658.v1
(accessed on July 1, 2023). The dataset has already
been labeled and is being analyzed for botnet
identification. The data provided are used as a standard
in the present study.
3.2. Data Preprocessing
The dataset was preprocessed in order to transform
raw data into useful information for machine learning
algorithms to interpret [10]. Raw data is typically incomplete and rife with inaccuracies.
Handling null or missing values, label encoding, and
feature selection are all processes in the
preprocessing stage.
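As an illustration only, the preprocessing steps listed above could be sketched in Python as follows; the file name, column handling, and thresholds are assumptions, since the paper does not specify them:

```python
import pandas as pd
from sklearn.preprocessing import LabelEncoder

# Load the labeled botnet dataset (file name is hypothetical).
df = pd.read_csv("botnet_traffic.csv")

# Handle null or missing values: drop mostly-empty rows, fill the rest with column medians.
df = df.dropna(thresh=int(0.5 * df.shape[1]))
df = df.fillna(df.median(numeric_only=True))

# Label-encode categorical columns, including the traffic label (legitimate vs. botnet).
for col in df.select_dtypes(include="object").columns:
    df[col] = LabelEncoder().fit_transform(df[col].astype(str))

# Simple feature selection: drop near-constant columns that carry no information.
df = df.loc[:, df.nunique() > 1]
```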
3.3. The Proposed Architecture
Fig. 1 depicts the proposed architecture for the
study. The proposed machine learning models, which
comprise the random forest (RF), support vector
machine (SVM), and naïve bayes models, were used to
train the dataset. The outcome shows either real or
botnet traffic. This delivers an artificial intelligence-powered system that can detect botnet traffic through real-time behavioral analysis, block it, and end any botnet activity.
3.4. The Data Division
The data was separated into training (70 %) and
testing (30 %) sets. These are hyperparameters whose
values govern the learning process and determine the
model parameters that a learning algorithm learns [11].
The testing dataset aids in evaluating the models'
performance.
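Continuing the sketch above, the 70 %/30 % division could be expressed as follows ("label" is an assumed name for the target column):

```python
from sklearn.model_selection import train_test_split

X = df.drop(columns=["label"])   # feature matrix ("label" is an assumed column name)
y = df["label"]                  # target: legitimate vs. botnet traffic
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.30, random_state=42, stratify=y)
```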
3.5. The Machine Learning Algorithms
This study makes use of three conventional
machine learning classifiers: RF, SVM, and NB. To
create predictions or classifications, RF employs an
ensemble of many decision trees. The random forest
technique produces a more accurate result by mixing
the outputs of various trees [12]. The SVM algorithm's
purpose is to find the optimum line or decision
boundary for categorizing n-dimensional space so that
we may simply place fresh data points in the proper
category in the future. A hyperplane is the optimal
choice boundary [13]. Additionally, NB is a
probabilistic classifier, which means it predicts based
on an object's likelihood [14]. The developed
algorithms can determine whether a network is real or
botnet traffic to assist enterprises, government, and
organizations in making educated judgments.
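A minimal sketch of training the three classifiers named above, continuing from the split sketch; hyperparameters are illustrative defaults, not values reported in the paper:

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.naive_bayes import GaussianNB

models = {
    "RF": RandomForestClassifier(n_estimators=100, random_state=42),
    "SVM": SVC(kernel="rbf"),
    "NB": GaussianNB(),
}
predictions = {}
for name, model in models.items():
    model.fit(X_train, y_train)                # train on the 70 % split
    predictions[name] = model.predict(X_test)  # predict legitimate vs. botnet traffic
```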
Fig. 1. The proposed architecture of the study.
3.6. Performance Evaluation Measures
The root mean squared error (RMSE), mean
absolute error (MAE), mean absolute percentage error
(MAPE), and coefficient of determination (R2) were
used to evaluate the performance of the machine
learning models in the present study. The closer the R2 is to 1 and the lower the error scores, the better the performance of the models. The RMSE, MAE, MAPE,
and R2 are represented by equations 1, 2, 3, and 4,
respectively.
$RMSE = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(y_i - \hat{y}_i\right)^2}$, (1)

$MAE = \frac{1}{n}\sum_{i=1}^{n}\left|y_i - \hat{y}_i\right|$, (2)

$MAPE = \frac{100\,\%}{n}\sum_{i=1}^{n}\left|\frac{y_i - \hat{y}_i}{y_i}\right|$, (3)

$R^2 = \left(\frac{\sum_{i=1}^{n}\left(y_i-\bar{y}\right)\left(\hat{y}_i-\bar{\hat{y}}\right)}{\sqrt{\sum_{i=1}^{n}\left(y_i-\bar{y}\right)^2}\,\sqrt{\sum_{i=1}^{n}\left(\hat{y}_i-\bar{\hat{y}}\right)^2}}\right)^2$, (4)

where $y_i$ represents the observed values, $\hat{y}_i$ the predicted values, $n$ the number of samples, and $\bar{y}$ and $\bar{\hat{y}}$ the average observed and predicted values, respectively.
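For illustration, the four measures in (1)-(4) can be computed on the held-out predictions as in the sketch below (continuing the earlier snippets; the zero-division guard in the MAPE line is an added assumption):

```python
import numpy as np
from sklearn.metrics import mean_squared_error, mean_absolute_error

y_true = np.asarray(y_test, dtype=float)
y_pred = np.asarray(predictions["RF"], dtype=float)

rmse = np.sqrt(mean_squared_error(y_true, y_pred))          # equation (1)
mae = mean_absolute_error(y_true, y_pred)                   # equation (2)
# Guard against division by zero for zero-valued labels (an added assumption).
denom = np.where(y_true == 0, 1.0, y_true)
mape = 100.0 * np.mean(np.abs((y_true - y_pred) / denom))   # equation (3)
r2 = np.corrcoef(y_true, y_pred)[0, 1] ** 2                 # equation (4), squared correlation
print(f"RMSE={rmse:.4f}  MAE={mae:.4f}  MAPE={mape:.4f} %  R2={r2:.4f}")
```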
4. Experimental Results and Discussion
This section presents the results gained from the
ML algorithms employed for the present study.
4.1. Performance of the ML Algorithms
The performance of various single machine
learning models in the proposed architecture is shown
in Table 1. Model performance differed little between
the employed algorithms. According to the data, the
RF had the highest performing metrics in terms of R2
(0.9958) and the lowest error scores in all other
metrics, including RMSE (0.0025), MAE (0.0652), and MAPE (0.0852).
Table 1. ML models performance and evaluation.

Model   R2       RMSE     MAE      MAPE
RF      0.9958   0.0025   0.0652   0.0852
SVM     0.9025   0.0035   0.0721   0.8200
NB      0.9159   0.0042   0.0755   0.0865
According to Table 1, the NB was the second-best performing ML model. The NB had an R2
of 0.9159, whereas the SVM had an R2 of 0.9025. In
terms of other performance categories, the NB also had
the second-best error scores. Although all of the
models earned substantial scores in all of the
performance measures, the SVM performed the worst.
According to this study, the RF is the best ML
algorithm for recognizing legitimate or malicious
communication in a linked computer network.
Additionally, the study's findings were compared
to the state-of-the-art to assess the robustness of the
models used. The RF used in the study outperformed a
comparable study by Vimont et al. [15], while the NB
outperformed Disha and Waheed [16] in
accomplishing the same goal as this study.
Furthermore, the study found that it is necessary to regularly check the functioning of connected computer
networks in order to facilitate and improve
communication. Computer networks are critical to
modern society because they enable the transmission
and sharing of information between individuals,
groups, and objects.
5. Conclusion and Future Works
Computer networks have had a huge impact on
society by providing global connectivity and
information exchange. The utilization of Internet of Things devices has greatly changed communication in the digital era. This study
demonstrated that machine learning techniques are
effective in detecting actual or malicious traffic in a
connected computer network. The models used in this
study produced considerable results, and their
influence on providing a benchmark analysis for
decision making is enormous.
The authors want to use deep learning techniques
such as convolutional neural networks (CNN) and
hybrid models such as random forest-convolutional
neural network (RF-CNN) in the future.
Acknowledgements
The authors would like to express their
appreciation to Adwoa Afriyie for her advice and help
during the study. Our heartfelt thanks also go to Eric
Afrifa and Malcolm Afrifa for their encouragement.
References
[1]. J. Vrchota, P. Řehoř, M. Maříková, M. Pech, Critical
Success Factors of the Project Management in Relation
to Industry 4. 0 for Sustainability of Projects,
Sustainability, Vol. 13, Issue 1, 2021, 281.
[2]. D. Chatziparaschis, M. G. Lagoudakis, Aerial and
Ground Robot Collaboration for Autonomous Mapping
in Search and Rescue Missions, Drones, Vol. 4,
Issue 4, 2020, 79.
[3]. J. Almalki, et al., Enabling Blockchain with IoMT
Devices for Healthcare, Inf., Vol. 13, Issue 10, 2022,
448.
[4]. D. Shiny Irene, S. Indra Priyadharshini, R. Tamizh
Kuzhali, P. Nancy, An IoT based smart menstrual cup
using optimized adaptive CNN model for effective
menstrual hygiene management, Artif. Intell. Rev.,
Vol. 56, Issue 7, 2022, pp. 6705-6722.
[5]. S. Afrifa, V. Varadarajan, Cyberbullying Detection on
Twitter Using Natural Language Processing and
Machine Learning Techniques, Int. J. Innov. Technol.
Interdiscip. Sci., Vol. 5, Issue 4, 2022, pp. 1069-1080.
[6]. S. Afrifa, V. Varadarajan, P. Appiahene, T. Zhang,
E. A. Domfeh, Ensemble Machine Learning
Techniques for Accurate and Efficient Detection of
Botnet Attacks in Connected Computers, MDPI Eng,
Vol. 4, Issue 1, 2023, pp. 650-664.
[7]. A. A. Khan, A. A. Laghari, P. Li, M. A. Dootio,
S. Karim, The collaborative role of blockchain,
artificial intelligence, and industrial internet of things
in digitalization of small and medium-size enterprises,
Sci. Rep., Vol. 13, Issue 1, 2023, 1656.
[8]. F. N. Al-Wesabi, et al., Pelican Optimization
Algorithm with Federated Learning Driven Attack
Detection model in Internet of Things environment,
Futur. Gener. Comput. Syst., Vol. 148, 2023.,
pp. 118-127.
[9]. R. Alghamdi and M. Bellaiche, An ensemble deep
learning based IDS for IoT using Lambda architecture,
Cybersecurity, Vol. 6, 2023, 5.
[10]. S. Afrifa, T. Zhang, P. Appiahene, V. Vijayakumar,
Mathematical and Machine Learning Models for
Groundwater Level Changes: A Systematic Review
and Bibliographic Analysis, Futur. Internet, Vol. 14,
Issue 9, 2022, 259.
[11]. W. K. Adu, P. Appiahene, S. Afrifa, VAR, ARIMAX
and ARIMA models for nowcasting unemployment
rate in Ghana using Google trends, J. Electr. Syst. Inf.
Technol., Vol. 10, 2023, 12.
[12]. S. Afrifa, T. Zhang, X. Zhao, P. Appiahene, M. S. Yaw,
Climate change impact assessment on groundwater
level changes: A study of hybrid model techniques, IET
Signal Process., Vol. 17, Issue 6, 2023, e12227.
[13]. J. W. Asare, P. Appiahene, E. J. Arthur, S. Korankye,
S. Afrifa, and E. T. Donkoh, Detection of anemia using
conjunctiva images: A smartphone application
approach, Med. Nov. Technol. Devices, Vol. 18, 2023,
100237.
[14]. P. Appiahene, J. W. Asare, E. T. Donkoh, G. Dimauro,
R. Maglietta, Detection of iron deficiency anemia by
medical images: a comparative study of machine
learning algorithms, BioData Min., Vol. 16, 2023, 2.
[15]. A. Vimont, H. Leleu, I. Durand-Zaleski, Machine
learning versus regression modelling in predicting
individual healthcare costs from a representative
sample of the nationwide claims database in France,
Eur. J. Heal. Econ., Vol. 23, Issue 2, 2022,
pp. 211-223.
[16]. R. A. Disha, S. Waheed, Performance analysis of
machine learning models for intrusion detection system
using Gini Impurity-based Weighted Random Forest
(GIWRF) feature selection technique, Cybersecurity,
Vol. 5, 2022, 1.
(059)
Vehicle Speed Measurement through Ground Vibrations
Induced by Transverse Rumble Strips
D. Thanglerdsumpan 1, P. Wardkein 2 and L. Kirasamuthranon 3
1 Ruamrudee International School, Bangkok, Thailand
2 Department of Telecommunications Engineering, School of Engineering,
King Mongkut’s Institute of Technology Ladkrabang, Bangkok, Thailand
3 Department of Electronics Engineering Technology, College of Industrial Technology
King Mongkut’s University of Technology North Bangkok, Bangkok, Thailand
E-mail: 1 dhiraththanglerdsumpan@gmail.com, 3 Lersonk@kmutnb.ac.th
Summary: The likelihood and severity of a crash are directly related to the speed of vehicles. Therefore, vehicle speed
measurement is crucial to ensuring road safety, as it is key to enforcing speed limits. This paper proposes two new methods
of vehicle speed measurement utilizing a novel means of ground vibrations induced by transverse rumble strips. A transverse
rumble strip is a group of equally spaced raised strips perpendicular to the traffic. These strips serve as a safety measure to
alert drivers of potential hazards on the road by inducing noise and vibrational feedback when driven over. The ground
vibrations can be monitored by a geophone sensor and processed to obtain the time taken by a vehicle to travel from one strip
to another. By knowing the time taken to travel over a certain distance between each strip, the proposed methods in this paper
are able to calculate the speed of a passing vehicle. The two methods proposed in this paper differ in the way in which they
obtain that time: one processes the time domain signal of ground vibrations directly, while the other processes the frequency
domain signal obtained through a fast Fourier transform (FFT) algorithm.
Keywords: Rumble strips, Transverse rumble strips, Geophones, Frequency-domain, Time-domain, Vehicle speed
measurement, Ground vibrations.
1. Introduction
According to the World Health Organization, “the
number of road traffic deaths continues to rise steadily,
reaching 1.35 million in 2016” [1]. Vehicles traveling
at high speeds are among the deadliest things on the
road. The vehicle’s speed directly influences the
crash’s severity and the likelihood of death. As a
result, many roads have speed limits. However,
without proper enforcement, these speed limits render
useless. Therefore, vehicle speed measurement is
crucial to transportation engineering, as it helps
enforce speed limits and ensure road users’ safety. A
wide variety of vehicle speed measurement methods
are currently used, including radar sensors, embedded
sensors, magnetic sensors, and monitoring of
pavement deflections. Nonetheless, each of these
methods has its advantages and disadvantages.
Currently, radar sensors are among the most
popular methods of vehicle speed measurement. These
sensors direct electromagnetic waves at a moving
vehicle and observe a Doppler shift in the reflected
wave to calculate the vehicle’s speed [2]. This method
is widely used since it is easy to use, relatively
accurate, and available as a hand-held speed gun.
However, these speed sensors’ accuracy can suffer
greatly depending on the weather. Moreover, these
devices are exceedingly expensive to purchase and
maintain.
Embedded sensors are another commonly
used method. This method works by embedding two
sensors — usually inductive loops, piezoelectric tubes,
or pneumatic tubes [3] — a certain distance apart into
the pavement. The vehicle’s speed is calculated by
dividing the distance between the two sensors by the
difference in time between when the first and second
sensors detect it. However, embedding these sensors
into the pavement is a costly and disruptive process, as
it requires cuts in the pavement.
Monitoring pavement deflections [4] is a
comparatively novel method that uses geophones set a
certain longitudinal distance apart on a stretch of
pavement to obtain the differences in the times at which the different geophones detect deflection at different points of the pavement. One key
advantage of this approach is that it is non-intrusive,
meaning it does not require a line of sight to the
vehicle. However, a significant drawback of this
method is that it requires bitumen pavements that are
flexible enough for a passing vehicle to cause
significant deflection. This method was first proposed
by Ngoc Son Duong et al. in 2021.
Another relatively new method is the portable
roadside magnetic sensor system for vehicle counting,
classification, and speed measurement [5] developed
by Saber Taghvaeeyan and Rajesh Rajamani in 2014.
The sensor system consists of wireless anisotropic
magnetic devices that are not required to be embedded
in the roadway. Speed measurement is based on
calculating the cross-correlation between
longitudinally spaced sensors using frequency-domain
signal processing techniques.
This research paper presents two new approaches
for measuring vehicle speed. The methods are based
on monitoring ground vibrations as vehicles drive over
transverse rumble strips.
Transverse rumble strips are a set of equally spaced
raised strips across the road that induces noise and
vibration [6] when a vehicle passes over them. These
strips serve as a safety measure to alert drivers of
potential hazards on the road and prevent accidents.
They are commonly found in urban areas, especially
ahead of crosswalks, to warn drivers of pedestrians
crossing the road. As proposed in this paper, the
ground vibrations generated by vehicles passing over
rumble strips can be used to measure vehicle speed.
There are a few similarities between the proposed
methods in this paper and existing methods in the
literature review. One is the utilization of geophones,
similar to the monitoring pavement method [4], to
monitor the pavement's motion caused by passing
vehicles. One of the proposed methods also resembles
the portable roadside magnetic sensor system for
vehicle counting, classification, and speed
measurement [5], using frequency-domain signal
processing techniques to determine the speed of
passing vehicles. These similarities suggest that the
new methods may offer an alternative solution to
existing methods and contribute to reducing the
number of road traffic deaths by enforcing speed limits
and ensuring road users' safety.
This paper is organized as follows. In Section 2,
the scientific principles relating to the proposal of this
paper, including the generation of ground vibrations
when a vehicle passes over a set of rumble strips, the
kinematic principle of speed, the mechanism of a
geophone, and the amplifier circuit, are discussed. In
Section 3, the two new methods of vehicle speed
measurement — processing the time-domain signal of
ground vibrations generated by a vehicle passing over
transverse rumble strips and processing the frequency-
domain signal of ground vibrations generated by a
vehicle passing over transverse rumble strips — are
proposed. In Section 4, the experimental results of
speed measurements are presented, and conclusions
are discussed Section 5.
2. Principles
2.1. Rumble Strips Induced Ground Vibrations
Generated from Wheels Impacting
the Pavement
As the tire rolls across a set of transverse rumble
strips, it rolls up the raised surface of each strip and
then falls back down, generating a peak in ground
vibrations upon each impact with the ground [7], as
seen in Fig. 1. The ground vibrations are captured in
an electrical signal in relation to time. By obtaining
two values, namely the distance d between identical points of two consecutive strips and the time T taken to travel that distance (obtained from the period of the electrical signal), (1) can be used to calculate the speed s of the vehicle:

$s = \frac{d}{T}$ (1)
Fig. 1. Relationship between the movement
of the tire across rumble strips and magnitude of ground
vibrations in the time domain.
2.2. Capturing Ground Vibrations
Multiple methods exist for capturing ground
vibrations; however, this research paper utilizes
geophones placed on the ground for capturing ground
vibrations generated by a moving vehicle across a set
of rumble strips. The internal structure of a geophone
comprises a conductive coil wrapped around a mass
that is suspended over a stationary magnet placed in
the center of the conductive coil, as depicted in
Fig. 2. As the mass suspended over the magnet moves
up and down due to ground vibrations, the relative motion between the coil and the magnet induces a current in the conductive coil. The resulting current produces an
electrical signal corresponding to the ground's
vibration.
Fig. 2. Structure of a Geophone.
2.3. Amplifying Circuit
Due to the minuscule magnitude of ground
vibrations induced by transverse rumble strips, an
amplifying circuit is employed to amplify the voltage
of the output signal from the geophones in the
proposed methods of this paper. This amplifying
circuit is an instrumentation amplifier, which is a type
of differential amplifier [8]. Owing to its exceptional
noise reduction capabilities, the instrumentation
amplifier is the optimal choice for amplifying low
magnitude ground vibrations from busy roads flooded
with vibrational noise.
Fig. 3 displays the schematic of the most
commonly used instrumentation amplifier, while (2)
expresses the gain of the circuit.
3
1
2
2
1
G
R
R
Gain RR




(2)
Fig. 3. Schematic of instrumentation amplifier.
3. Proposed Methods
This paper proposes two innovative methods for
measuring vehicle speed, both utilizing a novel
technique of using ground vibrations generated by a
vehicle passing over a set of transverse rumble strips
as a means of speed measurement. Despite both
acquiring the ground vibration signal through a
geophone and amplifying it, the two methods differ in
the process of obtaining the time taken by the vehicle
to travel from one strip to another. The first method
calculates the time taken by processing signals in the
time domain, while the second method calculates the
time taken by processing signals in the frequency
domain.
As noted in the principles section, when a vehicle
traverses a set of rumble strips, it generates ground
vibrations. To capture these ground vibrations, a
geophone is placed on the roadside in a location
relative to the rumble strips, as depicted in Fig. 4. The
geophone produces an analog signal that is amplified
by an instrumentation amplifier.
Fig. 4. Placement of geophones for vibration detection.

The amplified analog signal is then converted into a digital signal using a sound card. The two methods differ after a digital signal is obtained.
3.1. Time Domain Processing
In this approach, the signal is plotted in the time
domain and subsequently analyzed to determine the
time tT taken by the vehicle to travel from one strip to
another. The principle of vibrations generated from
wheels impacting the ground is applied to determine
the time tT. Specifically, tT is calculated as the
difference along the time axis between two adjacent zero-crossings in the graph, as illustrated in Fig. 5. The
obtained value of tT is then utilized in (1) to compute
the speed s, where d represents the distance from the
start of one strip to the next.
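As an illustrative sketch (not the authors' code), the time-domain estimate could be computed as follows; the strip spacing d and all variable names are assumptions:

```python
import numpy as np

def speed_time_domain(signal, fs, d=0.30):
    """Estimate vehicle speed (km/h) from a digitized geophone signal.

    signal: amplified, digitized geophone samples (1-D array)
    fs:     sampling rate in Hz
    d:      distance between identical points of two consecutive strips, in metres
            (0.30 m is an illustrative value, not taken from the paper)
    """
    # Locate zero-crossings: samples where the signal changes sign.
    sign = np.signbit(signal).astype(int)
    crossings = np.where(np.diff(sign) != 0)[0]
    # Per the description above, the spacing between adjacent zero-crossings
    # approximates tT, the travel time from one strip to the next (averaged here).
    t_T = np.mean(np.diff(crossings)) / fs
    return (d / t_T) * 3.6  # m/s converted to km/h
```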
Fig. 5. Signal from the Geophones in the Time Domain.
3.2. Frequency Domain Processing
In contrast to the time domain processing method,
this method obtains the time value to be substituted into (1) through a spectral analysis rather
than an analysis of the signal in its time domain. A
spectrum is first obtained using the Fast Fourier
Transform (FFT) algorithm to convert the signal into
a frequency domain graph. The spectrum is then
analyzed to determine the frequency of the highest
magnitude, which would correspond to the frequency
of the vibrations generated by the car going over the rumble strips. From this frequency, the time tF that the car takes to travel from one strip to the next can be obtained and substituted into (1) to calculate the speed
s of the vehicle.
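A corresponding sketch of the frequency-domain route, again with assumed names and strip spacing, using NumPy's FFT:

```python
import numpy as np

def speed_frequency_domain(signal, fs, d=0.30):
    """Estimate vehicle speed (km/h) from the dominant spectral peak of the signal."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), 1.0 / fs)
    spectrum[0] = 0.0                    # ignore the DC component
    f_peak = freqs[np.argmax(spectrum)]  # frequency of the highest magnitude
    t_F = 1.0 / f_peak                   # time to travel from one strip to the next
    return (d / t_F) * 3.6               # m/s converted to km/h
```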
4. Experimental Results
Experiments were conducted to assess the
accuracy of the proposed methods. This section
presents the performance of the instrumentation
amplifier used in the experiments and an analysis of
the accuracies of the two proposed methods based on
the results of the experiments.
4.1. Instrumentation Amplifier
An instrumentation amplifier is implemented in
the experiment as proposed in the Principles section
and as depicted in Fig. 3.
From Fig. 3, there are seven resistors in the circuit of this instrumentation amplifier; in this experiment, R1, R2, and R3 all have a resistance value of 10 kΩ, while a variable resistor with a resistance range of 0 to 100 Ω is used for RG. As a result of this specific design of the instrumentation amplifier, (2) can be rewritten as (3) to match the experiment:

$Gain = 1 + \frac{20000}{R_G}$ (3)
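A quick numeric check (Python, purely illustrative) that the general gain expression (2) reduces to (3) for the resistor values stated above:

```python
def in_amp_gain(r1, r2, r3, rg):
    """Gain of the three-op-amp instrumentation amplifier, equation (2)."""
    return (1.0 + 2.0 * r1 / rg) * (r3 / r2)

r1 = r2 = r3 = 10_000.0            # 10 kohm, as used in the experiment
for rg in (10.0, 50.0, 100.0):     # RG is varied between 0 and 100 ohm
    print(rg, in_amp_gain(r1, r2, r3, rg), 1.0 + 20000.0 / rg)  # (2) vs. simplified (3)
```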
To determine the frequency response of the instrumentation amplifier, the amplifier was given an input of sinusoidal waves of 500 mVpp amplitude with frequencies ranging from 0.1 to 10000 Hz. The experimental results of the frequency response of the instrumentation amplifier are displayed in Fig. 6.
From Fig. 6, the amplifier used in this experiment has
a constant gain for all frequencies under 10 kHz. As a
result, this amplifier is suitable for the purpose of this
research as the maximum frequency input to the
amplifier is no more than 50 Hz.
Fig. 6. Frequency Response of Instrumentation Amplifier.
4.2. Speed Measurement
In the experiments, the speed of a moving vehicle
was measured using four different methods. Two of
those methods were the two proposed in this paper.
Meanwhile, the other two methods were looking at the
vehicle's speedometer and using a radar gun. The
driver used the speedometer to gauge the speed of the
car while driving. On the other hand, the radar gun was
used to obtain a reference speed to analyze the
accuracy of the results of the proposed methods. The
instrumentation includes:
Two geophones placed 0.15 m apart in the center
of the set of rumble strips.
An instrumentation amplifier receiving input from
the geophone and outputting to a soundcard in a
computer.
A set of rumble strips, as modeled in Fig. 7,
containing nine strips.
Fig. 7. Diagram of Rumble Strips in Experiments.
Fig. 8 and Table 1 represent the results of the
experiment. Fig. 8 shows examples of plotted signals
that were used in the speed calculations. Each row
contains signals of different speeds read from the
speedometer. The top, middle, and bottom rows
display the signals for 30, 40, and 50 km/h,
respectively. Meanwhile, the left column shows the
signals in the time domain, and the right column shows
the signals in the frequency domain. From top to
bottom, the time period for each time domain graph is
0.07, 0.05, and 0.04 seconds, respectively. On the
other hand, from top to bottom, the tallest peaks of the frequency domain graphs are at 13.9, 19.3, and 24.5 Hz,
respectively. Table 1 presents the average speed
measured from the radar gun, time-domain method,
and frequency-domain for each speedometer speed
across the three trials.
The speedometer was used in the experiment to
allow the driver to gauge their speed. However, the
speed displayed on the speedometer does not reflect
the true speed of the vehicle due to several reasons,
such as changes in tire circumference due to use and
environmental conditions. Therefore, measurements
from a radar gun are used as the reference speeds
instead. Fig. 9 presents the error, or difference in kilometers per hour, between the speed measured from
the radar gun and the speeds measured from the time
domain and the frequency domain methods. The time
domain method had the smallest error (0 km/h) at
17 and 36 km/h and the largest error (-2 km/h) at
56 km/h. On the other hand, the frequency domain
method had the smallest error (0 km/h) at 17 km/h and
the largest error (-5 km/h) at 47, 56, and 75 km/h.
5. Conclusion
Vehicle speed measurement is crucial to ensuring
traffic safety for road users, especially in urban areas
where the number of vehicles and pedestrians is high.
However, another thing that is also abundant in urban
areas is transverse rumble strips, which are used to
alert drivers of potential hazards ahead. This paper
proposes two new speed measurement methods that
utilize this abundance of transverse rumble strips by
using geophones to collect the ground vibrations
generated by a vehicle passing over them. The two
methods — processing the time-domain and
frequency-domain signals — are simple, inexpensive,
portable, and can work in any weather conditions.
Moreover, the only sensors used in the methods are
geophones, which are passive sensors and therefore allow the system to operate on low power up to the signal processing and computation stage. Based
on the outcomes of the experiments, the errors of the
proposed vehicle speed measurement methods
compared to a conventional radar gun fall within a
range of 0 to 5 km/h difference. These variations are
of minor magnitude, sufficiently validating the role of
the proposed methods in contributing to traffic
accident warning and preventative systems.
Fig. 8. Signals from Experiment.
Table 1. Average Frequencies and Speeds from Experiment.

Speedometer  Radar Gun  Time domain               Frequency domain
(km/h)       (km/h)     Speed (km/h)  Freq (Hz)   Speed (km/h)  Freq (Hz)
20           18         18.27         10.15       17.72         9.84
30           26         26.88         14.93       24.91         13.84
40           37         36.59         20.32       35.65         19.80
50           47         48.08         26.70       42.49         23.61
60           56         53.89         29.94       50.95         28.31
70           66         64.56         35.86       67.70         37.61
80           75         73.54         40.85       69.83         38.79
Fig. 9. Bar Graph Displaying Average Error of the
Proposed Methods Compared to a Radar Gun from
Experiment.
References
[1]. World Health Organization, Global status report on
road safety 2018, World Health Organization,
Geneva, 2018, December 2018, pp. 2-13.
[2]. IEEE Standard for the Performance of Down-the-
Road Radar Used in Traffic Speed Measurements,
IEEE Std 2450™-2019, IEEE Instrumentation and
Measurement Society, November 2019.
[3]. M. Akram Adnan, N. Izzah Zainuddin, N. Sulaiman,
and T. Badrul Hisyam Tuan Besar, Vehicle Speed
Measurement Technique Using Various Speed
Detection Instrumentation, in Proceedings of the IEEE
Business Engineering and Industrial Applications
Colloquium (BEIAC’ 2013), Langkawi, Malaysia, 07-
09 April 2013, pp. 668-672.
[4]. N. S. Duong, J. Blanc, P. Hornych, F. Menant, Y.
Lefeuvre, B. Bouveret, Monitoring of pavement
deflections using geophones, International Journal of
Pavement Engineering, Vol. 21, Issue 9, 7 Jun 2021,
pp. 1103-1113.
[5]. S. Taghvaeeyan and R. Rajamani, Portable Roadside
Sensors for Vehicle Counting, Classification, and
Speed Measurement, IEEE Transactions on Intelligent
Transportation Systems, Vol. 15, No. 1, Feb 2014,
pp. 73-83.
[6]. R. L. Pimentel, R. A. Melo, I. A. Rolim, Estimation of
increases in noise levels due to installation of
transverse rumble strips on urban roads, Elsevier,
Applied Acoustics, Vol. 76, Feb. 2014, pp. 453-461.
[7]. M. S. Zaina, M. H. Othmanb, et al., Evaluation of
Ground Vibration Resulting from a Heavy Vehicle
Passing Over Transverse Rumble Strips: A Case Study
in Kluang Road FT050, Jurnal Kejuruteraan, Vol. 33,
No. 3, 2021, pp. 654-650.
[8]. R. Boylestad, L. Nashelsky, Op-Amp Applications, in
Electronic Devices and Circuit Theory, 11th ed.,
Pearson, 2009.
(061)
Static and Dynamic Calibration of Pneumatic Pressure Sensors
and Instruments
José Dias Pereira 1,2 and Octavian Postolache 3
1 Instituto Politécnico de Setúbal, ESTSetubal, Sustain.RD, Setúbal, Portugal
2 Instituto de Telecomunicações, Lisboa, 1049-001, Portugal
3 Instituto de Telecomunicações IT-IUL, Instituto Universitário de Lisboa, Portugal
E-mail: dias.pereira@estsetubal.ips.pt
Summary: Static and dynamic calibration of industrial devices is a key factor regarding the process performance, in terms of
quality and reliability, and ultimately in the return on investment and companies' profits. The importance of calibration is even greater for continuous industrial processes, where online condition-based maintenance makes it possible to monitor and spot upcoming equipment failures and to trigger maintenance long enough before failure occurs. In this paper, particular attention is dedicated to the static and dynamic calibration of pneumatic pressure devices, such as industrial transmitters, which are used in a large number of industrial instrumentation applications to measure different physical quantities, not only pressure but also indirect quantities such as flow rates of liquids and gases. The paper includes simulation and experimental results associated with the static and dynamic calibration of pneumatic pressure sensors and instruments.
Keywords: Pressure sensors, Industrial instrumentation, Calibration, Pressure regulators.
1. Introduction
Regarding dynamic pressure calibration
techniques, it is important to underline that these sensors are typically calibrated by varying the input signal amplitude rather than its frequency. A challenge of these calibration techniques is to cover the dynamic range of the pressure sensing devices according to their working principles and applications. Regarding aperiodic calibration, there exist two main techniques, one known as step input and the other known as impulse input [1]. In the first case, the calibration is based on the response of the sensing device to a quick rise in the applied pressure, whereas in the second case a quick rise-and-fall input pressure signal is used.
Regarding application fields, there are several areas
where dynamic measurements and calibrations are an
important issue [2], such as: hydraulic piping and
pumps, medical cardiac probes, control of medical
balloon manufacturing, angioplasty catheters, gas
turbines, intake and exhaust flow engines, engines
combustion chambers, wind tunnels, specific
aeronautic instrumentation, synchronized hydraulic
and pneumatic systems and explosions’ detection,
among others. The response time of a pressure
measuring system with a connecting tube, that is
widely used in several instrumentation devices, must
also be analyzed since the dynamic characteristics of
connecting tubes, namely their lengths and diameters,
can substantially affect pressure measurement accuracy [3]. This paper is organized as follows: section one is the introduction; section two includes the
hardware and software description of the proposed
static and dynamic calibration system; section three
includes experimental results and their discussion, and
the last part, section four, is dedicated to conclusions.
2. System Description
This section includes the description of the main
characteristics of the HW and SW of the proposed
static and dynamic calibration system and flowchart of
the implemented calibration procedure.
2.1. Hardware
The block diagram of the electro-pneumatic
pressure regulator (EPPR) [4] used to generate the
static and dynamic pressure signals, used for
calibration purposes, mainly includes: a pneumatic pressure transducer (PT); a miniature electro-valve (VENT), used to speed up pneumatic air discharge and for zero (offset) calibration purposes; a miniature proportional valve (MPV); and
some elementary conditioning circuits, namely, a
comparator and a voltage follower. Fig. 1 represents
the block diagram of the EPPR. The setpoint signal,
used to define the pressure applied to the DSUT, is
generated by a multifunction data acquisition board with 12-bit resolution, a maximum sample rate of 50 kS/s, and an output DAC voltage range between 0 and 5 V.
2.2. Software
The software of the proposed system includes
several routines used for: configuration of the
calibration signal; data processing for evaluation of
static and dynamic calibration parameters [5]; storage
of calibration data and generation of calibration
reports. As an example, Fig. 2 represents the front
panel of the LabVIEW program that was developed to
configure parameters of the pressure signal used for
calibration purposes. The main configuration
parameters, visible in the front panel of the virtual
instrument (VI), include: the definition of waveform
settings, namely the frequency, amplitude and offset of
the control voltage signal that corresponds to the
setpoint of the pneumatic loop represented in Fig. 1;
the definition of the main characteristics of the
pressure calibration pattern, which include the number of bursts, the number of pulses per burst, the inter-burst pause duration, and a burst modulation parameter for the multiple patterns included in the calibration pressure signal. In the example represented in Fig. 2, the calibration pressure signal includes 4 burst patterns; 9 pressure pulses per burst; inter-burst pauses equal to 3 s; a sinusoidal pattern waveform, with frequency and amplitude equal to 1 Hz and 1 V, respectively; and an offset value equal to 0.25 V.
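For illustration only, a burst-patterned setpoint such as the Fig. 2 example could be synthesized as in the Python sketch below (the actual system uses LabVIEW and a DAC; the sampling rate and the pause level are assumptions):

```python
import numpy as np

def burst_setpoint(n_bursts=4, pulses_per_burst=9, pause_s=3.0,
                   freq_hz=1.0, amplitude_v=1.0, offset_v=0.25, fs=1000):
    """Build a burst-patterned control voltage: sinusoidal pulse trains separated by pauses."""
    t_pulse = np.arange(0, pulses_per_burst / freq_hz, 1.0 / fs)
    burst = offset_v + amplitude_v * np.sin(2.0 * np.pi * freq_hz * t_pulse)
    pause = np.full(int(pause_s * fs), offset_v)   # hold the offset during pauses (assumed)
    return np.concatenate([np.concatenate([burst, pause]) for _ in range(n_bursts)])

setpoint = burst_setpoint()   # defaults reproduce the Fig. 2 example (4 bursts, 9 pulses, 3 s)
```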
Fig. 1. Block diagram of the electro-pneumatic pressure
regulator with internal vent (MPV- miniature proportional
valve; DSUT- device system under test).
Fig. 2. Calibration pressure signal associated with a burst
suction signal characterized by: 4 burst patterns; 9 pressure
pulses per burst; inter-burst pauses equal to 3 s
and a sinusoidal pattern waveform.
Another example, considering a more realistic
situation, is represented in Fig. 3 where the calibration
pressure signal includes 8 burst patterns; 4 pressure pulses per burst; inter-burst pauses equal to 8 s; and a triangular pattern waveform with the same values of frequency, amplitude, and offset as previously considered. In this case, the time constant
for increasing the pressure is much higher than the time
constant for pressure discharge.
Fig. 3. Calibration pressure signal associated with a burst
suction signal characterized by: 8 burst patterns; 4 pressure
pulses per burst; inter-burst pauses equal to 8 s
and a triangular pattern waveform.
2.3. Calibration
Whenever dynamic calibration of pressure devices
is essential, it is important to underline that the
pressure device to be calibrated includes not only the
sensing element, for example a piezoelectric or piezoresistive element, but also the connecting devices, such as tubing and other pneumatic interconnections, and the pneumatic fluid itself. The dynamic response is affected not only by frequency variations but also by amplitude variations. Thus, the
dynamic calibration of pressure sensors is typically
performed by varying, not only the frequency, but also
the amplitude of the pneumatic signals. Fig. 4
represents the flowchart of the calibration procedure
that includes the evaluation of the static calibration
coefficients (SCC), on the left side of the figure, and
the evaluation of the dynamic calibration coefficients
(DCC), on the right side of the same figure.
3. Results
Regarding static calibration test results, Fig. 5
represents the results that were obtained when the
input control voltage of the EPPR varies between its
minimum and maximum values, 0 V and 5 V,
respectively. As is clearly visible from Fig. 5, the
EPPR presents high linearity and its sensitivity is about
1 p.s.i. per Volt, which agrees with the datasheet
specifications of the device [4].
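As a sketch of how the static calibration coefficients (sensitivity and offset) can be extracted from such a sweep, a first-order least-squares fit suffices; the data below is synthetic placeholder data approximating the reported behaviour, not the measured values:

```python
import numpy as np

control_v = np.linspace(0.0, 5.0, 11)   # EPPR control voltage sweep, 0-5 V
# Placeholder pressures around the reported ~1 p.s.i./V sensitivity (not measured data).
pressure_psi = 1.0 * control_v + 0.005 + np.random.normal(0.0, 0.01, control_v.size)

# First-order fit: pressure = sensitivity * voltage + offset (static calibration coefficients).
sensitivity, offset = np.polyfit(control_v, pressure_psi, 1)
print(f"sensitivity = {sensitivity:.3f} p.s.i./V, offset = {offset:.4f} p.s.i.")
```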
Fig. 4. Calibration flowchart (SCC- static calibration
coefficients, DCC- dynamic calibration coefficients).
Fig. 5. Static calibration test results of the EPPR.
Fig. 6 represents the relative error of the output
pressure amplitude taking as reference its theoretical
value. From the results it is possible to conclude that,
after offset and gain compensation of the static
calibration test results, the absolute value of the
relative error is always lower than 0.3 %, the
correlation coefficient between measured and
theoretical values is equal to 0.9828, and the offset error
value is lower than 0.0051 p.s.i. These error values are
low enough to perform the laboratory calibration of the
majority of the industrial pressure instrumentation, which
typically presents an overall measurement error higher
than 1 %. Additional tests were performed with the
time-variable pressure signals that are required for
dynamic calibration purposes. Some of these tests
were performed with sinusoidal control voltage signals
and it was verified, as expected, that the frequency
response has a low-pass filter behavior. Fig. 7
represents the experimental test results for a set of
sinusoidal signals with offset and amplitude values
equal to 2.5 V and 1 V, respectively, and frequencies
equal to {0.1; 0.2; 0.5; 1.0; 2.0; 5.0; 10.0; 20.0; 50.0}
Hz. Using the polyfit function of MATLAB, the
following filter characteristics were obtained: a unitary
static gain and a cutoff frequency (fc) approximately
equal to 7.2 Hz. As previously mentioned, it is important
to underline that the value of the cutoff frequency
depends significantly on the characteristics of the
pneumatic system connected to the output of the
EPPR, namely the lengths and diameters of the
pneumatic tubes, together with the limitation imposed by air
compressibility and the dynamic characteristics
of the EPPR.
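This first-order low-pass characterization can be reproduced, for example, by fitting the magnitude response G0/sqrt(1 + (f/fc)^2) to the measured amplitude ratios. The sketch below uses SciPy's curve_fit instead of MATLAB's polyfit, which is what the paper reports; the amplitude-ratio values are synthetic placeholders, not the measured data.

import numpy as np
from scipy.optimize import curve_fit

def lowpass_mag(f, g0, fc):
    # Magnitude response of a first-order low-pass filter.
    return g0 / np.sqrt(1.0 + (f / fc) ** 2)

f_hz = np.array([0.1, 0.2, 0.5, 1.0, 2.0, 5.0, 10.0, 20.0, 50.0])
# Synthetic amplitude ratios for illustration only (unit gain, fc = 7.2 Hz).
rng = np.random.default_rng(0)
gain = lowpass_mag(f_hz, 1.0, 7.2) * (1.0 + 0.01 * rng.standard_normal(f_hz.size))

(g0_hat, fc_hat), _ = curve_fit(lowpass_mag, f_hz, gain, p0=(1.0, 5.0))
print(f"static gain = {g0_hat:.3f}, cutoff frequency = {fc_hat:.2f} Hz")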
Fig. 6. Relative error between measured and theoretical pressure values.
Fig. 7. Bode diagram of the electro-pneumatic based system
(circle symbols – experimental data; continuous line – LMS
curve fitting of the experimental data).
Finally, regarding time domain characterization,
Fig. 8 represents the dynamic calibration results that
were obtained with three square pressure control
signals with a common frequency equal to 5 Hz and
amplitudes equal to 0.25, 0.5 and 1 V. As is clearly
visible, the maximum relative errors increase with the
amplitude of the signals, their maximum absolute
value being lower than 1.2 % for the two smaller
amplitudes and about 8 % for the highest square wave
amplitude. The relative errors are referenced to the
full-scale range of the EPPR, which is equal to 5 p.s.i.,
and to the theoretical sensitivity of the pressure regulator
used for testing purposes, which is equal to 1 p.s.i./V.
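Under the stated references, the error figures above can be reproduced with a calculation like the following sketch, where the expected pressure is derived from the nominal 1 p.s.i./V sensitivity and the errors are normalised to the 5 p.s.i. full-scale range; the function and array names are illustrative, not the authors' code.

import numpy as np

FULL_SCALE_PSI = 5.0   # EPPR full-scale range
SENSITIVITY = 1.0      # nominal sensitivity, p.s.i. per volt

def dynamic_relative_error(v_ctrl, p_meas):
    """Relative error (%) of the measured pressure, referenced to full scale."""
    p_expected = SENSITIVITY * np.asarray(v_ctrl, dtype=float)
    return 100.0 * (np.asarray(p_meas, dtype=float) - p_expected) / FULL_SCALE_PSI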
Fig. 8. Dynamic calibration results for a set of three square
voltage control signals with a frequency equal to 5 Hz
and three different amplitude values (0.25 V, 0.5 V
and 1 V).
4. Conclusions
This paper presents a low-cost and flexible electro-pneumatic
based system that can be used for the static and
dynamic calibration of pneumatic pressure sensing
devices. Regarding flexibility, the
proposed system enables different parameters, such
as amplitude, frequency, waveform, number of pulses
per burst, burst rate, and amplitude burst modulation of
the generated pressure signals, to be adjusted in real time
according to the users' needs and to the application fields of the
devices under calibration, namely industrial pressure
measurement applications. The main advantages of the proposed
static and dynamic calibration system include its low-cost
implementation with COTS components, flexible
configuration, self-test capabilities and an acceptable
performance for the large majority of industrial
pressure instrumentation devices.
References
[1]. L. M. Léodido, C. Sarraf, J. Damion, Caractéristiques
Dynamiques des Capteurs de Pression en Milieu
Hydraulique, in Proceedings of the International
Metrology Congress, Paris, France, 22-25 June 2009.
[2]. A. C. G. C. Diniz, et al., Dynamic Calibration Methods
for Pressure Sensors and Development of Standard
Devices for Dynamic Pressure, in Proceedings of the
XVIII IMEKO World Congress – Metrology for a
Sustainable Development, 17-22 September, 2006, Rio
de Janeiro, Brazil.
[3]. I. Bajsic, J. Kutin, T. Zagar, Response time of a
pressure measurement system with a connecting tube,
Instrumentation Science and Technology, Vol. 35,
Taylor and Francis, 2007, pp. 399-409.
[4]. Miniature Electronic Pressure Controller, Part Number:
990-005103-005, https://www.parker.com/content/
dam/Parker-com/Literature/Precision-
Fluidics/Electronic-Pressure-Controllers/OEM-Data-
Sheet.pdf
[5]. J. P. Damion, Moyens d’Étalonnage Dynamique des
Capteurs de Pression, Bulletin du Bureau National de
Métrologie, Vol. 8, Issue 30, 1977.
(062)
APHRODITE: Design and Preliminary Tests of an Autonomous
and Reusable Photo-sensing Device for Immunological Test
aboard the International Space Station
L. Nardi1, N. Maipan Davis1, S. Sansolini1, T. B. De Albuquerque1, M. Laarraj1, D. Caputo2,
G. de Cesare2, S. R. Shariati Pour3, M. Zangheri3, D. Calabria3, M. Guardigli3, M. Balsamo4,
E. Carrubba4, F. Carubia4, M. Ceccarelli4, M. Ghiozzi4, L. Popova4, A. Tenaglia4, M. Crisconio5,
A. Donati4, A. Nascetti1 and M. Mirasoli3
1 School of Aerospace Engineering, Sapienza University of Rome, Via Salaria 851, I-00138, Rome, Italy
2 Department of Information Engineering, Electronics and Telecommunications, Sapienza University of Rome,
Via Eudossiana 18, I-00184, Rome, Italy
3 Department of Chemistry “Giacomo Ciamician”, Alma Mater Studiorum – University of Bologna,
Via Selmi 2, I-40126, Bologna, Italy
4 Kayser Italy s.r.l., Via di Popogna 501, I-57128, Livorno, Italy
5ASI, Italian Space Agency, Via del Politecnico, 00133 Rome, Italy
Tel.: + 39 3924573881
E-mail: lorenzo.nardi@uniroma1.it
Summary: We present preliminary results of the design and manufacturing of APHRODITE, a compact and versatile device for
carrying out analyses of biological fluids during space missions, which will be used as a technological demonstrator on board the
International Space Station (ISS) for the quantitative determination of salivary biomarkers that indicate alterations in the
functionality of the immune system. The paper addresses the design of the main subsystems of the analytical device and the
preliminary results obtained during the first implementations of the device subsystems and testing measurements. In particular,
the system design and the experimental data output of the lab-on-chip photosensors and of the front-end readout electronics are
reported in detail.
Keywords: Lab-on-chip, Chemiluminescence, Hydrogenated amorphous silicon photosensors, Biosensor, International space
station.
1. Introduction
The APHRODITE project, developed
collaboratively by the School of Aerospace
Engineering (SIA), Sapienza University of Rome, the
University of Bologna, and Kayser Italia, and funded by
ASI, aims to design and manufacture a technology
demonstrator for salivary biomarker analysis on the
International Space Station (ISS). The system employs
a lab-on-chip (LoC) with thin-film sensors for dual-
analyte chemiluminescence immunoassay. Long-
duration space travel necessitates health protection
and prevention methods due to microgravity-induced
health issues such as muscle atrophy, metabolic changes,
and increased cancer risk [1]. Traditional space
activity has been confined to low Earth orbit, but future
missions demand in situ diagnostics and prevention
methods because sample return is not viable. In response,
diagnostic tools usable during space missions have become a
new aerospace priority. LoC technology, with its
microfluidic approach, is a fitting design choice
because it enhances analytical efficiency, reduces
sample size, response time, and cost, and increases
automation.
Such devices have already been used on Earth in
numerous biomedical applications and have recently
been validated in orbit [2, 3].
In this paper, we report the system design and
preliminary results of an innovative platform
conceived to enhance space exploration by permitting
the detection of numerous target analytes of interest in
microgravity.
2. System Overview
APHRODITE is a compact biosensor (Fig. 1)
designed for ISS deployment to analyze cortisol and
dehydroepiandrosterone (DHEA) levels in astronaut
saliva. The biosensor incorporates microfluidics,
functionalized microbeads (MBs), and a-Si:H
photodiodes for detection, a detection method already
validated in microgravity [4,5].
Its subsystems include a disposable cartridge with
reagent reservoirs, a detection subsystem with a
microfluidic chip and photosensor array, a fluidic
dispensing subsystem, control electronics, and
mechanical housing. The biosensor's design emphasizes
space application requirements such as compactness,
weight reduction, and radiation resistance.
The immunological analysis protocol employs a
luminol/H2O2 reaction catalyzed by Horseradish
Peroxidase for detecting cortisol and DHEA. The
analyte-specific capture antibodies are immobilized
on MBs [6], which are loaded into the chip channel and
kept in position using permanent magnets; after the
analysis, they are eliminated by removing the magnetic field
and washing, leaving the system clean and ready for the
next analysis, unlike in other LoC systems [7].
Fig. 1. APHRODITE block diagram.
3. Results and Discussion
Initial experiments to verify the system's concept
were carried out by the University of Bologna to select
the most suitable buffer and to assess MB stability, the
functionalization protocol, and the cross-reactivity
between DHEA and cortisol.
In addition, an integration test demonstrated
APHRODITE's performance with a fluidic dispensing
subsystem and disposable cartridge prototype. The
chemiluminescence (CL) signal was successfully
detected (Fig. 2), with attention to microbead
distribution during analysis. This experiment included
the detection of DHEA without a saliva sample and
demonstrated the successful operation of the core
subsystems, including the microfluidic channels, valves,
and cartridge components. The typical behavior of the CL
signal curve after the stop-flow condition was observed in
the data.
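As an illustration of how such a stop-flow CL trace could be reduced to simple figures of merit, the sketch below subtracts the dark signal from a photodiode time series and reports the peak and time-integrated CL intensity. This is only an assumed post-processing example, not the APHRODITE flight or ground software; the function name, sampling rate and stop-flow index are hypothetical.

import numpy as np

def cl_signal_metrics(current_a, stop_flow_idx, dark_current_a, fs_hz=10.0):
    """Return (peak, integral) of the dark-subtracted CL signal after stop flow."""
    sig = np.asarray(current_a[stop_flow_idx:], dtype=float) - dark_current_a
    peak = sig.max()               # peak CL intensity after stop flow
    integral = sig.sum() / fs_hz   # time-integrated CL signal (rectangle rule)
    return peak, integral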
4. Conclusions
The APHRODITE biosensor's design was
validated through preliminary tests and subsystem
integration. Future steps involve deploying a ground
model for measurements with actual saliva samples
and qualifying the device for ISS flight. Once proven,
APHRODITE could contribute to studying space's
impact on human health and be adapted for other
biomarker analyses.
Acknowledgments
The authors acknowledge the Italian Space
Agency (ASI) that funded the system development and
future flight to the ISS in the frame of the program
«Ricerche e dimostrazioni tecnologiche sulla Stazione
Spaziale Internazionale – VUS3:
ISS4EXPLORATION».
Fig. 2. DHEA run output signal.
References
[1]. Afshinnekoo, E., Scott, R. T., MacKay, M. J., Pariset,
E., et al., Fundamental Biological Features of
Spaceflight: Advancing the Field to Enable Deep-
Space Exploration, Cell, Vol. 183, 2020,
pp. 1162–1184.
[2]. Burklund, A., Tadimety, A., Nie, Y., Hao, N., Zhang,
J.X.J., Advances in diagnostic microfluidics, in:
Advances in Clinical Chemistry, 2020, Elsevier,
pp. 1–72.
[3]. Calabria, D., Trozzi, I., Lazzarini, E., Pace, A., et al.,
AstroBio-CubeSat: A lab-in-space for
chemiluminescence-based astrobiology experiments.
Biosensors and Bioelectronics, 2023, Vol. 226,
115110.
[4]. Zangheri, M., Mirasoli, M., Guardigli, M., Di Nardo,
F., et al., Chemiluminescence-based biosensor for
monitoring astronauts’ health status during space
missions: Results from the International Space Station.
Biosensors and Bioelectronics, 2019, Vol. 129,
pp. 260–268.
[5]. Fereja, T. H., Hymete, A., Gunasekaran, T., A Recent
Review on Chemiluminescence Reaction, Principle
and Application on Pharmaceutical Analysis, ISRN
Spectroscopy, 2013, pp. 1–12.
[6]. Khizar, S., Ben Halima, H., Ahmad, N.M., Zine, N.,
Errachid, A., Elaissari, A., Magnetic nanoparticles in
microfluidic and sensing: From transport to detection.
Electrophoresis, Vol. 41, 2020, pp. 1206–1224.
[7]. Nascetti, A., Mirasoli, M., Marchegiani, E., Zangheri,
M., et al., Integrated chemiluminescence-based lab-
on-chip for detection of life markers in extraterrestrial
environments, Biosensors and Bioelectronics, 2019,
Vol. 123, pp. 195–203.