Implementation of a Visible Light based Indoor
Localization System
T. Wenge, M.T. Chew, F. Alam, G. Sen Gupta
School of Engineering and Advanced Technology, Massey University Albany, Auckland, New Zealand
tapiwa245@gmail.com, M.T.Chew@massey.ac.nz, F.Alam@massey.ac.nz, G.Sengupta@massey.ac.nz
Abstract—This paper reports the practical implementation and the results of a visible light-based indoor localization system employing a fingerprinting technique. The localization system consists of four consumer grade LED luminaires that are positioned 2.5 m above a 3.4 m × 2.2 m floor space. A square wave modulation scheme is employed to allow each luminaire to be identified by a photodiode-based receiver using the Fast Fourier Transform (FFT). A position within this floor space can be characterized by an ID which is made up of a vector of the detected signal magnitudes of each of the luminaires. By employing the fingerprinting localization methodology, the mobile receiver's position can be located by comparing the ID vector it is currently receiving to a database of known IDs at each position. A mean error of 1.39 cm within a two-dimensional floor space is achievable with a system of this scale. The paper also shows how the accuracy can be traded off against the size of the offline database.
Keywords—Indoor localization, VLC, K-Nearest neighbour,
fingerprinting
I. INTRODUCTION
There are many well-established localization systems to
estimate an object’s position in both the indoor and the outdoor
environment. One of the best-known outdoor localization
techniques is the Global Positioning System (GPS) [1]. There
are already a number of indoor localization systems reported in
the literature with a varying degree of accuracy [2]. Wireless
technologies like WiFi [3], RFID [4], and ZigBee [5] are
typically used for indoor localization systems. However, with
these systems, the localization error is often in the order of
meters due to the effects of multipath. RF localization is also
vulnerable to interference. LIDAR [6] and camera-based scene
analysis [7] solutions offer much lower error, in the order of
millimeters, but are costly and require greater computational
power. In recent years, visible light based localization techniques have attracted considerable attention within the research community. This is because visible light communication (VLC) technologies have shown the potential to replace conventional wireless communication technologies in the near future. The potential advantages include lower cost due to the ability to leverage existing lighting infrastructure, durability and environmental friendliness. Most importantly from the localization perspective, visible light based systems can estimate an object's location with centimeter-level accuracy [8].
II. VISIBLE LIGHT BASED LOCALIZATION TECHNIQUE
Visible light based localization algorithms can be broken down into three categories: proximity, fingerprinting and geometry based techniques [8]. In most cases, the former two are considered simpler and use less complex algorithms to estimate an object's position compared to the latter.
The basic function of geometry-based positioning rests on the concept of signal triangulation or trilateration. The object carrying the photosensor senses the signals from a minimum of three luminaires, and the distance from each light source is calculated using a characteristic of the received signal and a model of the falloff of the light. The position of the object relative to the lights can then be determined using a trilateration or triangulation algorithm. Geometry based localization can yield errors of 1.5 cm in simulation, as shown in [9].
Signal characteristics that are most commonly discussed in
literature for determining the distance from each luminaire are
received signal strength (RSS), angle of arrival (AOA), time of
arrival (TOA) and time difference of arrival (TDOA). RSS is
the most common approach because it can be done using a
single photosensor and does not require synchronized hardware
which keeps the cost and complexity of implementation low
[10]. AOA implementations generally require more than a
single photosensor or specialized optics [11]. TOA and TDOA
based systems require highly synchronized hardware which
increases the cost of implementation [12].
This paper focuses on a fingerprinting-based localization technique. An offline database is constructed by taking a set of RSS measurements that uniquely identify selected locations within the space in which the photosensor-equipped object will be localized. Once the construction of this database is complete, localization of the object is performed by capturing the current signal and then applying a classification algorithm to estimate the position based on the offline database [9].
The performance of a fingerprinting based visible light
localization system is heavily reliant on the algorithm used to
classify the current signal based on the offline database. A
broad range of pattern recognition and machine learning
algorithms would be suitable for this application such as K-
nearest neighbour, Neural Networks and Multiclass Support
Vector Machines (SVMs) [13].
In this work a weighted K-Nearest Neighbour (KNN) [13] matching algorithm was chosen as it allows for low hardware cost, low computational cost, ease of implementation and high accuracy. Recent work also reports that weighted KNN outperforms the trilateration method in terms of accuracy both with and without ambient light interference [14].
The majority of the work in the literature reporting higher accuracy is theoretical in nature and typically shows simulation-based results [15]. The practical implementations found in the literature are a) often limited to small-scale test beds [16], and/or b) use specialized, expensive luminaires and components [16], and/or c) operate within a controlled environment [17]. In contrast, we present a real-life, room-scale implementation that uses off-the-shelf consumer grade luminaires. This also demonstrates how visible light based localization can potentially be used in a novel way to track objects using only the existing lighting infrastructure of a built environment with minor modification.
III. SYSTEM OVERVIEW
Our system aims to locate the position of a photosensor equipped robot in a 3.4 m × 2.2 m floor space by classifying the frequency magnitudes of the light emitted from four off-the-shelf LED luminaires mounted on the 2.4 m high ceiling. Figure 1 shows the floor space with green gridlines and the photosensor equipped robot receiving optical signals from the luminaires above. Each of these four luminaires transmits its individual light identity to the robot using square wave modulation while at the same time providing illumination to the room, as illustrated in Figure 2. This modulation is implemented using the MOSFET driver circuit shown in Figure 3.
The transmitted square wave signal from the m-th luminaire (m = 1 to 4) and its Fourier series expansion can be written as

S_m(t) = A_m \sum_{n=-\infty}^{\infty} \left[ \operatorname{rect}\!\left( \frac{t - nT_m - T_m/4}{T_m/2} \right) - \operatorname{rect}\!\left( \frac{t - nT_m - 3T_m/4}{T_m/2} \right) \right]    (1)

S_m(t) = \sum_{l\ \mathrm{odd}} C_m^{l} \exp[\, j 2\pi l f_m t \,]    (2)

where C_m^{l} = \frac{2 A_m}{j \pi l} is the Fourier series coefficient and f_m = 1/T_m is the fundamental frequency of the m-th luminaire.
The multiplexing technique used in this work exploits the fact that S_m(t) contains only odd harmonics of f_m. The modulation frequency f_m of the m-th luminaire is chosen to be the second harmonic of f_{m-1}, the modulating frequency of the (m-1)-th luminaire. That way, the harmonics of one luminaire do not interfere with the fundamental frequency of any other luminaire. The modulating frequencies for our developed system are 800 Hz, 1600 Hz, 3200 Hz and 6400 Hz. Within the constraints of the hardware used, it is possible to choose fundamental modulation frequencies that are only 50 Hz apart, as long as they do not coincide with the odd harmonics of the other modulating frequencies. As an example, it is possible to use 25 non-interfering modulating frequencies between 800 Hz and 2 kHz at 50 Hz intervals alone. The frequencies were chosen after considering several factors. The highest frequency must stay within the response time and slew rate limits of the LED luminaires. The lowest frequency should have enough separation from the 100 Hz interference present in regular lighting infrastructure. There is also published work reporting potential health hazards of LED flickering at 200 Hz [18]. At location (x_i, y_i), the received signal at the output of the photodiode from the m-th luminaire is given by

r_{m,i}(t) = \sum_{l\ \mathrm{odd}} G_{m,i}^{l}\, C_m^{l} \exp[\, j 2\pi l f_m t \,]    (3)
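To illustrate the frequency plan described above, the following Python sketch (our own illustration, not code from the paper) checks whether a candidate set of fundamental modulation frequencies avoids the odd harmonics of every other candidate within a given frequency resolution; the 50 Hz resolution assumed here matches the receiver FFT parameters reported later (31250 Hz sampling, 625-sample frames). The function name and harmonic limit are assumptions made for this example.

def harmonic_clashes(candidates, resolution_hz=50.0, max_harmonic=9):
    """Return (f_a, l, f_b) triples where f_a falls within one resolution
    bin of the l-th odd harmonic of f_b, i.e. the two would interfere."""
    clashes = []
    for fa in candidates:
        for fb in candidates:
            if fa == fb:
                continue
            # Only odd harmonics of a 50% duty-cycle square wave carry power;
            # l = 1 is excluded because the fundamentals are distinct.
            for l in range(3, max_harmonic + 1, 2):
                if abs(fa - l * fb) < resolution_hz:
                    clashes.append((fa, l, fb))
    return clashes

# The frequencies used in the paper: each is the second harmonic of the
# previous one, so no odd harmonic lands on another fundamental.
print(harmonic_clashes([800, 1600, 3200, 6400]))  # -> []  (no clashes)
print(harmonic_clashes([800, 2400]))              # -> [(2400, 3, 800)]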
Fig. 1. Visible light fingerprinting testbed
Fig. 2. System overview of visible light fingerprinting testbed
G_{m,i}^{l} is a factor that depends on the response of the photodiode at the frequency l f_m and on the optical channel between luminaire m and location i.

r_{m,i}(t) is passed through a bank of four parallel bandpass filters, each centered at one of the fundamental frequencies f_m. The output of the m-th bandpass filter can be written as

R_{m,i} = H_m\, G_{m,i}^{1}\, C_m^{1}    (4)

Fig. 3. VLC light modulation – MOSFET driver circuit and printed circuit board

H_m is the gain of the bandpass filter at the center frequency f_m. For flexibility and ease of implementation, the FFT operation was used, which samples the spectrum to perform the bandpass filtering. However, the modulating frequencies used also allow for a computationally simpler receiver implementation using cheap, passive analog bandpass filters instead of the FFT operation. Figure 4 shows R_{m,i} (m = 1 to 4) for the experimental setup. As can be seen, the values are strong in the vicinity of the m-th luminaire and become weaker further away from it. The R_{m,i} values at each point on the grid can be assigned a location vector ID given by

\mathbf{R}_i = \left[ R_{1,i}, R_{2,i}, \ldots, R_{M,i} \right]^{T}    (5)
Fig. 4. Frequency components at two points in the test space
Fig. 5. VLC Receiver circuit and printed circuit board
The receiver used to conduct the experiments samples at a
rate of 31250 Hz. This sampling rate was chosen because it is
significantly greater than the Nyquist theorem requirement for
this system. The samples are buffered into frames of 625
samples that are used for the Fast Fourier Transform (FFT).
The FFT operation can be performed on board the target node
or at the computer. For less powerful devices, all of the
computation including the FFT can be performed at the
computer as long as the received signal samples are transmitted
back. However, this will increase the wireless transmission
data rate and the energy consumption associated with the
transmission.
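As a concrete illustration of this receiver-side processing, the Python sketch below (an illustrative reconstruction under stated assumptions, not the authors' implementation) buffers a 625-sample frame taken at 31250 Hz, applies an FFT, and reads out the magnitudes at the four modulation frequencies to form the ID vector of equation (5). With these parameters the FFT bin spacing is 31250/625 = 50 Hz, so each modulation frequency lands exactly on a bin (800 Hz on bin 16, 1600 Hz on 32, 3200 Hz on 64, 6400 Hz on 128). The function name and the synthetic test frame are assumptions made for this example.

import numpy as np

FS = 31250          # sampling rate (Hz), as used in the paper
FRAME = 625         # samples per FFT frame -> 50 Hz bin spacing
LUMINAIRE_FREQS = [800, 1600, 3200, 6400]   # modulation frequencies (Hz)

def fingerprint_vector(frame):
    """Return the ID vector [R_1, ..., R_4]: the FFT magnitude at each
    luminaire's fundamental frequency, cf. equation (5)."""
    assert len(frame) == FRAME
    spectrum = np.abs(np.fft.rfft(frame)) / FRAME
    bin_spacing = FS / FRAME                       # 50 Hz per bin
    bins = [int(round(f / bin_spacing)) for f in LUMINAIRE_FREQS]
    return np.array([spectrum[b] for b in bins])

# Synthetic example: four 50% duty-cycle square waves of different amplitudes.
t = np.arange(FRAME) / FS
frame = sum(a * (np.sign(np.sin(2 * np.pi * f * t)) + 1) / 2
            for a, f in zip([1.0, 0.8, 0.5, 0.3], LUMINAIRE_FREQS))
print(fingerprint_vector(frame))   # strongest component listed first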
The circuit implemented to receive the optical signals is illustrated in Figure 5. Figure 6 shows the heat map of the frequency magnitude of the 800 Hz luminaire when measurements are taken at every intersection of the 10 cm × 10 cm grid. It can be observed that the received signal is strongest at the upper right hand corner, where the luminaire is located. The magnitude fall-off at 800 Hz, as the detector's distance from the corresponding luminaire increases, is shown in Figure 7. This demonstrates a Lambertian falloff for the luminaire [19]. Similar fall-offs were observed for the other three frequencies.
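For reference, the Lambertian line-of-sight channel model commonly used in the VLC literature, and consistent with the falloff seen in Figure 7, gives the DC channel gain as follows; the notation below is the standard one from that literature, not taken from this paper, and m_L is used for the Lambertian order to avoid a clash with the luminaire index m.

\[
  H(0) \;=\; \frac{(m_L + 1)\, A_r}{2 \pi d^{2}} \, \cos^{m_L}(\phi)\, \cos(\psi),
  \qquad 0 \le \psi \le \Psi_c
\]

where m_L = -\ln 2 / \ln(\cos \Phi_{1/2}) is set by the LED's semi-angle at half power \Phi_{1/2}, A_r is the photodiode's active area, d is the luminaire-to-detector distance, \phi is the angle of irradiance, \psi is the angle of incidence, and \Psi_c is the receiver's field of view.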
Fig. 6. Heat map of 800 Hz luminaire
Fig. 7. Magnitude falloff 800 Hz luminaire
IV. K-NEAREST-NEIGHBOUR
Once the offline fingerprint database has been constructed
using equation (5), the location of the mobile receiver can be
estimated during the live phase using the weighted KNN
algorithm.
The ID vector received by the detector at a location (x_j, y_j) during the live phase is given by

\mathbf{R}_j^{\mathrm{live}} = \left[ R_{1,j}^{\mathrm{live}}, R_{2,j}^{\mathrm{live}}, \ldots, R_{M,j}^{\mathrm{live}} \right]^{T}    (6)

The Euclidean distance d_{j,i} between the live ID vector and the ID vector \mathbf{R}_i of the offline database is given by

d_{j,i} = \sqrt{ \sum_{m=1}^{M} \left( R_{m,j}^{\mathrm{live}} - R_{m,i} \right)^{2} }    (7)
The proximity of the live location to every location in the database is determined by d_{j,i}. The weighted KNN algorithm estimates the location of the receiver (\tilde{x}_j, \tilde{y}_j) as the weighted average of the locations of the K nearest neighbours. The nearest neighbours are the K offline locations that produce the smallest d_{j,i} values. The value of K needs to be judiciously selected to produce the optimum results. The estimated location of the receiver (\tilde{x}_j, \tilde{y}_j) is given by

\tilde{x}_j = \frac{\sum_{k=1}^{K} w_{j,k}\, x_k}{\sum_{k=1}^{K} w_{j,k}}, \qquad
\tilde{y}_j = \frac{\sum_{k=1}^{K} w_{j,k}\, y_k}{\sum_{k=1}^{K} w_{j,k}}    (8)

Here (x_k, y_k) is the location of the k-th neighbour. The weight w_{j,k} is the reciprocal of the distance calculated by equation (7). K = 4 was used in the experiments as that produced the highest accuracy, as shown in Figure 8.
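To make the weighted KNN matching concrete, the following Python sketch (an illustrative implementation of equations (6)-(8) under stated assumptions, not the authors' code) takes an offline database mapping grid coordinates to fingerprint vectors, finds the K fingerprints closest to the live vector in Euclidean distance, and returns the inverse-distance-weighted average of their coordinates. The function name, the dictionary layout, the eps guard and the toy values are assumptions made for this example.

import numpy as np

def knn_localize(live_vector, database, k=4, eps=1e-9):
    """Estimate (x, y) from a live ID vector using weighted KNN.

    database: dict mapping (x, y) in cm -> np.ndarray of M magnitudes
    live_vector: np.ndarray of M magnitudes measured during the live phase
    """
    points = list(database.keys())
    fingerprints = np.array([database[p] for p in points])

    # Equation (7): Euclidean distance to every offline fingerprint.
    distances = np.linalg.norm(fingerprints - live_vector, axis=1)

    # The K offline locations with the smallest distances.
    nearest = np.argsort(distances)[:k]

    # Equation (8): inverse-distance weighted average of their coordinates.
    weights = 1.0 / (distances[nearest] + eps)   # eps avoids division by zero
    coords = np.array([points[i] for i in nearest], dtype=float)
    return tuple(np.average(coords, axis=0, weights=weights))

# Hypothetical usage with a toy two-point database and K = 1:
db = {(0.0, 0.0):  np.array([1.0, 0.2, 0.1, 0.1]),
      (20.0, 0.0): np.array([0.2, 1.0, 0.1, 0.1])}
print(knn_localize(np.array([0.9, 0.3, 0.1, 0.1]), db, k=1))  # -> near (0, 0)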
Fig. 8. Mean error vs K for the 20 cm × 10 cm database
V. RESULTS
A total of 693 equally spaced frequency magnitude measurements were taken within the 3.4 m × 2.2 m space shown in Figures 1 and 4. No measurements were taken along the boundaries of the testbed. This resulted in 33 × 21 = 693 measurements within a grid of 3.3 m × 2.1 m. Some of these
are used to construct the offline database and the rest are
designated as live data and are used for validation. The
approximate time taken to collect each data point was 17
seconds. For the experiments, the robot was moved manually
from one point to another. Care was taken to ensure that the
orientation of the robot stayed the same during all the
measurements.
Fig. 9. Error distribution for the 20 cm × 10 cm database
Fig. 10. Spatial distribution of error for the 20 cm × 10 cm database
A database with 20 cm × 10 cm spacing, arranged in a chessboard pattern, was created, resulting in an offline database of 345 points and a construction time of 1 hour and 38 minutes. The mean and median accuracy achieved were
1.39 cm and 0.46 cm respectively. For our experimental setup,
there is no measurable improvement in accuracy for <20 cm
spacing. The lower bound of the localization accuracy is set by
several factors. Reflection and multipath resulting from the different types of objects present in the room contribute to the error. Given the cheap photodiode used in the receiver, the
orientation of the receiver has an impact on the received signal
strength. So, if the orientation of the receiver is not kept exactly
the same during the offline calibration and the online phase,
there will be some error in the localization. It is also assumed
that the receiver plane remains at the same level for all
measurements. However, this is difficult to maintain and the
floor and ceiling planes may not be exactly parallel to each
other over the entire test bed. This discrepancy also
contributed to the error. Saturation due to ambient light, the
noise floor of the hardware and the resolution of the A/D
converter also contributed to the overall error. Figure 9 shows
the distribution of the errors. As can be observed, the majority
of the errors are quite small with a few larger errors. Figure 10
shows that most of the large errors occur on the boundaries of
the floor space. This is due to the weighted averaging function
of the weighted KNN algorithm. It might be possible to
mitigate this error by either producing a boundary of
extrapolated data or by not performing localization on the
measurement boundary.
TABLE I: SUMMARY OF ERRORS

Spacing (cm)   Mean (cm)   Median (cm)   No. of offline points   Construction time (min)
20             1.39        0.46          345                     98
30             4.50        2.70          219                     62
40             4.90        4.70          170                     49
50             6.80        6.80          137                     39
Table I summarizes the results for different offline
databases. As the size of the database decreases, the accuracy
of the localization system becomes poorer. However, smaller databases require fewer offline measurements, so there is an obvious trade-off between accuracy and the time spent on offline measurement.
VI. SUMMARY AND FUTURE WORK
Presented in this paper is the development and
implementation of a visible light based indoor localization
system that employs off-the-shelf luminaires. The developed
system is quite accurate even when applying a computationally
simple fingerprinting technique. The trade-off between the localization accuracy and the number of measurements performed to construct the offline database has also been shown. The developed
system could potentially lead to tracking objects using only the
lighting infrastructure of a building. The work presented here
can be improved upon by further research. One can aim to
improve the multiplexing technique as the square wave
modulation with 50% duty cycle results in a loss of half the
total illumination. Finally, future research could investigate
how to construct the offline database with fewer measurements
by leveraging the intensity distance relationship given by the
Lambertian model.
REFERENCES
[1] Trimble Navigation Limited., GPS: The First Global Navigation
Satellite System, Trimble, Ed., 2007.
[2] K. A. Nuaimi and K. Hesham, “A survey of indoor positioning systems
and algorithms,” International Conference on Innovations in Information
Technology (IIT), Abu Dhabi, UAE, 25-27 April, pp. 185–190, 2011.
[3] S. He and S.-H. G. Chan, "WiFi fingerprint-based indoor positioning: Recent advances and comparisons," IEEE Communications Surveys & Tutorials, vol. 18, no. 1, pp. 466–490, 2016.
[4] W. Ruan, Q. Z. Sheng, L. Yao, T. Gu, M. Ruta, and L. Shangguan, "Device-free indoor localization and tracking through human-object interactions," in 2016 IEEE 17th International Symposium on A World of Wireless, Mobile and Multimedia Networks (WoWMoM), Coimbra, Portugal, 21-24 June, pp. 1–9, 2016
[5] D. Konings, A. Budel, F. Alam, and F. Noble, “Entity tracking within a
zigbee based smart home,” in 2016 23rd International Conference on
Mechatronics and Machine Vision in Practice (M2VIP), Nanjing, China,
28-30 November, pp. 1–6, 2016
[6] R. Li, J. Liu, L. Zhang, and Y. Hang, “LIDAR/MEMS IMU integrated
navigation (SLAM) method for a small UAV in indoor environments,”
in Inertial Sensors and Systems Symposium (ISS), Karlsruhe, Germany,
16-17 September, pp. 1–15, 2014
[7] A. J. Davison, “Real-time simultaneous localisation and mapping with a
single camera”, Ninth IEEE International Conference on Computer
Vision IEEE (ICCV), 13-16 October, p. 1403, 2003
[8] T.-H. Do and M. Yoo, “An in-depth survey of visible light
communication based positioning systems,” Sensors, vol. 16, no. 5, p.
678, 2016.
[9] P. Luo, M. Zhang, X. Zhang, G. Cai, D. Han, and Q. Li, “An indoor
visible light communication positioning system using dual-tone multi-
frequency technique,” in 2nd International Workshop on Optical
Wireless Communications (IWOW), pp. 25–29, 2013
[10] H. Sharifi, A. Kumar, F. Alam, and K. M. Arif, “Indoor localization of
mobile robot with visible light communication,” in 12th IEEE/ASME
International Conference on Mechatronic and Embedded Systems and
Applications (MESA), Auckland, New Zealand, 29-31 August, pp. 1–6,
2016
[11] S. Lee and S.-Y. Jung, “Location awareness using angle-of-arrival based
circular-pd-array for visible light communication,” in 18th Asia-Pacific
Conference on Communications (APCC), Jeju Island, South Korea, 15-
17 October, pp. 480–485, 2012
[12] J. Nah, R. Parthiban, and M. Jaward, “Visible light communications
localization using TDOA-based coherent heterodyne detection,” in IEEE
4th International Conference on Photonics (ICP), Melaka, Malaysia, 28-
30 October, pp.247–249, 2013
[13] C. M. Bishop, Pattern recognition and machine learning. Springer, 2006.
[14] N. Van Tuan, H. Le-Minh, A. Burton et al., “Weighted k-nearest
neighbour model for indoor VLC positioning,” IET Communications,
vol. 11, no. 6, pp. 864–871, 2017.
[15] Z. Zhou, M. Kavehrad, and P. Deng, "Indoor positioning algorithm using light emitting diode visible light communications," Optical Engineering, 51(8):085009, 2012.
[16] H. Kim, D. Kim, S. Yang, Y. Son, and S. Han, "An indoor visible light communication positioning system using an RF carrier allocation technique," Journal of Lightwave Technology, 31(1):134–144, 2013.
[17] H. Zheng, Z. Xu, C. Yu, and M. Gurusamy, "A 3-D high accuracy positioning system based on visible light communication with novel positioning algorithm," Optics Communications, 396:160–168, 2017.
[18] A. Wilkins, J. Veitch, and B. Lehman, "LED lighting flicker and potential health concerns: IEEE standard PAR1789 update," in 2010 IEEE Energy Conversion Congress and Exposition (ECCE), pp. 171–178, 2010
[19] K. Xu, H.-Y. Yu, Y.-J. Zhu, and Y. Sun, "On the ergodic channel capacity for indoor visible light communication systems," IEEE Access, vol. 5, pp. 833–841, 2017.