Control and Monitoring of IoT Devices Using
Mixed Reality Developed by Unity Engine
Szabina Bucsai, Erik Kučera, Oto Haffner and Peter Drahoš
Faculty of Electrical Engineering and Information Technology
Slovak University of Technology in Bratislava
Bratislava, Slovakia
Email: szabina.bucsai@stuba.sk, erik.kucera@stuba.sk
Abstract—Interconnected devices in Internet of Things (IoT) networks now form a prominent group of diverse systems, such as manufacturing systems, industrial systems, smart homes, services, etc. A modern alternative for controlling and monitoring IoT devices is the use of modern information and communication technologies, including virtual and mixed reality. Affordable technologies for promoting mixed reality include, for example, Apple ARKit and Android ARCore. The development of a system that ensures control of IoT devices using virtual or mixed reality requires a comprehensive multidisciplinary approach.
Index Terms—Industry 4.0, virtual reality, augmented reality,
mixed reality, mechatronics, Internet of Things
I. INTRODUCTION
The world as we know it today (or at least think we know it) is confronted every day with new technologies that affect almost every area of our lives, often in a very turbulent way. The biggest technological breakthroughs are traditionally called industrial revolutions. The first three industrial revolutions brought mechanical manufacturing methods, electricity and information technology. Today we live in the time of the Fourth Industrial Revolution, in which artificial intelligence, the Internet of Things, and virtual and mixed reality play a major role. At present, it is possible and common to observe and actively change the status of constructions, technical facilities and living organisms from any location on the planet, provided an internet (or at least a local) network connection is available - thanks to IoT. No exact definition exists, but the Internet of Things can be understood as devices connected to the Internet. This interconnection is ever more extensive and, according to experts, all physical objects will be linked to each other over time. Even a few years ago these concepts belonged to science fiction; today they are a reality [1].
In the beginning, the Internet of Things was designed only for communication between machines, but over time, with powerful mobile devices, users themselves became part of the system. Smartphones and tablets are a constant and self-evident part of users' daily lives. Using these devices, users can monitor and control physical and digital things - objects, phenomena and processes [2].
In addition to the computing power of smart devices, their cameras have been greatly improved, so augmented reality has great potential to highlight the role of smart devices in the Internet of Things world, where it serves as a bridge between the user, the physical world, and the Internet, and puts communication on a higher level.
The current main need of the manufacturing sector is to save skilled labor (which is scarce and expensive) and to continuously collect and evaluate data obtained from all processes at every level. We are talking about digital twins, fault prediction and service interventions, or about models for optimization and production control. A digitally modified (or created) reality, along with the Internet of Things and artificial intelligence, is a source of continuous and up-to-date information, warnings and events. Today, it analyzes, plans, designs, trains, instructs, documents, optimizes, manages and controls in visual and acoustic form - as easy to understand as possible.
The goal of the project is to create a system that enables
the control of the Internet of Things using mixed reality.
The development of such a system requires a comprehensive
multidisciplinary approach.
II. VIRTUAL, AUGMENTED AND MIXED REALITY
Modern methods of visualisation are now realized on the basis of new information and communication technologies (e.g. interactive applications made in a 3D engine [3], virtual reality or mixed/augmented reality). Visualisation of process modelling, identification and control of advanced mechatronic systems using virtual and mixed/augmented reality allows students to gain a much faster and better understanding of the studied subject compared to conventional education methods [4].
Currently, there is a trend of using 3D interactive applications and virtual/augmented/mixed reality in virtual tours of cars, houses, apartments and other products. Many modern interactive 3D applications for education are also being developed [5], [6].
The automotive company Toyota offers a modern virtual showroom for customers, developed using Unreal Engine. There are also further interactive applications from Animech Technologies, which offers many educational modules such as Virtual Car, Virtual Truck or Virtual Gearbox [7]. Using these interactive applications, pupils can understand the functionality of the presented devices, investigate their interiors and detach their separate components in detail.
Fig. 1. Example of augmented reality [11]
The arrival of Microsoft HoloLens [8] has led to the emergence of a completely new segment of mixed reality. Mixed reality has indisputable benefits over virtual reality, as the user perceives the real world and a virtual world at the same moment. The practical value of this feature is undeniable, and it is assumed that mixed reality can become a new standard in many areas such as modeling of complex mechatronic systems, marketing, education, etc.
For Microsoft HoloLens, there are many educational and virtual tour applications. The application HoloTour [9] provides 360-degree spatial video of ancient places such as Rome or Peru. The application adds 3D models of important landmarks that have not been preserved, as well as supplementary holographic information about elements in the scene.
A. Differences between virtual, augmented and mixed reality
There are several definitions and explanations of virtual, augmented and mixed reality. Augmented and mixed reality are often understood as interchangeable. In this paper, we describe and use the definitions presented by The Foundry [10].
Virtual reality (VR) replicates an environment that simulates a physical presence in places in the real world or an imagined world, allowing the user to interact in that world. Devices for virtual reality include Oculus Rift, Google Cardboard and HTC Vive.
Augmented reality (AR) is a live, direct or indirect view
of a physical, real-world environment whose elements are
augmented (or supplemented) by computer-generated sensory
input such as sound, video, graphics or GPS data. Augmented
reality is an overlay of content on the real world, but that
content is not anchored to or part of it. The real-world content
and the CG content are not able to respond to each other (Fig.
1 and Fig. 2).
Mixed reality (MR) is the merging of real and virtual worlds to produce new environments and visualisations where physical and digital objects co-exist and interact in real time. MR is an overlay of synthetic content on the real world that is anchored to and interacts with the real world. The key characteristic of MR is that the synthetic content and the real-world content can react to each other in real time. Technologies for mixed reality are Microsoft HoloLens (the Windows Mixed Reality platform), Apple ARKit and Android ARCore.
Fig. 2. Example of augmented reality [11]
Fig. 3. Example of mixed reality - Microsoft HoloLens [11]
III. DESIGN OF THE SYSTEM
During the design of a solution for monitoring and controlling Internet of Things devices using mixed reality, we used the knowledge from the previous sections.
Fig. 4. Example of mixed reality - Android ARCore and Apple ARKit [11]
A. Solution Requirements
Before embarking on a solution proposal, it is important to define the requirements for each part of the solution. We have divided the solution requirements into two main groups: hardware and software requirements.
B. Hardware Requirements
We need the smallest and most powerful computer possible. In our case, this is a single-board computer that integrates both the server and the gateway. To perform these tasks, it must have sufficient computing power so that the operating system on which we run the server runs smoothly. In addition, it must collect all the data through which sensors, components and applications communicate and ensure reliable communication. The selected Odroid single-board computer meets all these requirements with the necessary reserve.
C. Software Requirements
The software must provide the sending and retrieving of data from all existing system components. At the same time, it must be able to integrate other components with which we may eventually expand the system in the future. The mobile application must be able to recognize selected objects, display selected information coming from sensors in the smart home in real time, control the selected components in a given way, render mixed and augmented reality, and change generated virtual components based on the data obtained.
D. System design
Specific solutions - from smart homes to intelligent plants - each have their own unique variations, yet each can be described using an abstract system structure that includes the key components of the solution.
In our case, the system components are explained below (a minimal communication sketch follows this list):
Remote communication - The Internet or a local network is used for this communication. In our case, we use the Message Queuing Telemetry Transport (MQTT) protocol as the communication standard.
Agent and controller - We programmed data flows from IoT devices using the open-source visual tool Node-RED and later reworked them into our own application. Node-RED and the application serve as agent and controller in one.
Sensors - Sensors read and report the status of connected devices, tools and the local environment in the real world. They can be regarded as the eyes and touch of the system. In our system, we use sensors for temperature, relative humidity, luminance and CO2 concentration.
Actuators - Our system currently does not include all types of actuators from BigClown. As an actuator, we use a light strip. Once the system has been expanded, further actuators such as blind controls or a robotized system for controlling radiator valves can be added.
Things - Things can be devices or objects. In our case, they are BigClown kit components.
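To illustrate the role of the MQTT broker in this architecture, the following is a minimal C# sketch of how a client could connect to it and subscribe to a sensor topic. It uses the M2Mqtt library; the broker hostname and the topic layout are assumptions for illustration, not the exact configuration of our system.

```csharp
using System;
using System.Text;
using uPLibrary.Networking.M2Mqtt;
using uPLibrary.Networking.M2Mqtt.Messages;

class MqttSketch
{
    static void Main()
    {
        // Connect to the MQTT broker running on the single-board
        // gateway (the hostname is an illustrative assumption).
        var client = new MqttClient("odroid.local");

        // Print every message arriving on a subscribed topic.
        client.MqttMsgPublishReceived += (sender, e) =>
            Console.WriteLine($"{e.Topic}: {Encoding.UTF8.GetString(e.Message)}");

        client.Connect(Guid.NewGuid().ToString());

        // Hypothetical topic for the temperature sensor readings.
        client.Subscribe(
            new[] { "node/climate-monitor/thermometer/+/temperature" },
            new[] { MqttMsgBase.QOS_LEVEL_AT_MOST_ONCE });

        Console.ReadLine(); // keep the client alive
    }
}
```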
An important goal of this work is to create a modular
system. The proposed solution is expandable and versatile.
Fig. 5. Our IoT system locally
Fig. 6. Our IoT system globally
The hardware part of our system is only exemplary, but it is
arbitrarily expandable.
The figures (Fig. 5, Fig. 6) show the design of our system
architecture with specific components.
The components are as follows (Fig. 5, Fig. 6):
1) mobile device (iPad)
2) singleboard computer (Odroid)
3) MQTT broker
4) a climate monitoring device
5) LCD thermostat module
6) Power Control Kit
7) LED strip
8) radio module
9) internet
IV. IMPLEMENTATION
Before creating the application, we had to decide how we would recognize the individual objects that would serve as anchors in mixed reality. We had several options to choose from:
Using a QR code - The name QR comes from Quick Response, because this code was developed for fast decoding. It is a two-dimensional barcode that is printed on paper or exists in digital form and can be decoded using a mobile device camera. A QR code is a square matrix consisting of black and white square modules. The benefits of using QR codes include the rapid generation of new codes when building and extending the system. Another advantage is that each sensor or device can have a unique QR code, so identical-looking objects can be distinguished. The disadvantage is that while scanning, we must hold the mobile device parallel to the code and close enough to it.
Using an image - We can also generate mixed or augmented reality based on an image. The advantage of this method is that one image is enough for a single object; moreover, creating images is simple and we do not need special devices. All we need is the mobile device on which we plan to develop and use the app. It is also easy to expand the system. However, the use of images has many disadvantages. As with the QR code, when recognizing an object we must hold the device close enough to the object, and the device must be parallel to the image or at the same angle as when the images were created. There may also be a problem with identical-looking objects, which the application may not be able to distinguish based on the image. For these cases, it is better to use the QR code.
After studying the creation of a suitable image, we found that the image must meet certain requirements. The image size (width and height) must be between 500 and 1000 pixels. The image must not contain repeating patterns, low texture or low contrast. The colors of the image may be deceptive, as the computer sees the image in shades of gray; colors that the human eye can distinguish very well can be almost identical for this technology. The layout of the textured portion must be uniform, contain little text, and white areas must be kept to a minimum.
Using a three-dimensional model - An interesting option is to create a three-dimensional map based on three-dimensional objects. This method is very similar to the previous one. The application searches the live camera image of the device for a match with the created model. The advantage of this method is the ability to locate the target object from any angle and from a greater distance. In addition, the application does not lose track of a found object as easily as in the case of the previous two methods. The disadvantage is that creating a three-dimensional map is a lengthy and complicated task, which can also reduce the scalability of the system.
After considering the advantages and disadvantages of the above methods, we decided to use a three-dimensional model. Although creating and extending the system is more time-consuming, smooth running and more intuitive application control were more important in our case. Such a solution is also unique in practice and is a contribution of this paper.
Fig. 7. An environment for creating object's photos
A. Creation of Three-dimensional Map
To create a three-dimensional map, we had to capture the required objects. We can create images in photo or video format, but first we had to create suitable conditions for taking pictures: good and even illumination, as few disturbing objects in the background as possible, and a background without edges (Fig. 7). To create the snapshots and videos, we used an iPhone 7, whose 12-megapixel camera matches the recommended values. When taking photos, we turned on HDR mode to make the exposure in the image even.
We tried both shooting short videos and taking photographs. In both cases, it is important to obtain images from every angle from which we want the application to recognize the object. The individual images had to overlap, had to be created from different distances, and could not be too close. After converting the images into a suitable format (.wto - Wikitude Object Target Collection), we found that the object maps created from photographs are much more accurate and detailed than those obtained from video. The difference can be seen in Fig. 8. After obtaining a good enough map, we rotated and translated the acquired model so that its position and orientation matched reality.
B. Creation of Occluder
The goal of the application is to monitor and control Internet of Things devices using mixed reality. To create the mixed reality, we need an occluder that represents the recognized object in the virtual world. The occluder is invisible in the real world; in the virtual world, it stands in for the recognized object around which we want to build the mixed reality. The occluder prevents virtual objects that are positioned behind the real object from being displayed in front of it in the application (see Fig. 10).
Fig. 8. Comparison of created models - using images (left) and video (right)
Fig. 9. A creation of occluder
C. Creation of GUI in Mixed Reality
When designing the graphical user interface in mixed reality, we tried to present various options for monitoring and controlling devices in the Internet of Things. That is why we selected multiple objects, so that we could demonstrate a different approach to the solution for each object.
1) Climate monitoring device: This device includes a temperature sensor, a relative humidity sensor, an atmospheric pressure sensor and a light intensity sensor. In mixed reality, we decided to display the measured values of these sensors. The app subscribes to the topics on which these sensors publish. After a new value is measured, the application automatically refreshes the table displayed above the climate monitor sensor set (see Fig. 11-Fig. 13).
Fig. 10. An example of created occluder
Measuring and transmitting these values is not continuous, but happens only at given time intervals. Thus, when the application was started, it could not display any measured values until the sensors had measured and sent them. We needed to modify the source code of the gateway application so that the most recently measured values were available as soon as the application started. We added a so-called retain flag to the messages, which tells the broker not to discard a message after delivering it, but to store it and send it to new subscribers of the topic.
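A minimal sketch of publishing a retained message with the M2Mqtt library for C# is shown below; the broker hostname, topic and payload are illustrative assumptions.

```csharp
using System.Text;
using uPLibrary.Networking.M2Mqtt;
using uPLibrary.Networking.M2Mqtt.Messages;

var client = new MqttClient("odroid.local"); // assumed broker host
client.Connect("gateway-sketch");

// The last argument sets the retain flag: the broker stores this
// message and delivers it immediately to any future subscriber,
// so a freshly started app sees the latest reading right away.
client.Publish(
    "node/climate-monitor/thermometer/0:0/temperature", // assumed topic
    Encoding.UTF8.GetBytes("21.5"),
    MqttMsgBase.QOS_LEVEL_AT_LEAST_ONCE,
    true); // retain = true
```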
Most of these measured values usually stay within ranges that are favorable to human health and cannot be directly influenced. The exceptions are air temperature and carbon dioxide concentration. Air temperature control is demonstrated on the LCD module, which functions as a thermostat in addition to measuring temperature. For the climate monitoring device, we decided to alert the user when the carbon dioxide concentration exceeds a value that is unfavorable to human health and requires ventilation.
Fig. 11. Climate monitoring device - normal Carbon Dioxide concentration
The concentration of carbon dioxide in the external environment is about 250-400 ppm. In interiors, we can consider concentrations of up to 1000 ppm to be normal. From 1000 ppm to 2000 ppm, people already find the air unpleasant; in this range of measured values, it is better to ventilate the room. Above 2000 ppm, the carbon dioxide concentration has a noticeable adverse effect on human health, so ventilation is necessary; people may feel nausea, headache, sleepiness and concentration problems.
In the application, we decided to represent these three different ranges using mixed reality. After recognizing the climate monitoring device, in addition to the table with measured values, the application also displays a virtual model of a flower next to the sensor box. This flower changes the colors of its leaves and blossoms according to the concentration of carbon dioxide. In good conditions, up to 1000 ppm, the leaves are green and the blossoms are red. Above 1000 ppm (but still below 2000 ppm), the leaves remain green, but the blossoms turn yellow. In unfavorable conditions, that is, if the carbon dioxide concentration is above 2000 ppm, both leaves and blossoms turn gray (see Fig. 11-Fig. 13).
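A minimal Unity (C#) sketch of this recoloring logic follows; the component layout, field names and update method are assumptions, while the 1000 ppm and 2000 ppm thresholds follow the ranges described above.

```csharp
using UnityEngine;

// Recolors the virtual flower according to the CO2 concentration.
public class Co2Flower : MonoBehaviour
{
    public Renderer leaves;   // renderer of the leaf meshes
    public Renderer blossoms; // renderer of the blossom meshes

    // Called when a new CO2 reading arrives over MQTT (hook-up assumed).
    public void OnCo2Updated(float ppm)
    {
        if (ppm <= 1000f)
        {
            // Favorable conditions: green leaves, red blossoms.
            leaves.material.color = Color.green;
            blossoms.material.color = Color.red;
        }
        else if (ppm <= 2000f)
        {
            // Ventilation recommended: blossoms turn yellow.
            leaves.material.color = Color.green;
            blossoms.material.color = Color.yellow;
        }
        else
        {
            // Unfavorable conditions: everything turns gray.
            leaves.material.color = Color.gray;
            blossoms.material.color = Color.gray;
        }
    }
}
```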
2) LCD module: The LCD module serves as a thermostat for temperature adjustment. This module has its own sensor for measuring air temperature, and it only sends messages; it does not receive them. The desired temperature can only be set using the module's buttons. Since the LCD module functions as a thermostat, it turns the heating on and off as necessary. These commands are sent as MQTT messages, so our application can also receive them. Based on these messages, the application notifies the user whether the heater is on or off. The announcement is additionally highlighted with a different font color (see Fig. 14 and Fig. 15).
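A sketch of this announcement in Unity (C#) could look as follows; the component, the label wiring and the method name are illustrative assumptions.

```csharp
using UnityEngine;
using UnityEngine.UI;

// Updates the heating announcement shown in mixed reality.
public class HeatingLabel : MonoBehaviour
{
    public Text label; // UI text anchored above the LCD module

    // Called when an MQTT message reports the thermostat relay state.
    public void OnHeatingStateChanged(bool heatingOn)
    {
        label.text  = heatingOn ? "Heating: ON" : "Heating: OFF";
        label.color = heatingOn ? Color.red : Color.blue;
    }
}
```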
Fig. 12. Climate monitoring device - higher Carbon Dioxide concentration
Fig. 13. Climate monitoring device - very high Carbon Dioxide concentration
Fig. 14. LCD thermostat module with deactivated request for heating
Fig. 15. LCD thermostat module with activated request for heating
3) Power control kit: This kit consists of a digital LED strip and an integrated power relay. It broadcasts messages when its status changes and subscribes to multiple topics. One group of topics turns the relay on and off; the second controls the LED strip.
In this case, we decided to control the LED strip using the app. To do this, we used virtual buttons, or more precisely, objects that act as buttons. Creating and integrating these buttons brings a number of challenges that have to be solved.
As objects, we selected a lotus flower and a play button. We modified the model of the lotus flower to obtain lotus flowers in different color designs - pink, red and green. With the different color designs, we wanted to change the color of the LED strip or turn on one of its effects (we decided to use the rainbow effect). Since the LED strip is dimmable, we also selected a three-dimensional model of a play button, which we rotated 90 degrees in each direction to create an up button and a down button, representing an increase and a decrease in light intensity.
Typically, we encounter mobile applications with simple buttons that are part of the user interface. These buttons exist independently of reality and have a fixed location in the user interface on the mobile device screen. For buttons in augmented or mixed reality, the position of each button is relative to the position of the object that serves as the anchor. The buttons must move with the object, and when the camera rotates around the object, the buttons must rotate with the tracked object as well. We solved this problem by adding the virtual objects to the prefab of the occluder. In the prefab, we set the exact location and size of these virtual objects. We then set the application to render these virtual objects automatically after recognizing the object, in this case the integrated power relay.
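In Unity terms, parenting the button models to the occluder prefab is what keeps them fixed in the tracked object's coordinate frame. A minimal sketch under that assumption follows; the recognition-event hook-up and all names are illustrative.

```csharp
using UnityEngine;

// Attached to the occluder prefab: the button models are children of
// the occluder, so Unity's transform hierarchy keeps them positioned
// and oriented relative to the recognized real-world object.
public class VirtualButtons : MonoBehaviour
{
    public GameObject[] buttons; // lotus flowers, up/down buttons (children)

    // Called by the tracking SDK when the power relay is recognized
    // (the wiring to the recognition event is an assumption).
    public void OnObjectRecognized()
    {
        foreach (GameObject button in buttons)
            button.SetActive(true); // render buttons anchored to the object
    }

    public void OnObjectLost()
    {
        foreach (GameObject button in buttons)
            button.SetActive(false); // hide buttons when tracking is lost
    }
}
```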
Another challenge in using virtual buttons is locating a button in the three-dimensional world. In common mobile applications, buttons have an exact and permanent location in the user interface; when the user taps that particular location, the button performs the programmed action. In the case of virtual buttons, this location is constantly changing, and we need to determine which part of the screen corresponds to the object. We used the ray casting method to determine this place (see Fig. 16).
Fig. 16. Specifying the location of buttons on the mobile device screen
Fig. 17. Colliders around virtual buttons
For each model we wanted to use as a virtual button, we created a collider that surrounds the entire model, so the model behaves as a button from any viewing angle. The shape of each model is closest to a sphere, so we created the colliders in this geometric shape (Fig. 17). In the application, it is in fact these invisible colliders that respond to a finger tap on the mobile device screen, not the three-dimensional models themselves in mixed reality.
As mentioned above, we used the ray casting method to determine where a button appears on the mobile device screen. The operation of the method is shown in Fig. 16. At a given moment, we see a three-dimensional model (labeled 4) at a specific point in the camera's image (labeled 2). On the mobile device screen (labeled 1), we see the projection of the three-dimensional model (labeled 3). When the user taps a finger on the screen, an invisible ray is cast from the tap point, and we calculate whether the tap fell on the collider. If it did, the application sends an MQTT message that changes the color or brightness of the LED strip or turns on the rainbow effect.
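The following Unity (C#) sketch illustrates this tap handling with a ray cast from the camera through the tap point; the collider tags, topics and the PublishMqtt helper are hypothetical names introduced for illustration.

```csharp
using UnityEngine;

// Casts a ray from the camera through each screen tap and publishes
// an MQTT command when the ray hits one of the button colliders.
public class VirtualButtonInput : MonoBehaviour
{
    void Update()
    {
        if (Input.touchCount == 0 || Input.GetTouch(0).phase != TouchPhase.Began)
            return;

        // Ray from the device camera through the screen-space tap point.
        Ray ray = Camera.main.ScreenPointToRay(Input.GetTouch(0).position);

        if (Physics.Raycast(ray, out RaycastHit hit))
        {
            // Tags on the sphere colliders are illustrative assumptions.
            if (hit.collider.CompareTag("LotusRed"))
                PublishMqtt("node/power-controller/led-strip/color/set", "#ff0000");
            else if (hit.collider.CompareTag("ButtonUp"))
                PublishMqtt("node/power-controller/led-strip/brightness/set", "+10");
        }
    }

    void PublishMqtt(string topic, string payload)
    {
        // Placeholder: forward to the application's MQTT client.
        Debug.Log($"MQTT publish {topic}: {payload}");
    }
}
```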
Fig. 18. Virtual buttons around power control kit
With these three different approaches, we were able to demonstrate the relatively broad possibilities of combining the Internet of Things with mixed reality. We showed how real-time information from Internet of Things devices can be viewed and changed in mixed reality, how manually set values can be displayed and changed in real time in mixed reality, how the mixed reality itself can be visualized and changed on the basis of information coming from the Internet of Things, and how the Internet of Things can be controlled using mixed reality.
V. CONCLUSION
The goal of the project was to create a system that enables the control of the Internet of Things using mixed reality. The development of such a system requires a comprehensive multidisciplinary approach. After considering the advantages and disadvantages of the methods of anchoring the GUI in mixed reality, we decided to use a three-dimensional model of the "things" in IoT. Although creating and extending the system is more time-consuming, smooth running and more intuitive application control were more important in our case. Such a solution is also unique in practice and is a contribution of this paper. We tested the created system on an Apple iPad Pro. The system met every predetermined requirement.
ACKNOWLEDGMENT
This work has been supported by the Cultural and Educa-
tional Grant Agency of the Ministry of Education, Science,
Research and Sport of the Slovak Republic, KEGA 038STU-
4/2018, by the Scientific Grant Agency of the Ministry of
Education, Science, Research and Sport of the Slovak Republic
under the grant VEGA 1/0819/17, by the Slovak Research and
Development Agency APVV-17-0190, and by the Tatra banka
Foundation within the grant programme E-Talent, project
No. 2019et006 (Development of Autonomous Vehicle using
Virtual World).
REFERENCES
[1] N. Yusupbekov, F. Adilov, and F. Ergashev, “Development and improvement of systems of automation and management of technological processes and manufactures,” Journal of Automation, Mobile Robotics and Intelligent Systems, vol. 11, no. 03, pp. 53–57, 2017. doi: 10.14313/JAMRIS_3-2017/28
[2] J. Lee, K. Lee, B. Nam, and Y. Wu, “IoT platform-based iAR: a prototype for plant O&M applications,” in 2016 IEEE International Symposium on Mixed and Augmented Reality (ISMAR-Adjunct), Sep. 2016, pp. 149–150. doi: 10.1109/ISMAR-Adjunct.2016.0063
[3] Triseum. (2017) Variant: Limits. [Online]. Available: https://triseum.com/calculus/variant/
[4] J. Filanova, “Application of didactic principles in the use of videoconferencing in e-learning (in Slovak),” in Innovation Process in E-learning. EKONOM, March 2013, pp. 1–7. ISBN 978-80-225-3610-3
[5] J. Majernik, M. Madar, and J. Mojzisova, “Integration of virtual patients
in education of veterinary medicine,” in 2017 Federated Conference on
Computer Science and Information Systems (FedCSIS). IEEE, 2017.
doi: 10.15439/2017F134
[6] K. Zhang, J. Suo, J. Chen, X. Liu, and L. Gao, “Design and implementation of fire safety education system on campus based on virtual reality technology,” in 2017 Federated Conference on Computer Science and Information Systems (FedCSIS). IEEE, 2017. doi: 10.15439/2017F376
[7] Animech Technologies. (2014) Virtual Gearbox. [Online]. Available: http://www.animechtechnologies.com/showcase/virtual-gearbox/
[8] P. A. Rauschnabel, A. Brem, and Y. Ro, “Augmented reality smart
glasses: definition, conceptual insights, and managerial importance,”
Working paper, The University of Michigan-Dearborn, Tech. Rep., 2015.
[9] Microsoft Corporation. (2017) HoloTour. [Online]. Available: https://www.microsoft.com/en-us/hololens/apps/holotour
[10] Foundry. (2017) VR? AR? MR? Sorry, I’m confused. [Online]. Available: https://www.foundry.com/industries/virtual-reality/vr-mr-ar-confused
[11] E. Kucera, E. Stark, and O. Haffner, “Virtual tour for smart home developed in Unity engine and connected with Arduino,” in FedCSIS Position Papers, 2017.