
QoE Assessment for IoT-Based Multi Sensorial Media Broadcasting

Authors:
Lana Jalal, Matteo Anedda, Vlad Popescu, and Maurizio Murroni
Abstract— One of the goals of next generation TV broadcast
services is to provide realistic media contents to the users. The
user’s sense of reality can be reinforced by adding to
conventional media multiple sensorial effects, through five-sense
stimulus (i.e., taste, sight, touch, smell, and hearing). In a smart
TV broadcasting context, especially in a home environment, the
additional effects should preferably be delivered by customary
devices (e.g., air conditioning, lights) equipped with appropriate
smart features, rather than by the ad-hoc devices often deployed in
other applications such as gaming systems. In this context, a
key issue is the interconnection among the smart TV and the
customary devices that deliver the additional sensorial effects to
the user. In smart home use cases, the Internet of Things (IoT)
paradigm has been widely adopted to connect smart devices and
this paper presents an IoT-based architecture for multi sensorial
media delivery to TV users in a home entertainment scenario. In
such a framework, customary home devices act as smart objects,
interconnected with the smart TV via an IoT network, and contribute
to delivering additional effects alongside the conventional broadcast
TV service. In this study, the requirements in terms of
synchronization between media and devices are analyzed and the
architecture of the system is defined accordingly. Furthermore, a
prototype is implemented in a real smart home scenario with real
customary devices, enabling a subjective test measurement campaign to
assess the Quality of Experience of the users and the feasibility of
the proposed multi sensorial media TV service.
Index Terms— quality of experience, multi sensorial media,
next generation TV, IoT, smart home.
I. INTRODUCTION
During the last decade, the evolution of the TV market
has been remarkable. Broadcasters have been facing new
challenges to cope with an increasing demand for new services
from the users' side. With the convergence of second-screen
adoption and the abundance of real-time news consumption
via social channels, the broadcast landscape underwent a
major transformation. Viewers have begun to demand highly
customized experiences that meet their individual needs.
In short, the evolving needs of the viewer seem to be shaping the
future of broadcast television. In the next years, it is likely that
this will become even more evident, with more people
L. Jalal, M. Anedda and M. Murroni are with the Department of Electrical
and Electronic Engineering, University of Cagliari, 09123 Cagliari, Italy (e-
mail: lana.jalal, matteo.anedda, murroni@diee.unica.it).
V. Popescu is with the Department of Electronics and Computers,
Transilvania University of Brasov, 500019 Brasov, Romania (e-mail:
vlad.popescu@unitbv.ro).
demanding customized television experiences through user-
generated content and the option of micro bundled packages.
To keep up, broadcasters must stay current with the latest
innovations to engage with their customers.
Despite the increasing market of handheld devices such as
smartphones and tablets, and the consequent demand for
spontaneous access to video content from mobile broadband
users, the total minutes per week spent watching traditional
home TV still predominate [1]. The global service
providers' offer of advanced whole-home video delivery
introduces consumers to new services. Over-the-top (OTT)
content providers are offering movies and TV shows for either
download or direct streaming over the Internet, the type of
shows that consumers prefer to watch on a big-screen high
definition TV. Within this framework, home entertainment
systems have known for the past few years a constant
evolution in size and complexity, delivering new levels of
experience and adventure to consumers. To adapt to users'
needs, the home entertainment sector has developed a dedicated
electronic playground, with large-screen displays, consoles for
gaming, audio gear, and docking stations, generally managed
through a single remote control giving the complete command
to the user. Technology companies have been announcing
linkage of TV screens, PCs, video recorders, game consoles,
and other electronic devices together in the same home
network, allowing the user to share content among these
devices. On the other hand, due to their complexity and cost,
home entertainment products have reached only a niche of the
population. This has slowed down the evolution of TV
broadcasting services, which are still based on media content
unable to exploit all the features that home entertainment
systems could provide, being intended for traditional TV
services.
Furthermore, in the last years the concept of smart home
has gained attention from the information and communication
technology (ICT) community. There has been a massive
interest in the ability of embedded devices, sensors and
actuators to communicate and create a ubiquitous cyber-
physical world. Smartness has been extended to customary
devices traditionally populating the users’ houses, such as
domestic appliances. Finally, today's new
technologies enable also the interaction with the residential
environment: control of utilities (lighting, heating, ventilation,
air-conditioning, automated window treatments, pool and spa
controls), control of security (garage and access controls),
control of home appliances locally or remotely from a
smartphone [2].
A crucial role in the rapid evolution of this scenario has
been played by the Internet of Things (IoT) paradigm [3] and
by the recent development of short-range mobile
communication technologies that together with an improved
energy-efficiency are expected to create a pervasive
connection of “things” [3]. The drastic increase in the number
of smart devices and sensors connected to the IoT has the
potential to change how consumers interact with networked
technology, including media and entertainment platforms
[4][5]. This represents an interesting opportunity for the
entertainment industry to include the growing volume of
customer interaction that comes with IoT in order to create
more responsive and interactive applications, redefining the
level of interaction between entertainment providers and their
customers [6][7]. Conditions are now in place for TV
service broadcasters to redefine their content and products,
with the chance to also reach the traditional user, who will be
more and more immersed in a smart home scenario,
surrounded by customary devices able to cooperate and
provide an enhanced TV experience.
One of the goals of next generation TV broadcast services
is to provide realistic media contents to the users. The user’s
sense of reality can be improved by adding to conventional
media multiple sensorial effects, through five-sense stimulus
(i.e., taste, sight, touch, smell, and hearing). The main point of
adding effects is to give the user the sensation of being part of
the multimedia content, so achieving a better user experience.
In a smart home environment, to deliver the additional effects,
customary devices (e.g., air conditioning, lights, etc.)
equipped with appropriate smart features can be deployed.
Enriching traditional multimedia with additional effects
introduces a number of challenging issues, such as the assessment
of the quality of experience (QoE) for audio-visual sequences
enriched with additional sensory effects. QoE evaluation is
based on mean opinion score (MOS) subjective test
measurement campaigns [8], and accurate procedures have to
be followed to certify their validity [9][10].
This paper proposes an IoT architecture to enable multi
sensorial media home TV services. For this purpose, to deliver
effects, customary home devices jointly participate in the
creation of the extended media experiences. The proposed
architecture relies on the Cloud IoT platform Lysis [11]. In
Lysis the smart TV and the rendering devices, such as remote
switches for air conditioning, lighting and vibration are
represented as Virtual Objects (VOs) [12]. Micro Engines
(MEs) combine and control VOs so that the requirements in
terms of synchronization between media and devices are
fulfilled. A set-up based on Raspberry Pi computational
platforms and Arduino devices enhanced with switching
capabilities has been implemented to test the proposed
architecture.
Following the procedures defined in [13] and [9], a
measurement campaign was then executed on a
population of 40 users of various genders, ages and education levels,
to assess the QoE based on MOS. The goal of the
measurements was to evaluate whether common TV users would
be positively impressed by the multi sensorial media. Results
show that multi sensorial media TV services implemented on
customary home devices cooperating through an IoT
architecture can be delivered to the users with a positive effect
on their QoE.
The paper is organized as follows: section II provides
information on multi sensorial media, section III presents the
proposed IoT architecture and gives detail on the
implementation. Section IV describes the experimental set up
to assess the QoE. Results are discussed in section V. Finally,
section VI draws the conclusion and suggests future works.
II. MULTI SENSORIAL MEDIA
The concept of receiving sensory effects with audio-visual
content is shown in Fig. 1. The processing terminal is
responsible for managing the actual audio-visual media
resource associated with sensory effect metadata (SEM) in a
synchronized way, based on the user's setup in terms of both
media and sensory effect rendering [14][15][16].

Fig. 1. Multi sensorial media concept (production delivers AV content and effect semantics to a processing terminal, which drives the media renderer and the effects renderer for temperature, light, motion, scent and wind effects).

SEM is a
description of supplementary effects based on Sensory Effect
Description Language (SEDL), which is an XML-based
language used to describe sensory effects. Media and effect
renders are used to reproduce audio-visual media and
supplementary effects that enable the stimulation of senses
other than audition and vision [14]-[19]. For example, a
mobile phone vibration, fan/ventilator, heater/cooler, can be
used to address haptic sensations, whereas vaporizer devices
can stimulate the olfactory system [17]. The stimulation of the
visual system can be further enhanced using ambient lighting
devices. The main point of adding effects is to give the user
the sensation of being part of the multimedia content, so as to
enhance user's viewing experience by increasing the sense of
reality. The sensory effect role on viewing experience and user
enjoyment was demonstrated in [19] [20], where authors show
that the viewing experience can be improved by adding effects
to the multimedia content. Furthermore, it has been shown that
when sensory effects are used emotions like fun, worry, fear,
etc. are perceived stronger [21].
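To illustrate how an SEM track can be consumed by a processing terminal, the sketch below parses a simplified, SEDL-inspired XML fragment. The element and attribute names are hypothetical simplifications for illustration, not the exact MPEG-V SEDL schema.

```python
# Illustrative sketch: parsing a simplified, SEDL-inspired effect track.
# The element/attribute names are hypothetical, not the MPEG-V schema.
import xml.etree.ElementTree as ET

SEM_EXAMPLE = """
<SEM>
  <Effect type="Wind"      start="12.0" duration="3.0"  intensity="0.6"/>
  <Effect type="Light"     start="0.0"  duration="40.0" intensity="0.8"/>
  <Effect type="Vibration" start="21.5" duration="1.0"  intensity="1.0"/>
</SEM>
"""

def parse_sem(xml_text):
    """Return a list of (type, start_s, duration_s, intensity) tuples."""
    root = ET.fromstring(xml_text)
    effects = [(e.get("type"),
                float(e.get("start")),
                float(e.get("duration")),
                float(e.get("intensity")))
               for e in root.iter("Effect")]
    # Sort by start time so the renderer can schedule effects in order
    return sorted(effects, key=lambda t: t[1])

if __name__ == "__main__":
    for eff in parse_sem(SEM_EXAMPLE):
        print(eff)
```

A real effects renderer would walk this sorted list and trigger each device at the corresponding media timestamp.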
Synchronization between sensory effects and video content
when implementing multi sensorial media applications is a
challenging research issue; the impact of synchronism
between sensory effects and multimedia content has been
investigated in [22]-[25]. According to [25], haptic media
could be presented with a delay up to 1s behind the video
content in order to be acceptable by most of the users; in
contrast airflow media could be released either 5 s ahead of or
3 s behind the video content to achieve the acceptable level.
The results in [18] indicate that the time window for releasing
a certain scent ranges from about 30 s before to up to 20 s
after the content is displayed.
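The tolerance windows reported above can be collected into a small lookup table, sketched here in Python. The window values are the ones from the cited studies; the function name and the sign convention (negative offsets for effects firing ahead of the video event) are illustrative choices.

```python
# Sketch of a synchronization-tolerance check based on the windows in
# section II: haptic up to 1 s behind the video, airflow from 5 s ahead
# to 3 s behind, scent from 30 s ahead to 20 s behind.
# Values are (max_ahead_s, max_behind_s) per effect type.
SYNC_WINDOWS = {
    "haptic":  (0.0, 1.0),
    "airflow": (5.0, 3.0),
    "scent":   (30.0, 20.0),
}

def is_acceptable(effect_type, offset_s):
    """offset_s < 0: effect fires before the video event; > 0: after it."""
    ahead, behind = SYNC_WINDOWS[effect_type]
    return -ahead <= offset_s <= behind

assert is_acceptable("haptic", 0.8)       # 0.8 s late: acceptable
assert not is_acceptable("haptic", 1.5)   # too late
assert is_acceptable("airflow", -4.0)     # 4 s early: acceptable
assert not is_acceptable("scent", 25.0)   # too late
```

A scheduler built on such a table can decide whether a delayed command to a rendering device is still worth issuing.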
In current implementations of multi sensorial media, the
rendering devices are ad-hoc devices connected to relays or
other types of electronic switches triggering the action in a
direct manner using wireless communication standards such as
Bluetooth, WiFi, ZigBee or Z-Wave. The drawback of such
an approach lies in the need to develop an entire
communication architecture for controlling the rendering
devices, with inherent difficulties in terms of scalability and
manageability. Most of the few existing commercial devices
for multi sensorial media in home entertainment have a
proprietary architecture which does not give room for
extensions and further development.
Based on these considerations, in this paper we propose a
different approach for connecting the rendering devices using
a typical cloud-based IoT architecture, presented in the
following section.
The specificity of the proposed work is to implement multi-
sensorial experience in a smart home environment using only
customary devices. In the envisaged smart home scenario, real
customary objects, such as the ones used to render the effects,
are not connected through a local or ad hoc network; instead,
they "exist" and are accessible in the real world because they are
connected to the IoT and can at any time join the smart TV
and create a multi-sensorial rendering environment.
III. IOT ARCHITECTURE AND IMPLEMENTATION
A. Architecture
The proposed architecture for multi sensorial media relies
on the Cloud IoT platform named Lysis [11], which foresees
four layers, as depicted in Fig. 2:
Physical Layer. This layer includes
objects capable of accessing the Internet, called real
world objects (RWOs) due to their direct connection with
the physical environment where they sense and act. For
this particular scenario, the RWOs are either electronic
devices with processing capabilities and integrated
peripherals, such as smartphones, or computational
Fig. 2. IoT architecture for multi sensorial media
platforms equipped with switching capabilities, for
example Raspberry Pi or Arduino platforms, able to
switch on and off rendering devices. The physical layer
communicates with the upper layers using standard wired
or wireless communication methods (e.g., Wi-Fi, Bluetooth,
LTE, USB, Gigabit Ethernet) and data protocols
(i.e., HTTP and MQTT).
Virtualization Layer. For de-coupling the hardware part
from the cloud-based software representation, most IoT
solutions introduce the virtual object (VO) concept as a
digital counterpart of any entity in the real world [23]
[24], so that each object in the Physical Layer is
represented by a virtualization. The VO is a key part of
the overall solution and depicts the RWO in terms of
semantic description and functionalities. It is equipped
with two interfaces, which allow for a standardized
communication procedure: on one side, it enables the
VO to communicate with the aggregation layer, while
on the other side it represents the access point to the
real world, providing the connection with the RWO.
For our specific purpose, the virtualization layer is
implemented by means of a software driver installed on
the RWO, as detailed in the next section.
Aggregation Layer. This layer is responsible for the
aggregation of data coming from one or more VOs in
order to ensure a high re-usability level. The Micro
Engine (ME) is a mash-up of one or more VOs and
even other MEs, in charge of getting data from VOs and
processing it into the high-level services requested by the
higher layers (application layer).
Application Layer. At this level, user applications are
responsible for the final processing and presentation.
The deployment and execution of applications is based
on the use of one or more MEs.
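The layering above can be sketched in a few lines of Python. The class and method names below are hypothetical illustrations of the VO/ME pattern, not the actual Lysis API.

```python
# Minimal sketch of the Lysis layering: a VO wraps an RWO driver and a
# ME mashes up VOs into a higher-level service. Names are illustrative.
class VirtualObject:
    """Digital counterpart of an RWO, with two interfaces: one toward
    the aggregation layer (command) and one toward the real object."""
    def __init__(self, name, rwo_driver):
        self.name = name
        self._rwo = rwo_driver          # access point to the real world

    def command(self, action, **params):
        # Interface toward the aggregation layer: forward to the RWO
        return self._rwo(action, **params)

class MicroEngine:
    """Mash-up of one or more VOs, exposing a high-level service."""
    def __init__(self, vos):
        self.vos = {vo.name: vo for vo in vos}

    def render_effect(self, vo_name, action, **params):
        return self.vos[vo_name].command(action, **params)

# Usage: a fake RWO driver standing in for e.g. the Arduino IR switch
log = []
fan_vo = VirtualObject("ac_fan", lambda a, **p: log.append((a, p)) or "ok")
me = MicroEngine([fan_vo])
me.render_effect("ac_fan", "switch_on", intensity=0.6)
```

An application-layer program would then only deal with MEs, never with the physical devices directly.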
B. Implementation
For the media renderer, the RWO is a high-end desktop PC
connected to the TV via a 4K HDMI cable and connected to
the upper layers through Gigabit Ethernet. For the
implementation of the multi sensorial effects on the proposed
architecture, we rely on home customary devices. Specifically,
the devices involved in this architectural implementation are:
the fan of a wall-mounted air conditioning (AC) split unit to
reproduce airflow effects, an RGB smart LED light system
with integrated Wi-Fi connection for the light enhancement
effects, and the integrated call-vibration feature of a set of
smartphones to provide haptic effects. The virtualization of the
smartphones is done through application software
specifically developed for the Android operating
system. The AC fan is controlled via an infrared (IR) remote
using an Arduino board with an IR shield which represents the
system’s RWO. The RGB Smart LED lights are connected via
WiFi to a smartphone running an application able to
automatically extract light effects from the phone camera
while placed in front of the TV. In this case, the RWO is the
smartphone controlling the RGB Smart LED lights. Using a
software driver for the Arduino board and an iOS app for the
smartphones, we were able to virtualize the RWOs and make
them accessible through the virtualization layer (i.e., VOs) to
the upper layers of the proposed architecture. This allowed us
to have full control of the system and respect the
synchronization constraints specified in section II. The
communication with the RWO was implemented using the
MQTT data protocol over the various communication
standards, which assured low latency, with values lower than
1 s for this specific implementation.
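As an illustration of the VO-to-RWO messaging, the sketch below builds a timestamped effect command payload. The payload fields and the example topic layout are assumptions for illustration (the paper does not document a message format); the paho-mqtt calls that would carry such a payload are shown only in comments.

```python
# Sketch of the effect command messages exchanged over MQTT.
# Payload fields and topic names are illustrative assumptions.
import json
import time

def make_effect_command(device, action, intensity, ts=None):
    """Build the JSON payload a VO could publish to its RWO."""
    return json.dumps({
        "device": device,
        "action": action,
        "intensity": intensity,
        "ts": ts if ts is not None else time.time(),  # for latency checks
    })

payload = make_effect_command("ac_fan", "on", 0.6, ts=1234.0)

# With paho-mqtt the VO-to-RWO leg could look like:
#   import paho.mqtt.client as mqtt
#   client = mqtt.Client()
#   client.connect("broker.example.local")
#   client.publish("home/ac_fan/cmd", payload, qos=1)
print(payload)
```

Carrying a timestamp in each command lets the RWO measure end-to-end latency against the sub-second bound reported above.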
The hardware and software deployed to implement the
system are presented in Table I:
TABLE I
LIST OF HARDWARE / SOFTWARE DEPLOYED

Item                                        Description
Samsung Serie 6 JU6800                      Professional high-performance 4K monitor with 60-inch diagonal
Haier Model AS09BS4HRA                      AC wall split, INVERTER technology, 9000 BTU
Philips Hue personal wireless lighting      Three RGB smart LED lights
  RGB Smart LED
Android/iOS Smartphones                     1 iOS device to synchronize the RGB smart LED lights with the
                                            video sequences by adding related colors; 2 Android devices to
                                            generate the haptic effect
Arduino MEGA 2560                           Arduino microcontroller equipped with IR and WiFi shields
PlaySEM/SER [26]                            PlaySEM SE Video Player; PlaySEM SER
Desktop PC                                  CPU: Intel Core i7-7700K; RAM: 2x16 GB 3000 MHz; video board:
                                            GTX 1080 Ti 11 GB GDDR5X; motherboard: Asus ROG Strix Z270I
                                            Gaming; Antec H1200 Pro cooling; Corsair CX850M 850 W power
                                            supply; hard drive: 480 GB SSD
IV. EXPERIMENTAL SETUP
A. Test environment
The measurement tests have been performed at the QoE
Lab of the Department of Electrical and Electronic
engineering of the University of Cagliari, Italy. The QoE lab is
a 4×4×2.70 m (l×w×h) separate room with
parquet flooring, furnished with a three-seat sofa and equipped with a Haier
inverter-technology air conditioner wall split [27], a system of three
Philips RGB smart LED lights [28], a SAMSUNG TV
UHD 4K Flat Smart JU6800 Series 6 with a 60-inch diagonal
[29] and a Wi-Fi Internet connection, with the purpose of
replicating the living-room environment in a smart home
scenario.
The setup of the test environment was performed according to
ITU-T Recommendation P.911 [30]. Our tests involved two
assessors per session, simultaneously rating the test video
sequences with multi sensorial effects. The participants sat
on the sofa in front of the air conditioning wall split fan,
which is placed above the smart TV at a height of 2.5 m. The
monitor was calibrated before the start of the test.
A sketch of the setup geometry is shown in Fig. 3. The
distance d from the monitor is 2.5 times the height H of the
video monitor (2.5H), i.e., 186 cm for positions 1 and 2 (angle
α = ±24°). The viewing angle is measured with respect to the
normal to the screen center. Three Philips RGB smart LED lights are
placed behind the monitor, to give the feeling that the lights
are integrated into the frame of the monitor, as shown in Fig. 4(a).
The Philips RGB smart LED lights are piloted by a
smartphone placed behind the sofa with its camera facing
the TV screen, as shown in Fig. 4(b). Each assessor is
provided with a smartphone with a vibration call feature, as
shown in Fig. 4(c). Fig. 5 shows a panoramic view of the QoE
lab setup.
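As a cross-check of the geometry, the viewing distance d = 2.5H can be recomputed for the 60-inch 16:9 screen used in the lab; the plain-Python sketch below, assuming only the stated diagonal and aspect ratio, reproduces the 186 cm figure (to within rounding).

```python
# Cross-check of the setup geometry: screen height H for a 60-inch
# 16:9 display and the recommended viewing distance d = 2.5 H.
import math

def screen_height_cm(diagonal_in, aspect=(16, 9)):
    """Physical height of a display given its diagonal in inches."""
    w, h = aspect
    return diagonal_in * 2.54 * h / math.hypot(w, h)

H = screen_height_cm(60)   # ~74.7 cm
d = 2.5 * H                # ~186.8 cm, matching the 186 cm in the text
print(round(H, 1), round(d, 1))
```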
B. Participant cohort
40 participants (31 males and 9 females) from various
backgrounds, aged between 22 and 50 years with an average age
of 32 years, were invited to this assessment; only one
participant had taken part in a similar assessment before. For each
participant, the following information was asked: age, gender,
education, and occupation. Prior to a session, the participants
were screened for normal visual acuity on the Snellen chart
[31]. A person taking the test covers one eye from 3 meters
away, and reads aloud the letters of each row, beginning at the
top. The smallest row that can be read accurately indicates the
visual acuity in that specific eye. Moreover, the participants
were tested through the Ishihara color test [32] to detect color
blindness. The Ishihara color test consists of 38 so called
pseudo isochromatic plates, each of them showing either a
number or some lines. According to what you can see and
what not, the test gives feedback of the degree of your red-
green color vision deficiency. According to this test, the
observer could be “none”, “weak”, “moderate” or “strong”
red-green colorblind. All observers reported normal or
corrected-to-normal vision, and had no color vision
deficiency.
C. Assessment Procedure
The Single Stimulus Continuous Quality Scale (SSCQS)
method as defined by ITU-R Rec. BT.500-13 [33] was used in
this assessment, as shown in Fig. 6. In the assessment, 30-40-second
video sequences are shown in random order, interleaved with
5 seconds of grey screen used by the assessors to rate the video
sequences. The SSCQS does not imply the use of reference
sequences to be shown to the observers. This is not a
limitation in this particular assessment scenario where the aim
is to evaluate the delight/annoyance caused by adding multi
sensorial effects to conventional TV services. Observers are
supposed to be familiar with conventional TV since they
experience it daily. The use of SSCQS allowed the
overall assessment time for each participant pair to be reduced to less
than 20 minutes, thus avoiding loss of concentration due to
user tiredness.
The rating scale used in the assessment is based on MOS as
defined in ITU-T Rec. P.911, which defines the
five-level rating scale reported in Table II. Each
participant in the assessment was asked to give his/her quality
rating for each video sequence.
Fig. 3. Setup of the proposed geometry

Fig. 4. Equipment used in the assessment: (a) RGB smart LED lights, (b) smartphone camera, (c) smartphones

Fig. 5. Panoramic view of the test environment
The participants watched 10 video sequences at different
resolutions and bit-rates enriched with light (L), vibration (V)
and air flow (A) sensory effects as described in section III,
hereinafter referred to as multi sensorial sequences. The L effect
is extracted automatically from the video content as described
in section III, whereas the V and A effects are manually annotated
on the videos using the sensory effect video annotation tool
[34].
The sequences have been selected from the action,
sport, documentary, and commercial categories. The news
category was not included in this assessment since, according
to [19], sensory effects have low influence on the news category.
Table III describes the details of each multi sensorial sequence
with the resolution, bit-rates, category, duration, the effects
added to the sequence, and the video scenario description.
Before the start of the assessment session there was an oral
presentation, prepared to make the participants familiar with
this type of assessment, and explaining the rating scale.
Each session involved two participants rating the multi
sensorial sequences, which allowed reducing the number of
sessions required to obtain reliable statistics.
After displaying all the multi sensorial sequences the
participants had to answer the post-experiment questions, in
which they were asked to comment on their experience
regarding the multi sensorial media. The survey aimed to
assess the following:
improvement in the sense of reality
if the sensorial effects are distracting
the grade of delight in the fruition of the multi
sensorial media
the appropriateness of effects timing with the audio-
video content
the impact of each additional sensory effect
The assessment overall time was about 20 minutes for each
participant.
Fig. 6. SSCQS assessment: video sequences S1, S2, S3, … are each followed by a 5-second gray screen (V1, V2, V3, …) during which the assessors vote for the preceding sequence.
TABLE II
FIVE-LEVEL RATING SCALE

5  Excellent
4  Good
3  Fair
2  Poor
1  Bad
TABLE III
MULTI SENSORIAL SEQUENCES

No.  Video Sequence  Resolution  Bit-rate (Kbit/s)  Category     Duration (s)  Effects  Scenario
S1   Ice ski 1       1920x1080   15411              Sport        22            L, A     ice skiing
S2   Ice ski 2       1920x1080   14979              Sport        23            L, A, V  subjective view, ice skiing, falling down
S3   Skyfall         3840x2160   12703              Action       33            L, A, V  subway crash, car crash, falling down, wind, gun shots, explosion
S4   Fireworks       3840x2160   10207              Documentary  35            L        different color fireworks
S5   Elysium         3840x1610   11364              Action       40            L, A, V  car crash, shots, wind, explosion
S6   2012            1280x720    2186               Action       30            L, A, V  earthquake, tornado
S7   Pastranas       1280x720    2619               Sport        32            L, A, V  rally
S8   Berrecloth      1280x720    3552               Sport        32            L, A, V  subjective view, bicycling down the rock cliffs
S9   Earth           1280x720    4116               Documentary  21            L, A, V  wind, animal jump
S10  Bridgestone     1280x720    2421               Commercial   30            L, A, V  windy weather, car moving
Fig. 7. Mean opinion score and confidence interval for each video sequence

Fig. 8. Ratings: number of participants giving each rating level (Excellent to Bad) for each video sequence

Fig. 9. %GOB, %Rest and %POW for each video sequence
V. RESULTS
Of the 40 participants who took part in this assessment, 8
outliers were eliminated according to the procedure
described in [33]. The reliability of the ratings given by a
subject was checked through the correlation between
the average ratings and the i-th subject's ratings. An outlier is an
observation that appears to deviate markedly from the other
observations in the sample and may indicate bad data.
Evaluators were managed according to the Pearson
Correlation Coefficient (PCC), an index of the strength
and direction of a linear relationship between two interval-level
variables. Evaluators with PCC < 0.75 were considered
outliers.
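The PCC-based screening can be sketched as follows. Correlating each subject against the mean ratings of all the other subjects is one reasonable reading of the procedure, since the text does not fully specify it; the 0.75 threshold is the one stated above.

```python
# Sketch of PCC-based outlier screening: discard subjects whose ratings
# correlate below 0.75 with the mean ratings of the remaining subjects.
from statistics import mean

def pcc(x, y):
    """Pearson correlation coefficient between two equal-length vectors."""
    mx, my = mean(x), mean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x)
           * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

def screen_outliers(ratings, threshold=0.75):
    """ratings: per-subject rating vectors (one 1-5 score per sequence).
    Returns the indices of the subjects retained."""
    kept = []
    for i, subj in enumerate(ratings):
        others = [r for j, r in enumerate(ratings) if j != i]
        avg = [mean(col) for col in zip(*others)]
        if pcc(subj, avg) >= threshold:
            kept.append(i)
    return kept

# Example: the last subject rates opposite to the others and is discarded
ratings = [[5, 4, 3, 2], [5, 4, 3, 1], [4, 4, 2, 2], [1, 2, 4, 5]]
print(screen_outliers(ratings))  # indices of retained subjects
```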
A. MOS ratings
The mean opinion score (MOS) and 95% confidence interval
for each multi sensorial sequence are shown in Fig. 7. The
participants' ratings for each video are shown in Fig. 8. The
percentage of good or better ratings (%GOB, combining
Good and Excellent), the percentage of poor or worse
ratings (%POW, combining Poor and Bad), and the remaining
percentage (%Rest, corresponding to Fair) are reported in Fig. 9. It can be noted
that for higher-resolution video sequences the perceived quality
is higher. This was expected, since the sight sense is well
known to be predominant in human beings. Nonetheless, better
results are achieved by sequences showing high dynamic motion
in natural environments (e.g., Ice Ski 1, Bridgestone), but quality
degrades when, in a similar scenario, a subjective view is
included (e.g., Ice Ski 2, Berrecloth). This can be justified by
the fact that in subjective-view sequences the expectation of the
observer is higher, and the impact of the effects delivered by
the deployed customary devices is perceived as weak.
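The MOS, %GOB and %POW statistics plotted in Figs. 7 and 9 can be computed directly from the five-level ratings of Table II; the sketch below uses hypothetical scores for illustration.

```python
# Sketch of the MOS and %GOB / %POW statistics, computed from
# five-level ratings (5 = Excellent ... 1 = Bad).
def gob_pow(ratings):
    """Return (%GOB, %Rest, %POW) for a list of 1-5 scores."""
    n = len(ratings)
    gob = 100 * sum(r >= 4 for r in ratings) / n   # Good or better
    pow_ = 100 * sum(r <= 2 for r in ratings) / n  # Poor or worse
    return gob, 100 - gob - pow_, pow_

def mos(ratings):
    """Mean opinion score: arithmetic mean of the subject scores."""
    return sum(ratings) / len(ratings)

scores = [5, 4, 4, 3, 5, 2, 4, 5]   # hypothetical per-subject scores
print(mos(scores), gob_pow(scores))  # 4.0 (75.0, 12.5, 12.5)
```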
B. Survey results
The post experiment questions evaluation gave the
following results:
85% of the participants agreed that sensorial effects
improve the sense of reality, 10% had a neutral
opinion, 5% disagreed;
67.5% of the participants were not distracted by the
sensory effects, 17.5% had a neutral opinion, 15%
were distracted;
80% of the participants enjoyed the multi sensorial
media, 12.5% had a neutral opinion, and 7.5% did not
enjoy the multi sensorial media experience;
70% of the participants judged the timing of the sensory
effects appropriate, 18% had a neutral opinion, 12%
disagreed.
Concerning the impact of the sensory effects, the results show
that it differs from one video category to another.
The experiments indicate that additional light effects are more
pleasant for the eyes with nature and sports videos than with
action videos. This is because the additional light effects
smooth the lighting difference between the display and the
background, reducing eye strain; this difference is more
accentuated in highly dynamic video sequences.
The majority of participants consider that the airflow and
vibration effects in the multi sensorial media improve the
sense of reality. Also, most participants agree that both effects
result in an enjoyable experience.
VI. CONCLUSIONS AND FUTURE WORK
This paper investigated the feasibility of an IoT-based
approach to reproduce multi sensorial media sequences in a
real smart home television scenario. A cloud IoT architecture
has been designed and implemented based on home customary
devices by respecting the synchronization constraints. A
quality of experience assessment campaign has been
performed based on subjective tests. The result showed the
feasibility of the proposed approach in terms of
synchronization constraint, increase of the sense of reality and
general overall satisfaction of the users. The IoT solution
allows implementing the system in a real smart home scenario
without the need to deploy dedicated specific hardware.
Furthermore, the IoT approach allows scalability and also the
possibility to add customized features to the overall system. In
a smart home scenario, the user preferences can be saved by
the IoT architecture and the setting of the devices can be
adjusted accordingly. This opens new outlooks in the
broadcasting area, since it is possible to forecast new services
that could be provided to consumers tailored on their
experience preferences.
Nonetheless, the QoE assessment tests highlighted some
issues that need to be further investigated in future works. The
main critical issue derived from the individual comments of
the participants concerned the vibration effects delivered by
the smartphones which have resulted sometime annoying
during the assessment depending on their placement.
Participants either held the smartphone in their hand, or placed
it in their pockets, or leaning beside them in the sofa.
Moreover, most of the participants complained about the
limited impact of the RGB LED lights due to the distance
set between the sofa and the TV monitor. It seems that the
viewing distance specified by ITU-T Recommendation
P.911 does not match the needs of the users in the case of
light enhancement effects. On the other hand, the airflow
effect strongly depends on the distance between the AC fan and
the sofa. It appears evident that different room geometries and
more tests have to be performed in order to find the right
tradeoff.
VII. ACKNOWLEDGMENT
The paper has been prepared as part of the research
programme funded by the European Union Commission
through the Erasmus Mundus Marhaba Project.
The research activities described in this paper have been
conducted within the R&D project “Cagliari2020” partially
funded by the Italian University and Research Ministry
(grant# MIUR_PON04a2_00381).
REFERENCES
[1] The State of Traditional TV: Updated With Q2 2017 Data. [Online].
Available: https://www.marketingcharts.com/featured-24817.
[2] Z. L. Wu and N. Saito, "The Smart Home [Scanning the Issue],"
Proceedings of the IEEE, vol. 101, no. 11, pp. 2319-2321, Nov. 2013,
doi: 10.1109/JPROC.2013.2282668.
[3] L. Atzori, A. Iera, G. Morabito, "The Internet of Things: A survey,"
Computer Networks, vol. 54, no. 15, pp. 2787-2805, 2010.
[4] M. Fadda, M. Murroni, and V. Popescu, "An unlicensed indoor HDTV
multi-vision system in the DTT bands," IEEE Transactions on
Broadcasting, vol. 58, no. 3, pp. 338-346, 2012.
[5] M. Fadda, M. Murroni, and V. Popescu, "A cognitive radio indoor
HDTV multi-vision system in the TV white spaces," IEEE Transactions
on Consumer Electronics, vol. 58, no. 2, pp. 302-310, 2012.
[6] C. Alippi, Intelligence for Embedded Systems. Springer Verlag, 2014,
283 pp., ISBN 978-3-319-05278-6.
[7] "Cyber-physical systems," Program Announcements & Information,
The National Science Foundation, 4201 Wilson Boulevard, Arlington,
Virginia 22230, USA, 2008-09-30. Retrieved 2009-07-21.
[8] V. Menkovski and A. Liotta, "Adaptive Psychometric Scaling for
Video Quality Assessment," Signal Processing: Image Communication,
vol. 26, no. 8, pp. 788-799, 2012.
[9] L. Jalal and M. Murroni, "Enhancing TV broadcasting services: A
survey on mulsemedia quality of experience," in Proc. IEEE
International Symposium on Broadband Multimedia Systems and
Broadcasting (BMSB), Cagliari, Italy, 2017, pp. 1-7, doi:
10.1109/BMSB.2017.7986192.
[10] L. Jalal, V. Popescu and M. Murroni, "Quality-of-experience parameter
estimation for multisensorial media using Particle Swarm
Optimization," in Proc. 2017 International Conference on Optimization
of Electrical and Electronic Equipment (OPTIM) & 2017 Intl Aegean
Conference on Electrical Machines and Power Electronics (ACEMP),
Brasov, 2017, pp. 965-970, doi: 10.1109/OPTIM.2017.7975095.
[11] R. Girau, S. Martis, L. Atzori, "Lysis: A platform for IoT distributed
applications over socially connected objects," IEEE Internet of Things
Journal, 2017.
[12] M. Nitti, V. Pilloni, G. Colistra, and L. Atzori, "The virtual object as a
major element of the internet of things: a survey," IEEE
Communications Surveys & Tutorials, vol. 18, no. 2, pp. 1228-1240,
2015.
[13] R. Sotelo, J. Joskowicz, M. Anedda, M. Murroni, D. D. Giusto,
"Subjective video quality assessments for 4K UHDTV," in Proc. IEEE
International Symposium on Broadband Multimedia Systems and
Broadcasting (BMSB), Cagliari, Italy, 2017, pp. 1-7.
[14] G. Ghinea, C. Timmerer, W. Lin, and S. R. Gulliver, "Mulsemedia:
State of the Art, Perspectives, and Challenges," ACM Trans.
Multimedia Computing, Communications and Applications, vol. 11,
no. 17, 2014.
[15] B. Choi, E. Lee, and K. Yoon, "Streaming media with sensory effect,"
in Information Science and Applications (ICISA), pp. 1-6, 2011.
[16] M. Waltl, C. Timmerer, B. Rainer, and H. Hellwagner, "Sensory effects
for ambient experiences in the World Wide Web," Multimedia Tools
and Applications, vol. 70, no. 2, pp. 1141-1160, 2014.
[17] J. J. Kaye, "Making Scents: Aromatic Output for HCI," Interactions,
vol. 11, no. 1, pp. 48-61, 2004.
[18] G. Ghinea and O. Ademoye, "User Perception of Media Content
Association in Olfaction-Enhanced Multimedia," ACM Transactions on
Multimedia Computing, Communications and Applications, vol. 8, no.
52, 2012.
[19] M. Waltl, C. Timmerer, and H. Hellwagner, "Increasing the User
Experience of Multimedia Presentations with Sensory Effects," in Proc.
11th International Workshop on Image Analysis for Multimedia
Interactive Services, 2010, pp. 1-4.
[20] Z. Yuan, S. Chen, G. Ghinea, and G. Muntean, "User quality of
experience of mulsemedia applications," ACM Trans. Multimedia
Comput. Commun., vol. 11, no. 1, pp. 15:1-15:19, Sep. 2014.
[21] B. Rainer, M. Waltl, E. Cheng, M. Shujau, C. Timmerer, S. Davis, I.
Burnett, C. Ritz and H. Hellwagner, "Investigating the impact of
sensory effects on the Quality of Experience and emotional response in
web videos," in Proc. 4th International Workshop on Quality of
Multimedia Experience (QoMEX), 2012, pp. 278-283.
[22] N. Murray, Y. Qiao, B. Lee, A. K. Karunakar, and G. Muntean,
"Multiple-scent enhanced multimedia synchronization," ACM
Transactions on Multimedia Computing, Communications, and
Applications (TOMM), vol. 11, no. 12, 2014.
[23] Y. Ishibashi, T. Kanbara, and S. Tasaka, "Inter-stream synchronization
between haptic media and voice in collaborative virtual environments,"
in Proc. 12th Annu. ACM Int. Conf. Multimedia, New York, NY,
USA, 2004, pp. 604-611.
[24] Q. Zeng, Y. Ishibashi, N. Fukushima, S. Sugawara, K. E. Psannis,
"Influences of inter-stream synchronization errors among haptic media,
sound and video on quality of experience in networked ensemble," in
Proc. 2nd IEEE Global
[25] Z. Yuan, T. Bi, G. Muntean, and G. Ghinea, "Perceived
synchronization of mulsemedia services," IEEE Transactions on
Multimedia, vol. 17, no. 7, pp. 957-966, 2015.
[26] E. Saleme and C. Santos, "PlaySEM: a Platform for Rendering
MulSeMedia Compatible with MPEG-V," in Proc. WebMedia'15,
Manaus, Brazil, 2015, pp. 145-148.
[27] "Haier inverter technology" [Online]. Available:
http://www.haier.net/en/.
[28] "Philips hue personal wireless lighting" [Online]. Available:
https://www2.meethue.com/en-us.
[29] "TV 60" UHD 4K Flat Smart Serie 9 JU6800" [Online]. Available:
http://www.samsung.com/it/tvs/uhd-ju6800/UE60JU6800KXZT/.
[30] ITU-T Rec. P.911, "Subjective audiovisual quality assessment methods
for multimedia applications," Dec. 1998.
[31] U.S. Department of Commerce / National Bureau of Standards, "Size
of Letters Required for Visibility as a Function of Viewing Distance
and Observer Visual Acuity."
[32] S. Ishihara, "Tests for color-blindness," Handaya, Tokyo, Hongo
Harukicho, 1917.
[33] ITU-R Rec. BT.500-13, "Methodology for the subjective assessment of
the quality of television pictures," 2012.
[34] M. Waltl, B. Rainer, C. Timmerer, H. Hellwagner, "An End-to-End
Tool Chain for Sensory Experience based on MPEG-V," Signal
Processing: Image Communication, vol. 28, no. 2, pp. 136-150, 2013.
Lana Jalal (S'17) graduated in Control and Systems
Engineering from the University of Technology, Iraq, in
2004. She received her M.Sc. degree in Computer and
Automation Engineering from the University of
Sulaimani, Iraq, in 2013, and she is currently a Ph.D.
student in Electronic Engineering and Computer Science
at the Department of Electrical and Electronic
Engineering of the University of Cagliari, Italy. Her
research interests include mobile robot navigation, artificial intelligence,
mulsemedia and user experience.
Matteo Anedda (S'12-M'18) received the B.Sc. degree
in Electronic Engineering in February 2011 and the
M.Sc. degree (Summa cum Laude) in
Telecommunication Engineering in July 2012, both
from the University of Cagliari. He is currently
Assistant Professor at the University of Cagliari. He was
a visiting Erasmus student for eight months at the
University of the Basque Country (EHU/UPV), Bilbao,
Spain, in 2010, where he carried out his M.Sc. thesis
under the supervision of Prof. Pablo Angueira. His M.Sc. thesis, entitled
"Heuristic Optimization for DVB-T/H SFN Coverage Using Simulated
Annealing Algorithm," was published at the 2011 IEEE International
Symposium on Broadband Multimedia Systems and Broadcasting (BMSB),
and he received an award for his M.Sc. thesis from the Order of the Engineers
of Cagliari. In 2015 he spent seven months as a visiting Ph.D. student at
Dublin City University (DCU), Performance Engineering Laboratory (PEL),
Dublin, Ireland, under the supervision of Dr. Gabriel-Miro Muntean, and in
2016 he spent seven months as a visiting Ph.D. student at the Universidad de
Montevideo (UM), Montevideo, Uruguay, under the supervision of Prof.
Rafael Sotelo.
Vlad Popescu received the M.Sc. degree in Electronics
and Computer Engineering in 1999 and the PhD degree
in Telecommunications in 2006, both from the
Transilvania University of Brașov/Romania. In 2000 he
spent four months at the University of Malmö in
Sweden specializing in Multimedia applications. In
2001-2002 he spent one year as a research fellow at the
Technical University in Aachen, Germany. In 2004-2005 he returned to the
same university in Aachen to finish the experimental part of his Ph.D.
studies on wireless communication in underground environments. In 2009 he
was a visiting professor at the Department of Electrical and Electronic
Engineering of the University of Cagliari. Since 2000, Dr. Popescu has been
with the Department of Electronics and Computers, Transilvania University
of Brașov, Romania, currently as an associate professor. He also collaborates
closely with the Department of Electrical and Electronic Engineering of the
University of Cagliari, Italy, at both research and teaching levels. His main
research topics of interest are telecommunications, cognitive radio systems,
multimedia applications and data acquisition.
Maurizio Murroni (M'02-SM'13) received the M.Sc.
degree in Electronic Engineering in 1998 and the Ph.D.
degree in Electronic Engineering and Computers in
2001 from the University of Cagliari. He was a graduate
visiting scholar at the School of Electronic Engineering,
Information Technology and Mathematics, University
of Surrey, Guildford, U.K., in 1998 and a visiting Ph.D.
scholar at the Image Processing Group, Polytechnic University, Brooklyn,
NY, USA, in 2000. In 2006 he was a visiting lecturer at the Dept. of
Electronics and Computers at the Transilvania University of Brasov in
Romania, and in 2011 a visiting professor at the Dept. of Electronics and
Telecommunications, Bilbao Faculty of Engineering, University of the Basque
Country (UPV/EHU) in Spain. He is currently assistant professor at the
Department of Electrical and Electronic Engineering of the University of
Cagliari. Since October 2010 he has been coordinator of the research unit of
the Italian University Consortium for Telecommunications (CNIT) at the
University of Cagliari, and since 2016 chair of the IEEE Broadcast
Technology Society Italy chapter. Dr. Murroni is co-author of an extensive list
of journal articles and peer-reviewed conference papers and has received
several best paper awards. He served as chair for various international
conferences and workshops. He was co-author of IEEE Std 1900.6-2011,
IEEE Standard for Spectrum Sensing Interfaces and Data Structures for
Dynamic Spectrum Access and other Advanced Radio Communication
Systems. His research currently focuses on broadcasting, cognitive radio
systems, signal processing for radio communications, and multimedia data
transmission and processing.
... In refs. [35,36], for example, a study was conducted on smart homes; it can be replicated in other SC contexts. In addition to high-quality audiovideo content, additional effects were provided by exploiting traditional devices (e.g., air conditioning, lights, etc.), equipped with appropriate smart features, as opposed to ad hoc devices often used in other applications, such as gaming systems [37]. ...
Article
Full-text available
Smart cities and 6G are technological areas that have the potential to transform the way we live and work in the years to come. Until this transformation comes into place, there is the need, underlined by research and market studies, for a critical reassessment of the entire wireless communication sector for smart cities, which should include the IoT infrastructure, economic factors that could improve their adoption rate, and strategies that enable smart city operations. Therefore, from a technical point of view, a series of stringent issues, such as interoperability, data privacy, security, the digital divide, and implementation issues have to be addressed. Notably, to concentrate the scrutiny on smart cities and the forthcoming influence of 6G, the groundwork laid by the current 5G, with its multifaceted role and inherent limitations within the domain of smart cities, is embraced as a foundational standpoint. This examination culminates in a panoramic exposition, extending beyond the mere delineation of the 6G standard toward the unveiling of the extensive gamut of potential applications that this emergent standard promises to introduce to the smart cities arena. This paper provides an update on the SC ecosystem around the novel paradigm of 6G, aggregating a series of enabling technologies accompanied by the descriptions of their roles and specific employment schemes.
... The Internet of Things (IoT) generates big data, which is stored in the cloud for further processing and application support for IoT devices provided by the cloud, but the QoE of applications never consider for research. Merging of QoE applications will help organizations to develop applications based on QoE for Industrial IoT development [163,164]. ...
Article
Full-text available
The services are delivered to the user through a protocol interface of different resources. Service delivery conferring on quality of service (QoS) mentioned in service level agreement (SLA) and user satisfaction has been a major problem to access cloud services. To avoid such situations quality of experience (QoE) domain was added to the cloud computing environment to assess user satisfaction and the need for service delivery. In this paper, we present a review of QoE-based solutions, frameworks, and models, which have been proposed for different cloud computing environments. Finally, we discuss the advantages of frameworks and models, their limitations, and open issues for future research in cloud computing.
... As an important process, broadcasting in traditional networks uses a common channel that can be listened to by all users and there are three typical broadcasting methods [25]. The first one is simple flooding, where every node rebroadcasts the message once Z. Gu et al. it receives it. ...
Article
Smart agriculture has a broad prospect with the fast development of information technologies, such as Internet of Things, big data, edge computing and artificial intelligence. As more intelligent devices are deployed for agricultural scenarios, such as quality monitoring and information aggregation, the communication channels among these devices will be crowded and the performance can be greatly affected. Cognitive Radio Networks (CRNs) are promising in promoting better spectrum utilization, where unlicensed users can opportunistically use the vacant spectrum assigned to licensed users. On the basis of CRNs, many agricultural scenarios can be empowered with efficient communication capabilities. However, the broadcasting problem, which handles the information dissemination, has not been thoroughly studied in CRNs. Existing works are either centralized solutions performing broadcast on single/multiple channels, or distributed algorithms without performance guarantee for general networks. In this paper, we propose efficient distributed algorithms with short broadcast delay and high success ratio. The difficulties lie in the non-uniform channel availability, which entails broadcasting on multiple channels even for single-hop neighbors, and the distributed behaviors, where the users are only aware of the local information. Our contributions are threefold. First, we propose an efficient distributed rendezvous algorithm which spurs the neighbors to find the common channels in a short time. Second, we handle the single-hop broadcast by presenting two distributed algorithms and they guarantee a successful broadcast in and time slots respectively, where is the number of channels and is the maximum number of neighbors. Finally, we extend these algorithms to multi-hop broadcast and the broadcast delay is only up to times the delay of single-hop, where is the network diameter. We also conduct extensive simulations and the results corroborate our theoretical analyses.
... A DDING multisensorial information to video content in order to increase users' Quality of Experience (QoE) is attracting attention in both industrial and academic research environments [1], [2]. ...
Article
Full-text available
Improving user experience during the delivery of immersive content is crucial for its success for both the content creators and audience. Creators can express themselves better with multisensory stimulation, while the audience can experience a higher level of involvement. The rapid development of mulsemedia devices provides better access for stimuli such as olfaction and haptics. Nevertheless, due to the required manual annotation process of adding mulsemedia effects, the amount of content available with sensorial effects is still limited. This work introduces an innovative mulsemedia-enhancement solution capable of automatically generating olfactory and haptic content based on 360 video content, with the use of neural networks. Two parallel neural networks are responsible for automatically adding scents to 360 videos: a scene detection network (responsible for static, global content) and an action detection network (responsible for dynamic, local content). A 360 video dataset with scent labels is also created and used for evaluating the robustness of the proposed solution. The solution achieves a 69.19% olfactory accuracy and 72.26% haptics accuracy during evaluation using two different datasets.
... The TV market has undergone a major evolution, and broadcasters have faced new challenges to meet growing user demand for new services. Viewers have begun to demand highly personalized experiences that meet their individual needs [1]. In the near future of broadcast TV, this aspect is likely to become even more evident, with more people demanding personalized TV experiences through user-generated content and the option of bundled micro packages. ...
... Moreover, with the advantage of providing high reliability, low latency, BC system is becoming more and more important in the applications relying on Ultra-reliable and lowlatency communication (URLLC), tactile Internet and Internet of Things (IoT), for e.g. in automated traffic, industrial control, and virtual reality [2], [3]. To meet these requirements, an indispensable technology is the multiple input multiple output antenna (MIMO) technology with the potential to significantly improve the spectrum efficiency and energy efficiency [4], [5]. In particular, MIMO techniques have been widely adopted in next-generation broadcast standards, including digital video broadcasting. ...
Article
n this paper, we study the problem of optimizing the weighted total system rate (WSR) for a downlink broadcast communication system using multiple input-multiple output (MIMO) antenna technology, wherein a Base station (BS) transmits multiple data streams simultaneously to K multi-antenna MIMO mobile stations (MSs). Upon the power constraint, the optimal solution is to find the pre-coding matrices at the BS and the decoding matrices at the MSs. However, this type of optimization problem is usually nonlinear and non-convex, so it is relatively difficult to solve by analytical methods. To tackle the problem, we propose a novel algorithm to optimize the WSR of the system based on the Harris Hawking Optimization (HHO) algorithm using the linear least squares mean error (MMSE) filter at the MSs. Numerical results have been used to demonstrate the outperformance of the proposed algorithm, comparing with existing methods such as Block Diagonalism with Waterfilling algorithm and Particle Swarm Optimization, particularly at the low signal-to-noise (SNR) range. In the end, we may propose an adaptive method that combines the advantages of different algorithms at various SNR domains to maximize the system performance.
Thesis
Full-text available
This work aims to investigate the adequacy of accountability and civil liability systems in the context of the Internet of Things. In order to carry out this examination, different approaches have been used: in particular, the benefits of historical, sociological and technical analysis have been relied upon, to be integrated with the study of norms, decisions and practices typical of the jurist. The first chapter aims to provide the main coordinates for understanding the phenomenon of the Internet of Things: this required a historical and technical approach, albeit minimal. Today's state of the art is in fact the result of the development of fundamental technologies such as cloud computing, and the more modern edge computing and fog computing. These, in turn, required an introduction to how they work. In the second chapter, the focus shifted to the relationship between law and technology, with a focus on digital technologies. The protection of personal data, that is the subject of this thesis can in fact be traced back to the vast field of studies known as Law&Tech, characterised by the influence that technology exerts on law. In addition to this, the second chapter was dedicated to the framing of personal data, the object of the protection of the discipline under examination. The main reference was Article 4 of the GDPR and Opinion No. 4 of 2007 of the Article 29 Working Party. The latter, by issuing soft law acts (opinions and guidelines) plays a key role in the interpretation of data protection provisions. A great importance was attached to soft law in the course of the thesis: the acts of the European Data Protection Board, of the Italian Data Protection Authority, of the European Data Protection Supervisory, and the European agencies (such as Enisa) constitute first-rate references for the analysis of regulatory texts and for the evaluation of technological implementation practices. The third chapter was devoted to the accountability principle. 
This has been considered by important commentators as the element on which the modernisation of the data protection discipline was based: it was in fact introduced by the GDPR, and in contrast to the former Directive 95/46/EC, it imports a series of obligations aimed to making the main figures accountable for processing: the controller and the processor. This paradigm shift is analysed by retracing the main stages that led to today's accountability principle, with particular emphasis on the minimum security measures provided for in Article 33 of the former Italian privacy code. The fourth chapter focuses instead on civil liability for unlawful processing of personal data. The reference provision is Article 82 of the GDPR, and starting from it, the active and passive subjective profiles, the objective profiles, the nature of the criterion of imputation of liability, and the relationship between the injured party and the damaging party were examined. The protracted study took into consideration the unresolved problems of Italian civil liability, especially as regards the criterion of imputation of liability and the relationship between the injured party and the damaged party. This analysis was supplemented by a systematic reading of the Regulation, with the consequence of finding the minimum and maximum limits of civil liability in the compliance with the principle of accountability. In particular, the balance set by the GDPR between the circulation and protection of personal data, the principle of adequacy, and finally the limits of the state of the art and implementation costs were examined. The results obtained in the third and fourth chapters on accountability and civil liability systems were then tested in the context of the Internet of Things. This necessitated an introduction on the circulation model of personal data, and the risks arising from it: in particular, algorithmic discrimination and influences on personal self-determination were examined. 
Algorithms were taken into consideration, by virtue of their great inferential capacity, as tools for the extraction of new data, sometimes burdened by biases imprinted at the time of design, and at other times vitiated by biases that emerged later than the time of programming. Respecting the principle of accountability in the IoT, so as not to be condemned for damages under Article 82 GDPR, is very complex. The technological phenomenon in question is very intricate, characterised by great opacity and chains of processing. The lack of transparency makes it complex to be accountable, while the concatenation of accountable treatments, in certain cases, can lead to the inconsistency of personal data protection. Finally, some problematic liability profiles linked to the industrialisation of relations were compared to those arising from their digitalisation.
Article
With the development of human-machine interactions, users are increasingly evolving towards an immersion experience with multi-dimensional stimuli. Facing this trend, cross-modal collaborative communication is considered an effective technology in the Industrial Internet of Things (IIoT). In this paper, we focus on open issues about resource reuse, pair interactivity, and user assurance in cross-modal collaborative communication to improve quality of service (QoS) and users’ satisfaction. Therefore, we propose a novel architecture of modal-aware resource allocation to solve these contradictions. First, taking all the characteristics of multi-modal into account, we introduce network slices to visualize resource allocation, which is modeled as a Markov Decision Process (MDP). Second, we decompose the problem by the transformation of probabilistic constraint and Lyapunov Optimization. Third, we propose a deep reinforcement learning (DRL) decentralized method in the dynamic environment. Meanwhile, a federated DRL framework is provided to overcome the training limitations of local DRL models. Finally, numerical results demonstrate that our proposed method performs better than other decentralized methods and achieves superiority in cross-modal collaborative communications.
Article
The provision to the users of realistic media contents is one of the main goals of future media services. The sense of reality perceived by the user can be enhanced by adding various sensorial effects to the conventional audio-visual content, through the stimulation of the five senses stimulation (sight, hearing, touch, smell and taste), the so-called multi-sensorial media (mulsemedia). To deliver the additional effects within a smart home (SH) environment, custom devices (e.g., air conditioning, lights) providing opportune smart features, are preferred to ad-hoc devices, often deployed in a specific context such as for example in gaming consoles. In the present study, a prototype for a mulsemedia TV application, implemented in a real smart home scenario, allowed the authors to assess the user’s Quality of Experience (QoE) through test measurement campaign. The impact of specific sensory effects (i.e., light, airflow, vibration) on the user experience regarding the enhancement of sense of reality, annoyance, and intensity of the effects was investigated through subjective assessment. The need for multi sensorial QoE models is an important challenge for future research in this field, considering the time and cost of subjective quality assessments. Therefore, based on the subjective assessment results, this paper instantiates and validates a parametric QoE model for multi-sensorial TV in a SH scenario which indicates the relationship between the quality of audiovisual contents and user-perceived QoE for sensory effects applications.
Conference Paper
Full-text available
There is no ITU recommendation concerning subjective tests for video quality assessment in UHD resolution. A clear methodology needs to be established. In this paper we present a subjective quality assessment test of HEVC/H.265 compressed 4K Ultra-High-Definition (UHD) videos in a laboratory viewing environment. We describe the methodology employed, including the setup room based on different parameters such as user position, viewing angle and viewing distance, and the video dataset we used which has been previously employed. Finally, we make some considerations on the experience collected on the methodology and on the fact that the same experiment when conducted in different countries with different languages and cultures may lead to different results.
Article
Full-text available
This paper presents Lysis, which is a cloud-based platform for the deployment of Internet of Things applications. The major features that have been followed in its design are the following: each object is an autonomous social agent; the PaaS (Platform as a Service) model is fully exploited; re-usability at different layers is considered; the data is under control of the users. The first feature has been introduced by adopting the Social IoT concept, according to which objects are capable of establishing social relationships in an autonomous way with respect to their owners with the benefits of improving the network scalability and information discovery efficiency. The major components of PaaS services are used for an easy management and development of applications by both users and programmers. The re-usability allows the programmers to generate templates of objects and services available to the whole Lysis community. The data generated by the devices is stored at the objects owners cloud spaces. The paper also presents a use-case that illustrates the implementation choices and the use of the Lysis features.
Article
Full-text available
This study looked at users' perception of interstream synchronization between audiovisual media and two olfactory streams. The ability to detect skews and the perception and impact of skews on user Quality of Experience (QoE) is analyzed. The olfactory streams are presented with the same skews (i.e., delay) and with variable skews (i.e., jitter and mix of scents). This article reports the limits beyond which desynchronization reduces user-perceived quality levels. Also, a minimum gap between the presentations of consecutive scents is identified, necessary to ensuring enhanced user-perceived quality. There is no evidence (not considering scent type) that overlapping or mixing of scents increases user QoE levels for olfaction-enhanced multimedia.
Article
Full-text available
The Internet of Things (IoT) paradigm has been evolving toward the creation of a cyber-physical world where everything can be found, activated, probed, interconnected, and updated, so that any possible interaction, both virtual and/or physical, can take place. A Crucial concept of this paradigm is that of the virtual object, which is the digital counterpart of any real (human or lifeless, static or mobile, solid or intangible) entity in the IoT. It has now become a major component of the current IoT platforms, supporting the discovery and mash up of services, fostering the creation of complex applications, improving the objects energy management efficiency, as well as addressing heterogeneity and scalability issues. This paper aims at providing the reader with a survey of the virtual object in the IoT world. Virtualness is addressed from several perspectives: historical evolution of its definitions, current functionalities assigned to the virtual object and how they tackle the main IoT challenges, and major IoT platforms, which implement these functionalities. Finally, we illustrate the lessons learned after having acquired a comprehensive view of the topic.
Article
Full-text available
User Quality of Experience (QoE) is of fundamental importance in multimedia applications and has been extensively studied for decades. However, user QoE in the context of the emerging multiple-sensorial media (mulsemedia) services, which involve different media components than the traditional multimedia applications, have not been comprehensively studied. This article presents the results of subjective tests which have investigated user perception of mulsemedia content. In particular, the impact of intensity of certain mulsemedia components including haptic and airflow on user-perceived experience are studied. Results demonstrate that by making use of mulsemedia the overall user enjoyment levels increased by up to 77%.
Conference Paper
MulSeMedia refers to the combination of traditional media (e.g., text, image, and video) with other objects that aim to stimulate additional human senses, targeting, for example, mechanoreceptors, chemoreceptors, and thermoreceptors. Existing solutions embed the control of actuators in the applications themselves, limiting their reuse in other types of applications or with different media players. This work presents PlaySEM, a platform that offers a new approach to simulating and rendering sensory effects, operating independently of any media player and compatible with the MPEG-V standard, while taking the reuse requirement into account. Conjectures regarding this architecture are tested, focusing on the decoupled operation of the renderer.
Book
Addressing current issues of which any engineer or computer scientist should be aware, this monograph is a response to the need to adopt a new computational paradigm as the methodological basis for designing pervasive embedded systems with sensor capabilities. The requirements of this paradigm are to control complexity, to limit cost and energy consumption, and to provide adaptation and cognition abilities allowing the embedded system to interact proactively with the real world. The quest for such intelligence requires the formalization of a new generation of intelligent systems able to exploit advances in digital architectures and in sensing technologies. The book sheds light on the theory behind intelligence for embedded systems, with specific focus on: • robustness (the robustness of a computational flow and its evaluation); • intelligence (how to mimic the adaptation and cognition abilities of the human brain); • the capacity to learn in non-stationary and evolving environments by detecting changes and reacting accordingly; and • a new paradigm that, by accepting results that are correct in probability, allows the complexity of the embedded application to be kept under control. Theories, concepts, and methods are provided to motivate researchers in this exciting and timely interdisciplinary area. Applications such as porting a neural network from a high-precision platform to a digital embedded system and evaluating its robustness level are described. Examples show how the methodology introduced can be adopted in the case of cyber-physical systems to manage the interaction between embedded devices and the physical world. Researchers and graduate students in computer science and various engineering-related disciplines will find the methods and approaches propounded in Intelligence for Embedded Systems of great interest. The book will also be an important resource for practitioners working on embedded systems and applications.
© Springer International Publishing Switzerland 2014. All rights reserved.