Figure - available from: Multimedia Tools and Applications
MOS comparison between the proposed adaptive algorithm and other quality settings utilizing video streams at different motions for several video clips

Source publication
Article
Full-text available
Video streaming over the Internet has been gaining momentum, and several quality adaptation schemes have been reported to improve the quality of the streamed videos. Most of these schemes focus on adjusting the video encoding rate to match certain network conditions. This paper presents a new quality adaptation algorithm for real-time video streaming over I...

Citations

... We have used SSIM for the quality evaluation, as it is more accurate in assessing the quality of the video file. SSIM was designed to improve on traditional metrics such as peak signal-to-noise ratio (PSNR) and mean squared error (MSE) [42]. SSIM is used for measuring the similarity between two images. ...
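As a minimal sketch of how such an SSIM comparison can be computed in practice, the snippet below uses scikit-image's structural_similarity; the library choice and file names are illustrative assumptions, not taken from the cited work.

from skimage.color import rgb2gray
from skimage.io import imread
from skimage.metrics import structural_similarity

# File names are placeholders for an RGB reference frame and its received counterpart.
ref = rgb2gray(imread("reference_frame.png"))
deg = rgb2gray(imread("received_frame.png"))

# SSIM is computed over local windows and averaged into one score in [-1, 1],
# where 1 indicates that the two frames are identical.
score, ssim_map = structural_similarity(ref, deg, data_range=1.0, full=True)
print(f"SSIM: {score:.4f}")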
Article
Full-text available
Visible light communication (VLC) is a promising technology that can jointly accomplish the typical lighting functionalities of light-emitting diodes (LEDs) and data transmission, where light intensity is modulated with high-rate data in a way that cannot be noticed by the human eye. In this paper, a VLC simulation framework to study the effect of LEDs’ distributions in rooms of different dimensions is proposed, considering performance metrics such as light intensity quality in accordance with the International Organization for Standardization (ISO) recommendation, and data transmission efficiency measured in terms of bit error rate (BER). To achieve that, a VLC communication system is designed that modulates the data, transmits it across the room over a communication channel modeled with an accurate ray-tracing algorithm, and receives it. Our work differs from most published works, which studied either the data transmission efficiency or the lighting quality but not both. In addition, our study investigates the effect of different room dimensions and different numbers of transmitters on data transmission quality and light illumination. Consequently, this paper can be used as a methodological study to design an efficient VLC system that satisfies the ISO lighting requirement and the VLC application-specific BER requirements. Furthermore, a video transmission use case has been demonstrated, which shows how video quality can be significantly improved when the number of transmitters is increased. However, considering the ISO lighting requirements, one can put a limit on the number of LEDs that can achieve the required application BER and lighting requirements, thus achieving both objectives efficiently.
... Several attempts at designing application- and data-link-level solutions to mitigate packet loss and bit errors have been made. These attempts consider wired and wireless Internet connections with the aim of improving transmission speed and quality, but without changing the current transmission control protocols [1][2][3][4][5]. However, to obtain significant results, more research on the data transmission process is still needed. ...
... The GE Markov chain is used to capture the temporally correlated pattern of packet loss. For the GE Markov chain, we apply transition probabilities of γ = 0.99875 and β = 0.875 [21], from which p, r, G and B are calculated. Equations (1) and (2) are then used to calculate the GE parameters that correspond to the packet loss under investigation. In this paper, five loss rates are tested (5, 10, 15, 20, and 25%). ...
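To make the two-state GE loss model concrete, the sketch below simulates a bursty loss trace from generic transition probabilities p (Good to Bad) and r (Bad to Good). Under one common convention, γ and β denote the probabilities of staying in the Good and Bad states, so p = 1 − γ and r = 1 − β; the simulator and the numeric example are illustrative and do not reproduce Equations (1) and (2) of the cited paper.

import random

def simulate_ge_loss(n_packets, p, r, loss_good=0.0, loss_bad=1.0, seed=1):
    """Simulate a two-state Gilbert-Elliott packet-loss trace.

    p: probability of moving from the Good state to the Bad state
    r: probability of moving from the Bad state back to the Good state
    loss_good / loss_bad: per-packet loss probability inside each state
    Returns a list of booleans (True means the packet is lost).
    """
    rng = random.Random(seed)
    in_bad_state = False
    trace = []
    for _ in range(n_packets):
        loss_prob = loss_bad if in_bad_state else loss_good
        trace.append(rng.random() < loss_prob)
        # State transition applied before the next packet is sent.
        if in_bad_state:
            in_bad_state = rng.random() >= r   # leave Bad with probability r
        else:
            in_bad_state = rng.random() < p    # enter Bad with probability p
    return trace

# With loss_bad = 1 and loss_good = 0, the long-run loss rate is p / (p + r);
# the values below give roughly the 5% case as an example.
trace = simulate_ge_loss(100_000, p=0.01, r=0.19)
print(f"observed loss rate: {sum(trace) / len(trace):.3f}")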
Article
Full-text available
With the ever-increasing demand for higher-speed Internet connectivity that can fulfil applications’ continuous need for more bandwidth, Google, being the pioneer in many web-based services, has launched a new UDP-based protocol named quick UDP internet connections (QUIC), which aims at providing faster data delivery without requiring upgrades or modifications to the network infrastructure. The goal of this paper is to provide an overview of the QUIC protocol and to propose the design and implementation of a test-bed that is used to experimentally evaluate the QUIC protocol under different network conditions and scenarios. In particular, the performance advantage of QUIC in terms of delay and throughput is examined, taking into account different network conditions that resemble the real Internet environment. Two scenarios are proposed: the first investigates the protocol's performance in a controlled network environment, while the second tests the protocol in a real, uncontrolled network. To achieve that, a test-bed is proposed and implemented that emulates the network impairments encountered in real networks, such as packet loss, bit errors, and bandwidth limitation, in a controlled manner. After that, QUIC is tested in real operational wired and wireless networks. In both scenarios, QUIC outperforms TCP in terms of delay, which strengthens QUIC's position as a potential alternative to TCP.
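As a hedged illustration of how such controlled impairments are commonly introduced on a Linux test-bed, the sketch below wraps the standard tc/netem tool; the interface name and impairment values are placeholders, and the cited test-bed may well use a different emulation setup.

import subprocess

def apply_impairments(iface="eth0", loss_pct=5.0, delay_ms=50, rate_mbit=10):
    """Apply packet loss, delay and a bandwidth cap on one interface via tc/netem.

    Requires root privileges; the interface name and values are placeholders.
    """
    subprocess.run(
        ["tc", "qdisc", "replace", "dev", iface, "root", "netem",
         "loss", f"{loss_pct}%",
         "delay", f"{delay_ms}ms",
         "rate", f"{rate_mbit}mbit"],
        check=True,
    )

def clear_impairments(iface="eth0"):
    """Remove the netem queueing discipline, restoring default behaviour."""
    subprocess.run(["tc", "qdisc", "del", "dev", iface, "root"], check=True)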
... The SSIM [27] index is a method for measuring the similarity between two images. SSIM is designed to improve on traditional methods such as peak signal-to-noise ratio (PSNR) and mean squared error (MSE) [28]. The SSIM metric is calculated on various windows of an image. ...
Preprint
Full-text available
Visible light communication (VLC) is a promising technology that can jointly be used to accomplish the typical lighting functionalities of light-emitting diodes (LEDs) and data transmission, where light intensity can be modulated at a high rate that cannot be noticed by the human eye. In this paper, a VLC simulation framework to study the effect of LEDs’ distributions in rooms of different dimensions is proposed, considering performance metrics such as light intensity quality in accordance with the International Organization for Standardization (ISO) recommendation, and data transmission efficiency measured in terms of bit error rate (BER). To achieve the abovementioned performance metrics, a VLC communication system is designed that modulates the data, transmits it across the room over a communication channel modeled with an accurate ray-tracing algorithm, and receives it using different receivers that are uniformly distributed in the room. Our work differs from other published works, which studied either the data transmission efficiency or the lighting quality but not both. Consequently, this study can be used as a methodological foundation to design an efficient VLC system that satisfies the ISO lighting requirement and the application-specific BER and quality. Furthermore, a video transmission use case has been demonstrated, which shows how the video quality can be significantly improved when increasing the number of transmitters, thus justifying the need for more transmitters in scenarios that involve video transmission in an indoor VLC environment.
... To ensure the required quality of service (QoS) for video application end users, it is necessary to have an automated system for video quality monitoring and measurement. In recent years, many researchers have been dealing with problems related to these issues [5][6][7][8]. An important component of such a system is the one dealing with PL rate (PLR) measurement [9,10]. ...
Article
Full-text available
Video signals are a very important part of multimedia applications. Due to limited network bandwidth, video signals are subjected to a compression process, which introduces different compression artifacts. During network transmission, additional artifacts are introduced in video signals due to random bit errors and packet loss (PL). Both artifact types degrade the visual quality of the video signal, which therefore has to be continuously monitored to ensure the required quality of service (QoS) provided to end users. An important component of a video quality monitoring system deals with video transmission artifact detection. In this paper, a no-reference (NR) pixel-based video transmission artifact detection algorithm, called the packet loss area measure (PLAM) algorithm, is proposed. When detecting video transmission artifacts, the PLAM algorithm takes into account the spatial and temporal information of a video signal. The performance of the proposed PLAM algorithm has been compared to that of three existing PL detection algorithms on a broad set of significantly different video signals from two publicly available video databases. One of these databases, called the Referent Packet Loss (RPL) database, was created within this research and is presented in this paper. The testing results show that PLAM achieves high performance and outperforms the other tested algorithms. Furthermore, the results show that the PLAM algorithm is very robust when detecting video transmission artifacts in video signals of different content, with distinct degradation levels and PL error-concealment methods used in decoder post-processing. Due to its low computational complexity, the PLAM algorithm is capable of processing Full HD and Ultra HD video signals in real time at frame rates of up to 100 and 25 frames per second (fps), respectively, when a high-end CPU is used.
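The PLAM algorithm itself is not reproduced here; purely as a generic illustration of a no-reference, pixel-based approach that combines spatial and temporal information, the sketch below flags blocks whose temporal change deviates strongly from the rest of the frame. The block size and threshold are arbitrary choices, not values from the paper.

import numpy as np

def flag_suspect_blocks(prev_frame, curr_frame, block=16, k=4.0):
    """Very simple no-reference heuristic (not the PLAM algorithm).

    Flags blocks whose mean absolute temporal difference deviates from the
    frame-wide median by more than k median absolute deviations, a crude way
    to catch localized discontinuities such as packet-loss slices.
    Both frames are expected as 2-D grayscale arrays of equal shape.
    """
    diff = np.abs(curr_frame.astype(np.float32) - prev_frame.astype(np.float32))
    h, w = diff.shape
    h, w = h - h % block, w - w % block
    blocks = diff[:h, :w].reshape(h // block, block, w // block, block)
    block_energy = blocks.mean(axis=(1, 3))          # mean temporal change per block
    med = np.median(block_energy)
    mad = np.median(np.abs(block_energy - med)) + 1e-6
    return (block_energy - med) / mad > k            # boolean map of suspect blocks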
... This will also increase the source code distortion, which in turn will reduce the decoded video quality. The testbed under study was previously reported in Murshed et al. (2013), Khalifeh et al. (2016) and Khalifa et al. (2016), focusing on the practical assessment of video quality towards development of a real-time adaptation algorithm for video streaming. ...
... To illustrate the effect of packet loss on a real-time video stream using the RTMFP protocol, video snapshots are taken for three different video types (Khalifa et al., 2016) with different motions: a high-motion football video, a medium-motion foreman talk video, and a slow-motion news video. Each of these videos is streamed with different packet-loss rates. ...
Article
Multimedia transmission over lossy networks has been rapidly increasing, especially with the advent of different high-speed and cost-effective Internet technologies. Despite these technology advances, however, the quality of video streaming over lossy networks is still a major challenge towards obtaining the best quality of service. In this paper, a new experimental testbed based on the real-time media flow protocol (RTMFP) is proposed to enable researchers to investigate and optimise the effect of changing multiple key parameters relevant to video-streaming quality. The impact of these parameters, which include video resolution, frames per second, and compression rate, on the video quality is investigated under various settings of network packet loss. The obtained results and the relationships between these parameters have led to the development of a new algorithm that is capable of changing the parameters under study, in real time, according to the estimated packet loss, towards the maximum obtainable quality. Design details and the experimental setup of the proposed testbed are presented along with key practical considerations and experimental evaluation results.
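As a rough sketch of the kind of rule-based adaptation described here, the snippet below maps an estimated packet-loss rate to a resolution, frame-rate, and compression setting; the thresholds and setting values are hypothetical and do not reproduce the paper's actual algorithm or measurements.

from dataclasses import dataclass

@dataclass
class StreamSettings:
    resolution: str   # e.g. "1280x720"
    fps: int          # frames per second
    quality: int      # encoder quality/compression level, 0-100

def adapt_settings(estimated_loss_pct: float) -> StreamSettings:
    """Map an estimated packet-loss rate to encoder settings.

    Illustrative thresholds only; the published algorithm derives its own
    mapping from the experimental results of the RTMFP test-bed.
    """
    if estimated_loss_pct < 5:
        return StreamSettings("1280x720", 30, 90)
    if estimated_loss_pct < 15:
        return StreamSettings("854x480", 25, 75)
    if estimated_loss_pct < 25:
        return StreamSettings("640x360", 20, 60)
    return StreamSettings("426x240", 15, 50)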
Article
The Internet has evolved into a nonlinear and open system that operates far from equilibrium. To address the issue that network delay is usually in an uncertain state, we first construct the Internet macro-topology based on complex network theory. On this basis, we statistically calculate the ratios of bottleneck delays generated on valid paths with different hop counts and find that the monthly ratios exceed 50%, which indicates that bottleneck delay occurs universally. Furthermore, the bottleneck-delay time series exhibits stochastic oscillation characteristics throughout the evolution process. Starting from these properties, chaos theory is introduced to analyze its evolution behavior. We use the phase-space reconstruction technique and the G-P algorithm to reconstruct the bottleneck-delay time series in phase space. We then find that the saturation correlation dimension of its chaotic attractor is fractional and that the largest Lyapunov exponent is greater than 0, which confirms that the evolution of the bottleneck-delay time series is chaotic. Finally, we propose the chaos-RBF neural network, which combines the radial basis function (RBF) neural network with chaos theory, to predict the bottleneck-delay time series. Through error statistics and comparative analysis, the experimental results demonstrate that the chaos-RBF neural network has high prediction accuracy and better reflects the changing trend of the bottleneck delay. Based on the largest Lyapunov exponent, good prediction performance can be expected up to 5 months into the future.
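To make the prediction pipeline concrete, the sketch below performs time-delay (phase-space) embedding of a delay series and fits a Gaussian-kernel (RBF) regressor as a simple stand-in for the chaos-RBF network; the embedding dimension, delay, kernel parameters, and synthetic data are placeholders rather than values from the paper.

import numpy as np
from sklearn.kernel_ridge import KernelRidge

def embed(series, m=4, tau=2):
    """Phase-space reconstruction by time-delay embedding.

    Builds vectors [x(t), x(t+tau), ..., x(t+(m-1)tau)] and pairs each with
    the next sample as the one-step-ahead prediction target.
    """
    n = len(series) - (m - 1) * tau - 1
    X = np.stack([series[i:i + n] for i in range(0, m * tau, tau)], axis=1)
    y = series[(m - 1) * tau + 1:(m - 1) * tau + 1 + n]
    return X, y

# Illustrative data: a noisy synthetic series standing in for measured
# bottleneck-delay samples (the paper uses real topology measurements).
rng = np.random.default_rng(0)
series = np.sin(np.linspace(0, 60, 600)) + 0.1 * rng.standard_normal(600)

X, y = embed(series, m=4, tau=2)
split = int(0.8 * len(X))
model = KernelRidge(kernel="rbf", alpha=1e-3, gamma=0.5)   # Gaussian-kernel regressor
model.fit(X[:split], y[:split])
pred = model.predict(X[split:])
print(f"test RMSE: {np.sqrt(np.mean((pred - y[split:]) ** 2)):.4f}")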
Article
Full-text available
Studies on the delay characteristics of the Internet macro-topology provide a reference for resolving the real-time performance issues of data transmission between Internet devices. With the overall advancement of IPv6 Internet deployment, changes in network structure and paths will generate different degrees of delay. In this context, a comparative analysis of the behavioral characteristics of IPv4 and IPv6 network delay was performed in this study. We selected the sampled data of valid paths located at four monitors on different continents under the CAIDA_Ark project to obtain statistics for the network delay and communication diameter on the IPv4 and IPv6 Internet and found that their correlation was extremely weak. Furthermore, the communication diameter on the IPv6 Internet was slightly shorter than that on the IPv4 Internet. The network delay exhibited a bimodal or multimodal heavy-tailed distribution. The network delay and maximum link delay for the IPv4 and IPv6 Internet were strongly correlated, indicating that the bottleneck delay affects the relationship between the network delay and the communication diameter. Next, we analyzed the relationship between network delay and bottleneck delay for the IPv4 and IPv6 Internet and found that bottleneck delay has a more significant impact on the network delay of valid paths for the IPv4 Internet than for the IPv6 Internet. After mapping the IP addresses at both ends of the bottleneck link to Autonomous Systems (ASes), we found that the bottleneck delay on valid paths for the IPv4 Internet was mostly distributed intra-AS, whereas it was inter-AS for the IPv6 Internet. Finally, we analyzed the factors affecting bottleneck delay and found that propagation delay over long distances is an important factor (L > 4000 km on the IPv4 Internet and L > 7000 km on the IPv6 Internet). In addition, for the IPv4 Internet, queuing delay is an important factor affecting bottleneck delay, whereas in the process of data communication on the IPv6 Internet, the impact of propagation and queuing delays on the bottleneck delay is weakened.