Table 2 - uploaded by Heng Yao
Detection accuracy with different bit rate settings.

Source publication
Article
Full-text available
Today’s H.264/AVC coded videos have high quality and a high data-compression ratio. They also have strong fault tolerance and good network adaptability, and have been widely applied on the Internet. With the popularity of powerful and easy-to-use video-editing software, digital videos can be tampered with in various ways. Therefore, the double compre...

Contexts in source publication

Context 1
... optimal detection accuracy was also obtained at the same time. Table 2 lists the detection accuracy under different bit rate settings, where the best result for each combination of (R1, R2) is bold and underlined. Note that every accuracy value covers two aspects: whether the single- and the double-compressed videos were correctly detected. ...
Context 2
... that every accuracy value covers two aspects: whether the single- and the double-compressed videos were correctly detected. Table 2 shows that, in most cases, the method proposed in this paper performs better at double-compression detection than the other methods across the different target bit rates. A higher bit rate means a better-quality compressed video. ...

Similar publications

Article
Full-text available
Motion-compensated frame-interpolation (MCFI), a frame interpolation technique to increase the motion continuity of low frame-rate video, can be utilized by counterfeiters for faking high bitrate video or splicing videos with different frame-rates. For existing MCFI detectors, their performances are degraded under real-world scenarios such as H.264...
Article
Full-text available
Video copy-move forgery detection is one of the hot topics in multimedia forensics to protect digital videos from malicious use. Several approaches have been presented through analyzing the side effect caused by copy-move operation. However, based on multiple similarity calculations or unstable image features, few can well balance the detection eff...

Citations

... In the current scenario, the H.264 encoder is a highly recommended compression model compared to other traditional encoders [1]. The main aim of the H.264 encoder is to compress the video without degrading its quality [2]. The architecture of the H.264 encoder is the same as that of the H.263 and MPEG-4 compression models [3]. ...
Article
Full-text available
In the recent era, the utilization of the H.264 encoder has been increasing due to its outstanding performance in video compression. However, compressing video with reduced power is still a challenging issue for H.264 encoders. Thus, the proposed study intends to minimize the power consumption of H.264 encoders on FPGA by optimizing the basic components of H.264, thereby enhancing performance. For this purpose, elements such as the Motion Estimation unit, intra-prediction unit, transform unit and entropy encoder are optimized through the schemes introduced in the proposed work. Initially, the Motion Estimation unit is optimized by redesigning the fundamental components of the Block Matching Algorithms. To design the Block Matching Algorithms, the proposed study introduces low-power arithmetic units, namely an add-one-circuit-based Carry SeLect Adder and a Sum of Absolute Difference unit. With the help of these units, the Block Matching Algorithms are designed and the Motion Estimation unit is effectively optimized. Then, by adopting a comparator-less reuse method, the intra-prediction unit is optimized. Next, the transform unit is optimized by proposing a Steerable Discrete Cosine Transform, and finally, the entropy encoders are optimized by combining Golomb and Rice entropy encoders. The proposed study uses the above schemes to improve the efficiency of H.264 encoders on FPGA. The experimental analysis is done using Xilinx software. The simulation results show that the proposed work achieves lower power consumption, LUT usage, delay and MSE, and higher PSNR and operating frequency, than other competing methods.
... In [12], the authors trained a one-class classifier on the reconstructed frame residual to detect double compression. In [13], the authors directly study the bit size of each encoded frame. They showed that a relocated I-frame requires more bits than a typical P- or B-frame and can thus be detected. ...
... In Fig. 3, a comparison between the theoretical and the empirical distributions is given under H0 and H1. One can see how the empirical distributions match the theoretical model given in (13). ...
Article
Full-text available
With the 2019 coronavirus pandemic, we have seen increasing use of remote technologies such as remote identity verification. The user's identity is often authenticated through a biometric match between a selfie and a video of an official identity document. In such a scenario, it is essential to verify the integrity of both the selfie and the video. In this article, we propose a method to detect double video compression in order to verify video integrity. We focus on H.264 compression, one of the mandatory video codecs in the WebRTC Requests For Comments. H.264 uses an integer approximation of the Discrete Cosine Transform (DCT). Our method focuses on the DCT coefficients to detect a double compression. The coefficients roughly follow a Laplacian distribution, and we show that the distribution parameters vary with the quantisation parameter used to compress the video. We thus propose a statistical hypothesis test to determine whether or not a video has been compressed twice.
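The Laplacian model described in this abstract can be sketched numerically. The snippet below is a minimal illustration under stated assumptions, not the paper's actual test: it fits a zero-mean Laplacian to a set of DCT coefficients by maximum likelihood and compares the data's log-likelihood under two candidate scale parameters (the names `b_single` and `b_double` are hypothetical, standing in for scales associated with single and double compression).

```python
import numpy as np

def fit_laplacian(coeffs):
    """MLE of the scale b of a zero-mean Laplacian p(x) = exp(-|x|/b) / (2b),
    fitted to AC DCT coefficients: b_hat = mean(|x|)."""
    coeffs = np.asarray(coeffs, dtype=float)
    return np.mean(np.abs(coeffs))

def log_likelihood(coeffs, b):
    """Log-likelihood of the coefficients under a zero-mean Laplacian."""
    coeffs = np.asarray(coeffs, dtype=float)
    n = coeffs.size
    return -n * np.log(2.0 * b) - np.sum(np.abs(coeffs)) / b

def likelihood_ratio(coeffs, b_single, b_double):
    """Log-likelihood ratio between two candidate scale parameters.

    Positive values mean the data are better explained by b_double,
    i.e. they favour the double-compression hypothesis in this toy setup.
    """
    return log_likelihood(coeffs, b_double) - log_likelihood(coeffs, b_single)
```

In the paper's setting the two scales would come from modelling how quantisation changes the Laplacian parameters; here they are simply given as inputs.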
... In [161], the authors trained a one-class classifier on the reconstructed frame residual to detect double compression. In [162], the authors directly study the bit size of each encoded frame. They showed that a relocated I-frame requires more bits than a typical P- or B-frame and can thus be detected. ...
Thesis
Digital media are part of our day-to-day lives. After years of photojournalism, we have become used to considering them an objective testimony of the truth. But image- and video-retouching software is becoming ever more powerful and easy to use, allowing counterfeiters to produce highly realistic forgeries. Consequently, the authenticity of digital media can no longer be taken for granted. Recent Anti-Money Laundering (AML) regulation introduced the notion of Know Your Customer (KYC), which requires financial institutions to verify their customers' identity. Many institutions prefer to perform this verification remotely, relying on a Remote Identity Verification (RIV) system. Such a system relies heavily on both digital images and videos, so authenticating those media is essential. This thesis focuses on the authentication of images and videos in the context of a RIV system. After formally defining a RIV system, we study the various attacks that a counterfeiter may mount against it. We analyze the challenges posed by each of those threats in order to propose relevant solutions. Our approaches are based on both image-processing methods and statistical tests. We also propose new datasets to encourage research on challenges that are not yet well studied.
... Moreover, measuring the strength of blocking artifacts requires decoding and analyzing all the frames, thus significantly increasing the computational complexity. Still leveraging the fact that, in a double-compressed video, the temporal correlation is weak for I-frames that are re-encoded as P-frames, Yao et al. [13] observed that these frames require a larger number of bits in the bitstream. Thus, the bit size of each frame is taken as the main feature in [13], allowing fast and accurate detection of double encoding. ...
... Still leveraging the fact that, in a double-compressed video, the temporal correlation is weak for I-frames that are re-encoded as P-frames, Yao et al. [13] observed that these frames require a larger number of bits in the bitstream. Thus, the bit size of each frame is taken as the main feature in [13], allowing fast and accurate detection of double encoding. Like all previous schemes, the method proposed in [13] cannot cope with B-frames; in addition, it has been tested only with H.264 for the last compression and CBR as the coding strategy. ...
... Thus, the bit size of each frame is taken as the main feature in [13], allowing fast and accurate detection of double encoding. Like all previous schemes, the method proposed in [13] cannot cope with B-frames; in addition, it has been tested only with H.264 for the last compression and CBR as the coding strategy. Finally, a different approach to the problem was proposed in [8], where the authors employ deep learning to distinguish, in a frame-wise fashion, relocated I-frames from other frames (i.e., single-encoded frames, or double-encoded frames that kept their original type). ...
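The frame bit-size cue discussed in these contexts can be illustrated with a small sketch. This is a toy under stated assumptions, not the detector of [13]: given per-frame encoded sizes (assumed to be parsed from the bitstream by some external tool), it flags P-frames whose size is a robust statistical outlier, then scores how well the flagged frames line up with a candidate first-pass GOP period.

```python
import numpy as np

def frame_size_outliers(p_frame_sizes, thresh=3.0):
    """Flag P-frames whose encoded bit size is anomalously large.

    Rationale (from the cited works): an I-frame re-encoded as a P-frame
    in a double-compressed video has weak temporal correlation and so
    needs far more bits than an ordinary P-frame.
    """
    sizes = np.asarray(p_frame_sizes, dtype=float)
    med = np.median(sizes)
    mad = np.median(np.abs(sizes - med)) + 1e-9   # robust spread, avoid /0
    z = (sizes - med) / (1.4826 * mad)            # MAD -> sigma scale
    return np.nonzero(z > thresh)[0]

def periodicity_score(outlier_idx, period):
    """Fraction of flagged frames that share one phase modulo a candidate
    first-pass GOP period; a high score supports double compression."""
    outlier_idx = np.asarray(outlier_idx)
    if outlier_idx.size == 0:
        return 0.0
    phases = np.bincount(outlier_idx % period, minlength=period)
    return phases.max() / outlier_idx.size
```

A real detector would also search over candidate periods and set the outlier threshold from training data; the constants here are placeholders.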
Article
The Variation of Prediction Footprint (VPF), previously used in video forensics for double-compression detection and GOP size estimation, is comprehensively investigated to improve its acquisition capabilities and extend its use to video sequences that contain bi-directional frames (B-frames). By relying on a universal rate-distortion analysis applied to a generic double-compression scheme, we first explain the rationale behind the presence of the VPF in double-compressed videos and then justify the need to exploit a new source of information, the motion vectors, to enhance the VPF acquisition process. Finally, we describe the shifted VPF induced by the presence of B-frames and detail how to compensate for the shift to avoid misguided GOP size estimations. The experimental results show that the proposed Generalized VPF (G-VPF) technique outperforms the state of the art, not only in double-compression detection and GOP size estimation, but also in reducing computational time.
... Besides, some novel image segmentation technologies can be considered for locating the tampered region more precisely and accelerating our method. Finally, the double compression detection tasks on some other media such as audio [28] and video [29,30] will be considered in the future. ...
Article
Full-text available
With the wide use of various image-altering tools, digital image manipulation has become very convenient and easy, which makes detecting image originality and authenticity significant. Among various image-tampering detection tools, the double JPEG compression detector, which is not sensitive to any specific tampering operation, has received much attention. In this paper, we propose an improved double JPEG compression detection method based on a noise-free DCT (Discrete Cosine Transform) coefficient mixture histogram model. Specifically, we first extract the block-wise DCT coefficient histogram and eliminate the quantization noise introduced by rounding and truncation operations. Then, for each DCT frequency, a posterior probability is obtained by solving the DCT coefficient mixture histogram with a simplified model. Finally, the probabilities from all DCT frequencies are accumulated to give the posterior probability of a DCT block being authentic or tampered. Extensive experimental results, in both quantitative and qualitative terms, demonstrate the superiority of our proposed method over state-of-the-art methods.
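The DCT-histogram effect this abstract builds on can be demonstrated with a short sketch. This is an illustrative toy, not the paper's mixture-histogram model: it simulates quantizing coefficients twice with different steps and measures the empty-bin pattern that double quantization leaves in the coefficient histogram.

```python
import numpy as np

def dct_histogram(coeffs, max_abs=50):
    """Histogram of integer DCT coefficients over [-max_abs, max_abs]."""
    coeffs = np.asarray(coeffs).astype(int)
    coeffs = coeffs[np.abs(coeffs) <= max_abs]
    return np.bincount(coeffs + max_abs, minlength=2 * max_abs + 1)

def double_quantize(x, q1, q2):
    """Quantize with step q1, dequantize, then requantize with step q2 --
    the operation chain that produces the tell-tale histogram artifacts."""
    return np.round(np.round(x / q1) * q1 / q2).astype(int)

def empty_bin_ratio(hist):
    """Fraction of interior histogram bins that are empty; double
    compression with q1 > q2 tends to raise this ratio sharply."""
    inner = hist[1:-1]
    return float(np.mean(inner == 0))
```

With a coarse first step (e.g. q1 = 5) followed by a finer second step (q2 = 2), only a sparse subset of integer values remains reachable, so many interior bins of the second-pass histogram are empty, while a singly compressed histogram stays dense.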
... Kim et al. [8] proposed a modality fusion method designed for combining spatial and temporal fingerprint information to improve video copy detection performance. For video double compression detection, Yao et al. [9] proposed a method that analyzed the periodic features of the string of data bits and the skip macroblocks for all I-frames and P-frames in a double-compressed H.264/AVC video. Jiang et al. [10] analyzed degradation mechanisms during recompression to detect double compression with the same coding parameters. ...
Article
A high-definition (HD) video usually implies good visual quality, and a video with a low quantization parameter (QP) or a high bit rate has good definition. However, forgers may directly re-encode lower-definition videos with a lower QP or a higher bit rate, without any real quality improvement, to pass them off as HD videos. Therefore, fake-HD video detection is necessary in video forensics. In this paper, a novel method is proposed to detect fake HD in High Efficiency Video Coding (HEVC) videos based on a prediction mode feature (PMF). According to our analysis, the HEVC prediction-mode decisions of a subsequent encoding are influenced by a previous lower-quality encoding; hence, the frequencies of prediction units with each type of prediction mode can be used to detect fake HD videos. First, a 4-D feature is extracted from the Planar, DC, H0-direction and V0-direction intra-prediction modes. Second, a 6-D feature is extracted from the Skip, Merge and AMVP inter-prediction modes. Finally, these two feature sets are combined into the PMF to detect fake HD videos and further estimate their original QPs and bit rates. Experimental results show that the proposed method outperforms state-of-the-art works.
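The mode-frequency feature in this abstract amounts to counting prediction-unit modes into a normalized vector. The sketch below is a simplified assumption-laden illustration: the mode names follow the abstract's intra set, but the exact breakdown of the 6-D inter set is not specified there, so only three inter labels are used, and the per-unit mode labels are assumed to come from some HEVC bitstream parser (hypothetical here).

```python
from collections import Counter

# Intra modes named in the abstract; inter set simplified to three labels.
INTRA_MODES = ("Planar", "DC", "H0", "V0")
INTER_MODES = ("Skip", "Merge", "AMVP")

def prediction_mode_feature(mode_labels):
    """Normalized frequency vector of prediction-unit modes.

    `mode_labels` is one mode-name string per prediction unit, assumed to
    be extracted from the bitstream by an external parser. The resulting
    vector would be fed to a classifier trained to separate genuine HD
    from re-encoded ("fake HD") videos.
    """
    counts = Counter(mode_labels)
    total = max(len(mode_labels), 1)   # guard against an empty input
    return [counts[m] / total for m in INTRA_MODES + INTER_MODES]
```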
... As one of the most popular media, JPEG (Joint photographic experts group) images are easily accessible and thus are liable to be altered or manipulated with various basic operations such as image resizing, filtering, splicing, noise addition, contrast enhancement, rotation, double compression and so on [25,26]. With no visual traces left, such images are often not clear in processing history which could be detrimental in some specific situations [2]. ...
Article
Full-text available
With the increasing tampering of JPEG images, developing methods to detect image forgery is of great importance. In many cases, JPEG image forgery is accompanied by double JPEG compression, leaving no visual traces. In this paper, a modified version of DenseNet (densely connected convolutional networks) is proposed to accomplish the task of detecting the primary JPEG compression among double-compressed images. A special filtering layer at the front of the network contains carefully selected filtering kernels that help the following network discriminate the images more easily. As the results show, the network achieves a great improvement over the state-of-the-art method, especially in classification accuracy for images with lower quality factors.
... A video is usually split into sequences of consecutive, closely related pictures (Haskell and Puri, 2012). The MPEG-2 standard introduced three frame types: the intra-coded frame (I-frame), the predictive frame (P-frame) and the bi-predictive frame (B-frame) (Jiang et al., 2011; Yao et al., 2017). These frame types were also carried over to subsequent coding standards. ...
Article
Full-text available
Peer-to-peer (P2P) networks have emerged as an efficient and affordable means of transmitting videos to numerous end-users via the Internet. The dynamic and heterogeneous nature of P2P streaming systems (P2PSS) makes testing, analysis and verification a cumbersome task. However, formal methods offer efficient approaches to rigorously analyze and verify P2PSS. This paper demonstrates the use of formal verification techniques for analyzing the behavioral properties of P2PSS. We use temporal logics to analyze whether all possible behaviors of the P2P streaming system conform to the defined specifications. Specifically, we apply model checking to verify the consistency, completeness and certainty of the model, i.e., whether its temporal properties satisfy the required specifications. Furthermore, the P2PSS framework was modeled and verified using the Simulink Design Verifier (SDV) in the MATLAB simulation tool. The simulation results showed 100% validation for all frames and 50% validation for I-frame prioritisation. Further, the probability that a peer can forward frames while receiving is at most 0.5.
Chapter
Methods that can determine if two given video sequences are captured by the same device (e.g., mobile telephone or digital camera) can be used in many forensics tasks. In this paper we refer to this as “video device matching”. In open-set video forensics scenarios it is easier to determine if two video sequences were captured with the same device than identifying the specific device. In this paper, we propose a technique for open-set video device matching. Given two H.264 compressed video sequences, our method can determine if they are captured by the same device, even if our method has never encountered the device in training. We denote our proposed technique as H.264 Video Device Matching (H4VDM). H4VDM uses H.264 compression information extracted from video sequences to make decisions. It is more robust against artifacts that alter camera sensor fingerprints, and it can be used to analyze relatively small fragments of the H.264 sequence. We trained and tested our method on a publicly available video forensics dataset consisting of 35 devices, where our proposed method demonstrated good performance.KeywordsH.264 Video CompressionVideo Device MatchingDigital Video ForensicsDeep Learning