Multimedia Tools and Applications (2024) 83:49013–49037
https://doi.org/10.1007/s11042-023-17586-x
Fake-checker: A fusion of texture features and deep learning for deepfakes detection
Noor ul Huda1 · Ali Javed2 · Kholoud Maswadi3 · Ali Alhazmi4 · Rehan Ashraf5
Received: 17 February 2023 / Revised: 20 September 2023 / Accepted: 18 October 2023 /
Published online: 3 November 2023
© The Author(s), under exclusive licence to Springer Science+Business Media, LLC, part of Springer Nature 2023
Abstract
The evolution of sophisticated deep learning algorithms such as Generative Adversarial Networks has made it possible to create deepfake videos with convincing realism. Deepfake identification is important for countering internet disinformation campaigns and lessening their negative effects on social media. Existing studies use either handcrafted features or deep learning-based models for deepfake detection. To effectively combine the strengths of both approaches, this paper presents a fusion of deep features with handcrafted texture features to create a powerful fused feature vector for accurate deepfake detection. We propose a Directional Magnitude Local Hexadecimal Pattern (DMLHP) to extract 320-D texture features, and extract a 2048-D deep feature vector using Inception-V3. Next, we employ Principal Component Analysis to reduce the deep feature dimensions to 320 for a balanced representation of features after fusion. The deep and handcrafted features are then combined into a 640-D fused feature vector. Further, we use the fused features to train an XGBoost model to classify frames as genuine or forged. We evaluated our proposed model on the FaceForensics++ and Deepfake Detection Challenge Preview (DFDC-P) datasets. Our method achieved an accuracy and area under the curve of 97.7% and 99.3% on FaceForensics++, and 90.8% and 93.1% on DFDC-P, respectively. Moreover, we performed cross-set and cross-dataset evaluations to show the generalization capability of our model.
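The fusion pipeline described in the abstract can be sketched in a few lines. This is an illustrative reconstruction on synthetic data, not the authors' implementation: the 2048-D deep features (e.g. from Inception-V3), the 320-D DMLHP texture features, and the labels are randomly generated stand-ins, and scikit-learn's GradientBoostingClassifier substitutes for XGBoost to keep the sketch self-contained.

```python
# Hedged sketch of the fused-feature pipeline: PCA-reduce the 2048-D deep
# features to 320-D, concatenate with the 320-D texture features to get a
# 640-D vector, and train a boosted-tree classifier on the result.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import GradientBoostingClassifier  # stand-in for XGBoost

rng = np.random.default_rng(0)
n = 400                                    # toy number of video frames
deep = rng.normal(size=(n, 2048))          # placeholder Inception-V3 features
texture = rng.normal(size=(n, 320))        # placeholder DMLHP texture features
labels = rng.integers(0, 2, size=n)        # 0 = genuine, 1 = forged

deep_320 = PCA(n_components=320).fit_transform(deep)   # balance dimensions
fused = np.hstack([deep_320, texture])                 # 640-D fused vector
print(fused.shape)                                     # (400, 640)

clf = GradientBoostingClassifier(n_estimators=10).fit(fused, labels)
```

In a real system the classifier would be `xgboost.XGBClassifier`, and the feature matrices would come from the trained CNN and the DMLHP descriptor rather than a random generator.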
Keywords Deepfakes · Deep Convolutional Neural Networks · Generative Adversarial Networks
* Rehan Ashraf
rehan@ntu.edu.pk
1 Department of Computer Science, University of Engineering and Technology, Taxila 47050, Pakistan
2 Department of Software Engineering, University of Engineering and Technology, Taxila 47050, Pakistan
3 Department of Management Information Systems, Jazan University, Jazan 45142, Saudi Arabia
4 College of Computer Science and Information Technology, Jazan University, Jazan 45142, Saudi Arabia
5 Department of Computer Science, National Textile University, Faisalabad, Pakistan