Research Article

MITIGATING DEEPFAKE-BASED IMPERSONATION AND SYNTHETIC DATA RISKS IN REMOTE HEALTHCARE SYSTEMS

by Ruvimbo Mashinge, Kumbirai Bernard Muhwati, Kelvin Magora, Joy Awoleye
International Journal of Computer Applications
Foundation of Computer Science (FCS), NY, USA
Volume 187 - Issue 41
Published: September 2025
DOI: 10.5120/ijca2025925724

Ruvimbo Mashinge, Kumbirai Bernard Muhwati, Kelvin Magora, Joy Awoleye. MITIGATING DEEPFAKE-BASED IMPERSONATION AND SYNTHETIC DATA RISKS IN REMOTE HEALTHCARE SYSTEMS. International Journal of Computer Applications. 187, 41 (September 2025), 27-42. DOI=10.5120/ijca2025925724

                        @article{ 10.5120/ijca2025925724,
                        author  = { Ruvimbo Mashinge and Kumbirai Bernard Muhwati and Kelvin Magora and Joy Awoleye },
                        title   = { MITIGATING DEEPFAKE-BASED IMPERSONATION AND SYNTHETIC DATA RISKS IN REMOTE HEALTHCARE SYSTEMS },
                        journal = { International Journal of Computer Applications },
                        year    = { 2025 },
                        volume  = { 187 },
                        number  = { 41 },
                        pages   = { 27-42 },
                        doi     = { 10.5120/ijca2025925724 },
                        publisher = { Foundation of Computer Science (FCS), NY, USA }
                        }
                        %0 Journal Article
                        %D 2025
                        %A Ruvimbo Mashinge
                        %A Kumbirai Bernard Muhwati
                        %A Kelvin Magora
                        %A Joy Awoleye
                        %T MITIGATING DEEPFAKE-BASED IMPERSONATION AND SYNTHETIC DATA RISKS IN REMOTE HEALTHCARE SYSTEMS
                        %J International Journal of Computer Applications
                        %V 187
                        %N 41
                        %P 27-42
                        %R 10.5120/ijca2025925724
                        %I Foundation of Computer Science (FCS), NY, USA
Abstract

The security and integrity of remote healthcare systems face an urgent threat from deepfake technologies, voice cloning, and synthetic data. As telemedicine platforms rely increasingly on audiovisual communication and electronic health records (EHR), they become attractive targets for sophisticated impersonation attacks. This research proposes a comprehensive architecture to mitigate deepfake-based threats by combining multimodal biometric authentication (face, voice, gesture), real-time deepfake detection, and blockchain-based provenance tracking. In simulated attack scenarios on widely used datasets, including DFDC, VoxCeleb, and MIMIC-III, the proposed system achieves detection accuracy above 95% and terminates compromised sessions within two seconds. The results confirm that multi-layered defenses can protect clinical integrity and patient privacy without significantly degrading the user experience. The paper lays the groundwork for scalable, resilient, adaptive, and secure telehealth ecosystems that can withstand evolving synthetic-media threats.
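As a rough illustration of the layered decision logic summarised above, the minimal Python sketch below fuses scores from face, voice, and gesture verifiers with a deepfake-risk score to accept, challenge, or terminate a telehealth session. It is a sketch under assumed interfaces: the class, function names, weights, and thresholds are illustrative assumptions and are not taken from the paper's implementation.

# Illustrative sketch only: names, weights, and thresholds are hypothetical
# assumptions, not the system evaluated in the paper.
from dataclasses import dataclass

@dataclass
class SessionScores:
    face_match: float      # similarity from a face-verification model, in [0, 1]
    voice_match: float     # similarity from a speaker-verification model, in [0, 1]
    gesture_match: float   # similarity from a gesture/behavioural model, in [0, 1]
    deepfake_risk: float   # probability that the stream is synthetic, in [0, 1]

def authenticate(scores: SessionScores,
                 accept_threshold: float = 0.85,
                 terminate_risk: float = 0.50) -> str:
    """Fuse multimodal biometric scores and a deepfake-risk score into a decision."""
    # Hard gate: a high deepfake-risk score ends the session immediately,
    # mirroring the rapid session-termination behaviour described in the abstract.
    if scores.deepfake_risk >= terminate_risk:
        return "terminate"

    # Weighted fusion of the three biometric modalities (weights are assumptions).
    fused = (0.4 * scores.face_match
             + 0.4 * scores.voice_match
             + 0.2 * scores.gesture_match)

    # Penalise the fused score by the residual deepfake risk before deciding.
    fused *= (1.0 - scores.deepfake_risk)

    if fused >= accept_threshold:
        return "accept"
    return "challenge"   # e.g. request step-up verification before continuing

if __name__ == "__main__":
    print(authenticate(SessionScores(0.93, 0.91, 0.88, 0.05)))  # -> accept
    print(authenticate(SessionScores(0.93, 0.91, 0.88, 0.60)))  # -> terminate

The hard gate on deepfake risk reflects the sub-two-second response to session compromise reported above, while the weighted fusion keeps any single spoofed modality from dominating the authentication decision.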

Index Terms
Computer Science
Information Sciences
Keywords

Deepfakes, Telehealth Security, Voice Cloning, Synthetic Data, Multimodal Biometrics, Blockchain, Remote Healthcare, Impersonation Detection, Electronic Health Records, Adversarial AI
