Research Article

The Marketing-Fraud Convergence: When Legitimate AI Tools Enable Financial Crime

by Francis Martinson
International Journal of Computer Applications
Foundation of Computer Science (FCS), NY, USA
Volume 187 - Issue 98
Published: April 2026
Authors: Francis Martinson
DOI: 10.5120/ijcaa6160f6a9822

Francis Martinson . The Marketing-Fraud Convergence: When Legitimate AI Tools Enable Financial Crime. International Journal of Computer Applications. 187, 98 (April 2026), 1-5. DOI=10.5120/ijcaa6160f6a9822

@article{10.5120/ijcaa6160f6a9822,
  author    = {Francis Martinson},
  title     = {The Marketing-Fraud Convergence: When Legitimate AI Tools Enable Financial Crime},
  journal   = {International Journal of Computer Applications},
  year      = {2026},
  volume    = {187},
  number    = {98},
  pages     = {1-5},
  doi       = {10.5120/ijcaa6160f6a9822},
  publisher = {Foundation of Computer Science (FCS), NY, USA}
}
Abstract

The rapid commercialization of generative AI has created a troubling convergence: the same synthetic media tools marketed for legitimate business applications, including AI avatars for marketing videos, voice cloning for content localization, and video generation for advertising, supply the very capabilities exploited for financial fraud, identity theft, and social engineering attacks. This paper analyzes this marketing-fraud convergence through a systematic examination of current synthetic media platforms and documented fraud cases. Building on the Authenticity Spectrum Framework (ASF) introduced in prior work [1], the analysis demonstrates that architectural similarities between marketing and fraud applications create fundamental governance challenges that platform-level controls alone cannot address. Analysis of representative platforms reveals that tools generating synthetic user-generated content for advertising operate on the same technical principles as systems enabling deepfake business email compromise, synthetic identity fraud, and investment scams. The paper presents a dual-use risk assessment framework that enables financial institutions, platform operators, and regulators to evaluate synthetic media services for fraud potential. The framework maps specific technical capabilities to established financial crime vectors, providing actionable guidance for compliance and risk management programs.
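The abstract describes a framework that maps synthetic media capabilities to established financial crime vectors. As a minimal illustrative sketch of that idea (the capability names, fraud vectors, and mapping below are hypothetical assumptions for demonstration, not the paper's actual taxonomy):

```python
# Hypothetical capability-to-fraud-vector mapping, in the spirit of the
# dual-use risk assessment framework described in the abstract. All names
# here are illustrative assumptions, not the paper's taxonomy.
CAPABILITY_TO_VECTORS = {
    "voice_cloning":    {"vishing", "business_email_compromise"},
    "ai_avatar_video":  {"investment_scam", "synthetic_identity_fraud"},
    "video_generation": {"investment_scam", "impersonation"},
}

def fraud_exposure(platform_capabilities):
    """Return the set of fraud vectors a platform's feature set could expose.

    Capabilities not present in the mapping contribute no vectors.
    """
    vectors = set()
    for capability in platform_capabilities:
        vectors |= CAPABILITY_TO_VECTORS.get(capability, set())
    return vectors
```

For example, `fraud_exposure(["voice_cloning", "ai_avatar_video"])` would flag vishing, business email compromise, investment scams, and synthetic identity fraud as risk vectors a compliance program should review for that platform.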

References
  [1] F. Martinson, “The Authenticity Spectrum Framework: Classifying Deepfake and Generative AI Risks in Synthetic Media,” International Journal of Computer Applications, vol. 187, no. 88, pp. 34–38, 2026. DOI: 10.5120/ijca2026926538
  [2] Federal Reserve, “Synthetic Identity Fraud in the U.S. Payment System,” Federal Reserve Bank Reports, 2024.
  [3] European Union, “Regulation (EU) 2024/1689 (AI Act),” Official Journal of the EU, 2024.
  [4] F. Martinson and D. Rangel, “A Comprehensive Analysis of Game Hacking through Injectors,” International Journal of Computer Applications, vol. 185, no. 33, pp. 56–63, 2023.
  [5] A. M. Abukari, M. Amini, and F. Martinson, “A Revealed Architecture of Camera-based Attacks for Smartphones,” International Journal of Computer Applications, vol. 185, no. 27, pp. 45–49, 2023.
  [6] R. Chesney and D. K. Citron, “Deep Fakes: A Looming Challenge for Privacy, Democracy, and National Security,” California Law Review, vol. 107, pp. 1753–1820, 2019.
  [7] NIST, “AI Risk Management Framework,” NIST AI 100-1, 2023.
  [8] FTC, “Consumer Alert: AI Voice Cloning Scams,” 2024.
  [9] Y. Mirsky and W. Lee, “The Creation and Detection of Deepfakes: A Survey,” ACM Computing Surveys, vol. 54, no. 1, pp. 1–41, 2021.
  [10] FBI, “Internet Crime Report 2023,” IC3, 2024.
  [11] Deloitte, “Generative AI and Fraud Risk,” Deloitte Insights, 2024.
  [12] C. Vaccari and A. Chadwick, “Deepfakes and Disinformation,” Social Media + Society, vol. 6, no. 1, 2020.
  [13] Partnership on AI, “Framework for Responsible Practices in Synthetic Media,” 2023.
  [14] J. Kietzmann et al., “Deepfakes: Trick or Treat?” Business Horizons, vol. 63, no. 2, pp. 135–146, 2020.
  [15] M. Westerlund, “The Emergence of Deepfake Technology: A Review,” Technology Innovation Management Review, vol. 9, no. 11, 2019.
Index Terms
Computer Science
Information Sciences
Keywords

Synthetic Media, Financial Fraud, Deepfakes, AI Governance, Dual-Use Technology, Identity Fraud, Voice Cloning, Business Email Compromise
