Research Article

Causal Representation Learning for Bias Detection in AI Hiring Systems

by Ajay Guyyala, Prudhvi Ratna Badri Satya, Krishna Teja Areti, Vijay Putta
International Journal of Computer Applications
Foundation of Computer Science (FCS), NY, USA
Volume 187 - Issue 74
Published: January 2026
DOI: 10.5120/ijca2026926254

Ajay Guyyala, Prudhvi Ratna Badri Satya, Krishna Teja Areti, Vijay Putta. Causal Representation Learning for Bias Detection in AI Hiring Systems. International Journal of Computer Applications. 187, 74 (January 2026), 1-12. DOI=10.5120/ijca2026926254

@article{10.5120/ijca2026926254,
  author    = {Ajay Guyyala and Prudhvi Ratna Badri Satya and Krishna Teja Areti and Vijay Putta},
  title     = {Causal Representation Learning for Bias Detection in AI Hiring Systems},
  journal   = {International Journal of Computer Applications},
  year      = {2026},
  volume    = {187},
  number    = {74},
  pages     = {1-12},
  doi       = {10.5120/ijca2026926254},
  publisher = {Foundation of Computer Science (FCS), NY, USA}
}
%0 Journal Article
%D 2026
%A Ajay Guyyala
%A Prudhvi Ratna Badri Satya
%A Krishna Teja Areti
%A Vijay Putta
%T Causal Representation Learning for Bias Detection in AI Hiring Systems
%J International Journal of Computer Applications
%V 187
%N 74
%P 1-12
%R 10.5120/ijca2026926254
%I Foundation of Computer Science (FCS), NY, USA
Abstract

Artificial intelligence is widely used in HR hiring systems for resume screening and ranking, yet models trained on past hiring decisions often inherit group bias through hidden paths from protected attributes to hiring outcomes. This study presents a causal representation learning framework that reduces these effects through structural modeling, adversarial training, and counterfactual simulation. The method is evaluated on a structured dataset of 225 applicants and on the Utrecht Fairness Recruitment Dataset of close to ten thousand records. The framework lowers the demographic parity gap from 19% to 9% and the equal opportunity gap from 22% to 11%. Counterfactual consistency rises from 67.1% to 84.6%, while the Causal Disparity Index drops from 28% to 11%. Predictive performance also improves, reaching 84.3% accuracy, 82.7% precision, 79.4% recall, and an F1 score of 80.9%, and graph reconstruction error decreases from 0.071 to 0.026. These results indicate that causal representation learning can support fair and reliable HR hiring systems without sacrificing predictive strength.
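For readers who want to interpret the group-fairness gaps quoted above, the sketch below shows how a demographic parity gap and an equal opportunity gap are conventionally computed for a binary screening decision. It is a minimal illustration under standard definitions only; the function names and the synthetic data are hypothetical and are not taken from the paper or its datasets.

import numpy as np

def demographic_parity_gap(y_pred, group):
    # Absolute difference in positive-prediction (shortlisting) rates between the two groups.
    g0, g1 = np.unique(group)
    return abs(y_pred[group == g0].mean() - y_pred[group == g1].mean())

def equal_opportunity_gap(y_true, y_pred, group):
    # Absolute difference in true-positive rates, i.e. P(shortlisted | qualified), between groups.
    g0, g1 = np.unique(group)
    tpr0 = y_pred[(group == g0) & (y_true == 1)].mean()
    tpr1 = y_pred[(group == g1) & (y_true == 1)].mean()
    return abs(tpr0 - tpr1)

# Illustrative usage on synthetic screening decisions (not the paper's data).
rng = np.random.default_rng(0)
group = rng.integers(0, 2, size=1000)    # binary protected attribute
y_true = rng.integers(0, 2, size=1000)   # ground-truth suitability label
y_pred = rng.integers(0, 2, size=1000)   # model's screening decision

print(f"Demographic parity gap: {demographic_parity_gap(y_pred, group):.3f}")
print(f"Equal opportunity gap:  {equal_opportunity_gap(y_true, y_pred, group):.3f}")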

Index Terms
Computer Science
Information Sciences
Keywords

Causal Representation Learning, Fair Hiring Systems, Bias Detection, Counterfactual Analysis, Fairness Metrics, HR Hiring Data
