International Journal of Computer Applications
Foundation of Computer Science (FCS), NY, USA
Volume 187, Issue 77
Published: January 2026
Authors: Bharti Borade, Charansing N. Kayte
DOI: 10.5120/ijca2026926240
Bharti Borade, Charansing N. Kayte. A Comprehensive Literature Review on Deep Learning–Driven Multilingual Chatbots for Low-Resource Languages with a Focus on Marathi–Hindi–English Interaction. International Journal of Computer Applications 187, 77 (January 2026), 44-53. DOI=10.5120/ijca2026926240
@article{ 10.5120/ijca2026926240,
author = { Bharti Borade and Charansing N. Kayte },
title = { A Comprehensive Literature Review on Deep Learning–Driven Multilingual Chatbots for Low-Resource Languages with a Focus on Marathi–Hindi–English Interaction },
journal = { International Journal of Computer Applications },
year = { 2026 },
volume = { 187 },
number = { 77 },
pages = { 44-53 },
doi = { 10.5120/ijca2026926240 },
publisher = { Foundation of Computer Science (FCS), NY, USA }
}
%0 Journal Article
%D 2026
%A Bharti Borade
%A Charansing N. Kayte
%T A Comprehensive Literature Review on Deep Learning–Driven Multilingual Chatbots for Low-Resource Languages with a Focus on Marathi–Hindi–English Interaction
%J International Journal of Computer Applications
%V 187
%N 77
%P 44-53
%R 10.5120/ijca2026926240
%I Foundation of Computer Science (FCS), NY, USA
Conversational Artificial Intelligence (AI) has progressed substantially, evolving from rule-based systems to advanced transformer-driven multilingual models. However, despite these rapid advances, research on low-resource Indian languages, particularly Marathi and Hindi, remains limited. This review synthesizes studies from 2000 to 2025, covering rule-based chatbots, retrieval-based methods, Seq2Seq architectures, multilingual transformers, and self-supervised speech models such as wav2vec 2.0 and HuBERT. The analysis highlights key linguistic challenges, including agglutination, free word order, transliteration, regional accents, and pervasive code-mixing. Although models such as mBERT, XLM-R, and MuRIL substantially improve multilingual understanding, they still struggle with hybrid, code-mixed inputs and domain-specific conversational tasks. Persistent gaps include limited datasets, weak ASR–NLU integration, and insufficient cultural grounding. The review outlines future directions for developing robust, culturally aligned Marathi–Hindi–English chatbots.
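To make the code-mixing challenge mentioned in the abstract concrete, the following minimal Python sketch (not part of the original paper; it assumes the HuggingFace transformers library, PyTorch, and the publicly released google/muril-base-cased checkpoint, with an illustrative example sentence) inspects how a multilingual transformer tokenizer segments a hybrid Marathi–Hindi–English utterance. Heavy sub-word fragmentation of such mixed-script input is one symptom of the gap the review identifies.

# Minimal sketch: tokenizing code-mixed Marathi-Hindi-English text with MuRIL.
# Assumes: pip install transformers torch; the example sentence is illustrative.
from transformers import AutoTokenizer, AutoModel
import torch

tokenizer = AutoTokenizer.from_pretrained("google/muril-base-cased")
model = AutoModel.from_pretrained("google/muril-base-cased")

# A code-mixed utterance combining Devanagari (Marathi/Hindi) and Latin-script English.
text = "मला train ticket book करायचं आहे, please मदत करा"

inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Show how the shared multilingual vocabulary splits the hybrid input into sub-words.
print(tokenizer.convert_ids_to_tokens(inputs["input_ids"][0]))
# Contextual embeddings for each sub-word token: (batch, sequence_length, hidden_size).
print(outputs.last_hidden_state.shape)

The printed token sequence shows Latin-script words and Devanagari words drawn from the same sub-word vocabulary; downstream chatbot components (intent classification, slot filling) would consume the contextual embeddings in outputs.last_hidden_state.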