International Journal of Computer Applications
Foundation of Computer Science (FCS), NY, USA
Volume 187, Issue 77
Published: January 2026
Authors: B. Sivakumar Reddy, S.K. Harish, Jinka Ranganayakulu, M. Krishna
DOI: 10.5120/ijca2026926266
B. Sivakumar Reddy, S.K. Harish, Jinka Ranganayakulu, M. Krishna. Deep Adaptive Learning for Robust and Scalable Swarm Coordination in Dynamic Environments. International Journal of Computer Applications. 187, 77 (January 2026), 54-62. DOI=10.5120/ijca2026926266
@article{10.5120/ijca2026926266,
  author    = {B. Sivakumar Reddy and S.K. Harish and Jinka Ranganayakulu and M. Krishna},
  title     = {Deep Adaptive Learning for Robust and Scalable Swarm Coordination in Dynamic Environments},
  journal   = {International Journal of Computer Applications},
  year      = {2026},
  volume    = {187},
  number    = {77},
  pages     = {54--62},
  doi       = {10.5120/ijca2026926266},
  publisher = {Foundation of Computer Science (FCS), NY, USA}
}
%0 Journal Article
%D 2026
%A B. Sivakumar Reddy
%A S.K. Harish
%A Jinka Ranganayakulu
%A M. Krishna
%T Deep Adaptive Learning for Robust and Scalable Swarm Coordination in Dynamic Environments
%J International Journal of Computer Applications
%V 187
%N 77
%P 54-62
%R 10.5120/ijca2026926266
%I Foundation of Computer Science (FCS), NY, USA
Swarm coordination enables large groups of autonomous agents, such as mobile robots or drones, to work together on complex tasks in unpredictable, dynamic environments. Traditional rule-based or reinforcement-learning approaches, however, are frequently limited in flexibility, scalability, and communication efficiency. To improve the robustness and scalability of swarm coordination, this paper proposes a Deep Adaptive Learning (DAL) framework that combines attention-based communication, multi-agent reinforcement learning, and meta-adaptive learning. Each agent uses a deep neural policy network with a dynamic attention mechanism to selectively process relevant neighbour information, reducing communication overhead and increasing coordination efficiency. In addition, an environment-change detection module combined with meta-learning enables rapid policy adaptation to environmental changes without complete retraining. Experimental results on dynamic area coverage, target tracking, and formation-switching tasks show that, compared with existing methods, DAL achieves faster convergence, higher cumulative rewards, and superior resilience to agent loss and communication noise, offering a scalable solution for intelligent swarm systems.
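The abstract's attention-based communication idea can be illustrated with a minimal sketch: each agent forms a query from its own state, scores its neighbours' messages by relevance, and aggregates them with softmax weights. This is a generic scaled dot-product attention example, not the paper's actual DAL architecture; the function names, projection matrices (`Wq`, `Wk`, `Wv`), and dimensions are all illustrative assumptions.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over the last axis.
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def attention_aggregate(agent_state, neighbor_states, Wq, Wk, Wv):
    """Scaled dot-product attention over neighbour messages (illustrative).

    agent_state:     (d,)   the focal agent's own observation embedding
    neighbor_states: (n, d) embeddings received from n neighbours
    Wq, Wk, Wv:      (d, d) projection matrices (learned in practice,
                            random here for the sketch)
    Returns a (d,) message summary weighted by estimated relevance.
    """
    q = agent_state @ Wq                     # query from the agent itself
    k = neighbor_states @ Wk                 # keys from the neighbours
    v = neighbor_states @ Wv                 # values (message content)
    scores = (k @ q) / np.sqrt(q.shape[0])   # relevance of each neighbour
    weights = softmax(scores)                # attention distribution over neighbours
    return weights @ v                       # weighted neighbour summary

rng = np.random.default_rng(0)
d, n = 8, 5
Wq, Wk, Wv = (rng.standard_normal((d, d)) for _ in range(3))
agent = rng.standard_normal(d)
neighbors = rng.standard_normal((n, d))
msg = attention_aggregate(agent, neighbors, Wq, Wk, Wv)
print(msg.shape)
```

In a full multi-agent reinforcement-learning pipeline, the aggregated message would be concatenated with the agent's own observation and fed into its policy network; the attention weights also give a natural way to prune low-relevance neighbours and cut communication overhead, which is the effect the abstract claims.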