International Journal of Computer Applications
Foundation of Computer Science (FCS), NY, USA
Volume 187 - Issue 87
Published: March 2026
Authors: Aliyah Ayco, Kaye Anne Mirador, Glaiza Mei Natividad, Noah Andrea Pagba, James Esquivel
DOI: 10.5120/ijca2026926514
Aliyah Ayco, Kaye Anne Mirador, Glaiza Mei Natividad, Noah Andrea Pagba, James Esquivel. Signify: A Real-Time Sign to Text and Text to Sign Mobile Application for Dynamic Filipino Sign Language Translation using Transformer Architecture Deep Learning Model. International Journal of Computer Applications. 187, 87 (March 2026), 38-44. DOI=10.5120/ijca2026926514
@article{ 10.5120/ijca2026926514,
author = { Aliyah Ayco and Kaye Anne Mirador and Glaiza Mei Natividad and Noah Andrea Pagba and James Esquivel },
title = { Signify: A Real-Time Sign to Text and Text to Sign Mobile Application for Dynamic Filipino Sign Language Translation using Transformer Architecture Deep Learning Model },
journal = { International Journal of Computer Applications },
year = { 2026 },
volume = { 187 },
number = { 87 },
pages = { 38-44 },
doi = { 10.5120/ijca2026926514 },
publisher = { Foundation of Computer Science (FCS), NY, USA }
}
%0 Journal Article
%D 2026
%A Aliyah Ayco
%A Kaye Anne Mirador
%A Glaiza Mei Natividad
%A Noah Andrea Pagba
%A James Esquivel
%T Signify: A Real-Time Sign to Text and Text to Sign Mobile Application for Dynamic Filipino Sign Language Translation using Transformer Architecture Deep Learning Model
%J International Journal of Computer Applications
%V 187
%N 87
%P 38-44
%R 10.5120/ijca2026926514
%I Foundation of Computer Science (FCS), NY, USA
This study presents Signify, a real-time, bidirectional mobile application for dynamic Filipino Sign Language (FSL) translation designed to bridge communication gaps between the Deaf and Hard of Hearing (DHH) community and hearing individuals. Utilizing Long Short-Term Memory (LSTM) and Transformer architectures, the system enables Sign-to-Text (S2T) and Text-to-Sign (T2S) functionalities. To improve model robustness, the researchers expanded the FSL-105 dataset by adding a "Directions" category and recording 80 additional videos per gesture, resulting in a total of 11,530 videos. For S2T recognition, hand landmarks were extracted via MediaPipe. Comparative analysis revealed that the Transformer model significantly outperformed the LSTM baseline, achieving a test accuracy of 98.73%. Accuracy improved further to 99.60% through data augmentation techniques including Gaussian noise injection and temporal jitter. The T2S module uses a direct mapping approach to retrieve pre-recorded FSL video segments validated by a certified interpreter for linguistic accuracy. Integrated into an Android application using TensorFlow Lite, the system supports real-time, offline inference. Usability testing yielded a Grand Overall Mean of 4.57 (Excellent), reflecting high satisfaction among signers and non-signers. This research advances inclusive communication in alignment with Sustainable Development Goals (SDGs) 4 (Quality Education) and 10 (Reduced Inequalities).
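The abstract states that S2T recognition starts from MediaPipe hand landmarks, but the paper's extraction pipeline is not reproduced here. A minimal sketch of that step follows; the per-frame feature layout (21 landmarks x 3 coordinates x 2 hands, zero-padded when a hand is missing) is an assumption about one common convention, not the authors' published code.

```python
# Sketch: per-frame hand-landmark extraction with MediaPipe Hands.
# Maps one video to a (num_frames, 126) float array. The exact feature
# layout used by the paper is not published; this is an assumed convention.
import cv2
import numpy as np
import mediapipe as mp

mp_hands = mp.solutions.hands

def extract_landmarks(video_path: str) -> np.ndarray:
    frames = []
    cap = cv2.VideoCapture(video_path)
    with mp_hands.Hands(static_image_mode=False, max_num_hands=2,
                        min_detection_confidence=0.5) as hands:
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            # MediaPipe expects RGB input; OpenCV decodes frames as BGR.
            result = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
            feat = np.zeros(2 * 21 * 3, dtype=np.float32)  # zero-padded default
            if result.multi_hand_landmarks:
                for i, hand in enumerate(result.multi_hand_landmarks[:2]):
                    coords = [(lm.x, lm.y, lm.z) for lm in hand.landmark]
                    feat[i * 63:(i + 1) * 63] = np.asarray(coords).ravel()
            frames.append(feat)
    cap.release()
    return np.stack(frames) if frames else np.zeros((0, 126), dtype=np.float32)
```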
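The reported gain from 98.73% to 99.60% is attributed to Gaussian noise injection and temporal jitter. The paper's parameter values are not given in the abstract, so the noise scale and jitter strength below are illustrative guesses applied to a (num_frames, num_features) landmark sequence.

```python
# Sketch: the two augmentations named in the abstract. sigma and max_shift
# are illustrative, not the paper's values.
import numpy as np

rng = np.random.default_rng(42)

def gaussian_noise(seq: np.ndarray, sigma: float = 0.01) -> np.ndarray:
    """Perturb landmark coordinates with zero-mean Gaussian noise."""
    return seq + rng.normal(0.0, sigma, size=seq.shape).astype(seq.dtype)

def temporal_jitter(seq: np.ndarray, max_shift: int = 2) -> np.ndarray:
    """Drop/duplicate nearby frames by randomly resampling the time axis."""
    t = len(seq)
    idx = np.arange(t) + rng.integers(-max_shift, max_shift + 1, size=t)
    idx = np.clip(idx, 0, t - 1)
    return seq[np.sort(idx)]  # sorting preserves temporal order
```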
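On-device, offline inference with TensorFlow Lite typically follows the interpreter pattern sketched below. The model file name "signify.tflite" and the 30-frame landmark window are assumptions; the shipped Android app would drive the equivalent Interpreter API from Kotlin or Java rather than Python.

```python
# Sketch: offline inference with the TensorFlow Lite interpreter.
# "signify.tflite" is a hypothetical file name; the input shape is read
# from the model rather than hard-coded.
import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="signify.tflite")
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

# e.g. a (1, 30, 126) window of MediaPipe landmark features
window = np.zeros(inp["shape"], dtype=np.float32)
interpreter.set_tensor(inp["index"], window)
interpreter.invoke()
probs = interpreter.get_tensor(out["index"])[0]
print("predicted gesture id:", int(np.argmax(probs)))
```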