Abstract:
The rapid spread of fake news across digital platforms has become a matter of grave concern, particularly for low-resource languages such as Bangla, where fact-checking tools are scarce and cultural and linguistic challenges compound the problem. To address this, we investigate content-based automated detection systems built on deep learning and transformer approaches. We evaluate four recurrent deep learning models (LSTM, GRU, BiLSTM, BiGRU) and one transformer model (mBERT) on a Bangla fake news dataset gathered from several news sites, journalism platforms, and social media. The data were preprocessed through standardization, tokenization, and class balancing, and the models were assessed on accuracy, precision, recall, and F1-score. mBERT achieved the best accuracy (97%), outperforming all RNN-based models. Among the RNNs, BiGRU fared best at 95%, followed by LSTM, GRU, and BiLSTM, each close to 94%. This side-by-side comparison confirms mBERT's superior contextual comprehension and lowest misclassification rate. These findings highlight the effectiveness of transformer-based models such as mBERT for detecting Bangla fake news. In future work, we will investigate domain-specific training, hybrid models, multilingual transfer learning, and support for dialects, code-mixed text, and multimodal features to improve scalability and real-world coverage.
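The four evaluation metrics named in the abstract (accuracy, precision, recall, F1-score) can be sketched as follows for a binary labeling where 1 = fake and 0 = real. This is a minimal illustration only; the label vectors are hypothetical and not drawn from the study's dataset.

```python
# Sketch of the evaluation metrics used in the study: accuracy,
# precision, recall, and F1-score for binary fake-news labels
# (1 = fake, 0 = real). Labels below are illustrative examples.

def evaluate(y_true, y_pred):
    # Count the four confusion-matrix cells.
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)

    accuracy = (tp + tn) / len(y_true)
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if (precision + recall) else 0.0)
    return {"accuracy": accuracy, "precision": precision,
            "recall": recall, "f1": f1}

# Hypothetical model predictions on ten articles:
y_true = [1, 0, 1, 1, 0, 0, 1, 0, 1, 0]
y_pred = [1, 0, 1, 0, 0, 0, 1, 1, 1, 0]
print(evaluate(y_true, y_pred))
# → {'accuracy': 0.8, 'precision': 0.8, 'recall': 0.8, 'f1': 0.8...}
```

In practice these metrics would be computed over the held-out test split for each of the five models, yielding the comparison table the abstract summarizes.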