Abstract:
Text summarization, the automatic creation of a succinct synopsis of a given text while preserving its core content, is a critical problem in Natural Language Processing (NLP). This study focuses on automatic Bangla text summarization, an area that has received comparatively little research attention and resources. Using large language models, this project aims to investigate and develop a reliable abstractive summarization model for Bangla text. We employ a deep learning based encoder-decoder architecture in which the model uses attention mechanisms to learn to map the input text to a concise summary. We preprocessed a sizable dataset of Bangla text to ensure that the data was clean and appropriately tokenized for efficient model training. The model's performance is assessed with measures including BLEU score, accuracy, and loss. Experimental results demonstrate the effectiveness of our approach: the sequence-to-sequence model achieves 99.52% accuracy and a loss of 0.0365 during training, with 99.61% accuracy and a loss of 0.0310 on validation. This study demonstrates the potential of deep learning models for Bangla text summarization and contributes to the development of NLP applications for low-resource languages. The findings also highlight the value of pre-trained multilingual models in overcoming the difficulties of resource-constrained languages and suggest promising directions for future multilingual summarization research.
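The abstract refers to an attention-based encoder-decoder model. As a minimal, generic illustration of the attention step at its core (a dot-product attention sketch in NumPy, not the paper's actual model or hyperparameters; all array sizes below are arbitrary toy values):

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a 1-D score vector.
    e = np.exp(x - x.max())
    return e / e.sum()

def dot_product_attention(encoder_states, decoder_state):
    """Generic dot-product attention: score each encoder time step
    against the current decoder state, then return a weighted mix
    (the 'context vector') of the encoder states."""
    scores = encoder_states @ decoder_state   # shape (T,)
    weights = softmax(scores)                 # shape (T,), sums to 1
    context = weights @ encoder_states        # shape (d,)
    return context, weights

# Toy example: 5 encoder time steps, hidden size 8 (illustrative only).
rng = np.random.default_rng(0)
H = rng.standard_normal((5, 8))   # encoder hidden states
s = rng.standard_normal(8)        # current decoder state
context, weights = dot_product_attention(H, s)
```

At each decoding step the summary generator conditions on such a context vector, letting the model attend to the most relevant parts of the source text rather than a single fixed encoding.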