| dc.description.abstract |
In natural language processing, summarizing Bangla text poses significant challenges owing to the language's intricate grammar and rich linguistic variation. This study evaluates the effectiveness of state-of-the-art (SOTA) models in summarizing Bangla texts while preserving their essential meaning. The findings reveal that the Gemma series, particularly the Gemma 2 9B model, outperforms the latest Llama series model, Llama 3.1 8B, in capturing the essence of Bangla content, even as context length increases. The Mistral-based Microsoft FILM model emerges as a formidable contender, closely rivaling both Gemma and Llama. Interestingly, despite its advanced architecture, the Llama 3 8B model struggles to match the performance of the Gemma 2B model. |
en_US |