dc.description.abstract |
In the modern world, technology has transformed our lives for the better. However, human
attention spans are shortening, and the time people are willing to spend reading is
dwindling. It is therefore important to provide a quick overview of a news story or
article by generating a brief, intuitive summary of its most important content.
Enormous amounts of textual data are available in this era of information; online
documents, articles, news reports, and customer reviews of goods and services are a few
examples. The purpose of document summarization is to capture the core meaning of the
original material, but it is infeasible to write summaries by hand for such a vast
supply of text documents. Humans can form an abstraction of an article simply by
reading it; for computers, however, summarization remains a hard problem. Abstractive
text summarization improves the topic coverage of automatic summaries by attending to
the semantics of the words and rephrasing the input sentences in a human-like manner,
which improves soundness and readability. Although abstractive summarization has been
studied extensively for English, there have been only a few publications on Bengali
abstractive news summarization (BANS). In this thesis, we
propose a hybrid model for summarizing long articles that combines extractive and
abstractive approaches. In the extractive stage, BERT (BERTSUM) selects the most
relevant sentences from the document; a sequence-to-sequence (seq2seq) bidirectional
Long Short-Term Memory (LSTM) network with attention between the encoder and decoder
then generates the summary from those sentences.
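As an illustration of this architecture, the sketch below shows a minimal PyTorch
version of the abstractive stage: a bidirectional-LSTM encoder feeding an attentional
LSTM decoder. This is a simplified sketch under assumptions, not the thesis
implementation; the class name, dimensions, and the reduction of the extractive
BERTSUM stage to a comment are all illustrative.

    # Minimal sketch of the abstractive stage (illustrative; names and
    # dimensions are assumptions, not the thesis code).
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class Seq2SeqAttn(nn.Module):
        # Bidirectional-LSTM encoder + attentional LSTM decoder.
        def __init__(self, vocab_size, emb=128, hid=256):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, emb)
            self.enc = nn.LSTM(emb, hid, batch_first=True, bidirectional=True)
            self.score = nn.Linear(3 * hid, 1)         # attention energies
            self.dec = nn.LSTMCell(emb + 2 * hid, hid)
            self.out = nn.Linear(hid, vocab_size)

        def forward(self, src, tgt):
            # src: token ids of the sentences kept by the extractive
            # stage (e.g. BERTSUM); tgt: reference summary token ids.
            enc_out, _ = self.enc(self.embed(src))      # (B, S, 2*hid)
            B, S, _ = enc_out.shape
            h = enc_out.new_zeros(B, self.dec.hidden_size)
            c = torch.zeros_like(h)
            logits = []
            for t in range(tgt.size(1)):                # teacher forcing
                e = self.score(torch.cat(
                    [enc_out, h.unsqueeze(1).expand(B, S, -1)], dim=-1))
                ctx = (F.softmax(e, dim=1) * enc_out).sum(1)  # context vector
                h, c = self.dec(
                    torch.cat([self.embed(tgt[:, t]), ctx], dim=-1), (h, c))
                logits.append(self.out(h))
            return torch.stack(logits, dim=1)           # (B, T, vocab)

Training such a model would minimize cross-entropy between the predicted logits and the
reference summary tokens, which corresponds to the training loss reported at the end of
this abstract.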
Experiments were carried out on a publicly available Kaggle dataset (a Bengali
newspaper dataset). The results validate our approach and show that the proposed hybrid
model produces compact and engaging summaries, which we evaluated by inspecting their
generative quality. Our main goal was to build an abstractive summarizer and to reduce
its training loss; over the course of our experiments we reduced the training loss to
0.018 and were able to generate a fluent short summary from a given text. |
en_US |