Abstract:
This research investigates recent advancements (2018 to 2024) in abstractive text
summarization. I explore the current state of the art and identify key challenges. This
study proposes a methodology that fine-tunes transformer models such as T5, BART,
and PEGASUS on diverse datasets (CNN/Daily Mail, Gigaword, XSum, Reddit TIFU,
and SAMSum), with consideration for limited computational resources. I aim to develop a
model that captures the salient points of the source text while generating concise and
human-readable summaries. The effectiveness of the model will be evaluated using
the ROUGE metric alongside others such as BLEU. Finally, the research will
explore strategies to mitigate bias and ensure data privacy in the text summarization
process. The ultimate goal is to deploy the model in a user-friendly website, making
this technology accessible for real-world applications.
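To make the evaluation criterion concrete, the sketch below computes a simplified ROUGE-1 F1 score (unigram overlap between a reference and a candidate summary) using only the Python standard library. This is an illustrative simplification, not the implementation the study would use; production evaluation would rely on an established ROUGE package with stemming and multiple ROUGE variants.

```python
from collections import Counter

def rouge1_f1(reference: str, summary: str) -> float:
    """Simplified ROUGE-1 F1: unigram overlap between reference and summary."""
    ref_counts = Counter(reference.lower().split())
    sum_counts = Counter(summary.lower().split())
    # Clipped overlap: each shared word counts at most as often as in either text.
    overlap = sum((ref_counts & sum_counts).values())
    if overlap == 0:
        return 0.0
    precision = overlap / sum(sum_counts.values())
    recall = overlap / sum(ref_counts.values())
    return 2 * precision * recall / (precision + recall)
```

For example, scoring the candidate "the cat sat" against the reference "the cat sat on the mat" gives perfect precision but only partial recall, yielding an F1 of about 0.67.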