Abstract:
In recent years, there has been an explosion in the amount of text data available from a variety
of sources. To be useful, this data needs to be summarized effectively.
Text summarization in natural language processing has mostly been approached with
extractive methods, which select passages of the original document that capture
its main ideas. Abstractive summarization, which generates new sentences instead, has been attempted far less.
In our work, we focus on the latter type of automatic summarization. We performed a
series of experiments to assess how effective abstractive summarization systems are
and whether they are applicable in a real-world context. We adopted a machine
learning approach built on models based on the Transformer architecture.
We first worked on extractive multi-document summarization; we then
fine-tuned DistilBART, a recent model released by the Hugging Face team, for abstractive
summarization on several datasets, and compared each of the resulting models with
the base model and with one another (a fine-tuning sketch is given below).
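For illustration, such a fine-tuning run can be set up with the Hugging Face `transformers` library roughly as follows. This is a minimal sketch, not the exact configuration used in our experiments: the checkpoint name, dataset, column names, and hyperparameters are illustrative placeholders.

```python
# Minimal sketch of fine-tuning DistilBART for abstractive summarization.
# Checkpoint, dataset, column names, and hyperparameters are illustrative only.
from datasets import load_dataset
from transformers import (AutoModelForSeq2SeqLM, AutoTokenizer,
                          DataCollatorForSeq2Seq, Seq2SeqTrainer,
                          Seq2SeqTrainingArguments)

checkpoint = "sshleifer/distilbart-cnn-12-6"   # one public DistilBART checkpoint
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint)

dataset = load_dataset("cnn_dailymail", "3.0.0")  # example summarization dataset

def preprocess(batch):
    # Tokenize source documents and target summaries.
    inputs = tokenizer(batch["article"], max_length=1024, truncation=True)
    labels = tokenizer(text_target=batch["highlights"],
                       max_length=128, truncation=True)
    inputs["labels"] = labels["input_ids"]
    return inputs

tokenized = dataset.map(preprocess, batched=True,
                        remove_columns=dataset["train"].column_names)

args = Seq2SeqTrainingArguments(
    output_dir="distilbart-finetuned",
    per_device_train_batch_size=4,
    num_train_epochs=3,
    learning_rate=3e-5,
)

trainer = Seq2SeqTrainer(
    model=model,
    args=args,
    train_dataset=tokenized["train"],
    eval_dataset=tokenized["validation"],
    data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
)
trainer.train()
```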
We also designed an algorithm to be used during preprocessing. Its objective
is to group similar sentences into clusters and to replace each cluster with a single
sentence drawn from it. This algorithm likewise relies on a Transformer-based model
(a sketch of the idea follows).
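The sketch below illustrates this cluster-and-replace idea. It assumes sentence embeddings from the `sentence-transformers` library and agglomerative clustering from scikit-learn (version 1.2 or later); the embedding model, linkage, and distance threshold are chosen purely for illustration and need not match our actual algorithm.

```python
# Sketch of the cluster-and-replace preprocessing idea: embed sentences with
# a Transformer encoder, cluster near-duplicates, keep one sentence per cluster.
# Model name, linkage, and distance threshold are illustrative assumptions.
import numpy as np
from sentence_transformers import SentenceTransformer
from sklearn.cluster import AgglomerativeClustering

def deduplicate(sentences, threshold=0.3):
    encoder = SentenceTransformer("all-MiniLM-L6-v2")
    embeddings = encoder.encode(sentences, normalize_embeddings=True)

    # Cosine-distance agglomerative clustering; clusters merge as long as the
    # average distance between them stays below the threshold.
    clustering = AgglomerativeClustering(
        n_clusters=None,
        metric="cosine",          # requires scikit-learn >= 1.2
        linkage="average",
        distance_threshold=threshold,
    ).fit(embeddings)

    kept = []
    for label in np.unique(clustering.labels_):
        idx = np.where(clustering.labels_ == label)[0]
        # Keep the sentence closest to the cluster centroid as representative.
        centroid = embeddings[idx].mean(axis=0)
        kept.append(idx[np.argmax(embeddings[idx] @ centroid)])
    return [sentences[i] for i in sorted(kept)]
```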
Evaluation is performed automatically using ROUGE scores. Simple as it is, our
method has shown promising results: the scores were higher when this preprocessing was applied.
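As a minimal illustration of how such scores can be obtained, one common option is the Hugging Face `evaluate` library; the example strings below are placeholders, not outputs from our systems.

```python
# Compute ROUGE scores for a generated summary against a reference.
import evaluate

rouge = evaluate.load("rouge")
scores = rouge.compute(
    predictions=["the summary generated by the model"],
    references=["the human-written reference summary"],
)
print(scores)  # dict with rouge1, rouge2, rougeL, rougeLsum values
```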
Keywords: Automatic Summarization, Abstractive, Multi-Document, Deep Learning, Semantic
Similarity, Fine-tuning, Transformers, BERT, GPT-2, BART.