Summarizing Arabic Articles using Large Language Models

Authors

Bader Alshemaimri, Ibrahim Alrayes, Turki Alothman, Fahad Almalik and Mohammed Almotlaq, King Saud University, Saudi Arabia

Abstract

This paper explores abstractive and extractive Arabic text summarization using AI, employing fine-tuning and unsupervised machine learning techniques. We investigate the adaptation of pre-trained language models such as AraT5 through fine-tuning. Additionally, we explore unsupervised methods that leverage unlabeled Arabic text to generate concise and coherent summaries using different vectorizers and sentence-scoring algorithms. The proposed models are rigorously evaluated using text-centric metrics such as ROUGE [1]. This research contributes to the development of robust Arabic summarization systems, offering culturally sensitive and contextually aware solutions. By bridging the gap between advanced AI techniques and Arabic language processing, this work fosters scalable and effective summarization in the Arabic domain.
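
To make the unsupervised extractive approach mentioned above concrete, the sketch below scores sentences with a TF-IDF vectorizer and centroid cosine similarity, then evaluates the result with ROUGE. The choice of TF-IDF, centroid scoring, the rouge-score package, and the sample sentences are illustrative assumptions, not necessarily the paper's exact configuration.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity
from rouge_score import rouge_scorer

def extractive_summary(sentences, k=2):
    """Pick the k sentences closest to the TF-IDF centroid of the document."""
    vectorizer = TfidfVectorizer()
    X = vectorizer.fit_transform(sentences)          # sentence-term matrix
    centroid = np.asarray(X.mean(axis=0))            # document centroid (1 x vocab)
    scores = cosine_similarity(X, centroid).ravel()  # similarity of each sentence to the centroid
    top = sorted(np.argsort(scores)[-k:])            # keep original sentence order
    return " ".join(sentences[i] for i in top)

# Hypothetical Arabic article, already split into sentences, with a reference summary.
article_sentences = [
    "الذكاء الاصطناعي يغير معالجة اللغة العربية.",
    "التلخيص الاستخلاصي يختار أهم الجمل من النص الأصلي.",
    "التقييم يعتمد على مقاييس نصية مثل ROUGE.",
]
reference_summary = "التلخيص الاستخلاصي يختار أهم الجمل ويقيم بمقياس ROUGE."

system_summary = extractive_summary(article_sentences, k=2)

# ROUGE scoring with the rouge-score package; its default tokenizer targets
# Latin script, so Arabic evaluation typically needs a custom tokenizer.
scorer = rouge_scorer.RougeScorer(["rouge1", "rougeL"], use_stemmer=False)
print(scorer.score(reference_summary, system_summary))
```

Centroid-based scoring is only one of several possible unsupervised algorithms; graph-based methods such as TextRank could be substituted with the same vectorized representation.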

Keywords

Arabic Text Summarization, Abstractive Summarization, Extractive Summarization, Natural Language Processing (NLP), Fine-Tuning Language Models

Full Text | Volume 14, Number 10