Authors
Rim Messaoudi, Achraf Louiza Rim and Francois Azelart, Akkodis Research, France
Abstract
Text-based comments play a crucial role in providing feedback across various industries. However, effectively filtering and categorizing this feedback according to custom, context-specific criteria requires sophisticated language modeling techniques. While traditional approaches have proven effective, they often require substantial amounts of data to compensate for their modeling deficiencies. In this work, we highlight the performance and limitations of prompt-free few-shot text classification using open-source pre-trained sentence transformers. On the one hand, our research includes a comprehensive study across different benchmark datasets spanning nine dimensions, including sentiment analysis, topic modeling, grammatical acceptability, and emotion classification, together with a series of experiments testing prompt-free few-shot text classification. On the other hand, we underline the limitations of prompt-free few-shot classification when the targeted criteria are complex. As an alternative approach, prompting an instruction-fine-tuned language model has demonstrated favorable outcomes, as shown by our application to the specific use case of "identifying and extracting resolution results and actions from explanatory notes", achieving an accuracy rate of 80%.
Keywords
Language models, sentence transformers, SetFit, contrastive learning, distillation, intelligence compression, NLP, semantic similarity