ICCK Transactions on Emerging Topics in Artificial Intelligence | Volume 2, Issue 3: 116-130, 2025 | DOI: 10.62762/TETAI.2025.871572
Abstract
Topic modeling is a widely used unsupervised natural language processing (NLP) technique aimed at discovering latent themes within documents. Since traditional methods fall short in capturing contextual meaning, approaches based on large language models (LLMs), such as BERTopic, hold the potential to generate more meaningful and diverse topics. However, systematic comparative studies of these models, especially in domains requiring high accuracy and interpretability such as healthcare, remain limited. This study compares ten different LLMs (GPT, Claude, Gemini, LLaMA, Qwen, Phi, Zephyr, DeepSeek, NVIDIA-LLaMA, Gemma) using a dataset of 9,320 medical article abstracts. Each model was tasked...
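As a point of reference for the kind of pipeline the abstract describes, the minimal sketch below shows contextual topic modeling with BERTopic over a collection of article abstracts. The variable names, the placeholder document list, and the choice of the "all-MiniLM-L6-v2" embedding model are illustrative assumptions, not the configuration used in the study.

```python
# Minimal sketch of BERTopic-style topic extraction (illustrative only;
# the study's actual dataset, embedding model, and settings are not shown here).
from bertopic import BERTopic

# Placeholder corpus standing in for the 9,320 medical article abstracts.
abstracts = [
    "Deep learning models improve early detection of diabetic retinopathy.",
    "A randomized trial of statin therapy in patients with chronic kidney disease.",
    "Transformer-based NLP methods for clinical note de-identification.",
]

# Assumed embedding backend; BERTopic accepts any sentence-transformers model name.
topic_model = BERTopic(embedding_model="all-MiniLM-L6-v2")

# Fit the model and assign a topic to each document.
topics, probabilities = topic_model.fit_transform(abstracts)

# Inspect the discovered topics and their top keywords.
print(topic_model.get_topic_info())
```

The extracted keyword clusters can then be passed to different LLMs for labeling or refinement, which is the comparison setting the study investigates.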