Authors
Hridyanshu, University of California San Diego, United States
Abstract
UmeedVR aims to create a conversational therapy VR game that uses natural language processing for patients with speech disorders associated with conditions such as autism or aphasia. This study developed five psychological task sets and three environments using Maya and Unity. The topic-modeling AI, drawing on recordings from 25 live participants and more than 980 TwineAI datasets, generated an initial VR grading baseline averaging 6.98 themes per 5-minute conversation across scenarios, forming a foundation for subsequent enhancements. Latent semantic analysis (via the gensim corpus library in Python) and term frequency-inverse document frequency (TF-IDF) were employed to address grammatical errors and user-specific improvements. Results were visualized through audio-visual plots highlighting conversation topics by occurrence and interpretability. UmeedVR enhances cognitive and intuitive skills, raising the average number of topics in a 5-minute conversation from 6.98 to 13.56, with a coherence score of 143.12. LSA achieved 98.39% accuracy and topic modeling 100%. Notably, real-time grammatical correction was integrated into the game itself.
Keywords
Virtual Reality, Topic Modeling & Coherence, Latent Semantic Analysis, Speech Disorders, Singular Value Decomposition
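The abstract describes a pipeline combining TF-IDF weighting with latent semantic analysis (which the keywords note rests on singular value decomposition). The following is a minimal NumPy-only sketch of that general TF-IDF + SVD pipeline, not the paper's gensim implementation; the toy documents stand in for participant transcripts and are entirely hypothetical.

```python
import numpy as np

# Hypothetical stand-in documents (the study used participant recordings).
docs = [
    "i like the park and the dog",
    "the dog ran in the park",
    "speech therapy helps with speech",
    "therapy sessions improve speech",
]
tokenized = [d.split() for d in docs]
vocab = sorted({w for doc in tokenized for w in doc})

# Term-frequency matrix: rows = documents, columns = vocabulary terms.
tf = np.array([[doc.count(w) / len(doc) for w in vocab] for doc in tokenized])

# Smoothed inverse document frequency, then the TF-IDF weighting.
n_docs = len(docs)
df = np.array([sum(w in doc for doc in tokenized) for w in vocab])
idf = np.log((1 + n_docs) / (1 + df)) + 1.0
tfidf = tf * idf

# LSA: truncated SVD of the TF-IDF matrix. Each right-singular vector
# (row of Vt) is a latent "topic" expressed as weights over vocabulary terms.
U, S, Vt = np.linalg.svd(tfidf, full_matrices=False)
k = 2  # number of latent topics to inspect
for t in range(k):
    top = np.argsort(-np.abs(Vt[t]))[:3]
    print(f"topic {t}:", [vocab[i] for i in top])
```

In a full system such as the one described, the resulting per-topic term weights would feed the coherence scoring and topic counts reported in the abstract; gensim's LSI model performs the same decomposition on streamed corpora.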