Key failures in large language models in capturing the structure of psychopathology

My Role: Primary author

Leveraging large language models (LLMs), we examine the relationship between human responses to self-report measures and the semantic properties of the items on those same measures. Using Representational Similarity Analysis on measures that broadly capture psychopathology, we uncover key points of misalignment between LLMs' semantic representations and the psychometric structure of these questionnaires.
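
As a rough illustration of this kind of analysis, the sketch below compares an LLM-derived semantic similarity matrix over questionnaire items against the inter-item correlation structure of human responses. The items, response data, embedding model, and similarity/correlation choices are placeholder assumptions for illustration, not the study's actual materials or pipeline.

```python
# Minimal Representational Similarity Analysis (RSA) sketch.
# Assumptions: sentence-transformers for item embeddings; a participants-x-items
# matrix of simulated Likert responses standing in for real human data.
import numpy as np
import pandas as pd
from scipy.stats import spearmanr
from sentence_transformers import SentenceTransformer

# Hypothetical self-report items (placeholders, not items from the measures used).
items = [
    "I have felt downhearted or blue.",
    "I have had trouble falling asleep.",
    "I have felt nervous or on edge.",
    "I have lost interest in things I used to enjoy.",
]

# Simulated human responses (200 participants x items, 1-5 Likert ratings).
rng = np.random.default_rng(0)
responses = pd.DataFrame(rng.integers(1, 6, size=(200, len(items))), columns=items)

# 1) Semantic similarity matrix from LLM item embeddings (cosine similarity).
model = SentenceTransformer("all-MiniLM-L6-v2")
emb = model.encode(items, normalize_embeddings=True)
semantic_sim = emb @ emb.T

# 2) Psychometric similarity matrix from inter-item correlations of human responses.
psychometric_sim = responses.corr().to_numpy()

# 3) RSA: correlate the upper triangles of the two similarity matrices.
iu = np.triu_indices(len(items), k=1)
rho, p = spearmanr(semantic_sim[iu], psychometric_sim[iu])
print(f"Semantic-psychometric alignment (Spearman rho): {rho:.2f}, p = {p:.3f}")
```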

Conceptual and Semantic Network of Alcohol and Cannabis Language

My Role: Primary researcher/primary author

Study exploring how alcohol and cannabis co-users understand their use and misuse behaviors. Data were collected via Prolific from 200 individuals who use alcohol at varying weekly rates, 60% of whom are active cannabis co-users. The study included 10 free-response questions to gain a naturalistic understanding of how individuals conceptualize these substances as well as their own and their family and friends' alcohol and cannabis use. Participants also completed a questionnaire battery covering substance use, personality traits, ideological views, and demographics.

What is addiction? Substance-Specific Biases in LLMs and Humans

Paper published in CogSci 2025 exploring how humans and LLMs differ in the clinical diagnostic features they ascribe to alcohol versus cannabis.