Research

Sound Symbolism in Japanese and the Challenges of NLP Approaches

Capstone paper, presented at the Fall 2024 MA English Graduate Student Conference at San Francisco State

A research paper exploring the current state of cognitive and NLP approaches to sound symbolism research, both in general and in Japanese. The paper also presents a design for an NLP approach that uses word embeddings to identify sound symbolism in Japanese, together with a discussion of the challenges the Japanese language poses for this and other NLP applications. Advised by Dr Anastasia Smirnova and Dr Jenny Lederer.
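As an illustration of the embedding-based design, here is a minimal sketch of the kind of test it describes: comparing how tightly ideophones that share a phonological feature cluster in embedding space against a control set. The word groups and vectors below are placeholders, not material from the paper; in practice the vectors would come from a pretrained Japanese embedding model.

```python
# A minimal sketch, not the paper's implementation: do ideophones sharing a
# phonological feature sit closer together in embedding space than a
# control set? The vectors here are random placeholders; real vectors
# would come from a pretrained Japanese embedding model.
import itertools
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical word groups for illustration only.
groups = {
    "reduplicative_ideophones": ["gorogoro", "zarazara", "dokidoki", "kirakira"],
    "control_words": ["neko", "hashiru", "shizuka", "yama"],
}

# Placeholder 300-dimensional vectors standing in for trained embeddings.
embeddings = {w: rng.normal(size=300) for words in groups.values() for w in words}

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def mean_pairwise_similarity(words):
    """Average cosine similarity over all pairs of words in a group."""
    pairs = list(itertools.combinations(words, 2))
    return sum(cosine(embeddings[a], embeddings[b]) for a, b in pairs) / len(pairs)

# If sound symbolism leaves a trace in distributional semantics, the
# ideophone group should show higher within-group similarity than controls.
for name, words in groups.items():
    print(f"{name}: mean pairwise similarity = {mean_pairwise_similarity(words):.3f}")
```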

The Furry Fandom and Conceptual Metaphors of Identity Craft

Class paper, presented as part of a Topics in Language Analysis class, Spring 2024, San Francisco State University

This project explores how members of the Furry subculture use conceptual metaphor to talk about furry identity and the act of identity craft within the fandom. As part of the project, I built a corpus of ca. 22k words from over 100 survey respondents to collect data on specific metaphor usage. Undertaken as part of a class taught by Dr Jenny Lederer.

An Initial Investigation into the Cross-linguistic Intelligibility of Japanese Phonesthetic Ideophones

Class paper, presented as part of a Contemporary Semantic Theory class, Fall 2023, San Francisco State University

Serving as a first step towards my capstone, this paper experimentally tested whether non-Japanese speakers could intuit the meaning of reduplicative Japanese ideophones. Data gathered from 21 survey respondents suggested that the meaning of certain ideophones may be accessible to non-Japanese speakers. Undertaken as part of a class taught by Dr Jenny Lederer.

Predictability of Human Evaluation Scores of a Machine Translation System from Automated Evaluation Scores across Varied Text Types

Class paper, submitted as part of a Principles and Applications of Machine Translation class, 2013, Leeds University

Empirical testing of the efficacy of the BLEU method for automatically evaluating machine translation output when applied to technical and literary texts. The study found that BLEU scores were less predictive of human quality judgements for literary texts but highly predictive for technical texts. Undertaken as part of a class taught by Dr Bogdan Babych, who had previously conducted extensive research on the BLEU metric.
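A sketch of the general correlation setup follows; the segments, reference translations, and human scores below are invented placeholders, not the data or exact procedure used in the study.

```python
# A minimal sketch, not the original study setup: correlating automatic BLEU
# scores with human evaluation scores, separately for each text type.
from collections import defaultdict

import numpy as np
import sacrebleu

# (text type, MT output, reference translation, human adequacy score 1-5)
segments = [
    ("technical", "Press the power button to start the device.",
     "Press the power button to start the device.", 5),
    ("technical", "The cable connects to the port A.",
     "Connect the cable to port A.", 3),
    ("technical", "Remove battery before the cleaning.",
     "Remove the battery before cleaning.", 4),
    ("literary", "The night was quiet and very dark.",
     "The night lay silent, swallowed in darkness.", 2),
    ("literary", "She walked slowly to the old house.",
     "Slowly she made her way towards the old house.", 4),
    ("literary", "Rain fell on the roof all the night.",
     "All night long the rain drummed on the roof.", 3),
]

scores = defaultdict(lambda: ([], []))
for text_type, hypothesis, reference, human in segments:
    bleu = sacrebleu.sentence_bleu(hypothesis, [reference]).score
    scores[text_type][0].append(bleu)
    scores[text_type][1].append(human)

# A higher correlation means BLEU is more predictive of human judgements
# for that text type.
for text_type, (bleu_scores, human_scores) in scores.items():
    r = np.corrcoef(bleu_scores, human_scores)[0, 1]
    print(f"{text_type}: correlation between BLEU and human scores = {r:.2f}")
```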