Research
My current research interests are as difficult to pin down as my music taste.
I am generally motivated by ideas that
- improve our understanding of how things work (e.g. black box models, natural languages, artificial/human intelligence, the universe...),
- promote safety, inclusivity, and fairness,
- offer elegant solutions to genuinely hard problems and hold up well in the long run.
Here are some topics that I have worked on:
Model Interpretability: I am interested in explaining how large language models make decisions (EMNLP'22) and understanding their capabilities and limitations.
I am also interested in how neural networks process language on a fundamental level and how machine intelligence compares with human cognition.
Sign Language Processing: I am interested in modeling signed languages from a linguistic perspective and in extending existing language technologies to signed languages (ACL'21, ECCV'20, COLING'20, MTSummit'21, EMNLP'21). I am also interested in using computational models to better understand how signed languages work.
Context-aware Machine Translation: I am interested in when context, whether intra-sentential (within the current sentence), inter-sentential (across multiple sentences), or extra-linguistic (e.g. social, temporal, cultural), is required during translation, and in how to model such context in machine translation
(arXiv'21, ACL'21, ACL'21).
Please reach out if you'd like to chat or collaborate! I am generally responsive to email and Twitter messages; I do not check LinkedIn very often, and I prefer not to be contacted on social media that I have not listed on this website.