I am passionate about natural language processing generally, with particular interests in natural language understanding/generation and information extraction. I currently work on backend systems and developer operations, architecting services with Kubernetes, Docker, and WebAssembly in support of Abstract Wikipedia.
I spent roughly three years on Google's Speech and Keyboard team, building language models for a wide array of human languages. That work sharpened both my practical skills with machine learning algorithms and my understanding of state-of-the-art neural network research.
Before that, I held various internships in which I built full pipelines to extract information such as topics, named entities, and relationships between entities. Much of the text I worked with came from medicine (doctors' notes, etc.); I believe that domains like medicine hold great promise for socially beneficial uses of NLP. In these projects, I relied primarily on classical NLP techniques such as SVMs, CRFs, HMMs, and Maximum Entropy classifiers.
I am also experimenting with evolutionary programming in NLP, attempting to bridge the gap between human and machine-internal semantic representations.