I’m interested in building machine learning systems that work with structure rather than against it.
Much of contemporary ML emphasizes scale—larger models, larger datasets, more compute. While this has produced impressive results, it often obscures the underlying structure of the problems being solved. In scientific and technical domains, that structure is not incidental; it is the problem.
My work focuses on graph-based representations, agentic workflows, and modular pipelines that make assumptions explicit and reasoning inspectable. The goal is not to maximize performance on a benchmark, but to build systems that can be understood, extended, and trusted in real research settings.
Over time, I aim to develop tools and frameworks that support scientific inquiry rather than replace it—systems that augment human understanding, preserve interpretability, and remain grounded in the domains they are applied to.
This site documents that direction as it develops.