About

Built from frustration. Refined by experience.

About the creator

I am an undergraduate student studying physics, mathematics, and economics. My research interests currently lie at the intersection of physics, mathematics, and machine learning. I have worked at several R1 universities, participated in one of the most prestigious Research Experiences for Undergraduates (REU) programs in my areas of interest, and interned at a national laboratory.

I know the feeling of opening a research paper in a field you're entering for the first time. The abstract uses three terms you don't know. The first equation has symbols that aren't defined until the second half of the paper. The related-work section references a dozen papers you haven't read. And you have no idea whether the result is surprising or obvious to everyone already in the field.

This feeling is universal, and the tools that currently exist to help (generic AI summarizers, paper search engines, broad literature tools, etc.) all treat these papers the same way they treat everything else. They extract text. They produce summaries. They have no idea that a convergence proof in functional analysis requires completely different framing than an empirical ML benchmark paper, or that a HEP experimental result lives or dies on its systematic uncertainties.

I built this tool around one core insight: what matters in a math paper is not what matters in a CS paper, and neither is what matters in a physics paper. A proof-based mathematics paper needs its theorem stated plainly, its proof technique named and explained, and its prerequisites made explicit. A machine learning paper needs its experimental setup, its benchmark comparisons, and its claimed contribution separated from its actual contribution. A physics paper needs its equations annotated symbol by symbol, its physical intuition separated from its formalism, and its result situated against the landscape of prior measurements. No other tool makes these distinctions. This one does.

What makes this different

Every major tool in this space was built for breadth: searching millions of papers, screening literature at scale, synthesizing across many sources. Those tools are useful for what they do. This tool was built for depth.

Domain-Expert Summary

Field-specific analysis tuned per subfield, not generic extraction

Multi-Paper Comparison

Side-by-side analysis with a historical timeline of how ideas evolved

Reproducibility Verification

Structured assessment of methodology, rigor, and real-world impact

Who this is for

If you need to screen 200 papers for a systematic review, there are better tools for that. This is built for a different person: the undergraduate in their first research group trying to understand half a dozen papers with no guidance on how to read them; the early PhD student reading outside their exact subfield for a rotation, a collaboration, or a literature review; the summer research intern dropped into an unfamiliar area on day one with a reading list and a deadline; the engineer or analyst trying to understand a technical paper that's directly relevant to their work but written for an academic audience they're not part of.

Get in touch

Feedback, questions, or anything else — I'd love to hear from you!