I'm entering my 6th year as a PhD student at Carnegie Mellon University, where I'm advised by Professor Matthew O'Toole. Previously, I received my Bachelor's and Master's degrees in Computer Science and Applied Math from Brown University. I'm supported by the Meta PhD Fellowship in AR/VR Computer Graphics.
My research lies at the intersection of computer vision, computational imaging, and machine learning.
I am interested in leveraging physics-based light transport and neural fields to design robust systems for inverse rendering and 3D reconstruction.
A physically accurate inverse rendering system based on radiance caching that recovers geometry, materials, and lighting from RGB images of an object or scene.
Continuous-wave time-of-flight (C-ToF) depth cameras struggle to reconstruct dynamic objects. We address this with a neural radiance field model that takes the raw ToF signal as input and recovers scene motion along with depth.
A 6-DoF video pipeline based on neural radiance fields that achieves a favorable trade-off among speed, quality, and memory efficiency.
It excels at representing challenging view-dependent effects such as reflections and refractions.
A practical coded diffraction imaging framework that decouples mutually incoherent mixed states, such as different wavelengths, with applications in computational microscopy.
We apply a phasor volume rendering model to the raw images from C-ToF sensors in order to achieve high-quality 3D reconstruction of static and dynamic scenes.
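The idea behind phasor volume rendering can be sketched as follows: each sample along a ray contributes a complex phasor whose phase encodes round-trip travel distance at the sensor's modulation wavelength, and these contributions are composited with standard NeRF-style quadrature weights. This is a minimal illustrative sketch, not the actual model from the paper; the function name, array layout, and the assumption of a single modulation wavelength are all my own simplifications.

```python
import numpy as np

def render_phasor(sigmas, amplitudes, depths, mod_wavelength):
    """Hypothetical sketch of phasor volume rendering for one ray.

    sigmas, amplitudes, depths: per-sample density, ToF amplitude,
    and distance along the ray (equal-length 1-D arrays).
    mod_wavelength: wavelength of the sensor's modulation signal.
    Returns a complex phasor: magnitude ~ reflectance, angle ~ depth.
    """
    # NeRF-style quadrature: alpha from density times sample spacing,
    # with a large final delta so the last sample can absorb all light.
    deltas = np.append(np.diff(depths), 1e10)
    alphas = 1.0 - np.exp(-sigmas * deltas)
    trans = np.cumprod(np.append(1.0, 1.0 - alphas[:-1]))
    weights = trans * alphas
    # Round-trip distance 2*d gives phase 2*pi*(2*d)/lambda = 4*pi*d/lambda.
    phasors = amplitudes * np.exp(1j * 4.0 * np.pi * depths / mod_wavelength)
    return np.sum(weights * phasors)
```

For a single opaque surface, the rendered phasor's angle directly recovers the surface depth (up to the usual phase-wrapping ambiguity of C-ToF sensing), which is what lets the model supervise geometry from raw sensor measurements.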