IllumiNeRF: 3D Relighting without Inverse Rendering
3D relighting by distilling samples from a 2D image relighting diffusion model into a latent-variable NeRF.
Building World Labs, zero to one. I am a researcher in 3D computer vision, generative models, and computer graphics. I was previously a research scientist at Google. I received my Ph.D. from the University of Washington in 2021, where I was advised by Ali Farhadi and Steve Seitz.
Using a multi-view image-conditioned diffusion model to regularize a NeRF enables few-view reconstruction.
Preconditioning camera optimization during NeRF training significantly improves the ability to jointly recover the scene and camera parameters.
By applying ideas from level set methods, we can represent topologically changing scenes with NeRFs.
Given many images of an object category, you can train a NeRF to render instances from novel views and interpolate between them.
Learning deformation fields with a NeRF lets you reconstruct non-rigid scenes with high fidelity.
By learning to predict geometry from images, you can do zero-shot pose estimation with a single network.
By pairing large collections of images, 3D models, and materials, you can create thousands of photorealistic 3D models fully automatically.